Automating Cisco ACL changes

I’ve recently started taking on more network management tasks to help our short-staffed networking team. I’m very comfortable with network theory and have configured a number of IOS devices, but am hardly an IOS guru.

One of the first large tasks I was assigned was to add some ACL rules to the hundred-plus ACLs we have. Coming from the sysadmin side of things, I was baffled to find out that these changes are all made by hand.

Is there not a way to automate these types of configuration changes? What tools should I be learning in order to change configurations in a scripted fashion across many devices/ACLs? So far my Google-fu has only pointed to Python with pexpect. It just seems like such a common task that there would be better tools already set up for it.

I understand that this could be a fairly broad question, but I’m just looking for a starting place to work from.
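To make the idea concrete, here is a minimal sketch of the scripted approach: generate the IOS command lines once, then push the same batch to every device over SSH (with pexpect, or a library such as netmiko). The ACL names and the rule below are hypothetical, purely for illustration.

```python
# Sketch: render the IOS commands that add one rule to many named ACLs.
# The device transport (pexpect/netmiko) is omitted; this only builds the
# command list that would be sent to each device.

def acl_change_commands(acl_names, new_rule):
    """Render IOS config lines that append `new_rule` to each named ACL."""
    lines = ["configure terminal"]
    for acl in acl_names:
        lines.append(f"ip access-list extended {acl}")
        lines.append(f" {new_rule}")
        lines.append(" exit")
    lines.append("end")
    return lines

# Hypothetical ACL names and rule:
cmds = acl_change_commands(
    ["INSIDE-IN", "DMZ-IN"],
    "permit tcp any host 192.0.2.10 eq 443",
)
print("\n".join(cmds))
```

Tools like Ansible (which has Cisco IOS modules) or netmiko wrap the login/enable/paging details, so in practice you would only supply lines like these plus an inventory of devices.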

Note: If there is a commercial tool that is a perfect fit for this case, just assume that we didn’t pay for it. That is normally how it goes.

How to get less to seek faster with large log files?

I often deal with incredibly large log files (>3 GB). I’ve noticed that the performance of less is terrible with these files. Often I want to jump to the middle of the file, but when I tell less to jump forward 15 million lines, it takes minutes.

The problem, I imagine, is that less needs to scan the file for ‘\n’ characters, and that takes too long.

Is there a way to make it just seek to an explicit offset? e.g. seek to byte offset 1.5 billion in the file. This operation should be orders of magnitude faster. If less does not provide such an ability, is there another tool that does?
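The reason a byte-offset seek would be so much faster is that the OS can jump straight to the offset without reading anything before it, whereas counting lines forces a scan of every preceding byte. A small Python sketch of the seek-then-resync pattern (the file here is a tiny stand-in for a multi-GB log):

```python
# Demonstrate seeking to an explicit byte offset and resynchronising on the
# next line boundary, instead of counting newlines from the start.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"line\n" * 1000)      # stand-in for a huge log file

with open(path, "rb") as f:
    f.seek(2502)                   # O(1): jump straight to a byte offset
    partial = f.readline()         # discard the (likely partial) line
    chunk = f.readline()           # first complete line after the offset

os.remove(path)
print(chunk)
```

For less itself: typing a number followed by `p` or `%` jumps to that percentage of the file, and starting it as `less -n` suppresses line-number tracking, which avoids much of the scanning. A shell-level equivalent of the sketch above is `tail -c +1500000000 file | less`.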

irqbalance on linux and dropped packets

I am investigating dropped packets on a dual-core, quad-Xeon box running Linux. One thing I see is irqbalance running on the system, and I have a couple of questions. Reading the docs here, I think I understand how it is supposed to work, but one line seems confusing: “The current Linux irqbalance program is several years old in design, and is blissfully unaware of the ideas of Quad (or even Dual) core or even power usage. The program is conceptually closer to the naive balancing than to the smart interrupt balancer.” This seems to indicate that there are an old and a new version of irqbalance. Is this the case? How can you tell which is running on the machine?

Also, if my goal is to optimize packet processing during bursts, do I want to run irqbalance, or should I manually bind the network card to a set of CPUs?
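For reference, manual binding works by writing a hexadecimal CPU bitmask to /proc/irq/&lt;N&gt;/smp_affinity (this requires root, and the IRQ number below is hypothetical; the real one comes from /proc/interrupts). A small sketch of computing that mask:

```python
# Sketch: build the hex CPU bitmask that /proc/irq/<N>/smp_affinity expects.
# Bit i of the mask corresponds to CPU i.

def cpus_to_mask(cpus):
    """Convert a list of CPU indices into the hex bitmask string."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

irq = 24                         # hypothetical NIC IRQ from /proc/interrupts
mask = cpus_to_mask([0, 1])      # pin the IRQ to CPUs 0 and 1
print(f"echo {mask} > /proc/irq/{irq}/smp_affinity")
```

Note that irqbalance will overwrite manual settings unless the IRQ is excluded from balancing (or irqbalance is stopped), so the two approaches should not be mixed for the same IRQ.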

Is it possible to get some type of performance report on Windows Server 2008 R2?

I have a server running Windows Server 2008 R2 and I would like to know how well it handles a certain load.

Is there any built-in way of telling the server to monitor performance for a specific period and then produce some sort of report conveying the load on CPU, RAM, disk, etc.?

Restart single uWSGI application (when it’s in emperor mode)

I’m running uWSGI in emperor mode to host a bunch of Django sites from their individual configs. The sites are supposed to reload when uWSGI detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file.

But occasionally I’ll mess something up in the Django site and the server won’t load it. Yeah, yeah, I should be testing better, but that’s not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (which seems to make sense). But even after I fix the underlying issue, no amount of touching will get that site’s uWSGI process up and running again. I have to reload the whole uWSGI server, knocking dozens of sites out at once for a few seconds.

Is there a way to force uWSGI to just reload one of its sites?
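For context, the touch-reload mechanism described above is mtime-based: the emperor watches each vassal config’s modification time and respawns the vassal when it changes. A tiny sketch of what `touch` actually does (the .ini path here is a temporary stand-in):

```python
# Sketch: "touching" a vassal config just bumps its mtime, which is the
# signal the emperor watches for.
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".ini")  # stand-in for a vassal config
os.close(fd)
os.utime(path, (0, 0))         # pretend the file hasn't changed in ages
before = os.stat(path).st_mtime
os.utime(path, None)           # equivalent of `touch path`: set mtime to now
after = os.stat(path).st_mtime
os.remove(path)
```

One commonly suggested workaround for a vassal the emperor has given up on: when the emperor watches a directory, moving the ini file out and then back in destroys the dead vassal and spawns a fresh one, without reloading the whole emperor.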


Nginx SSL redirect for one specific page only

I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for me. I was wondering if anyone has any advice on how to improve this configuration:

...

ssl_certificate      example.net.crt;
ssl_certificate_key  example.key;

server {
    listen 80 default;
    listen 443 ssl;

    server_name www.example.net example.net;
    access_log /srv/www/example.net/logs/access.log;
    error_log /srv/www/example.net/logs/error.log;

    root /srv/www/example.net/public_html;

    location / {
        if ( $scheme = https ){
            return 301 http://example.net$request_uri;
        }
        try_files $uri $uri/ /index.php?$uri&$args;
        index index.php index.html;
    }

    location ^~ /admin.php {
        if ( $scheme = http ) {
            return 301 https://example.net$request_uri;
        }
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

...

It seems that the extra information in the location ^~ /admin.php block is unnecessary; does anyone know of an easy way to avoid the duplicate code? Without it, requests skip the PHP block and nginx just returns the raw .php files.

Currently it applies https correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page instead of executing it. When returning to the non-https site in Firefox, it does not correctly return to http but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem.

Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options in the configuration? Thanks in advance!

EDIT: Clearing the cache / cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.

Error while bringing up eth1

I’m getting this error while bringing up my network card:

(process:2550): WARNING **: _nm_object_get_property: Error getting
'State' for /org/freedesktop/NetworkManager/ActiveConnection/3: (19)
Method "Get" with signature "ss" on interface
"org.freedesktop.DBus.Properties" doesn't exist

I’m using the following commands:

1. ifup eth1
2. /etc/init.d/network restart

I have installed a fresh copy of CentOS 6.2 and configured the network card.

How do Windows DFS referrals work for laptops away from the office?


[Edit 2014-11-26] I already understand how sites are defined by subnets, etc. But what I want to know is what happens to users who don’t fit within any site at all? Home users via VPN, for example. The VPN doesn’t supply an IP that is within any of the sites, due to how the VPN system works, so that’s not going to help define a site. Does it treat all DFS namespace targets as exactly equal, since the client is in zero sites at the moment? Is there any way to encourage DFSN to treat the targets in one site as preferred, based on proximity or the user’s usual site? (I know how to say “treat X server as preferred when all is equal,” but that’s not what I’m asking for here.)


The rest of this text is just background for your interest or entertainment.

I was under the impression that the laptop would retain whatever site it was last at, or the site of the last DC it connected to, and that site would then determine which site the laptop is presently in for the purposes of referral ranking.

The reason I ask is that some of our laptop users are getting strange referrals to an office they have never actually been to, even though those laptops have a fine connection to the servers in the sites they should be connecting to. The strange referral target in this case is a server that isn’t known for being reliable, so if DFS were basing the referral on some speed or reliability metric, it has it completely wrong. The question is just that first line up there; the rest of this text is just background.

The hardware/software involved is: Windows 2008 R2 servers for all things; Windows 7 64-bit and Windows XP 32-bit laptops.

And this is a question about DFS Namespaces, not about DFS Replication.

How DFS Works: DFS Processes and Interactions states the following:

When least expensive target selection is enabled, DFS places targets
in the referral in the following order:

Targets in the same site as the client are listed in random order at
the top of the referral.

Targets outside of the client’s site are listed in order of lowest
cost to highest cost. Referrals with the same cost are grouped
together and within each group the targets are listed in random order.

If the laptop is at a client office, hotel, home, etc., is it still considered to be within a site? If so, I assume that site would be used for the first group of targets above. If it isn’t within a site, I’d assume it skips that first group altogether and just randomly orders the targets. The behavior we’re seeing is more like the latter than the former.

Edit: (5/21/2012 3am PT)
It seems that the folders with “Override referral ordering” set to “First among targets of equal cost” get goofy: the target with that enabled seems to be last in the order, not first. I’m unsetting these to see what happens and will report back later.

Cannot ping Ubuntu server by hostname

I know this has already been asked, but I can’t find a clear answer. I put a new Ubuntu server on my LAN and I can’t ping it by hostname. I don’t want to edit the hosts file on every Windows machine on my LAN (40+ PCs) to add the entry. Besides, I have another Ubuntu server that I can ping by hostname without having changed anything.

All servers and clients use an internal DNS server based on an old Red Hat installation.

The hostname on the server that I can’t ping is set. What do I have to do/check?

Deploying a Play! 2.0 application on an Apache server with a reverse proxy

I’m trying to deploy my Play! 2.0 application on an Ubuntu 11.10 server, and I have been running into error after error; I hope someone can help me here. I am trying to deploy my Play! application using a reverse proxy on Apache 2. I have enabled the Apache proxy modules and configured the proxy.conf file in mods-enabled. The vhost for my domain looks like this:

<Directory /var/www/stage.domain.com>
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>

<VirtualHost *:80>

    DocumentRoot /var/www/stage.domain.com/web

    ServerName stage.domain.com
    ServerAdmin webmaster@stage.domain.com

#    ProxyRequests Off
#    ProxyPreserveHost On
     <Proxy *>
        Order allow,deny
            Allow from all
     </Proxy>
#    ProxyVia On
#    ProxyPass /play/ http://localhost:9000/
#    ProxyPassReverse /play/ http://localhost:9000/


    ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log


    ErrorDocument 400 /error/400.html
    ErrorDocument 401 /error/401.html
    ErrorDocument 403 /error/403.html
    ErrorDocument 404 /error/404.html
    ErrorDocument 405 /error/405.html
    ErrorDocument 500 /error/500.html
    ErrorDocument 502 /error/502.html
    ErrorDocument 503 /error/503.html

    <IfModule mod_ssl.c>
    </IfModule>
    <Directory /var/www/stage.domain.com/web>
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    <Directory /var/www/clients/client2/web7/web>
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>



    # Clear PHP settings of this website
    <FilesMatch "\.ph(p3?|tml)$">
        SetHandler None
    </FilesMatch>
    # mod_php enabled
    AddType application/x-httpd-php .php .php3 .php4 .php5
    php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -fwebmaster@stage.domain.com"
    php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp
    php_admin_value session.save_path /var/www/clients/client2/web7/tmp
        # PHPIniDir /var/www/conf/web7
    php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$


    # add support for apache mpm_itk
    <IfModule mpm_itk_module>
      AssignUserId web7 client2
    </IfModule>

    <IfModule mod_dav_fs.c>
          # Do not execute PHP files in webdav directory
      <Directory /var/www/clients/client2/web7/webdav>
            <FilesMatch "\.ph(p3?|tml)$">
          SetHandler None
        </FilesMatch>
      </Directory>
      # DO NOT REMOVE THE COMMENTS!  
      # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE!
      # WEBDAV BEGIN
      # WEBDAV END
    </IfModule>
#       <Location /play/>
#               ProxyPass http://localhost:9000/
#               SetEnv force-proxy-request-1.0 1
#               SetEnv proxy-nokeepalive 1
#       </Location>   
        ProxyRequests Off
        ProxyPass /play/ http://localhost:9000/  
        ProxyPassReverse /play/ http://localhost:9000/
        ProxyPass /play http://localhost:9000/
ProxyPassReverse /play http://localhost:9000/

#       SetEnv force-proxy-request-1.0 1
#       SetEnv proxy-nokeepalive 1
</VirtualHost>

This vhost file was generated by ISPConfig, and I have not touched anything that was there before; I have only added to it. As you can see from the commented-out parts, I have tried a lot of different things based on random tutorials I have found, but all of them have ended in an Internal Server Error, a 503, or most often a ‘502 Bad Gateway’.

I can start Play! and it connects successfully to my database. When there is an application error, the Play! stack-trace error page comes up, but when everything is fine I get one of the errors above.

My application.conf file looks like this:

db info
.......
application.mode=PROD
logger.root=ERROR

# Logger used by the framework:
logger.play=INFO

# Logger provided to your application:
logger.application=DEBUG

http.path="/play/"
XForwardedSupport="127.0.0.1"

And my hosts file looks like this (I have never changed or added anything to the hosts file):

127.0.0.1       localhost
127.0.1.1       matrix

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Any insights into what I might be doing wrong, or anything else I can try, would be appreciated. Thanks!

Edit

Again, the reverse proxy itself works (I checked by pointing it at google.com). The problem occurs when it proxies to Netty; it’s as if Netty refuses the connection to the page.

Edit 2

Output from apachectl -S:

_default_:8081         127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10)
*:8090                 is a NameVirtualHost
         default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
         port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
*:80                   is a NameVirtualHost
         default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
         port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
         port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
         port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
         port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
         port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
         port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
         port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7)
         port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
