Monday, November 15, 2010

Using the Mighty IPTables to Prevent an HTTP(s) DoS Attack

Using this procedure, the kernel's netfilter will deny (and log to /var/log/messages) packets to ports 80 and 443 from hosts that exceed 20 new connections in 5 seconds. IPTables will DROP that host's packets for 5 seconds, then let them back in. This has the benefit of not blocking legitimate traffic outright, only throttling abusive clients to a reasonable rate.
So, let's get started. Install iptables:

yum -y install iptables

IPTables' recent module, by default, only timestamps and tracks up to 20 packets per source address. Which isn't very many. This means that by default, if you use --hitcount 21 or higher you'll error out. You can raise the limit by updating /etc/modprobe.d/modprobe.conf:

options ipt_recent ip_pkt_list_tot=50

Then, reload the ipt_recent kernel module:
rmmod ipt_recent
modprobe ipt_recent

Next, create the script that will add the rules (vi /tmp/):

# Create a LOGDROP chain to log and drop packets
iptables -N LOGDROP
iptables -A LOGDROP -j LOG
iptables -A LOGDROP -j DROP

iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -m recent --set --name "limit-http" --rsource
iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -m recent --update --seconds 5 --hitcount 20 --name "limit-http" --rsource -j LOGDROP
iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT

iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -m recent --set --name "limit-https" --rsource
iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -m recent --update --seconds 5 --hitcount 20 --name "limit-https" --rsource -j LOGDROP
iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT

Execute that script:
sh /tmp/

Now, we want these rules applied on each reboot. Only do this if you have nothing in /etc/sysconfig/iptables, which most EC2 clients don't even have; if you already have rules in there, merge the above in instead. On CentOS/RHEL derivatives, run:

iptables-save > /etc/sysconfig/iptables

To verify the saved rules load correctly, restart the service and check its status:

service iptables stop
service iptables start
service iptables status

Helpful NOTES:

You can see what the packet filter is doing, in real time, like this:

watch iptables -nvx -L

Also, you can use Apache's benchmarking program, ab, to trip the filter for testing purposes, like this:

ab -n 1000 -c 5 http://IP/index.html

Where 1000 is the total number of requests and 5 is the number of concurrent requests.

So, to test it out, point that at your webserver and tail /var/log/messages; you'll find that you start dropping packets from the client running ApacheBench.
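Once entries start landing in /var/log/messages, you can tally which source IPs are getting dropped. A sketch, run here against a sample kernel LOG line (the line format is typical of the LOG target; the echo stands in for grepping your actual log):

```shell
# Pull SRC= addresses out of netfilter LOG lines and count them.
# The echo stands in for: grep 'IN=' /var/log/messages
echo 'Nov 15 10:00:01 host kernel: IN=eth0 OUT= SRC=1.2.3.4 DST=5.6.7.8 PROTO=TCP DPT=80' |
awk '{for (i = 1; i <= NF; i++) if ($i ~ /^SRC=/) print substr($i, 5)}' |
sort | uniq -c | sort -rn
```

The most aggressive clients float to the top of that list.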

You're welcome! :)

Monday, November 8, 2010

Running OpenVAS Security Scanner: Ubuntu 10.10

The Nessus project is now a for-pay company. I think you can get a free home license, but if you want to scan your infrastructure at work, they no likey. A project called OpenVAS is a fork of Nessus that's all open source and free.

I want to continually scan some systems and generate reports I can diff, to see if any jokers have added services or changed rules that expose network services I don't want exposed. So, I'm installing OpenVAS on a VM instance that I'm going to use to scan my infrastructure. I'm not going to use the GUI client because I want this to be scripted. The way OpenVAS works is, you have an OpenVAS server, which you connect to with clients and tell it what to do. So, you could install the software on a system in a data center or EC2 or whatever, then run the client from your desktop and have it do your bidding.

In this case, my client is the command line client, which is going to run on the same system as the server.

To install on Ubuntu 10.10, simply do:

# Update your distro
apt-get update && apt-get dist-upgrade

# Install openvas server and client software + security plugins
apt-get install openvas-server openvas-client \
   openvas-plugins-base openvas-plugins-dfsg

# Update the vuln. database

Add a user that you're going to use from the client, to login:

Here, you'll add a user/pass combination.

When prompted to add a 'rule' - I allow my user to do everything. The rules allow/disallow scanning of hosts. If you want you can let bob scan or whatever. I want my user to scan all, so when prompted, simply enter

default accept

Now, fire up the server. Note that the first time you run, it loads all those checks into memory so it takes a LONG time for the server to actually start.

/etc/init.d/openvas-server start

Now, you can start scanning. Create a file with IPs and/or hostnames that your client will feed to the server to scan. Something like this:
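The target file is just one host or IP per line. For example, building a throwaway list (hypothetical hosts and path):

```shell
# Build a sample target list, one IP/hostname per line.
cat > /tmp/scanme.txt <<'EOF'
192.168.1.10
192.168.1.11
webserver.example.com
EOF
wc -l < /tmp/scanme.txt
```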


The server listens on port: 9390 by default so you'll want to tell your client to connect there. Once you have the file created, you can kick off your scan like this:

OpenVAS-Client -q 9390 admin scanme.txt -T html
You'll be prompted to accept the SSL certificate; go ahead, it's automagically created by the package when it's installed. Then, open the report file in a browser when it's done and start going through it. Be warned, scanning is very hostile, so you should really only scan your own systems.. and those of your enemies.

Tuesday, November 2, 2010

Deploying a Set of HAProxy Servers as EC2 Instances

If you want high availability in EC2, one option is to deploy a couple of load balancers in front of a bunch of application servers, all in the same security group. I really like HAProxy. It's fast, and very configurable. I remember we got a couple of F5's back in the day, and I wanna say that was like $60k for two. Which is a lot if you're a startup. Not that you can deploy F5's in EC2, but it's just kinda cool how all this stuff trickles on down to the cloud.

I think it makes complete sense. Anyway, here's how I did it recently. Since you can't get a floating IP in EC2, or more than one IP per instance, I set up DNS round robin for the proxy addresses. So, in this case we have two dedicated EC2 instances, running CentOS. Each one will have haproxy installed and configured. So, once you get them spun up, you'll want to get an elastic IP for each, then configure DNS to point to both.

Each IP is the public address of one of your HAProxy servers. To test, you can just set up a dummy hostname pointing to those IPs and do the cutover when you're sure you're happy with the setup.

So, next, login to each instance and run:

yum -y install haproxy

HAProxy supports two modes, tcp and http. You can't do SSL in http mode, so this deployment is in tcp mode. HTTP mode has some really cool and interesting features with HAProxy's recent ACL additions. Google around for HAProxy and ACL. You can get super granular about which app server handles what kind of traffic, or where to direct certain kinds of requests. For example: go here for SSL, here for dynamic content and here for static HTML and images. It's really pretty cool and new in 1.3, I think. Maybe I'll try all that out someday.
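For reference, the ACL routing described above might look something like this in http mode (a hypothetical sketch, not part of this tcp-mode deployment; backend names are made up):

```
frontend web *:80
   mode http
   acl is_static path_end .html .jpg .png .css .js
   use_backend static_servers if is_static
   default_backend dynamic_servers
```

The acl line tags requests by URL suffix, and use_backend sends matches to a dedicated pool of static-content servers.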

Next, edit /etc/haproxy/haproxy.cfg and enter the following:
# HA Proxy Configuration
global
    chroot      /var/lib/haproxy
    pidfile     /var/run/
    maxconn     20000
    user        haproxy
    group       haproxy

defaults
    mode        tcp
    log       local0
    log       local1 notice
    option      dontlognull
    option      redispatch
    balance     roundrobin
    timeout connect 2000 # default 2 second time out if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn     60000
    retries     3

# This is what we're listening on.
frontend haproxy *:443
   mode tcp
   maxconn 20480
   default_backend app_servers

# This is who we send requests to
backend app_servers
   mode tcp
   server app1
   server app2
   server app3
   server app4

So, in this example we have 4 app servers. I feel like it's so simple and self-explanatory that you can just get in there, edit it and test it out. Both HAProxy instances have the exact same configuration file, assuming you've deployed everything to the same security group.

The only other thing I did was to add a snippet to /etc/syslog-ng/syslog-ng.conf to log all of HAProxy's messages via syslog:

source s_udp {
       udp(ip( port(514));
};
destination d_haproxy { file("/var/log/haproxy"); };
filter f_local0 { facility(local0); };
log { source(s_udp); filter(f_local0); destination(d_haproxy); };

Now, just fire it all up:

service haproxy start
chkconfig haproxy on
service syslog-ng restart

HAProxy has a stats interface, which I haven't enabled here. If I do, I'll edit the above with the stats config.
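In the meantime, a minimal stats stanza generally looks something like this (a sketch; the port and credentials are placeholders you'd pick yourself):

```
listen stats *:8080
   mode http
   stats enable
   stats uri /stats
   stats auth admin:changeme
```

Browse to /stats on that port and you get per-backend health and traffic counters.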

Wednesday, October 20, 2010

How To Create A Gluster Roll for Rocks Clusters

If you're here, you probably already know what Rocks is and why it's awesome. In this post, I'm going to download the Gluster 3.0.5 code and build a roll I can use to deploy to all my compute nodes. So, first off, Gluster is a super cool product. My only complaint is that it's missing some tools for troubleshooting and status checks. I've written some Nagios checks that have been working out quite well, though, so I get alerts if any badness goes down.

The thing I LOVE about Gluster? No meta-nodes! Seriously, that is KEY. If you manage a storage platform and deal with meta-nodes you know what I'm talking about. Clustered meta-nodes? Even more annoying. You have replicated storage, why would you need a clustered meta node?! Stuff all that information into the storage cloud and spread bits around. Not to mention, meta-nodes require a physical RAID, etc, which for one is oh, about $10k. Another thing I like about Gluster? You can run it between ec2 instances - no meta-nodes! That's another posting, how to get Gluster going on a bunch of cloud instances.. So anyway, let's jump right in, shall we? Also, sorry about the formatting for my code and cmds. Blogger might not be the best platform for me. Oh well....

Login to your frontend..

In order to compile Gluster, you'll need to install the following:

yum -y install flex libibverbs-devel

Next, download the Gluster source:
cd /root/
mkdir gluster-3.0.5/
cd gluster-3.0.5/

Next, we're going to turn that code into some RPM's to stuff in our roll.

rpmbuild -ta glusterfs-3.0.5.tar.gz

That rpmbuild will untar the archive, compile and build the software and bundle it into RPM's for us. Dope.

When it's done, the RPM's will be in /usr/src/redhat/RPMS/x86_64/.
However, for all the performance goodness, we should also download Gluster's fuse code, as it uses fuse for bridging user and kernel land. It's pretty much the same process, although fuse includes a kernel module, so you're going to need the kernel sources to compile it against. In MY case, I'm running the Xen kernel, so I need the kernel-xen-devel package; you probably don't, but it doesn't hurt to install both:

yum -y install kernel-devel 
yum -y install kernel-xen-devel

Now, download, compile and package:

cd /root/gluster-3.0.5/
rpmbuild -ta fuse-2.7.4glfs11.tar.gz

Ok, now you'll have all the fuse and gluster software bundled into RPM's:

[root@yomama gluster-3.0.5]# dir -1 /usr/src/redhat/RPMS/x86_64/

Ok, so now, simply create a new (empty) roll:
cd /export/site-roll/rocks/src/roll/
rocks create new roll gluster-3.0.5 version=5.3 color=brown

I use 5.3 just since all the other rolls are 5.3, you can use whatever. Also, the color is what's displayed in the software graph, so that is optional.

Ok, so now..

cd gluster-3.0.5/
rm -rf src/
mkdir RPMS/
cp /usr/src/redhat/RPMS/x86_64/gluster* RPMS/
cp /usr/src/redhat/RPMS/x86_64/fuse* RPMS/

Ok, so almost done. We simply need a couple more files: the node and graph files.

Here's my node file. Note that I have my compute nodes grab the file off the frontend when the Gluster roll is installed on the node. The reason is that you don't have to rebuild your roll every time you make a change to the script, which you'll likely want to do as it's somewhat hardware dependent, in terms of your HD layout and also, the IP address of your frontend will be different. Once you're happy with the script, you can add it to the post section of your node file. It executes either way, so it's more of a cosmetic issue.

Edit: /export/site-roll/rocks/src/roll/gluster-3.0.5/nodes/gluster-3.0.5.xml
<?xml version="1.0" standalone="no"?>
<kickstart>

        <description>
        Gluster 3.0.5
        </description>

        <copyright>
        Copyright (c) 2010
        </copyright>

        <changelog>
        $Log: gluster-3.0.5.xml,v $
        Revision 1.17  2010 joey
        Let's rock
        </changelog>

        <post>
           # This is somewhat hardware dependent.. so I don't put it 
           # in the nodes file.
           cd /tmp/
        </post>

</kickstart>

and here's my graph file. You'll probably want to change vm-container to compute nodes.

Edit: /export/site-roll/rocks/src/roll/gluster-3.0.5/graphs/default/gluster-3.0.5.xml
<?xml version="1.0" standalone="no"?>
<graph>

        <description>
        The gluster 3.0.5 Roll
        </description>

        <copyright>
        Copyright (c) 2010
        All rights reserved. 
        </copyright>

        <changelog>
        $Log: gluster.xml,v $
        </changelog>

        <!-- Install on all compute nodes, the frontend and vm-container nodes. -->
        <edge from="vm-container-client" to="gluster-3.0.5" />
        <edge from="compute" to="gluster-3.0.5" />
        <edge from="server" to="gluster-3.0.5" />

</graph>

So, that's it for the roll. Just build it:

cd /export/site-roll/rocks/src/roll/gluster-3.0.5
make roll

Now, add your roll to the FE and enable it:

rocks add roll gluster-3.0.5-5.3-0.x86_64.disk1.iso
rocks enable roll gluster-3.0.5
rocks list roll gluster-3.0.5
cd /export/rocks/install && rocks create distro

Now, this script is something you'll have to edit and modify for YOUR environment. This is an example from one of my clusters. In this case, I'm taking all of /dev/sdb on the vm-containers and dedicating it to Gluster. In my case, it's a 1TB drive. Note that this whacks all the data on sdb, but you already know that, cause you're a rocks admin and we're reimaging all the compute nodes.

You'll clearly want to modify this to suit your needs. This is the install script referenced in the node XML above. Note that the reason we have to do this is that we need to know about all the nodes prior to configuring Gluster. New in Gluster 3.1, which was released recently, is the ability to add nodes dynamically, which will mean a fully functioning cluster share as soon as the systems are imaged. Very nice.

Edit: /var/www/html/

# Install and configure gluster
# Joey 

mkdir /gluster/
mkdir /etc/glusterfs/
mkdir /glusterfs/

# Prepare sdb to be mounted as /gluster/
/sbin/fdisk -l /dev/sdb | perl -lane 'print "Wacking: /dev/sdb$1" and system "parted /dev/sdb rm $1" if (/\/dev\/sdb(\d+)\s/)'
/sbin/parted -s /dev/sdb mkpart primary ext3 0 1000200
sleep 5
/sbin/mkfs.ext3 /dev/sdb1

# Get sdb and the glusterfs loaded into /etc/fstab
echo "/dev/sdb1               /gluster                ext3    defaults        0 1" >> /etc/fstab
echo "/etc/glusterfs/glusterfs.vol  /glusterfs/ glusterfs  defaults  0  0" >> /etc/fstab

# Create the list of volume participants.
cd /etc/glusterfs/

# Replicated
glusterfs-volgen --name glusterfs --raid 1 \
   vm-container-0-0:/gluster \
   vm-container-0-1:/gluster \
   vm-container-0-2:/gluster \
   vm-container-0-3:/gluster \
   vm-container-0-4:/gluster

cp vm-container-0-0-glusterfs-export.vol glusterfsd.vol
cp glusterfs-tcp.vol glusterfs.vol
rm -rf *glusterfs-export.vol
rm -rf *.sample
rm -rf glusterfs-tcp.vol

echo "modprobe fuse && mount -a" >> /etc/rc.local

Re-image your vm-containers/compute nodes and you'll have gluster mounted up on /glusterfs/.

Friday, October 8, 2010

How to Create a CentOS 5 AMI to run on EC2 or Eucalyptus

Building a CentOS 5 AMI

First of all, a bit of background and why I love the cloud. Previously, I have worked for large ISP's (UUNet, Level 3, British Telecom) and a couple of startups. In some cases, when I'd provision a new system, I'd go to the vendor website, spec out the box, order it, wait for it to arrive, rack and console it. Then, I'd provision switch ports, etc., and throw an OS and apps on it. Then, I'd unrack it, box it up and drive it to the data center. Re-rack, cable, test, configure switch ports, etc. It took a huge amount of effort. With EC2 and cloud computing, you run a few commands and you're up and running in, oh, 20 minutes or so. So, that's awesome, convenient, etc.

You can either run a public AMI, an image created by some random person, or you can build and run your own. Here's how to do it my way.

First, a couple of assumptions: you're running this sequence on an existing CentOS system, and you have the fuse module loaded:
modprobe fuse && lsmod | grep fuse
Need that to do loopback mounts.

Ok, let's get started. Note that this whole thing can be scripted but I figured I'd give a little explanation on each step.

So, first let's create a directory to work in:

mkdir -p ~/ami/centos/ && cd ~/ami/

Now, we're going to create an empty disk image. If you know about anaconda, this is just like the downloading stage2 process (well, kinda). Mine is going to be 2G because I like to have a bunch of stuff in there. Change count to 1024 if you want it to be smaller.

dd if=/dev/zero of=centos.fs bs=1M count=2048
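dd's bs times count gives the image size (1 MiB x 2048 = 2 GiB). A scaled-down sanity check of that arithmetic, using a throwaway 4 MiB file (hypothetical temp path):

```shell
# 4 blocks of 1 MiB should yield exactly 4194304 bytes.
dd if=/dev/zero of=/tmp/test.fs bs=1M count=4 2>/dev/null
stat -c %s /tmp/test.fs
```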

Now, create a file system on it:

mke2fs -F -j centos.fs

Good, now we're going to open it up to be written to by mounting it:

mount -o loop centos.fs ~/ami/centos/

Ok, now we're going to turn this thing into a Linux system you can boot up. The kernel likes to have this stuff. The directory ~/ami/centos/ is the root directory / on your new instance.

Make the /dev/ directory:
mkdir ~/ami/centos/dev
   /sbin/MAKEDEV -d ~/ami/centos/dev/ -x console
   /sbin/MAKEDEV -d ~/ami/centos/dev/ -x null
   /sbin/MAKEDEV -d ~/ami/centos/dev/ -x zero

Create /etc/

mkdir ~/ami/centos/etc/

Now, we're going to use yum, just the way the default installer does, to add a bunch of software to your new system. We create a yum.conf on the local file system and use it to install the OS and software.

vi ~/ami/yum.conf

Add this to the file:

[base]
name=CentOS-5.5 Base
gpgcheck=0

#released updates
[updates]
name=CentOS-5.5 Updates
gpgcheck=0

#packages used/produced in the build but not released
[addons]
name=CentOS-5.5 Addons
gpgcheck=0

[extras]
name=CentOS-5.5 Extras
gpgcheck=0

So that pulls stuff from the official CentOS mirror list. GPG checking is off because we don't have the keys installed.

Next, create the proc file system and mount it up. This is where the kernel keeps track of all the stuff it's doing.

mkdir ~/ami/centos/proc
   mount -t proc none ~/ami/centos/proc/

Ok, here's where the magic begins to happen. We're going to start loading os packages:

yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall Core

So using that command, you can install all the stuff you want. The following is my list; you probably don't need the group 'Development Tools'. That's a BUNCH of stuff you only really need if you're doing development. I like to have it on some instances; on some it never gets used. So, you should probably ignore it. A good way to see what's available to install is to run:

yum grouplist | less

and that'll tell you what package groups exist. If you're creating a DNS server, you'll want to add the 'DNS Name Server' group. Pick through the list below and install what you want/need. Not everybody is going to want JDK for example.

yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Text-based Internet'
   yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall Ruby
   yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Web Server'
   yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Development Tools'
   yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Java'
   yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'MySQL Database'
   yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y install curl wget rsync sudo mlocate lsof man tcpdump bc iptables
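If the list of groups gets long, a loop keeps it tidy. This sketch just echoes the commands it would run, so you can eyeball them first (drop the echo to execute for real; the group names come from the list above):

```shell
# Print the yum invocation for each desired package group.
for grp in 'Text-based Internet' 'Web Server' 'MySQL Database'; do
    echo yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall "$grp"
done
```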

Once all your software is installed, configure sshd. Now, this is just an old sysadmin tip: don't run sshd on port 22. It gets scanned 24x7x365 with all kinds of brute-force attacks and everything else. I always, always, always run it on another port. You can do tcpwrappers and other tricks (iptables, etc.) but just running it on some higher port saves you tons of problems.
So, in this example, I'm running on port 55000.

vi ~/ami/centos/etc/ssh/sshd_config

and add something like this:

Port 55000
Protocol 2
SyslogFacility AUTHPRIV

PermitRootLogin yes
MaxAuthTries 4
PasswordAuthentication no
ChallengeResponseAuthentication no

GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UsePAM yes

X11Forwarding yes

Subsystem sftp /usr/libexec/openssh/sftp-server

Create a resolv.conf file with valid name servers. I pretty much always use Google's, because they're anycasted, so they're going to be fast no matter where you are. And it's Google.

vi ~/ami/centos/etc/resolv.conf

and add:
# Google's public DNS servers.
nameserver 8.8.8.8
nameserver 8.8.4.4
Now, configure your motd. This is the banner that gets displayed whenever anyone logs in:

vi ~/ami/centos/etc/motd

Put whatever you want in there.. here's an example:

 ________________________________________
/ Unauthorized users will be killed and  \
\ eaten.                                 /
 ----------------------------------------
        \   ^__^
         \  (xx)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||

This is optional too but I like to have ec2tools installed on my instances:

cd ~/ami/centos/ && wget*
   chroot /root/ami/centos rpm -Uvh ec2-ami-tools.noarch.rpm

vi /etc/profile.d/
export EC2_HOME=/opt/ec2-tools
   export PATH=$EC2_HOME/bin:$PATH

Now, configure DHCP for networking:

mkdir -p /etc/sysconfig/network-scripts/
   vi /etc/sysconfig/network-scripts/ifcfg-eth0



That PERSISTENT_DHCLIENT is very nice to have. It means that if for some reason the DHCP server bites the dust, the instance keeps the lease it has. Otherwise your instance could lose its IP, at which point it's game over.
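For reference, a minimal DHCP ifcfg-eth0 along these lines (a sketch; exact keys can vary a bit by CentOS release, but PERSISTENT_DHCLIENT is the one discussed above):

```
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
PERSISTENT_DHCLIENT=1
```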

Turn on networking:
vi /etc/sysconfig/network



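The /etc/sysconfig/network file only needs a couple of lines (a sketch; the hostname is a placeholder):

```
NETWORKING=yes
HOSTNAME=localhost.localdomain
```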
Now, configure fstab to mount everything up:

vi /etc/fstab

Add the following:

/dev/sda1               /                       ext3    defaults 1 1
none                    /dev/pts                devpts  gid=5,mode=620 0 0
none                    /dev/shm                tmpfs   defaults 0 0
none                    /proc                   proc    defaults 0 0
none                    /sys                    sysfs   defaults 0 0
/dev/sdc1               /mnt                    ext3    defaults 0 0
/dev/sdc2               swap                    swap    defaults 0 0

Next, turn on the services you want.

chroot ~/ami/centos/ bash
chkconfig --level 345 sshd on
chkconfig --level 345 httpd on
exit


Almost done. I also like to add a user account with a password and sudo access, so if for some reason I can't get in with my ssh key(s), I can just login as this user and sudo to root and figure out what's going on. This step is optional but very useful:

chroot ~/ami/centos/ bash
   useradd -g users mclovin
   passwd mclovin

Then add your user to the sudoers file:

visudo
find this line:
root    ALL=(ALL)       ALL

and add your new user:
mclovin    ALL=(ALL)       ALL

Then, exit the chroot'd env, unmount proc and unmount the image:

exit
umount ~/ami/centos/proc
umount ~/ami/centos/

That's pretty much it, holmes. Once that's all done: bundle, upload and run.

Wednesday, October 6, 2010

CentOS 5 Icecast Server HowTo

So, you wanna stream some tunes, aye? Ok well here's mine, click to listen:

   Jah Radio

Please feel free to tune in, currently it's pretty much all Bob Marley. I'm using some pretty schweet software called icecast. You can pretty much stream any kind of audio you want. The icecast server takes a feed and retransmits, so you'll need a source. That can either be local MP3 files, or a stream from your desktop player, or someone else's stream. Which is what I'm doing. I'm just grabbing some other dude's stream and rebroadcasting it.

Here's how ya do that on a fresh CentOS 5 install:

First, grab the source:

Next, add the RPMforge repo (make sure it matches your architecture, goober):
rpm -Uhv

Install the dependencies:
yum -y install libvorbis-devel libogg-devel curl-devel libxml2-devel libxslt-devel libtheora-devel speex-devel

If you don't have development tools installed, you'll need them to build, compile and bundle, so just run:
yum groupinstall 'Development Tools'

Otherwise, if you already have gcc and all that jazz, run:
rpmbuild -ta icecast-2.3.2.tar.gz

That takes the icecast source and builds you an RPM which you can install. I like doing it that way so you can make a copy of that RPM and use it for other deployments.

Now, once the rpm(s) are built, you can install them like so:
rpm -Uvh /usr/src/redhat/RPMS/x86_64/icecast*.rpm

I like stuff to log to /var/log; some folks don't. Whatever, it's your call. If you're doing it my way (which you should, because I'm awesome) do:
mkdir /var/log/icecast/
chown -R nobody:nobody /var/log/icecast/

Now, move their default /etc/icecast.xml and use mine:
mv /etc/icecast.xml /etc/icecast.xml.orig

This is a fully, ready to roll config file which will have you rebroadcasting a popular shoutcast Bob Marley stream:




<alias source="/" dest="/status.xsl"/>

<loglevel>3</loglevel> <!-- 4 Debug, 3 Info, 2 Warn, 1 Error -->
<logsize>10000</logsize> <!-- Max size of a logfile -->



You'll have to edit that file slightly for the hostname. Also, find your own music here:


and update the relay stanza in the above config.
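For reference, a relay stanza in icecast.xml generally looks something like this (a sketch; the upstream host and mount points here are made-up placeholders):

```
<relay>
    <server>upstream.example.com</server>
    <port>8000</port>
    <mount>/stream</mount>
    <local-mount>/jahradio</local-mount>
    <on-demand>0</on-demand>
    <relay-shoutcast-metadata>1</relay-shoutcast-metadata>
</relay>
```

local-mount is the path your listeners hit on your server; mount is the path on the upstream you're rebroadcasting.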

Now, just make sure you have port 8000 open for TCP connections and hit:

Relax and enjoy.

By the way, I had to encode all that XML above to get it to look right in Blogger. The way to do that is to use a site like this:

Worked great.

Monday, October 4, 2010

Increasing File Handle Limits in CentOS

So, I pretty much use CentOS for all the systems I deploy to production. I futz around with others but I've been doing Redhat since, oh like 1996 and I just know it and it's solid and there's rpmforge for packages that aren't in Base and Extras.

I notice Java applications like to use a lot of file handles. Also, busy TCP network services use up lots of sockets (UDP only uses a single socket), and sockets are counted as file handles. Think a very busy Java message bus. So I have no clue why the default is 1024. If you find your applications (or system applications logging to syslog) telling you that you're out of file handles, you can check how many you have like this:

[root@elvira ~]# ulimit -n

That should be more like 65535 for a system with, oh, 8G of memory. So, you need to set it in a couple of spots. When a user logs into a system, the PAM system comes into play; the limits.conf file is a PAM configuration file and affects user processes and shells. The other file is sysctl.conf, which sets kernel values and thus affects pretty much everything else. Here's how it's done:

# Increase file descriptor limit to 65535

cat << EOF >> /etc/sysctl.conf

# Increase file handle limit 
fs.file-max = 65535
EOF

cat << EOF >> /etc/security/limits.conf

*       soft    nofile  65535
*       hard    nofile  65535
EOF

sysctl -p /etc/sysctl.conf

Before running that, check the files and make sure to delete any existing entries for the above, so you don't end up with duplicates.
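You can also make the appends idempotent with a grep guard, so duplicates can't happen in the first place. A sketch against a throwaway file (point it at the real configs once you trust it):

```shell
# Append fs.file-max only if it's not already present; running the
# guarded append twice still leaves exactly one entry.
conf=/tmp/sysctl-test.conf
: > "$conf"
grep -q '^fs.file-max' "$conf" || echo 'fs.file-max = 65535' >> "$conf"
grep -q '^fs.file-max' "$conf" || echo 'fs.file-max = 65535' >> "$conf"
grep -c '^fs.file-max' "$conf"   # prints 1
```

The same pattern works for the limits.conf nofile lines.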

Now, simply log out of your shell and log back in. You should now see that your ulimit is set to 65535.

[root@elvira~]# ulimit -n

I'm really doing it this time..

Howdy, well it's time for me to start blogging. I've been threatening myself for years. Blog or else. But I have a ton of other stuff going on and ski season is just around the corner. However.. I need a place to put stuff that I've discovered and just ramble on about stuff. Mostly this is going to be hard-core Linux stuff because I have a bunch of random notes from all over that could really help others out, the way I've learned so much from other helpful Internet citizens. Anyhoo, this is it. Welcome.

Hmm.. I just noticed that my URL sysextra looks a little provocative. Bonus!

So, some stuff about me. I used to work for UUNet, where I really lucked out. There were some really smart people there that I learned from. I started out in the support group, which kinda turned into me working in a web-hosting sysadmin group. After that, I was in product development (devo), which was really operations and development. From there it was off to Level 3 in Colorado, then a couple of start-ups, and now I work for BT on a cloud computing platform. Funny how much it reminds me of managed hosting at UUNet, only cooler.

I really love all things operations, so I always kinda do it, even when I'm in a development role. Good sysadmins like to be on the front lines because that's where the action is. I enjoy writing Perl and PHP code and all things Linux. I think that's about it for now; the kids are bugging me to get dinner going. Oh yea, I'm a big Grateful Dead fan.