Wednesday, October 20, 2010

How To Create A Gluster Roll for Rocks Clusters

If you're here you probably already know what Rocks is and why it's awesome. In this post, I'm going to download the Gluster 3.0.5 code and build a roll I can use to deploy to all my compute nodes. First off, Gluster is a super cool product. My only complaint is that it's missing some tools for troubleshooting and status checks, so I've written some Nagios checks that have been working out quite well; I get alerts if any badness goes down.
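Those checks aren't published here, but the general shape is easy to sketch. Here's a minimal, hypothetical Nagios-style check in the same spirit (the function name is mine; it just verifies a glusterfsd process is alive and uses the standard Nagios exit codes):

```shell
# Hypothetical Nagios-style liveness check for glusterfsd (a sketch,
# not my actual plugin). Nagios convention: 0 = OK, 2 = CRITICAL.
check_glusterfsd() {
    count=$(pgrep -x glusterfsd 2>/dev/null | wc -l)
    if [ "$count" -gt 0 ]; then
        echo "GLUSTER OK: $count glusterfsd process(es) running"
        return 0
    else
        echo "GLUSTER CRITICAL: no glusterfsd process found"
        return 2
    fi
}

check_glusterfsd || true   # on a healthy storage node this returns 0
```

Drop something like that into your Nagios config as a remote check and you'll hear about dead bricks before your users do.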

The thing I LOVE about Gluster? No meta-nodes! Seriously, that is KEY. If you manage a storage platform and deal with meta-nodes, you know what I'm talking about. Clustered meta-nodes? Even more annoying. You have replicated storage; why would you need a clustered meta-node?! Stuff all that information into the storage cloud and spread the bits around. Not to mention, meta-nodes require a physical RAID, etc., which runs, oh, about $10k. Another thing I like about Gluster? You can run it between EC2 instances - no meta-nodes! That's another posting: how to get Gluster going on a bunch of cloud instances. So anyway, let's jump right in, shall we? Also, sorry about the formatting for my code and cmds. Blogger might not be the best platform for me. Oh well....

Log in to your frontend.

In order to compile Gluster, you'll need to install the following:

yum -y install flex libibverbs-devel


Next, download the Gluster source:
cd /root/
mkdir gluster-3.0.5/
cd gluster-3.0.5/
wget http://download.gluster.com/pub/gluster/glusterfs/3.0/3.0.5/glusterfs-3.0.5.tar.gz

Next, we're going to turn that code into some RPMs to stuff in our roll.

rpmbuild -ta glusterfs-3.0.5.tar.gz

That rpmbuild will untar the archive, compile the software and bundle it into RPMs for us. Dope.

When it's done, the RPMs will be in

/usr/src/redhat/RPMS/x86_64/

However, for all the performance goodness, we should also download Gluster's fuse code, since Gluster uses FUSE to bridge user and kernel land. It's pretty much the same process, although fuse is a kernel module, so you're going to need the kernel sources to compile it against. In MY case, I'm running the Xen kernel, so I need the kernel-xen-devel package; you probably don't, but it doesn't hurt to install both:

yum -y install kernel-devel 
OR
yum -y install kernel-xen-devel
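If you're scripting this, you can pick the right package off the running kernel instead of guessing. A small sketch (the helper name is mine):

```shell
# Pick the matching kernel headers package for the running kernel:
# Xen-flavored kernels (e.g. "2.6.18-194.el5xen") need kernel-xen-devel.
pick_kernel_devel() {
    case "$1" in
        *xen*) echo kernel-xen-devel ;;
        *)     echo kernel-devel ;;
    esac
}

# e.g.: yum -y install "$(pick_kernel_devel "$(uname -r)")"
pick_kernel_devel "$(uname -r)"
```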

Now, download, compile and package:

cd /root/gluster-3.0.5/
wget http://download.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz
rpmbuild -ta fuse-2.7.4glfs11.tar.gz

Ok, now you'll have all the fuse and gluster software bundled into RPMs:

[root@yomama gluster-3.0.5]# ls -1 /usr/src/redhat/RPMS/x86_64/
fuse-2.7.4glfs11-1.x86_64.rpm
fuse-devel-2.7.4glfs11-1.x86_64.rpm
fuse-libs-2.7.4glfs11-1.x86_64.rpm
glusterfs-client-3.0.5-1.x86_64.rpm
glusterfs-common-3.0.5-1.x86_64.rpm
glusterfs-devel-3.0.5-1.x86_64.rpm
glusterfs-server-3.0.5-1.x86_64.rpm

Ok, so now, simply create a new (empty) roll:
cd /export/site-roll/rocks/src/roll/
rocks create new roll gluster-3.0.5 version=5.3 color=brown

I use 5.3 just because all the other rolls here are 5.3; you can use whatever. Also, the color is what's displayed in the software graph, so that's optional.

Ok, so now..

cd gluster-3.0.5/
rm -rf src/
mkdir RPMS/
cp /usr/src/redhat/RPMS/x86_64/gluster* RPMS/
cp /usr/src/redhat/RPMS/x86_64/fuse* RPMS/

Ok, so almost done. We just need a couple of files: the node file and the graph file.

Here's my node file. Note that I have my compute nodes grab the gluster_setup.sh file off the frontend when the Gluster roll is installed on the node. That way, you don't have to rebuild your roll every time you change the gluster_setup.sh script, which you'll likely want to do: it's somewhat hardware dependent in terms of your HD layout, and the IP address of your frontend will be different. Once you're happy with the gluster_setup.sh script, you can add it to the post section of your node file. It executes either way, so it's more of a cosmetic issue.

Edit: /export/site-roll/rocks/src/roll/gluster-3.0.5/nodes/gluster-3.0.5.xml
<?xml version="1.0" standalone="no"?>

<kickstart>


        <description>
        Gluster 3.0.5
        </description>

        <copyright>
        Copyright (c) 2010
        </copyright>

        <changelog>
        $Log: gluster-3.0.5.xml,v $
        Revision 1.17  2010 joey
        Let's rock 
        </changelog>



        <package>glusterfs-client</package>
        <package>glusterfs-common</package>
        <package>glusterfs-devel</package>
        <package>glusterfs-server</package>

        <package>fuse</package>
        <package>fuse-devel</package>
        <package>fuse-libs</package>
        <package>libibverbs</package>
        <package>openib</package>

        <post>
           # This is somewhat hardware dependent.. so I don't put it 
           # in the nodes file.
           cd /tmp/
           wget http://10.19.0.1/gluster_setup.sh
           sh gluster_setup.sh
        </post>

</kickstart>

and here's my graph file. You'll probably want to change vm-container to compute nodes.

Edit: /export/site-roll/rocks/src/roll/gluster-3.0.5/graphs/default/gluster-3.0.5.xml
<?xml version="1.0" standalone="no"?>

<graph>

        <description>
        The gluster 3.0.5 Roll
        </description>

        <copyright>
        Copyright (c) 2010
        All rights reserved. 
        </copyright>

        <changelog>
        $Log: gluster.xml,v $
        </changelog>

        <!-- Install on all compute nodes, the frontend and vm-container nodes. -->
        <edge from="vm-container-client" to="gluster-3.0.5" />
        <edge from="compute" to="gluster-3.0.5" />
        <edge from="server" to="gluster-3.0.5" />

</graph>


So, that's it for the roll. Just build it:

cd /export/site-roll/rocks/src/roll/gluster-3.0.5
make roll

Now, add your roll to the FE and enable it:

rocks add roll gluster-3.0.5-5.3-0.x86_64.disk1.iso
rocks enable roll gluster-3.0.5
rocks list roll gluster-3.0.5
cd /export/rocks/install && rocks create distro


Now, this script is something you'll have to edit and modify for YOUR environment. This is an example from one of my clusters. In this case, I'm taking all of /dev/sdb on the vm-containers and dedicating it to Gluster; in my case, it's a 1TB drive. Note that this whacks all the data on sdb, but you already know that because you're a Rocks admin and we're reimaging all the compute nodes.

You'll clearly want to modify this to suit your needs. This is the install script referenced in the node XML above. Note that the reason we have to do this is that we need to know about all the nodes prior to configuring Gluster. New in the recently released Gluster 3.1 is the ability to add nodes dynamically, which will mean a fully functioning cluster share as soon as the systems are imaged. Very nice.

Edit: /var/www/html/gluster_setup.sh

#!/bin/sh
#
# Install and configure gluster
# Joey

mkdir /gluster/
mkdir /etc/glusterfs/
mkdir /glusterfs/

# Prepare sdb to be mounted as /gluster/
/sbin/fdisk -l /dev/sdb | perl -lane 'print "Whacking: /dev/sdb$1" and system "parted /dev/sdb rm $1" if (/\/dev\/sdb(\d+)\s/)'
/sbin/parted -s /dev/sdb mkpart primary ext3 0 1000200
sleep 5
/sbin/mkfs.ext3 /dev/sdb1

# Get sdb and the glusterfs loaded into /etc/fstab
echo "/dev/sdb1               /gluster                ext3    defaults        0 1" >> /etc/fstab
echo "/etc/glusterfs/glusterfs.vol  /glusterfs/ glusterfs  defaults  0  0" >> /etc/fstab

# Create the list of volume participants.
cd /etc/glusterfs/

# Replicated
glusterfs-volgen --name glusterfs --raid 1 \
   vm-container-0-0:/gluster \
   vm-container-0-1:/gluster \
   vm-container-0-2:/gluster \
   vm-container-0-3:/gluster \
   vm-container-0-4:/gluster \
   vm-container-0-5:/gluster

cp vm-container-0-0-glusterfs-export.vol glusterfsd.vol
cp glusterfs-tcp.vol glusterfs.vol
rm -rf *glusterfs-export.vol
rm -rf *.sample
rm -rf glusterfs-tcp.vol

echo "modprobe fuse && mount -a" >> /etc/rc.local

Re-image your vm-containers/compute nodes and you'll have Gluster mounted up on /glusterfs/.

Friday, October 8, 2010

How to Create a CentOS 5 AMI to run on EC2 or Eucalyptus

Building a CentOS 5 AMI

First of all, a bit of background on why I love the cloud. Previously, I worked for large ISPs (UUNet, Level 3, British Telecom) and a couple of startups. In some cases, when I'd provision a new system, I'd go to the vendor website, spec out the box, order it, wait for it to arrive, rack and console it. Then I'd provision switch ports, etc., and throw an OS and apps on it. Then I'd unrack it, box it up and drive it to the data center. Re-rack, cable, test, configure switch ports, etc. It took a huge amount of effort. With EC2 and cloud computing, you run a few commands and you're up and running in, oh, 20 minutes or so. So, that's awesome, convenient, etc.

You can either run a public AMI (an image created by some random person) or you can build and run your own. Here's how to do it my way.

First, a couple of assumptions: you're running this sequence on an existing CentOS system, and you have the fuse module loaded:
modprobe fuse && lsmod | grep fuse
(Strictly speaking, it's the loop driver, not fuse, that handles the loopback mounts below, but it doesn't hurt to have fuse loaded too.)

Ok, let's get started. Note that this whole thing can be scripted but I figured I'd give a little explanation on each step.

So, first let's create a directory to work in:

mkdir -p ~/ami/centos/ && cd ~/ami/

Now we're going to create an empty disk image. If you know about anaconda, this is just like the stage2 download process (well, kinda). Mine is going to be 2G because I like to have a bunch of stuff in there; change count to 1024 if you want a smaller one.

dd if=/dev/zero of=centos.fs bs=1M count=2048
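Side note: if you don't want to wait for dd to write 2G of zeros, a sparse file gets you the same image much faster (assuming your tools are happy with sparse loopback images, which mke2fs and mount generally are):

```shell
# Create the same 2G image as a sparse file: seek past 2048 1M blocks
# without actually writing them. Apparent size is 2G; real disk usage ~0.
dd if=/dev/zero of=centos.fs bs=1M count=0 seek=2048
ls -lsh centos.fs   # first column = real blocks used, then apparent size
```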

Now, create a file system on it:

mke2fs -F -j centos.fs

Good, now we're going to open it up to be written to by mounting it:

mount -o loop centos.fs ~/ami/centos/

Ok, now we're going to turn this thing into a Linux system you can boot. The kernel likes to have this stuff. The directory ~/ami/centos/ is the root directory (/) of your new instance.

Make the /dev/ directory:
mkdir ~/ami/centos/dev
/sbin/MAKEDEV -d ~/ami/centos/dev/ -x console
/sbin/MAKEDEV -d ~/ami/centos/dev/ -x null
/sbin/MAKEDEV -d ~/ami/centos/dev/ -x zero

Create /etc/

mkdir ~/ami/centos/etc/

Now we're going to use yum - just the way the default installer does - to add a bunch of software to your new system. We create a yum.conf on the local file system and use it to install the OS and software.

vi ~/ami/yum.conf

Add this to the file:

[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
exclude=*-debuginfo
gpgcheck=0
obsoletes=1
pkgpolicy=newest
distroverpkg=redhat-release
tolerant=1
exactarch=1
reposdir=/dev/null
metadata_expire=1800
[base]
name=CentOS-5.5 Base
baseurl=http://mirror.centos.org/centos/5.5/os/x86_64/
gpgcheck=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos5.5
priority=1
protect=1
#released updates
[update]
name=CentOS-5.5 Updates
baseurl=http://mirror.centos.org/centos/5.5/updates/x86_64/
gpgcheck=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos5.5
priority=1
protect=1
#packages used/produced in the build but not released
[addons]
name=CentOS-5.5 Addons
baseurl=http://mirror.centos.org/centos/5.5/addons/x86_64/
gpgcheck=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos5.5
priority=1
[extras]
name=CentOS 5.5 Extras $releasever $basearch
baseurl=http://mirror.centos.org/centos/5.5/extras/x86_64/
enabled=1


So that pulls stuff from the official CentOS mirror list. GPG checking is off because we don't have the keys installed.

Next, create a mount point for the proc file system and mount it up. This is where the kernel keeps track of all the stuff it's doing.

mkdir ~/ami/centos/proc
mount -t proc none ~/ami/centos/proc/

Ok, here's where the magic begins to happen. We're going to start loading os packages:

yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall Core

So using that command, you can install all the stuff you want. The following is my list; you probably don't need the 'Development Tools' group. That's a BUNCH of stuff you only really need if you're doing development. I like to have it on some instances; on others it never gets used. So you should probably skip it. A good way to see what's available to install is to run:

yum grouplist | less

and that'll tell you what package groups exist. If you're creating a DNS server, you'll want to add the 'DNS Name Server' group. Pick through the list below and install what you want/need; not everybody is going to want the JDK, for example.

yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Text-based Internet'
yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall Ruby
yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Web Server'
yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Development Tools'
yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'Java'
yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y groupinstall 'MySQL Database'
yum -c ~/ami/yum.conf --installroot=/root/ami/centos -y install curl wget rsync sudo mlocate lsof man tcpdump bc iptables


Once all your software is installed, configure sshd. This is just an old sysadmin tip: don't run sshd on port 22. It gets scanned 24x7x365 with all kinds of brute-force attacks and everything else. I always, always, always run it on another port. You can do TCP wrappers and other tricks (iptables, etc.), but just running it on some higher port saves you tons of problems.
So, in this example, I'm running on port 55000.

vi ~/ami/centos/etc/ssh/sshd_config

and add something like this:

Port 55000
Protocol 2
SyslogFacility AUTHPRIV

PermitRootLogin yes
MaxAuthTries 4
PasswordAuthentication no
ChallengeResponseAuthentication no

GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UsePAM yes

AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL
X11Forwarding yes

Subsystem sftp /usr/libexec/openssh/sftp-server


Create a resolv.conf file with valid name servers. I pretty much always use Google's because they're anycasted, so they're going to be fast no matter where you are, and it's Google.

vi ~/ami/centos/etc/resolv.conf

and add:
# Google's public DNS servers.
nameserver 8.8.8.8
nameserver 8.8.4.4


Now, configure your motd. This is the banner that gets displayed whenever anyone logs in:

vi ~/ami/centos/etc/motd

Put whatever you want in there.. here's an example:

________________________________________
/ Unauthorized users will be killed and  \
\ eaten.                                 /
 ----------------------------------------
        \   ^__^
         \  (xx)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||


This is optional too but I like to have ec2tools installed on my instances:

cd ~/ami/centos/ && wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.noarch.rpm
chroot /root/ami/centos rpm -Uvh ec2-ami-tools.noarch.rpm


vi ~/ami/centos/etc/profile.d/ec2tools.sh

export EC2_HOME=/opt/ec2-tools
export PATH=$EC2_HOME/bin:$PATH

Now, configure DHCP for networking (note these paths are inside the image):

mkdir -p ~/ami/centos/etc/sysconfig/network-scripts/
vi ~/ami/centos/etc/sysconfig/network-scripts/ifcfg-eth0

Enter:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
USERCTL=yes
PEERDNS=yes
IPV6INIT=no
PERSISTENT_DHCLIENT=yes

That PERSISTENT_DHCLIENT is very nice to have. It means that if for some reason the DHCP server bites the dust, the instance keeps the lease it has. Otherwise your instance could lose its IP, at which point it's game over.

Turn on networking:
vi ~/ami/centos/etc/sysconfig/network

add:

NETWORKING=yes

Now, configure the image's fstab to mount everything up:

vi ~/ami/centos/etc/fstab

Add the following:

/dev/sda1               /                       ext3    defaults 1 1
none                    /dev/pts                devpts  gid=5,mode=620 0 0
none                    /dev/shm                tmpfs   defaults 0 0
none                    /proc                   proc    defaults 0 0
none                    /sys                    sysfs   defaults 0 0
/dev/sdc1               /mnt                    ext3    defaults 0 0
/dev/sdc2               swap                    swap    defaults 0 0

Next, turn on the services you want.

chroot ~/ami/centos/ bash
chkconfig --level 345 sshd on
chkconfig --level 345 httpd on

exit 

Almost done. I also like to add a user account with a password and sudo access, so if for some reason I can't get in with my ssh key(s), I can just login as this user and sudo to root and figure out what's going on. This step is optional but very useful:

chroot ~/ami/centos/ bash
useradd -g users mclovin
passwd mclovin

Then add your user to the sudoers file:

visudo

find this line:
root    ALL=(ALL)       ALL

and add your new user:
mclovin    ALL=(ALL)       ALL

Then, exit the chroot'd env:
exit

Then unmount proc (which we mounted earlier) and the image itself:
umount ~/ami/centos/proc/
umount ~/ami/centos/

That's pretty much it, holmes. Once everything is unmounted, just bundle, upload and run.

Wednesday, October 6, 2010

CentOS 5 Icecast Server HowTo

So, you wanna stream some tunes, aye? Ok well here's mine, click to listen:

   Jah Radio

Please feel free to tune in; currently it's pretty much all Bob Marley. I'm using some pretty schweet software called Icecast. You can stream pretty much any kind of audio you want. The icecast server takes a feed and retransmits it, so you'll need a source. That can be local MP3 files, a stream from your desktop player, or someone else's stream - which is what I'm doing. I'm just grabbing some other dude's stream and rebroadcasting it.

Here's how ya do that on a fresh CentOS 5 install:


First, grab the source:
wget http://downloads.xiph.org/releases/icecast/icecast-2.3.2.tar.gz

Next, add the RPMforge repo (make sure it matches your architecture, goober):
rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm

Install the dependencies:
yum -y install libvorbis-devel libogg-devel curl-devel libxml2-devel libxslt-devel libtheora-devel speex-devel

If you don't have development tools installed, you'll need them to build, compile and bundle, so just run:
yum groupinstall 'Development Tools'

Otherwise, if you already have gcc and all that jazz, run:
rpmbuild -ta icecast-2.3.2.tar.gz

That takes the icecast source and builds you an RPM which you can install. I like doing it that way so you can make a copy of that RPM and use it for other deployments.

Now, once the RPMs are built, you can install them like so:
rpm -Uvh /usr/src/redhat/RPMS/x86_64/icecast*.rpm

I like stuff to log to /var/log; some folks don't. Whatever, it's your call. If you're doing it my way (which you should, because I'm awesome), do:
mkdir /var/log/icecast/
chown -R nobody:nobody /var/log/icecast/

Now, move their default /etc/icecast.xml out of the way and use mine:
mv /etc/icecast.xml /etc/icecast.xml.orig

This is a fully ready-to-roll config file (save it as /etc/icecast.xml) which will have you rebroadcasting a popular Shoutcast Bob Marley stream:

<icecast>
<limits>
<clients>100</clients>
<sources>2</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>10</source-timeout>
<burst-on-connect>1</burst-on-connect>
<burst-size>65535</burst-size>
</limits>

<authentication>

<source-password>nanana</source-password>
<relay-password>nanana</relay-password>
<admin-user>admin</admin-user>
<admin-password>nanana</admin-password>
</authentication>

<hostname>tunes.cloud21cn.com</hostname>
<listen-socket>
<port>8000</port>
</listen-socket>
<relay>
<server>88.191.16.115</server>
<port>8000</port>
<mount>/</mount>
<local-mount>/JahRadio</local-mount>
<on-demand>0</on-demand>
<relay-shoutcast-metadata>0</relay-shoutcast-metadata>
</relay>
<fileserve>1</fileserve>
<paths>
<basedir>/usr/share/icecast</basedir>
<logdir>/var/log/icecast</logdir>
<webroot>/usr/share/icecast/web</webroot>
<adminroot>/usr/share/icecast/admin</adminroot>
<alias source="/" dest="/status.xsl"/>
</paths>

<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<loglevel>3</loglevel> <!-- 4 Debug, 3 Info, 2 Warn, 1 Error -->
<logsize>10000</logsize> <!-- Max size of a logfile -->
</logging>

<security>

<chroot>0</chroot>
<changeowner>
<user>nobody</user>
<group>nobody</group>
</changeowner>
</security>
</icecast>

You'll have to edit that file slightly for the hostname. Also, find your own music here:

Shoutcast

and update the relay stanza in the above config.

Now, just make sure you have port 8000 open for TCP connections and hit:

http://your.url.com:8000/

Relax and enjoy.

By the way, I had to entity-encode all that XML above to get it to look right in Blogger. The way to do that is to use a site like this: http://centricle.com/tools/html-entities/

Worked great.
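If you'd rather not paste configs into a random website, the same encoding is a sed one-liner:

```shell
# Entity-encode text for pasting into Blogger: escape &, <, >.
# The ampersand has to be done first, or it double-encodes the others.
printf '%s\n' '<limits><clients>100</clients></limits>' \
  | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g'
```

Pipe the whole config file through that instead of the printf and you're set.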

Monday, October 4, 2010

Increasing File Handle Limits in CentOS

So, I pretty much use CentOS for all the systems I deploy to production. I futz around with others, but I've been doing Red Hat since, oh, like 1996; I just know it, it's solid, and there's RPMforge for packages that aren't in Base and Extras.

I notice Java applications like to use a lot of file handles. Also, busy TCP network services use up lots of sockets, which count as file handles (UDP only uses a single socket). Think of a very busy Java message bus... so I have no clue why the default is 1024. If your applications (or system applications logging to syslog) tell you that you're out of file handles, you can check how many you have like this:


[root@elvira ~]# ulimit -n
1024


That should be more like 65535 for a system with, oh, 8G of memory. So, you need to set it in a couple of spots. When a user logs into a system, the PAM system comes into play; the limits.conf file is a PAM configuration file and affects user processes and shells. The other file is sysctl.conf, which sets kernel values and thus affects pretty much everything else. Here's how it's done:

#!/bin/sh
#
# Increase file descriptor limit to 65535

cat<< EOF >> /etc/sysctl.conf

# Increase file handle limit 
fs.file-max = 65535

EOF

cat<< EOF >> /etc/security/limits.conf

*       soft    nofile  65535
*       hard    nofile  65535

EOF

sysctl -p /etc/sysctl.conf

Before running that, check the files and make sure to delete any existing entries for the above, so you don't end up with duplicates.
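To make re-runs safe, you can wrap each append in a quick grep so the script only adds a line it hasn't added before. A sketch (shown against a scratch file so you can try it harmlessly; add_once is a name I made up):

```shell
# Append a line to a config file only if that exact line isn't
# already present (-x = whole line, -F = fixed string, no regex).
add_once() {
    line=$1; file=$2
    grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

cfg=/tmp/limits.test
: > "$cfg"
add_once '*       soft    nofile  65535' "$cfg"
add_once '*       soft    nofile  65535' "$cfg"   # no-op the second time
wc -l < "$cfg"   # -> 1
```

Swap /tmp/limits.test for /etc/security/limits.conf (and likewise for sysctl.conf) once you trust it.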

Now, simply log out of your shell and log back in. You should now see that your ulimit is set to 65535.



[root@elvira~]# ulimit -n
65535
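To confirm a particular long-running process actually picked up the new limit (ulimit only shows your own shell), you can read it straight out of /proc. "self" here is the reading process; substitute your app's PID:

```shell
# Per-process view of the fd limit: every process exposes its limits
# under /proc/<pid>/limits on Linux.
grep 'Max open files' /proc/self/limits
```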

I'm really doing it this time..

Howdy, well, it's time for me to start blogging. I've been threatening myself for years: blog or else. But I have a ton of other stuff going on, and ski season is just around the corner. However, I need a place to put stuff I've discovered and just ramble on. Mostly this is going to be hard-core Linux stuff, because I have a bunch of random notes from all over that could really help others out, the way I've learned so much from other helpful Internet citizens. Anyhoo, this is it. Welcome.


Hmm.. I just noticed that my URL sysextra looks a little provocative. Bonus!


So, some stuff about me. I used to work for UUNet, where I really lucked out; there were some really smart people there that I learned from. I started out in the support group, which kinda turned into me working in a web-hosting sysadmin group. After that, I was in product development (devo), which was really operations and development. From there it was off to Level 3 in Colorado, then a couple of start-ups, and now I work for BT on a cloud computing platform. Funny how much it reminds me of managed hosting at UUNet, only cooler.


I really love all things operations, so I always kinda do it, even when I'm in a development role. Good sysadmins like to be on the front lines, because that's where the action is. I enjoy writing Perl and PHP code and all things Linux. I think that's about it for now; the kids are bugging me to get dinner going. Oh yea, I'm a big Grateful Dead fan.