The thing I LOVE about Gluster? No meta-nodes! Seriously, that is KEY. If you manage a storage platform and deal with meta-nodes, you know what I'm talking about. Clustered meta-nodes? Even more annoying. You have replicated storage, so why would you need a clustered meta-node?! Stuff all that information into the storage cloud and spread the bits around. Not to mention, meta-nodes require a physical RAID, etc., which runs you, oh, about $10k. Another thing I like about Gluster? You can run it between EC2 instances - no meta-nodes! That's another posting, how to get Gluster going on a bunch of cloud instances. So anyway, let's jump right in, shall we? Also, sorry about the formatting for my code and commands; Blogger might not be the best platform for me. Oh well...
Log in to your frontend.
In order to compile Gluster, you'll need to install the following:
yum -y install flex libibverbs-devel
Next, download the Gluster source:
cd /root/
mkdir gluster-3.0.5/
cd gluster-3.0.5/
wget http://download.gluster.com/pub/gluster/glusterfs/3.0/3.0.5/glusterfs-3.0.5.tar.gz
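If you want a quick sanity check that the tarball came down intact before building, just list its contents:

tar -tzf glusterfs-3.0.5.tar.gz | head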
Next, we're going to turn that code into some RPMs to stuff in our roll.
rpmbuild -ta glusterfs-3.0.5.tar.gz
That rpmbuild will untar the archive, compile and build the software, and bundle it into RPMs for us. Dope.
When it's done, the RPMs will be in
/usr/src/redhat/RPMS/x86_64/
However, for all the performance goodness, we should also download Gluster's FUSE code, since Gluster uses FUSE to bridge user and kernel land. It's pretty much the same process, although FUSE is a kernel module, so you're going to need the kernel sources to compile it against. In MY case, I'm running the Xen kernel, so I need the kernel-xen-devel package. You probably don't, but it doesn't hurt to install both:
yum -y install kernel-devel
# or, for the Xen kernel:
yum -y install kernel-xen-devel
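Not sure which kernel you're on? Something like this (just a sketch) picks the matching -devel package based on uname:

# install the -devel package that matches the running kernel
if uname -r | grep -q xen; then
    yum -y install kernel-xen-devel
else
    yum -y install kernel-devel
fi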
Now, download, compile and package:
cd /root/gluster-3.0.5/
wget http://download.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz
rpmbuild -ta fuse-2.7.4glfs11.tar.gz
Ok, now you'll have all the fuse and gluster software bundled into RPMs:
[root@yomama gluster-3.0.5]# dir -1 /usr/src/redhat/RPMS/x86_64/
fuse-2.7.4glfs11-1.x86_64.rpm
fuse-devel-2.7.4glfs11-1.x86_64.rpm
fuse-libs-2.7.4glfs11-1.x86_64.rpm
glusterfs-client-3.0.5-1.x86_64.rpm
glusterfs-common-3.0.5-1.x86_64.rpm
glusterfs-devel-3.0.5-1.x86_64.rpm
glusterfs-server-3.0.5-1.x86_64.rpm
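If you're curious what actually ended up in any of those packages, rpm can query them directly, for example:

rpm -qpi /usr/src/redhat/RPMS/x86_64/glusterfs-server-3.0.5-1.x86_64.rpm
rpm -qpl /usr/src/redhat/RPMS/x86_64/glusterfs-server-3.0.5-1.x86_64.rpm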
Ok, so now, simply create a new (empty) roll:
cd /export/site-roll/rocks/src/roll/
rocks create new roll gluster-3.0.5 version=5.3 color=brown
I use version 5.3 just because all my other rolls are 5.3; you can use whatever. Also, the color is what's displayed in the software graph, so that part is optional.
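If you want to match whatever your other rolls are using, just list them:

rocks list roll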
Ok, so now..
cd gluster-3.0.5/
rm -rf src/
mkdir RPMS/
cp /usr/src/redhat/RPMS/x86_64/gluster* RPMS/
cp /usr/src/redhat/RPMS/x86_64/fuse* RPMS/
Ok, so almost done. We just need a couple of files: the node and graph files.
Here's my node file. Note that I have my compute nodes grab the gluster_setup.sh file off the frontend when the Gluster roll is installed on the node. The reason: you don't have to rebuild your roll every time you make a change to the gluster_setup.sh script, which you'll likely want to do since it's somewhat hardware dependent - your HD layout and your frontend's IP address will be different from mine. Once you're happy with the gluster_setup.sh script, you can add it to the post section of your node file. It executes either way, so it's more of a cosmetic issue.
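Once you've written gluster_setup.sh (it's further down) and dropped it in /var/www/html/ on the frontend, it doesn't hurt to check that a node can actually fetch it - 10.19.0.1 is MY frontend's private IP, yours will differ:

wget -O - http://10.19.0.1/gluster_setup.sh | head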
Edit: /export/site-roll/rocks/src/roll/gluster-3.0.5/nodes/gluster-3.0.5.xml
<?xml version="1.0" standalone="no"?> <kickstart> <description> Gluster 3.0.5 </description> <copyright> Copyright (c) 2010 </copyright> <changelog> $Log: gluster-3.0.5.xml,v $ Revision 1.17 2010 joey Let's rock </changelog> <package>glusterfs-client</package> <package>glusterfs-common</package> <package>glusterfs-devel</package> <package>glusterfs-server</package> <package>fuse</package> <package>fuse-devel</package> <package>fuse-libs</package> <package>libibverbs</package> <package>openib</package> <post> # This is somewhat hardware dependent.. so I don't put it # in the nodes file. cd /tmp/ wget http://10.19.0.1/gluster_setup.sh sh gluster_setup.sh </post> </kickstart>
and here's my graph file. You'll probably want to change vm-container to compute nodes.
Edit: /export/site-roll/rocks/src/roll/gluster-3.0.5/graphs/default/gluster-3.0.5.xml
<?xml version="1.0" standalone="no"?> <graph> <description> The gluster 3.0.5 Roll </description> <copyright> Copyright (c) 2010 All rights reserved. </copyright> <changelog> $Log: gluster.xml,v $ </changelog> # Install on all compute nodes, the frontend and vm-container nodes. <edge from="vm-container-client" to="gluster-3.0.5" /> <edge from="compute" to="gluster-3.0.5" /> <edge from="server" to="gluster-3.0.5" /> </graph>
So, that's it for the roll. Just build it:
cd /export/site-roll/rocks/src/roll/gluster-3.0.5
make roll
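When the build finishes, the ISO lands in that same roll directory - it's the one we add in the next step:

ls -1 gluster-3.0.5-*.iso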
Now, add your roll to the FE and enable it:
rocks add roll gluster-3.0.5-5.3-0.x86_64.disk1.iso
rocks enable roll gluster-3.0.5
rocks list roll gluster-3.0.5
cd /export/rocks/install && rocks create distro
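To convince yourself the packages will actually get pushed out, you can dump a node's kickstart profile and grep for them (compute-0-0 here is just an example hostname):

rocks list host profile compute-0-0 | grep -i gluster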
Now, this script is something you'll have to edit and modify for YOUR environment. This is an example from one of my clusters. In this case, I'm taking all of /dev/sdb on the vm-containers and dedicating it to Gluster - in my case, it's a 1TB drive. Note that this whacks all the data on sdb, but you already know that because you're a Rocks admin and we're reimaging all the compute nodes.
You'll clearly want to modify this to suit your needs. This is the install script referenced in the node XML above. Note that the reason we have to do it this way is that we need to know about all the nodes prior to configuring Gluster. New in Gluster 3.1, which was released recently, is the ability to add nodes dynamically - which will mean a fully functioning cluster share as soon as the systems are imaged. Very nice.
Edit: /var/www/html/gluster_setup.sh
# Install and configure gluster
# Joey
mkdir /gluster/
mkdir /etc/glusterfs/
mkdir /glusterfs/

# Prepare sdb to be mounted as /gluster/
/sbin/fdisk -l /dev/sdb | perl -lane 'print "Wacking: /dev/sdb$1" and system "parted /dev/sdb rm $1" if (/\/dev\/sdb(\d+)\s/)'
/sbin/parted -s /dev/sdb mkpart primary ext3 0 1000200
sleep 5
/sbin/mkfs.ext3 /dev/sdb1

# Get sdb and the glusterfs loaded into /etc/fstab
echo "/dev/sdb1 /gluster ext3 defaults 0 1" >> /etc/fstab
echo "/etc/glusterfs/glusterfs.vol /glusterfs/ glusterfs defaults 0 0" >> /etc/fstab

# Create the list of volume participants.
cd /etc/glusterfs/

# Replicated
glusterfs-volgen --name glusterfs --raid 1 \
  vm-container-0-0:/gluster \
  vm-container-0-1:/gluster \
  vm-container-0-2:/gluster \
  vm-container-0-3:/gluster \
  vm-container-0-4:/gluster \
  vm-container-0-5:/gluster

cp vm-container-0-0-glusterfs-export.vol glusterfsd.vol
cp glusterfs-tcp.vol glusterfs.vol
rm -rf *glusterfs-export.vol
rm -rf *.sample
rm -rf glusterfs-tcp.vol

echo "modprobe fuse && mount -a" >> /etc/rc.local
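One more thought: instead of hard-coding the vm-container list like I did, you could build the brick list from what Rocks already knows about your nodes. A rough sketch, assuming "rocks list host" prints one "hostname:" per line and your bricks live on the vm-containers:

# build the glusterfs-volgen arguments from the Rocks host list
BRICKS=$(rocks list host | awk -F: '/^vm-container/ {print $1 ":/gluster"}')
glusterfs-volgen --name glusterfs --raid 1 $BRICKS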
Re-image your vm-containers/compute nodes and you'll have gluster mounted up on /glusterfs/ on every node.
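Once a node comes back up, a quick check that everything mounted where it should:

df -h /gluster /glusterfs
mount | grep gluster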