The Gluster Blog

Gluster blog stories provide high-level spotlights on our users all over the world

A completely rebuildable Fedora16 Gluster development box.

Gluster
2013-03-27
Playing with gluster in a simple Fedora VM gives you a chance to mess with the translator stack, fine-tune your install, and explore other general aspects of glusterd maintenance.  Here’s an easy-to-rebuild-and-tear-down gluster development environment on KVM with Fedora 16, without depending on any particular external disk device.

Before you start…
This is for developers or people who want to go into the innards of gluster in a nice, happy, safe, non-hazardous environment.
If you are a “real” gluster user looking for tutorials, then you probably shouldn’t be here!  Instead, go to http://community.gluster.org or http://redhat.com/storage.
The workflow here is for people who:
  1. Want to build gluster from source without learning all the dark arts of building C projects.
  2. Want to be able to edit gluster source code and see the results immediately.
  3. Want to be able to easily restore their gluster environment after ruining it (whether by deleting libraries, mucking up the code, or corrupting volume/brick information).
This means that you don’t have to worry about ruining or corrupting your precious gluster installation, or corrupting volumes in a way that might affect the overall stability of gluster services.  If you corrupt something, you can rebuild this entire thing from scratch by running one shell script.

Pre-requisites

Almost anyone should be able to follow the steps here to set up a gluster development environment.  All you really need is KVM, KVM’s virt-manager, and an internet connection.  Within 20 minutes (or so) you should have a working gluster file system that has been built directly from source.  More importantly, you will be able to rebuild your entire environment from scratch at any time by rerunning the install script, which, at the onset, cleans out any possible remnants of your gluster installation for you.

BTW: There is no magic here.
To make a reproducible, fully virtualized, simple to set up and tear down gluster sandbox environment, you need to solve 4 simple problems.  It’s easy to forget the solution to any one of these, which is the only reason this post exists.
  1. Automatically, safely, non-manually re-creating brand new brick disks on the fly: you can create a loop-back mount as your gluster brick, so you don’t need to get all fancy in attaching a device as your mount point.  That is, you can simulate a “disk device” with the unix “truncate” command.  Thanks to Matt Farrellee for showing me this.
  2. Building gluster from source on a fresh fedora box with no libraries already installed: you can easily build gluster from source into RPMs, whose state is very easy to manage.  Justin Clift has a great article on this (http://www.gluster.org/community/documentation/index.php/CompilingRPMS), which is adapted into a shell script here.
  3. Purging old gluster source code and libraries so that you know your build is really and truly fresh: You can purge all of gluster’s artifacts on your system in a couple of simple commands, just to be safe, as per a recent email thread http://www.mail-archive.com/gluster-devel@nongnu.org/msg09129.html.  The logic for this is *also* included in this post.
  4. External internet access from inside of a KVM virtual network is done through a default gateway address that you need to get right.  This is necessary for cloning the source from github and acquiring the yum dependencies.  I’ve written this up here: http://jayunit100.blogspot.com/2013/03/static-ips-on-minimal-kvm-fedora-16.html.
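The truncate trick from item 1 is worth seeing in isolation.  A sparse file has a large apparent size but consumes almost no real disk space until blocks are written, which is what makes throwaway “brick devices” so cheap (the /tmp path below is just for illustration):

```shell
# Create a 1G sparse file: apparent size is 1G, actual usage is ~0 blocks.
truncate -s 1G /tmp/brick-demo.raw
stat -c '%s' /tmp/brick-demo.raw   # apparent size: 1073741824 bytes
du -k /tmp/brick-demo.raw          # actual blocks used: ~0
rm -f /tmp/brick-demo.raw
```

The full loopback script later in this post formats such a file and mounts it with `mount -o loop` to stand in for a real disk.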
This could be easier.
Ideally, however, you would realize an even more automated lifecycle for your gluster sandbox, using a modular configuration/infrastructure driver like puppet.
In fact, there has been chatter about using puppet and KVM together on the gluster community site.
So, yeah… it would be great to see the scripts and tricks cobbled together in this post completely puppetized at some point.
But for now, this should be easy enough:
1) Download the Fedora 16 ISO (updated: official fedora project link).

wget http://download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/iso/Fedora-16-x86_64-DVD.iso
2) Install a fresh fedora box using KVM 
You can do this in the linux virtual machine manager ui, which allows you to create KVM boxes. 

  1. Click “create a new virtual machine”
  2. Select “local install media”
  3. Use ISO Image, and browse to the Fedora ISO file downloaded above.
  4. Allocate some RAM (a reasonable amount, 1G+, since you’re building a complex software application in it).
  5. Create a disk image directly on the drive (or get fancy and attach extra disk devices after).  I used 30G.
  6. When the install process starts you’ll see the Fedora splash screens come up.  Choose “Minimal” as the installation type – that way your VM is super lean, starts up fast, and doesn’t waste any resources on fancy windowing.  You can use the basic terminal that KVM gives you, or else you can SSH into it from your host to run the install commands.

3) Start the VM installation process
The Fedora installation window will pop up – you can use the default settings, and (suggested) select “Minimal” install.

4)  Setup networking so that you have a static IP address, and so that your VM is capable of pinging the outside world.  
  • I’ve described this in the previous post: http://jayunit100.blogspot.com/2013/03/static-ips-on-minimal-kvm-fedora-16.html.  This involves editing the ifcfg-eth0 file (the file that defines the eth0 properties), including its GATEWAY entry.
  • Before you install gluster, you’ve got to make sure you have the GATEWAY field working on KVM.  In my case I used KVM’s NAT networking, and found out (the hard way) that the gateway for KVM boxes is proxied as an internal ip address inside of the guest machine; in my case, the gateway was 192.168.122.1.
    • When your guests are in a “virtual network”, you bridge to them through libvirtd, and that is done through “192.168.122.1”.
    • On RHEL you can set these values using the “system-config-network” tool, which is visual (you can install it via yum install system-config-network-tui).
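As a sketch, a minimal ifcfg-eth0 for a guest on KVM’s default NAT network might look like the following.  The IPADDR is an example you would pick yourself; the GATEWAY is libvirt’s default NAT bridge address:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.122.50        # example static address on the NAT subnet
NETMASK=255.255.255.0
GATEWAY=192.168.122.1        # libvirt's default NAT gateway
DNS1=192.168.122.1
```

After editing, restart networking (service network restart) and confirm the outside world is reachable with something like ping -c1 download.fedoraproject.org.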

5)  Install a few essentials on your bare bones Fedora box (if you chose the minimal installation): 
      yum install openssh-clients   # <-- this is "scp"
      yum install tree              # <-- you'll want this to view directories
      yum install git               # we're pulling gluster from github
      # below: gluster build dependencies
      yum install libtool autoconf automake flex bison openssl openssl-devel libibverbs-devel readline-devel libxml2-devel make
      # this is a new one...
      yum install librdmacm-devel

6) Run this script!
read -p "Purging any trace of gluster from your system, you could even lose data, hit any key to continue." 

killall -9 -r gluster
yum remove $(rpm -qa | grep gluster)
rm -rf /var/lib/glusterd
rm -rf /etc/glusterfs

echo "Now the really dangerous part is over..., starting the install - getting libraries"

echo "Now switching to /tmp"
cd /tmp/
sudo yum -y install gcc python-devel python-setuptools
sudo easy_install python-swiftclient
sudo yum -y install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo yum -y install python-webob1.0 python-paste-deploy1.5 python-sphinx10
sudo yum -y install git autoconf automake bison dos2unix flex fuse-devel libaio-devel \
libibverbs-devel libtool libxml2-devel lvm2-devel make openssl-devel pkgconfig \
python-devel python-eventlet python-netifaces python-paste-deploy python-simplejson \
python-sphinx python-webob pyxattr readline-devel rpm-build systemtap-sdt-devel tar
sudo yum -y install rpcbind


sleep 1
echo "Cloning down the source"
git clone git://git.gluster.org/glusterfs
# the checkout has to happen inside the clone, so cd first
cd glusterfs
echo "Checking out 3.4 --- update this branch in the line below (git branch -a to list) or press enter to continue."
read
git checkout release-3.4
git pull

sleep 1
echo "Now starting the build"
./autogen.sh
./configure --enable-fusermount

#Rather than "make install", we make an rpm distribution. Suggested by
#Jeff Darcy and others as the right way to install.
make dist

cd extras/LinuxRPM
make glusterrpms

sleep 2

echo "Done building gluster. Now installing."
 
#Order matters here:
rpm -ivh glusterfs-*git-1.fc16.x86_64.rpm 
rpm -ivh glusterfs-*devel-*git-1.fc16.x86_64.rpm
rpm -ivh glusterfs-*fuse-*git-1.fc16.x86_64.rpm
rpm -ivh glusterfs-*server-*git-1.fc16.x86_64.rpm

echo "DONE. Note, if you get a final error message, it can be ignored for a dev environment."

echo "starting glusterd now !"
service glusterd start
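Once the script finishes, a quick sanity check is worth running.  The guard below makes it safe to run even on a box where the build failed part-way (the exact version string will depend on the branch you built):

```shell
# Sanity-check the install: is the binary on the PATH, and does the CLI respond?
if command -v glusterfs >/dev/null 2>&1; then
    glusterfs --version | head -1   # e.g. a "glusterfs 3.4..." banner
    gluster volume info || true     # "No volumes present" on a fresh box
else
    echo "glusterfs not on PATH - the RPM install did not complete"
fi
```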
 
7) Finally, create a loop-back mount system – a super easy trick for simulating a real “disk” device by just allocating a big file using the truncate command.

You can run this script with the sample inputs in the echo statements below.  

Note: The first couple of lines unmount and remove the brick and mount point directory, so please please please don’t use this for anything important. 
#Delete a mount and brick, and recreate them using a loopback.
#This script is good for highly simplified development

if [ "$#" -ne 3 ]
  then
     echo "Usage: mount location, brick location, volume name."
     echo "For example : /mnt/glusterfs /mnt/mybrick1 MyVolume"
     exit 1 # exit shell script
fi

MNT=$1
BRICK=$2
VOL=$3

read -p "WARNING: DELETING $MNT and $BRICK ... Press a key to continue!"
umount $MNT
rm -rf $MNT
mkdir -p $MNT

umount $BRICK
rm -rf $BRICK
mkdir -p $BRICK

echo "Now creating a file ${BRICK}.raw"

truncate -s 1G ${BRICK}.raw ;

#NOTE: ext4 is not ideal... this SHOULD be mkfs.xfs instead of ext4;
#ext4 is not recommended by the gluster team!
#-F: don't prompt about formatting a regular (non-block-device) file
mkfs.ext4 -F ${BRICK}.raw ;

#Here is where the magic happens.

echo "Now mounting the loopback!"
mount -o loop ${BRICK}.raw ${BRICK} ;

echo "Now creating the volume which writes to the loopback brick"
gluster volume create $VOL $(hostname):$BRICK

echo "Now starting the volume..."
sleep 1
gluster volume start $VOL

sleep 1
echo "Finally : mounting gluster to $MNT"
mount -t glusterfs $(hostname):$VOL $MNT
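Saved as (say) rebuild-volume.sh – a hypothetical filename, pick your own – the script above takes three positional arguments, and a run can be verified as sketched here (the verification commands need gluster installed and glusterd running):

```shell
# The three positional arguments the script expects, in order:
set -- /mnt/glusterfs /mnt/mybrick1 MyVolume
echo "mount point : $1"   # where the gluster volume gets FUSE-mounted
echo "brick dir   : $2"   # backed by the ${2}.raw loopback file
echo "volume name : $3"
# After a real run:  ./rebuild-volume.sh /mnt/glusterfs /mnt/mybrick1 MyVolume
# then verify with:
#   gluster volume info MyVolume   # Status: Started, one brick
#   df -h /mnt/glusterfs           # ~1G, type fuse.glusterfs
#   echo hi > /mnt/glusterfs/probe && ls /mnt/mybrick1/   # file lands on the brick
```

Because the first lines of the script wipe and recreate both directories, rerunning it gives you a pristine volume every time – which is the whole point of this sandbox.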

 
