instant, ephemeral gluster clusters with vagrant

Gluster
2013-10-01
more virtual big data love w/ gluster, vagrant, and mattf's little fake disk hack 🙂

For those of you who need to spin up virtual gluster clusters for development and testing:

Just finished creating a vagrantized, fully automated, totally rebuildable and teardownable two-node Fedora 19 gluster setup and shared it on the forge!

It uses Fedora 19, but since it's all Vagrant powered, you don't need to grab a distro or an ISO or anything: just clone the git repo, run vagrant up, and let Vagrant automagically pull down and manage your base box and set up the rest for you.

clone it here: https://forge.gluster.org/vagrant

So what does this do? It means you can spin up 2 VMs, from scratch, by installing Vagrant and then, literally, typing:

git clone git@forge.gluster.org:vagrant/fedora19-gluster.git
cd fedora19-gluster
ln -s Vagrantfile_cluster Vagrantfile
vagrant up --provision

Does it work? Yes! After watching it spin up, you can ssh into the cluster by typing:

vagrant ssh gluster1

And destroy the same two-node cluster by running:

vagrant destroy

How it works

To grok the basics of how it works:

– First: Check out the Vagrantfile. That file defines some of the basic requirements, in particular the static IPs that the peers have.
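If you just want a quick peek at those addresses, a grep does it. The output shown in the comments below is only an assumed example of how the Vagrantfile might be written (10.10.10.12 matches the address used later in this post; 10.10.10.11 is a guess for the first peer):

# Show the static private-network IPs assigned to the two peers.
# The commented "output" is an assumption, not the literal file contents.
grep -n "private_network" Vagrantfile
#   gluster1.vm.network :private_network, ip: "10.10.10.11"
#   gluster2.vm.network :private_network, ip: "10.10.10.12"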

– How does the cluster install get coordinated? Next, look at the provisioning scripts referenced from the Vagrantfile. Those run after the basic box is set up. There is a little bit of hacking to ensure that the peer probing is only done once the final box is set up, but otherwise it's pretty straightforward. (Actually, that could be de-hacked by simply giving the 2nd node its own provision script in the Vagrantfile; I only just learned on the Vagrant IRC channel that you can have multiple provisioners.)
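To make that concrete, here is a minimal sketch of the "only the last node probes" idea; the hostname check, peer IP, brick paths, and volume name are assumptions for illustration, not the literal contents of the repo's scripts:

# Sketch of a provision script snippet (names, IPs, and paths are assumptions).
# Only the last node does the probe and volume creation, so by the time it
# runs, both peers are already up.
if [ "$(hostname -s)" = "gluster2" ]; then
    gluster peer probe 10.10.10.11
    sleep 5
    gluster volume create vol1 replica 2 \
        10.10.10.11:/export/brick1 10.10.10.12:/export/brick1 force
    gluster volume start vol1
fi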

– What about installing gluster? Thankfully, Fedora now ships all the gluster RPMs as supported packages for standard F19, so it was super easy to yum install them.
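On a stock F19 box that boils down to something like the following; the exact package set is my assumption of the usual pieces, and the repo's scripts may pull a slightly different list:

# Install the gluster server and FUSE client straight from the Fedora 19 repos,
# then start the management daemon. (Package list is an assumption.)
yum install -y glusterfs glusterfs-server glusterfs-fuse
systemctl enable glusterd
systemctl start glusterd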

– Finally, the bricks: you will also see that there is a special little methodology for creating the gluster “bricks” on each machine: a simple trick for setting up a fake disk using the truncate command [+1 to spinningmatt.wordpress.com at Red Hat for showing me this]!
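A minimal sketch of that fake-disk trick looks something like this; the file path, size, and brick mount point are assumptions, not necessarily what the repo's scripts use:

# Create a sparse "fake disk" file, format it, and loop-mount it as a brick.
# (Path, size, and mount point below are assumptions for illustration.)
truncate -s 5G /home/vagrant/fake-disk.img   # sparse file: takes no real space up front
mkfs.xfs -i size=512 /home/vagrant/fake-disk.img
mkdir -p /export/brick1
mount -o loop /home/vagrant/fake-disk.img /export/brick1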

Example

# Below: ssh into the first node, create a file, and ls it on the other node to confirm that the cluster is working.
[jays-macbook]$ vagrant ssh gluster1

[vagrant@gluster1 ~]$ sudo touch /mnt/glusterfs/a

[vagrant@gluster1 ~]$ ssh 10.10.10.12

[vagrant@gluster2 ~]$ ls /mnt/glusterfs/

Help wanted to maintain this and to make development clusters for different gluster use cases:

– bug reporting
– testing functionality of different tuning/config options
– testing hadoop interop

Lots of cool stuff we can do with this. Right now I'm attempting to bridge it with a vagrantized pseudo-distributed HBase setup here: https://github.com/jayunit100/vagrant-hbase. Stay tuned!
