The Gluster Blog

Gluster New User Guide


DRAFT! (2012-08-09)


This document is intended to give you hands-on experience with Gluster by guiding you, step by step, through setting it up for the first time.  If you are looking to get right into things, you can take a look at our quick start guide.  After you deploy Gluster by following these steps, we recommend that you read the Gluster Admin Guide to learn how to administer Gluster and how to select a volume type that fits your needs.  Also, be sure to enlist the help of the Gluster community via the IRC channel or Q&A section.  We want you to be successful in as short a time as possible.


Before we begin, let’s talk about what Gluster is, dispel a few myths and misconceptions, and define a few terms.  This will help you avoid some of the issues that others encounter most frequently.

1) What is Gluster?
Gluster is a distributed, scale-out filesystem that allows rapid provisioning of additional storage based on your storage consumption needs.  It incorporates automatic failover as a primary feature.  All of this is accomplished without a centralized metadata server.

2) What is Gluster without making me learn an extra glossary of terminology?

  • Gluster is an easy way to provision your own NAS storage backend using almost any hardware you choose.
  • You can start with as much or as little storage as you want, and if you need more later, adding it takes just a few steps.
  • You can configure failover automatically, so that if a server goes down, you don’t lose access to the data.  No manual steps are required for failover.  When you fix the failed server and bring it back online, you don’t have to do anything to get the data back except wait.  In the meantime, the most current copy of your data keeps getting served from the node that stayed running.
  • You can build a clustered filesystem in a matter of minutes…it is trivially easy for basic setups
  • It takes advantage of what we refer to as “commodity hardware”, which means, we run on just about any hardware you can think of, from that stack of decomm’s and gigabit switches in the corner no one can figure out what to do with (how many license servers do you really need, after all?), to that dream array you were speccing out online.  Don’t worry, I won’t tell your boss.
  • It takes advantage of commodity software too.  No need to mess with kernels or fine tune the OS to a tee.  We run on top of most unix filesystems, with XFS and ext4 being the most popular choices.  We do have some recommendations for more heavily utilized arrays, but these are simple to implement and you probably have some of these configured already anyway.
  • Gluster data can be accessed from just about anywhere – You can use traditional NFS, SMB/CIFS for Windows clients, or our own native GlusterFS (a few additional packages are needed on the client machines for this, but as you will see, they are quite small).
  • There are even more advanced features than this, but for now we will focus on the basics.
  • It’s not just a toy.  Gluster is enterprise ready, and commercial support is available if you need it.  It is used in some of the most taxing environments like media serving, natural resource exploration, medical imaging, and even as a filesystem for Big Data.

3) Is Gluster going to work for me and what I need it to do?
Most likely, yes.  People use Gluster for all sorts of things.  You are encouraged to ask around in our IRC channel or Q&A forums to see if anyone has tried something similar.  That being said, there are a few places where Gluster is going to need more consideration than others.
– Accessing Gluster from SMB/CIFS is often going to be slow by most people’s standards.  If your users only access the storage moderately, it most likely won’t be an issue for you.  On the other hand, with enough Gluster servers in the mix, some people have seen better performance with us than with other solutions, due to the scale-out nature of the technology.
– Gluster does not support so-called “structured data”, meaning live SQL databases.  Of course, using Gluster to back up and restore a database would be fine.
– Gluster traditionally works better with file sizes of at least 16KB (with a sweet spot around 128KB or so).

4) How many billions of dollars is it going to cost to setup a cluster?  Don’t I need redundant networking, super fast SSD’s, technology from Alpha Centauri delivered by men in black, etc…?
I have never seen anyone spend even close to a billion, unless they got the rust-proof coating on the servers.  You don’t seem like the type that would get bamboozled like that, so have no fear.  For the purposes of this tutorial, if your laptop can run two VM’s with 1GB of memory each, you can get started testing, and the only thing you are going to pay for is coffee (assuming the coffee shop doesn’t make you pay them back for the electricity to power your laptop).

If you want to test on bare metal, since Gluster is built with commodity hardware in mind, and because there is no centralized meta-data server, a very simple cluster can be deployed with two basic servers (2 CPU’s, 4GB of RAM each, 1 Gigabit network).  This is sufficient to have a nice file share or a place to put some nightly backups.  Gluster is deployed successfully on all kinds of disks, from the lowliest 5400 RPM SATA to the mightiest 1.21 gigawatt SSD’s.  The more performance you need, the more consideration you will want to put into how much hardware to buy, but the great thing about Gluster is that you can start small and add on as your needs grow.

5) OK, but if I add servers on later, don’t they have to be exactly the same?
In a perfect world, sure.  Having the hardware be the same means less troubleshooting when the fires start popping up.  But plenty of people deploy Gluster on mix-and-match hardware, and do so successfully.

6) Can we get started now?
Sure, let me get some coffee first

Getting Started

In this tutorial, I will cover different options for getting a cluster up and running.  Here is a rundown of the steps we need to do.  If you already have a test environment with two or more servers, you can skip to the Really Really Quick Start Guide <placeholder for link to bare minimum getting started steps>

To start, we will go over some common things you will need to know for setting up Gluster.

Next, choose the method you want to use to set up your first cluster:
– Within a virtual machine
– To bare metal servers
– To an EC2 instance in Amazon

Finally, we will install Gluster, create a few volumes, and test using them.

Common Setup Criteria

No matter where you will be installing Gluster, it helps to understand what the moving parts are.  Let’s start with some key concepts:

First, it is important to understand that GlusterFS isn’t really a filesystem in and of itself.  It aggregates existing filesystems into one (or more) big chunks so that data being written into or read out of Gluster gets distributed across multiple hosts simultaneously.  This means that you can use space from any host that you have available.  Typically, XFS is recommended, but Gluster can be used with other filesystems as well.  EXT4 is the most common choice when XFS isn’t used, but you can (and many, many people do) use another filesystem that suits you.  Now that we understand that, we can define a few of the common terms used in Gluster.


  • A “trusted pool” refers to the hosts in a given Gluster cluster.
  • A “node” or “server” refers to any server that is part of a trusted pool.  In general, all nodes discussed are assumed to be in the same trusted pool.
  • A “brick” is used to refer to any device (really this means filesystem) that is being used for Gluster storage.
  • An “export” refers to the mount path of the brick(s) on a given server, for example, /export/brick1

Other notes:

  • For this test, you can get away with using /etc/hosts entries for the two nodes.  However, when you move from this basic setup to using Gluster in production, correct DNS entries (forward and reverse) and NTP are essential.  If you don’t believe me, watch what happens to the face of any senior sysadmin when you ask whether or not you REALLY need them.
  • When you install the Operating System, do not format the Gluster storage disks!  We will use specific settings later on when we setup for Gluster.  If you are testing with a single disk (not recommended), make sure to carve out a free partition or two to be used by Gluster later, so that you can format or reformat at will during your testing.
  • Firewalls are great, except when they aren’t.  For storage servers, being able to operate in a trusted environment without firewalls can mean huge gains in performance, and is recommended.  If you absolutely must have a firewall in place, you can follow the instructions in the install and admin guides.
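If you do need a firewall for testing, here is a rough sketch of the rules involved for a Gluster 3.3 server.  The port ranges below are an assumption on my part and change between Gluster versions, so double check them against the install guide for your release:

```shell
# Hypothetical firewall rules for a Gluster 3.3 server on a trusted network.
# Port numbers change between Gluster versions -- verify against your install guide.
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
iptables -A INPUT -p tcp --dport 24009:24014 -j ACCEPT   # brick ports, one per brick starting at 24009
iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT   # built-in NFS server
iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper (for NFS clients)
iptables -A INPUT -p udp --dport 111 -j ACCEPT
```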

Step 1, Method 1 – Setting up in virtual machines

To set up Gluster using virtual machines, you will need at least two virtual machines with at least 1GB of RAM each.  You may be able to test with less, but most users will find it too slow for their tastes.  The particular virtualization product you use is a matter of choice.  Platforms I have used to test on include Xen, VMware ESX and Workstation, VirtualBox, and KVM.  For the purposes of this article, all steps assume KVM, but the concepts are expected to be simple to translate to other platforms as well.  The article assumes you know the particulars of how to create a virtual machine and have installed a 64-bit Linux distribution already.

Create or clone two VM’s, with the following setup on each:
– 2 disks using the VirtIO driver, one for the base OS and one that we will use as a Gluster “brick”.  You can add more later to try testing some more advanced configurations, but for now let’s keep it simple.
Note: If you have ample space available, consider allocating all the disk space at once.
– 2 NICs using the VirtIO driver.  The second NIC is not strictly required, but will be used to demonstrate setting up a separate network for client and server traffic.
Note: Attach each NIC to a separate network.
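As a sketch of what that looks like with KVM, a hypothetical virt-install invocation might be the following.  All names, sizes, and paths here are made up; adjust them for your storage pool, install media, and network names:

```shell
# Hypothetical VM definition: two VirtIO disks, two VirtIO NICs on separate networks.
virt-install \
  --name gluster1 \
  --ram 1024 \
  --disk path=/var/lib/libvirt/images/gluster1-os.qcow2,size=8,bus=virtio \
  --disk path=/var/lib/libvirt/images/gluster1-brick.qcow2,size=10,bus=virtio \
  --network network=default,model=virtio \
  --network network=storage,model=virtio \
  --cdrom /var/lib/libvirt/images/distro-x86_64.iso
```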

Other notes:
If you clone the VM, make sure that Gluster has not already been installed.  Gluster generates a UUID to “fingerprint” each system, so cloning a previously deployed system will result in errors later on.
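If you do end up cloning a machine that already had Gluster installed, one common recovery (the state path is an assumption based on Gluster 3.3; verify it for your version) is to remove the generated state on the clone so a fresh UUID is created the next time the daemon starts:

```shell
GLUSTERD_STATE=/var/lib/glusterd        # where Gluster 3.3 keeps its state (assumed path)

service glusterd stop
rm -f $GLUSTERD_STATE/glusterd.info     # the file holding the host UUID
service glusterd start                  # a new UUID is generated on startup
```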

Once these are prepared, you are ready to move on to Step 2.

Step 1, Method 2 – Setting up on physical servers

To set up Gluster on physical servers, you just need two servers of very modest specifications (2 CPU’s, 2GB of RAM, 1GbE).  Since we are dealing with physical hardware here, keep in mind that what we are showing is for testing purposes.  In the end, forces beyond your control (aka, your bosses’ boss…) may mean that the “just for a quick test” environment you set up gets turned into production despite your kicking and screaming against it.  For such eventualities, it can be prudent to deploy your test environment as much like a production environment as possible.  That being said, here are some of the things you would want to do to get it as close as possible from the get-go:

– Make sure DNS and NTP are setup, correct, and working
– If you have access to a backend storage network, use it!  10GBE or InfiniBand are great if you have access to them, but even a 1GBE backbone can help you get the most out of your deployment.  Make sure that the interfaces you are going to use are also in DNS, since we will be using the hostnames when we deploy Gluster.
– When it comes to disks, the more the merrier.  Although you could technically fake things out with a single disk, there would be performance issues as soon as you tried to do any real work on the servers
– A lot of users wonder about whether to use RAID on the physical disks or not.  The short answer is “yes”.  A somewhat more detailed version is described here.
– With the explosion of commodity hardware, you don’t need to be a hardware expert these days to deploy a server.  Although this is generally a good thing, it also means that BIOS settings are commonly ignored.  A few things to consider:

  •     Most manufacturers enable power saving mode by default.  This is a great idea for servers that do not have high performance requirements, but for the average storage server the performance impact of the power savings is not a reasonable trade-off.
  •     Newer processors have lots of nifty features!  Enhancements in virtualization, advanced checksumming, and NUMA are just a few to mention.  On the other hand, some manufacturers ship hardware with inexplicably safe settings…so that blazing fast 10GBE card you were so thrilled about installing ends up being crippled by a default 1x setting put in place on the PCI-E bus.  Most manufacturers show all the BIOS settings, including the defaults, right in the manual.  It only takes a few minutes to download, and you don’t even have to power off the server unless you need to make changes; more and more boards include the functionality to make changes on the fly without powering the box off.  One word of caution, of course: don’t go too crazy.  Fretting over each tiny little detail and setting is usually not worth the time, and the more changes you make, the more you need to document and maintain later.  Try to find the happy balance between time spent managing the hardware (which ideally should be as close to zero as possible after the initial setup) and the expected gains you get back from it.
  •     Some hardware really is better than others.  Without pointing fingers anywhere specifically, it is often true that onboard components are not as robust as add-ons.  As a general rule, you can safely delegate the on-board hardware to things like the management network for the NIC’s, and for installing the OS.  At least twice a year, you should check the manufacturer’s website for bulletins about your hardware.  Critical performance issues are often resolved with a simple driver or firmware update, but if you don’t check, you won’t know.

Once you have setup the servers and installed the OS, you are ready to move on to Step 2.

Step 1, Method 3 – Deploying in AWS

Deploying in Amazon can be one of the fastest ways to get up and running with Gluster.  Of course, the steps here will work with other cloud platforms as well.

– Deploy at least two instances.  For testing, you can use micro instances (I go as far as using spot instances).  Debates rage on what size instance to use in production, and there is really no correct answer.  As with most things, the real answer is “whatever works for you”, where the trade-offs between cost and performance are balanced in a continual dance of trying to make your project successful while making sure there is enough money left over in the budget for you to get that sweet new ping pong table in the break room.
– For cloud platforms, your data is wide open right from the start.  As such, you shouldn’t allow open access to all ports in your security groups if you plan to put a single piece of even the least valuable information on the test instances.  By least valuable, I mean “Cash value of this coupon is 1/100th of 1 cent” kind of least valuable.  Don’t be the next breaking news flash about an inconsiderate company that allowed its data to fall into the hands of the baddies.  See Step 2 for the minimum ports you will need open to use Gluster.
– You can use the free “ephemeral” storage for the Gluster bricks during testing, but make sure to use some form of protection against data loss when you move to production. Typically this means EBS backed volumes or using S3 to periodically back up your data bricks.

Other notes:
– In production, it is recommended to replicate your VM’s across multiple zones.  For purpose of this tutorial, it is overkill, but if anyone is interested in this please let us know since we are always looking to write articles on the most requested features and questions.
– Using EBS volumes and Elastic IP’s is also recommended in production.  For testing, you can safely ignore these as long as you are aware that the data could be lost at any moment, so make sure your test deployment is just that, testing only.
– Performance can fluctuate wildly in a cloud environment.  If performance issues are seen, there are several possible strategies, but keep in mind that this is the perfect place to take advantage of the scale-out capability of Gluster.  While it is not true in all cases that deploying more instances will necessarily result in a “faster” cluster, in general you will see that adding more nodes means more performance for the cluster overall.
– If a node reboots, you will typically need to do some extra work to get Gluster running again using the default EC2 configuration.  If a node is shut down, it can mean absolute loss of the node (depending on how you set things up).  This is well beyond the scope of this document, but is discussed in any number of AWS related forums and posts.  Since I found out the hard way myself (oh, so you read the manual every time?!), I thought it worth at least mentioning.
– Amazon EC2 instances have two IP’s by default: the world-facing, DNS-resolvable one, and an internal IP on a private address.  When setting up Gluster for this tutorial, it may be tempting to use the internal network address.  Internal IP’s in AWS can cause failures for Gluster in a few ways, however, and so should not be used.  Best practices for this are beyond the scope of this article, but a best practices for AWS article is in the works.  For clarity, the IP or hostname you ssh into is the external address, and the 10.x.x.x address you see when you run ifconfig in the instance is the internal one.  Many thanks to Semiosis for his detailed feedback on this.
– Don’t forget, there are cases where the internal IP address can change.  If you think about that for a second and you don’t feel your spidey sense tingling, wait until we set things up…at some point you will stop for a moment or two to recover from your head shaking back and forth in disbelief

Once you have both instances up, we are ready to move on to Step 2.

Step 2 – Installing Gluster

Now that you have your platform configured, you are ready to move on to the step you were waiting for, getting this thing working!

Really Really Quick Start Guide

Here are the bare minimum steps you need to get Gluster up and running:

  • Have at least two nodes with a 64 bit OS and a working network connection
  • Format and mount the bricks
  • Add entries to /etc/fstab
  • Install Gluster packages on both nodes
  • Run the gluster peer probe command from one node to the other nodes (do not peer probe the first node)
  • Configure your Gluster volume
  • Test using the volume
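For the truly impatient, those bullet points boil down to a short command sequence.  Everything below is a sketch: the hostnames, device path, and brick path are assumptions drawn from the rest of this guide, so substitute your own values, and run the format/mount steps on both nodes:

```shell
# Assumed values -- replace with your own hostnames and device paths.
NODE2=node2                # the other server in the pool
BRICK_DEV=/dev/sdb         # dedicated (empty!) disk for the brick
BRICK_MNT=/export/brick1   # mount point for the brick

# On BOTH nodes: format the brick with 512-byte inodes and mount it.
mkfs.xfs -i size=512 $BRICK_DEV
mkdir -p $BRICK_MNT
echo "$BRICK_DEV $BRICK_MNT xfs defaults 1 2" >> /etc/fstab
mount $BRICK_MNT

# On the FIRST node only: build the pool and a replicated volume.
gluster peer probe $NODE2
gluster volume create gv0 replica 2 node1:$BRICK_MNT $NODE2:$BRICK_MNT
gluster volume start gv0
```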

Not-quite-as-quick-and-more-verbose-step-by-step-quick-ish-by-comparison Guide

Step 1 –  Have at least two nodes with a 64 bit OS and a working network connection

Step 2 – Format and mount the bricks

Gluster makes use of extended attributes (xattr’s) to work its magic.  For some filesystems, though, the default inode size isn’t sufficient to hold all of that extra data at once.  The end result becomes performance loss, which in most cases will be significant.  To avoid such unpleasantness, we can provision the bricks like this:
mkfs.xfs -i size=512 /dev/sdb
Here, we assume that the first non-OS disk is at /dev/sdb; replace this with the actual path to the brick on your servers.  For virtual hosts this can vary widely, depending on which platform you use and how you have things set up.  Common values include /dev/xvdb, /dev/vdb, and /dev/hdb.
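If you are not sure which device is the spare disk, it is worth listing the block devices before formatting anything; the brick disk is typically the one with no mountpoint:

```shell
# List block devices with their sizes and mount points before running mkfs.
lsblk -o NAME,SIZE,MOUNTPOINT
```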

Next, we want to mount the volume we just created.  The mount path can be just about anything you want, but here are some tips to help you avoid headaches down the line.

1) The mount point should have the same name on all servers in the trusted pool.  For example, if we have two servers with two Gluster bricks each, we would want to create the same directory structure on each node:
mkdir -p /export/brick{1,2}
2) Now we need to mount the bricks (otherwise, data would be written directly to the root partition which is only a good idea if you don’t really like working anymore…)
mount /dev/sdb /export/brick1

Step 3 – Add an entry to /etc/fstab
We want to make sure that the device comes up automatically if the server is rebooted.  Open the editor of your choice (apparently vi is better than emacs? I myself am neutral on the subject, of course) and add the following to /etc/fstab:
/dev/sdb /export/brick1 xfs defaults 1 2
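A quick sanity check that the fstab entry is right (before you find out the hard way at the next reboot) is to unmount the brick and let mount read it back from fstab:

```shell
umount /export/brick1   # undo the hand mount from the previous step
mount -a                # mount everything listed in /etc/fstab
df -h /export/brick1    # the brick should show as mounted from /dev/sdb
```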

Step 4 –  Installing Gluster
For all distributions, if you will be using InfiniBand, add the appropriate RDMA package to the installations.
For RPM-based systems, yum is used as the install method in order to satisfy external dependencies such as compat-readline5.

4a) For Debian:

Download the packages

wget -nd -nc -r -A.deb

Install the Gluster packages (do this on both servers)

dpkg -i glusterfs_3.3.0-1_amd64.deb

4b) For Ubuntu:

Download the packages
wget -nd -nc -r -A.deb
Note: Packages exist for Ubuntu 10 and 11 as well.

Install the Gluster packages (do this on both servers)

dpkg -i glusterfs_3.3.0-1_amd64.deb

4c) For Red Hat/CentOS (at least one user mentioned this works in SuSE as well but I haven’t tested myself):

Download the packages

wget -l 1 -nd -nc -r -A.rpm

Install the Gluster packages (do this on both servers)

yum install glusterfs-3.3.0-1.el6.x86_64.rpm glusterfs-fuse-3.3.0-1.el6.x86_64.rpm glusterfs-geo-replication-3.3.0-1.el6.x86_64.rpm glusterfs-server-3.3.0-1.el6.x86_64.rpm

4d) For Fedora
Download the packages:

wget -l 1 -nd -nc -r -A.rpm

Install the Gluster packages (do this on both servers)

yum install glusterfs-3.3.0-1.fc16.x86_64.rpm glusterfs-fuse-3.3.0-1.fc16.x86_64.rpm glusterfs-geo-replication-3.3.0-1.fc16.x86_64.rpm glusterfs-server-3.3.0-1.fc16.x86_64.rpm

Step 5 – Configure the trusted pool
Remember that the trusted pool is the term used to describe a cluster of nodes in Gluster.  Choose a server to be the “primary” server.   This is where you will generally want to run commands from for this tutorial.  Keep in mind that running a Gluster-specific command on one server in the cluster applies to the whole trusted pool, so there is no need to repeat it on each server.

gluster peer probe (hostname of the other server in the cluster, or IP address if you don’t have DNS or /etc/hosts entries)

Notice that running `gluster peer status` from the second node shows that the first node has already been added.
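As a concrete sketch (the hostname here is hypothetical; use your second server’s name, or its IP if you have no DNS or /etc/hosts entries):

```shell
NODE2=node2.example.com     # hypothetical hostname of the second server

# From the first node, invite the second node into the trusted pool:
gluster peer probe $NODE2

# Verify from either node; pool membership is symmetric:
gluster peer status
```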

Step 6 – Set up a Gluster volume
The most basic Gluster volume type is a “Distribute only” volume (also referred to as a “pure DHT” volume if you want to impress the folks at the water cooler).  This type of volume simply distributes the data evenly across the available bricks in a volume.  It is faster than a “replicated” volume, but isn’t as popular, since it doesn’t give you two of the most sought-after features of Gluster: multiple copies of the data, and automatic failover if something goes wrong.  To set up a replicated volume:

gluster volume create gv0 replica 2

Breaking this down into pieces, the first part says to create a gluster volume named gv0 (the name is arbitrary, gv0 was chosen simply because it’s less typing).
Next, we tell it to make the volume a replica volume, and to keep a copy of the data on at least 2 bricks at any given time.  Since we only have two bricks total, this means each server will house a copy of the data.
Lastly, we specify which nodes to use, and which bricks on those nodes.  The order here is important when you have more bricks…it is possible (as of the most current release as of this writing, Gluster 3.3) to specify the bricks in such a way that both copies of the data reside on a single node.  This would make for an embarrassing explanation to your boss when your bulletproof, completely redundant, always-on super cluster comes to a grinding halt because a single point of failure occurred.
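Putting those pieces together, the full command might look like the following.  The hostnames and brick paths are assumptions carried over from the earlier steps; note how the bricks alternate between nodes so that each replica pair spans both servers:

```shell
VOL=gv0   # volume name used throughout this tutorial

# Hypothetical hostnames from the earlier steps; one brick per node.
gluster volume create $VOL replica 2 node1:/export/brick1 node2:/export/brick1

# With four bricks, keep alternating nodes so no replica pair
# lands on a single server:
#   gluster volume create gv0 replica 2 \
#       node1:/export/brick1 node2:/export/brick1 \
#       node1:/export/brick2 node2:/export/brick2
```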

Now, we can check to make sure things are working as expected by running `gluster volume info`:

Volume Name: gv0
Type: Replicate
Volume ID: 8bc3e96b-a1b6-457d-8f7a-a91d1d4dc019
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp

This shows us essentially what we just specified during the volume creation.  The one thing to mention is the “Status”.  A status of “Created” means that the volume has been created, but hasn’t yet been started, which would cause any attempt to mount the volume to fail.

Step 7 – Testing your swanky new Gluster deploy

Now that we have a volume created, what on earth are we supposed to do with it?  These last steps show how to actually start using the cluster.

First, let’s start the volume.

gluster volume start gv0  && gluster volume info gv0

You should now see that the Status has changed from “Created” to “Started”.

Next, let’s create a mount point to test with.  To help you understand the concept of how Gluster works, I recommend using a different machine than your Gluster servers, but there is no reason it won’t work from the servers if you choose to do so.

mkdir -p /mnt/gluster/gv0

The path of the mount point is, again, arbitrary.  I added a /mnt/gluster directory instead of simply choosing /mnt/gv0 to denote that this is a Gluster mount, but if you want to save the extra typing, feel free.  The one mistake that seems to get made fairly often by new users is to have the client mount path live alongside the export path.  For example, having the bricks mounted at /export/brick1 and having the volume mount point be in /export/gluster.  There is nothing wrong with this from a purely technical standpoint; it will work.  But trying to keep that straight at 2 in the morning when your pager goes off might not be as easy as you think.

When mounting Gluster, you have three different protocols at your disposal.  Keep in mind that for NFS and CIFS, Gluster does not have “automatic failover” available without configuring additional software such as CTDB or ucarp.  If having failover available without any manual intervention is important to you, consider using the GlusterFS native client.  Although it requires additional packages, they are small and easy to maintain, and for most users will be less of a hassle than setting up your own failover manually.

– GlusterFS native client – Requires the glusterfs and glusterfs-fuse packages to be installed on the client you want to mount from

mount -t glusterfs /mnt/gluster/gv0

– NFS  – Gluster uses its own implementation of an NFS server, based on NFSv3.  If you want to use NFS with Gluster, just make sure that you have the default NFS services disabled on the servers.  As you will see below, there is no need to modify the /etc/exports file to access Gluster via NFS, you simply specify the mount.  To mount using NFS:

For distributions that use NFSv3 –

mount -t nfs /mnt/gluster/gv0

For distributions that use NFSv4 –

mount -t nfs -o vers=3 /mnt/gluster/gv0

– CIFS/Samba

mount -t cifs /mnt/gluster/gv0
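For reference, here is roughly what each of those mounts looks like with the server spelled out.  The hostname is hypothetical, and the CIFS line assumes you have separately exported the volume through Samba as a share named gv0; substitute your own values:

```shell
NODE=node1   # hypothetical Gluster server hostname

# GlusterFS native client (any server in the pool works as the mount source):
mount -t glusterfs $NODE:/gv0 /mnt/gluster/gv0

# Gluster's built-in NFS server speaks NFSv3, so force version 3 where needed:
mount -t nfs -o vers=3 $NODE:/gv0 /mnt/gluster/gv0

# CIFS, assuming a Samba export of the volume as a share named gv0:
mount -t cifs //$NODE/gv0 /mnt/gluster/gv0 -o user=yourname
```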

From there, you can test writing data into the cluster by copying files directly onto the client mount point.  If you mount the same volume from another client, you will see that the files you copied in are there.  Removing files from one client will show them removed on the other client (this can appear to take some time; in most cases, a performance caching feature of Gluster simply needs to refresh before it sees the changes, which you can usually force by running `ls -R` against the client mount point).

Wrap Up

So, that’s it: you’ve set up your first Gluster storage cluster!  Once you have done it a few times, you will be amazed at how fast it really can be.  In most cases, if your servers are ready, the entire installation, from starting the package downloads to copying data in through the client mount point, takes less than 5 minutes.  The most common question tends to be “Well, what else do we have to do from here to set up Gluster?”.  Feel free to be smug when you say “Nothing.”  Please see the additional resources below to help you get the most out of your Gluster experience.

Gluster main documentation page –
Gluster 3.3 user manual –
Great article with insight on how to performance tune your environment for Gluster –

