Recently another Docker image for Gluster was released. More people getting Gluster images into Docker is fantastic news, but because Docker is essentially brand spanking new, few of us are experts. With that in mind, explaining WHY this is great news might be helpful.
In the simplest terms, Docker (often discussed interchangeably with the containers it runs) is a way to deploy machine images. Big deal, we already have kickstart and virtualization, so who needs another way to deploy? Anyone who obsesses about near-instant provisioning with absolutely minimal resource consumption, that’s who. In other words, pretty much all of us. An example from my test machine shows that launching the container takes a whopping two seconds:
# docker ps
CONTAINER ID   IMAGE                         COMMAND                CREATED          STATUS          PORTS    NAMES
77b24fbf053a   humble/fed20-gluster:latest   /usr/bin/supervisord   18 seconds ago   Up 16 seconds   22/tcp   fedora20-gluster
Another benefit is the footprint. The Docker images I have seen are typically less than 200MB to download. The image here is larger because it includes a fair amount of space to play with Gluster. Unless you have specific needs, you don’t have to build images from scratch, as there is a global registry of every major flavor of Linux out there, and many with specific applications baked right in (as is our case with the Gluster image). You can search for other images at the Docker Hub Registry, or from the command line with docker search <string>.
Maybe you already have rock-solid deploy scripts for your favorite cloud provider, where provisioning times are typically quite fast. That is great, but it also comes with a price. Deploying with Docker is free and fast, and because it uses as few resources as possible (even those can be tuned, although that is outside the scope of this article), hearing of people running hundreds of containers simultaneously on a single machine is not uncommon. You might be able to get a few hundred VMs running on a single box, but typically the performance of the machines is minimal, and the cost of the hardware can be hard to swallow. So Docker, in a nutshell, gives us a great new way to test Gluster.
Before getting started, you must first do a few things. Start by installing the docker-io package (e.g., on my Fedora 20 machine, yum install docker-io). There is also an older package called docker, which apparently was a system tray replacement for KDE or GNOME. Having that package installed is okay, I suppose, but it won’t help for our needs here.
After installing Docker, start the service:
# systemctl start docker.service
Next, follow the steps outlined in Part 1 of our Run Gluster Containers Using Docker instructions on the Gluster blog.
After you have deployed, ssh to the container IP address. (If you are using the humble/fed20-gluster image, the password is redhat).
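If you aren’t sure which IP address Docker assigned to the container, docker inspect can report it. A one-line sketch, using the container name from the docker ps output above (this needs the Docker daemon running, so it is shown as a fragment rather than something you can test offline):

```shell
# Print the container's IP address from Docker's metadata
docker inspect --format '{{ .NetworkSettings.IPAddress }}' fedora20-gluster
```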
To create a test volume from within the container, follow the usual steps:
mkdir -p /export/gluster
gluster volume create gv0 172.17.0.2:/export/gluster force
(changing the IP to whatever is appropriate; we use the force option here because we are doing something unsupported, which is fine for our testbed)
gluster volume start gv0
Simplicity at its finest. After you have done the initial download, deploying another instance is as simple as:
# docker run --privileged -d --name <new instance name> -i -t 764261ddfd16
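If you want several throwaway instances at once, the run command scripts easily. A minimal sketch (the image ID is the one from the article; the gluster-test-N names are just examples, and the commands are printed rather than executed since running them needs the Docker daemon and image):

```shell
# Generate docker run commands for three test instances of the same image
image=764261ddfd16
for n in 1 2 3; do
  echo "docker run --privileged -d --name gluster-test-$n -i -t $image"
done > /tmp/docker-run-cmds.txt
cat /tmp/docker-run-cmds.txt
```

Pipe the file to sh (or drop the echo) when you actually want the containers created.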
Getting another volume set up in the new instance took me less than three minutes, running all the commands manually. That’s very useful, for example, when you want to test similar volumes with different settings. Currently the only drawback is that the image by itself is limited to one node. Again, this is perfect for basic test scenarios.
The dockit project is working on a solution that allows multiple containers to communicate with each other, which is something I am eager to test. But for now, we can do basic work to “fake” creating distributed or distributed-replicated volumes.
First, create several folders within the /export directory:
# mkdir -p /export/{1..6}
# ls /export/
1 2 3 4 5 6
Although these are not separate filesystems, we will use a special tool called “the power of imagination!” to pretend that they are. As we progress, you will see that functionally the directories will work the same.
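The {1..6} syntax above is Bash brace expansion: the shell expands it into six separate arguments before mkdir ever runs. A self-contained demo under /tmp (wrapped in bash -c so it behaves the same regardless of your login shell):

```shell
# Brace expansion turns one argument into six before mkdir sees them
bash -c 'mkdir -p /tmp/export-demo/{1..6}'
ls /tmp/export-demo
```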
To create a pure distributed volume:
# gluster volume create gv-dht 172.17.0.2:/export/{1..2}
And to add another volume with distributed-replicated capabilities:
# gluster volume create gv-afr replica 2 172.17.0.2:/export/{3..6}
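With replica 2, Gluster treats consecutive bricks as replica pairs: /export/3 and /export/4 mirror each other, as do /export/5 and /export/6, and files are then distributed across the two mirrored sets. A tiny pure-shell illustration of that pairing (no Gluster needed):

```shell
# Pair consecutive bricks the way "replica 2" does
set -- /export/3 /export/4 /export/5 /export/6
i=1
while [ "$#" -ge 2 ]; do
  echo "replica set $i: $1 <-> $2"
  shift 2
  i=$((i+1))
done > /tmp/replica-sets.txt
cat /tmp/replica-sets.txt
```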
# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 097eb563-e5db-4b38-8531-bc27a8e2de3d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.17.0.2:/export/gluster
Volume Name: gv-dht
Type: Distribute
Volume ID: 0abe6387-bfe5-4a10-9226-3ce8e7e5f051
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 172.17.0.2:/export/1
Brick2: 172.17.0.2:/export/2
Volume Name: gv-afr
Type: Distributed-Replicate
Volume ID: cdabdef6-bb57-4c7d-ae87-0db4ccb3fc13
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.0.2:/export/3
Brick2: 172.17.0.2:/export/4
Brick3: 172.17.0.2:/export/5
Brick4: 172.17.0.2:/export/6
Here we see all three volumes that we have created so far. Mount them the same as always:
# mount -t glusterfs 172.17.0.2:/gv-afr /mnt/gluster
If you prefer, make three mount points, one for each volume.
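Mounting all three is easy to script. A sketch using the server address from the article’s example; the commands are printed rather than executed, since actually running them needs the live Gluster container from above:

```shell
# Generate a mkdir+mount command for each of the three volumes
server=172.17.0.2
for vol in gv0 gv-dht gv-afr; do
  echo "mkdir -p /mnt/$vol && mount -t glusterfs $server:/$vol /mnt/$vol"
done > /tmp/mount-cmds.txt
cat /tmp/mount-cmds.txt
```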
Keep in mind that the total amount of space in the instance is around 8GB, which should be plenty for simple tests; however, that 8GB is shared among all of the volumes. (Remember when we used the force option to do something we weren’t supposed to? This is why: all of our bricks are directories on the same filesystem.) And that’s it. Happy testing!
This article originally appeared on community.redhat.com. Follow the community on Twitter at @redhatopen, and find us on Facebook and Google+.
GlusterFS 3.5.1 RPMs are now available: http://download.gluster.org/pub/gluster/glusterfs/LATEST/ Please go through release notes of 3.5.1 [1] for more information about this release. [1] http://www.gluster.org/2014/06/glusterfs-3-5-1-has-been-released/ All users of GlusterFS 3.5.x are strongly encouraged to upgrade. We welcome your suggestions/comments/feedback about this release through the GlusterFS Developers mailing list. Mailing list information is here: http://www.gluster.org/interact/mailinglists/
We are seeing significant interest and traction in GlusterFS working on more unix distributions. To encourage this, we’re adding maintainers for the various ports so far. 🙂 We are glad to announce the following individuals, who have been chugging GlusterFS along on those distributions, have readily agreed to be port maintainers. Please welcome: 1. Emmanuel …Read more
On Tue, Jun 24, 2014 at 03:15:58AM -0700, Gluster Build System wrote: SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1.tar.gz. This release is made off jenkins-release-73. Many thanks to everyone who tested the…
If you would like to try out gluster, a new CentOS based docker container is available on the docker hub at https://registry.hub.docker.com/u/gluster/gluster/. This image is very new, so do not use it for production environments. It is meant to be an early community version of gluster running within docker. For correctness and performance reasons, we recommend running …Read more
GlusterFS 3.4.4-2 RPMs are now available: http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST These packages include the fix (http://review.gluster.org/#/c/8029/) for Bugzilla issue # 961615: https://bugzilla.redhat.com/show_bug.cgi?id=961615 All users of GlusterFS 3.4.x are strongly encouraged to upgrade. We welcome your suggestions/comments/feedback about this release through the GlusterFS Developers mailing list. Mailing list information is here: http://www.gluster.org/interact/mailinglists/
We need beta testers for Gluster 3.5.1 beta2. Source tarball and RPM’s are available: https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta2/ http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta2.tar.gz Please try it out on your (non-production!) systems, and attempt to break it. 😀 Results should be reported to the GlusterFS Developers mailing list: gluster-devel@gluster.org.
Earlier this year, R.I.Pienaar released his brilliant data in modules hack, a few months ago, I got the chance to start implementing it in Puppet-Gluster, and today I have found the time to blog about it. What is it? R.I.’s … Continue reading →
FIO (Flexible I/O Tester) is a popular I/O benchmark tool. It generates sequential/random read/write/trim synchronously or asynchronously. It supports client/server mode and can spawn multiple worker threads. FIO supports different IO engines. Previously, FIO runs against Gluster FUSE mount. This doesn’t yield the optimal performance due to the overhead associated with FUSE. But now it …Read more
Reposting the email to the Gluster Users and Developers mailing lists. On Sat, 24 May 2014 at 11:34:36PM -0700, Gluster Build System wrote: SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz. This beta release is i…
All good things must come to an end. I can say with no equivocation that the last three years have been the most rewarding from a work perspective than any other job I’ve ever had. When I accepted this challenge in May, 2011, I had no idea that the project and community would blossom as …Read more
oVirt’s Hosted Engine feature, introduced in the project’s 3.4 release, enables the open source virtualization system to host its own management server, which means one fewer required machine, and more self-sufficiency for your oVirt installation.
While a self-sufficient oVirt installation has been achievable for some time using the project’s “All-in-One” method of running an oVirt virtualization host and management server together on one machine, the Hosted Engine feature allows multiple machines to partake in the hosting duties, eliminating any one host as a single point of failure.
The Hosted Engine feature relies on NFS storage to house the management VM. Running an NFS server on one of our virtualization hosts would make that host a new single point of failure, which means we need either to tap an external NFS filer (the approach I took in the walkthrough I posted here recently) or we need to figure out how to make our oVirt hosts serve up their own, replicated NFS storage.
In this post, I’m going to walk through that latter option: setting up a pair of CentOS 6 machines to serve as oVirt virtualization hosts that together provide the NFS storage required for the Hosted Engine feature, using Gluster to provide the replicated storage and NFS server, and CTDB to provide a virtual IP address mount point for the storage.
NOTE: People have been running into some issues with converged Gluster + oVirt setups like the one I describe here.
It’s important to use CTDB, or something like it, to provide for automated IP failover. While it may seem reasonable to simply use “localhost” as the NFS mount point for the hosted engine storage, and rely on Gluster to handle the replication between the servers, this ends up not working reliably.
In my own lab, I’m running a setup like the one below, but with three machines, each serving as virt+storage hosts, with replica 3 Gluster volumes to ensure that a 51% quorum is maintained when one of the machines is down for maintenance.
My planned outages, where (as described below) I first stop the ctdb service on the to-be-shutdown machine, thereby prompting another node to pick up the job, have run smoothly.
I recently tested an unplanned outage, where I pulled the plug (stopped via power management) on the machine hosting my Gluster NFS storage. Here the handoff left much to be desired: it took 18 minutes for the oVirt engine to become fully available again. However, the only other VM I had running at the time (which wasn’t on the downed machine) never went offline, as is the norm for oVirt engine outages.
The prerequisites are the same as for the Up and Running with oVirt 3.4 walkthrough, with the addition of a healthy-sized disk or partition to use for our Gluster volumes. The hosted engine VM will require 20GB, and you’ll want to have plenty of storage space left over for the VMs you’ll create and manage with oVirt.
For networking, you can get away with a single network adapter, but for best results, you’ll want three: one for the CTDB heartbeat, one for Gluster traffic, and one for oVirt management traffic and everything else. No matter how you arrange your networking, your two hosts will need to be able to reach each other on your network(s). If need be, edit /etc/hosts on both of your machines to establish the right IP address / host name mappings.
NOTE: Unless I indicate otherwise, you’ll need to perform the steps that follow on both of your machines.
We need a partition to store our Gluster bricks. For simplicity, I’m using a single XFS partition, and my Gluster bricks will be directories within this partition.
You’ll need to carry out these steps on both of your oVirt/Gluster machines, as each will be sharing in the storage duties.
I’m using CentOS 6 hosts for this walkthrough, which is missing one of my favorite utilities from Fedora, system-storage-manager. I packaged up ssm for CentOS using Fedora’s nifty Copr service:
cd /etc/yum.repos.d/ && curl -O http://copr.fedoraproject.org/coprs/jasonbrooks/system-storage-manager/repo/epel-6-x86_64/jasonbrooks-system-storage-manager-epel-6-x86_64.repo && yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm && yum install -y system-storage-manager python-argparse
I use ssm to add a new disk to my LVM pool and to create an XFS partition for Gluster (you can do it however you want):
ssm list (take note of your pool name)
ssm add -p $YOUR_POOL_NAME $YOUR_DEVICE_NAME
ssm create -p $YOUR_POOL_NAME --fstype xfs -n gluster
Next, modify your /etc/fstab to add the new partition:
mkdir /gluster
blkid $YOUR_NEW_VOLUME_NAME (something like /dev/pool/gluster)
Edit your /etc/fstab and add the line:
UUID=$YOUR_UUID /gluster xfs defaults 1 2
mount -a
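The fstab entry can be built straight from the UUID that blkid reports. A sketch that appends to a scratch copy rather than the real /etc/fstab (the UUID below is a made-up example; substitute your own):

```shell
# Build the /gluster fstab entry from a UUID (example value, not a real one)
uuid="c0ffee00-aaaa-bbbb-cccc-000011112222"
printf 'UUID=%s /gluster xfs defaults 1 2\n' "$uuid" > /tmp/fstab.demo
cat /tmp/fstab.demo
```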
Edit /etc/sysconfig/iptables to include the rules you’ll need for Gluster, oVirt and CTDB:
# oVirt/Gluster firewall configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# glusterd
-A INPUT -p tcp -m tcp --dport 24007 -j ACCEPT
# portmapper
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38465 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38466 -j ACCEPT
# nfs
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
# status
-A INPUT -p tcp -m tcp --dport 39543 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 55863 -j ACCEPT
# nlockmgr
-A INPUT -p tcp -m tcp --dport 38468 -j ACCEPT
-A INPUT -p udp -m udp --dport 963 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 965 -j ACCEPT
# ctdbd
-A INPUT -p tcp -m tcp --dport 4379 -j ACCEPT
# Ports for gluster volume bricks (default 100 ports)
-A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 50152:50251 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 34865:34867 -j ACCEPT
# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT
NOTE: During the oVirt Hosted Engine install process, which we’ll get to shortly, the installer script will ask if you want it to configure your iptables. You should answer no, but if you answer yes by accident, the installer will save a backup copy of the previous configuration as something like /etc/sysconfig/iptables.$DATE, and you can just copy that back over. Keep an eye on this, because your pair of machines will have to communicate with each other for Gluster, for CTDB, for NFS, etc.
Restart your iptables service:
service iptables restart
Next, install the oVirt release package and the Gluster server packages:
yum localinstall -y http://resources.ovirt.org/releases/ovirt-release.noarch.rpm && yum install -y glusterfs-server
Edit /etc/glusterfs/glusterd.vol, uncomment the line option base-port 49152, and change the value 49152 to 50152. This change works around a conflict between the ports used by libvirt for live migration and the ports Gluster uses for its bricks.
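The same edit can be done with sed. A sketch against a scratch copy of the file (the real file is /etc/glusterfs/glusterd.vol and needs root to modify; the scratch file here contains just the one line we care about):

```shell
# Uncomment the base-port line and bump 49152 to 50152, on a scratch copy
conf=/tmp/glusterd.vol.demo
echo '#   option base-port 49152' > "$conf"
sed -i 's/^#* *option base-port 49152/option base-port 50152/' "$conf"
cat "$conf"
```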
Now start the Gluster service and configure it to auto-start after subsequent reboots:
service glusterd start && chkconfig glusterd on
Now, we’ll probe our second machine from our first, combining them into a single Gluster trusted pool. Unlike many of the other commands in this walkthrough, it’s only necessary to run this on one of your two machines. If you’re using a separate network for Gluster traffic, you must use your machine’s address on that network for this command:
gluster peer probe $YOUR_OTHER_MACHINE
First, we’ll create a “meta” volume for the clustered file system that CTDB requires for its own needs:
gluster volume create meta replica 2 $YOUR_FIRST_HOST:/gluster/meta0 $YOUR_SECOND_HOST:/gluster/meta1
Then, we’ll start that volume:
gluster volume start meta
chkconfig rpcbind on
service rpcbind start
yum install -y nfs-utils ctdb
Set up conf files for ctdb:
mkdir -p /mnt/lock
mount -t glusterfs localhost:/meta /mnt/lock
Edit /mnt/lock/ctdb:
CTDB_PUBLIC_ADDRESSES=/mnt/lock/public_addresses
CTDB_NODES=/etc/ctdb/nodes
# Only when using Samba. Unnecessary for NFS.
CTDB_MANAGES_SAMBA=no
# some tunables
CTDB_SET_DeterministicIPs=1
CTDB_SET_RecoveryBanPeriod=120
CTDB_SET_KeepaliveInterval=5
CTDB_SET_KeepaliveLimit=5
CTDB_SET_MonitorInterval=15
CTDB_RECOVERY_LOCK=/mnt/lock/reclock
Edit /mnt/lock/nodes to include the list of CTDB interconnect/heartbeat IPs. For our two-node install there will be two of these. For more info on CTDB configuration, see Configuring CTDB.
Next, edit /mnt/lock/public_addresses to include the list of virtual addresses to be hosted between the two machines (we only need one), along with the network range and the NIC we’re using to host this virtual address:
XX.XX.XX.XX/24 eth0
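Putting the two files together, a sketch written to a scratch directory standing in for /mnt/lock. All addresses here are placeholders; substitute your hosts’ heartbeat IPs and the virtual IP you want CTDB to manage:

```shell
# Write the CTDB nodes and public_addresses files (placeholder addresses)
lock=/tmp/lock-demo
mkdir -p "$lock"
printf '10.0.0.1\n10.0.0.2\n' > "$lock/nodes"              # one heartbeat IP per node
printf '10.0.0.100/24 eth0\n' > "$lock/public_addresses"   # virtual IP, mask, NIC
cat "$lock/nodes" "$lock/public_addresses"
```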
Now, on both hosts, we’ll point our CTDB configuration files at the files we’ve created in the shared meta volume:
mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.orig
ln -s /mnt/lock/ctdb /etc/sysconfig/ctdb
ln -s /mnt/lock/nodes /etc/ctdb/nodes
ln -s /mnt/lock/public_addresses /etc/ctdb/public_addresses
mount -t glusterfs localhost:/meta /mnt/lock && service ctdb start
gluster volume create engine replica 2 $YOUR_FIRST_MACHINE:/gluster/engine0 $YOUR_OTHER_MACHINE:/gluster/engine1
gluster volume set engine storage.owner-uid 36 && gluster volume set engine storage.owner-gid 36
gluster volume start engine
Create a file named /var/lib/glusterd/groups/virt and paste in the lines below. This provides a “virt” group with settings optimized for VM storage. I’ve left off two quorum-related options present in the original group definition. These quorum settings help prevent split-brain, but will cause VMs hosted on Gluster volumes with the settings applied to pause when one of our two machines goes offline.
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
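As a sketch, the file can be written with a heredoc; here it goes to a scratch path, since the real location, /var/lib/glusterd/groups/virt, requires root:

```shell
# Write the "virt" volume-settings group file (scratch copy)
cat > /tmp/virt.demo <<'EOF'
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
EOF
wc -l < /tmp/virt.demo
```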
Next, we’ll add our new engine volume to this virt group:
gluster volume set engine group virt
While we’re at it, let’s create a third Gluster volume, for our regular VM storage in oVirt:
gluster volume create data replica 2 $YOUR_FIRST_MACHINE:/gluster/data0 $YOUR_OTHER_MACHINE:/gluster/data1
gluster volume set data storage.owner-uid 36 && gluster volume set data storage.owner-gid 36
gluster volume set data group virt
gluster volume start data
With our Gluster-provided NFS storage for the oVirt engine VM arranged, we can proceed with the Hosted Engine installation. See the “Installing oVirt with Hosted Engine” heading in the Up & Running walkthrough and follow the steps there.
As you do, keep these things in mind:
Before running the hosted-engine --deploy command on your first machine, run service ctdb stop on that machine to ensure that the shared virtual IP address is hosted from your other machine. This will ensure that the ovirtmgmt network bridge will be created correctly. When it comes time to install your second host (under the heading “Installing a Second Host”), run service ctdb start on your first host and then service ctdb stop on your second host, and then start CTDB back up on the second host when the install is done.
When the installer asks where the engine VM should be stored, give it the virtual IP address hosted by CTDB (the one in /mnt/lock/public_addresses) or a host name that resolves to this address. Using this virtual IP ensures that when one machine goes down, the other will automatically pick up the hosting duties for the NFS share.
When you later configure VM storage in oVirt, use $YOUR_VIRTUAL_IP:data to use the “data” Gluster volume we configured above for your VM storage.
To take down one of the machines, while enabling VM workloads and the management server to continue running on the other machine, run service ctdb stop on the machine you’re taking down to ensure that the second node takes on the virtual IP / NFS hosting chores.
When it’s time to boot your maintained host back up, follow these steps on that machine:
mount -t glusterfs localhost:/meta /mnt/lock
service ctdb start
Commands that will come in handy:
service ctdb status
gluster volume heal $YOURVOL statistics
tail -f /var/log/ovirt-hosted-engine-ha/*
I hope that this walkthrough helps you get up and running with Gluster-based storage on your oVirt installation. The Hosted Engine feature is relatively new, and I’m still wrapping my brain around it. The biggest issue to watch out for with this configuration is getting your Gluster volumes into a split-brain state. On IRC, I’m jbrooks; ping me in the #ovirt room on OFTC, or the #gluster room on Freenode, or write a comment below.
If you’re interested in getting involved with oVirt or Gluster, you can find all the mailing list, issue tracker, source repository, and wiki information you need on the oVirt or Gluster project sites.
Finally, be sure to follow us on Twitter at @redhatopen for news on oVirt and other open source projects in the Red Hat world.
This article originally appeared on community.redhat.com.
Vagrant has become the de facto tool for devops. Faster iterations, clean environments, and less overhead. This isn’t an article about why you should use Vagrant. This is an article about how to get up and running with Vagrant on … Continue reading →
This post describes recent tests done by Red Hat on an 84 node gluster volume. Our experiments measured performance characteristics and management behavior. To our knowledge, this is the largest performance test ever done under controlled conditions within the organization (we have heard of larger clusters in the community but do not know any details about them). Red Hat officially …Read more
This is a quick trick for making working with git submodules more magic. One day you might find that using git submodules is needed for your project. It’s probably not necessary for everyday hacking, but if you’re glue-ing things together, … Continue reading →
With technologies around Open Software-defined Storage emerging as the way to get things done in the cloud, we’ve noticed strong interest in how to take advantage of this emerging software space. Storage is changing from the proprietary, expensive box in the corner to a set of APIs and open source software deployed in a scale-out …Read more
We are pleased to announce that GlusterFS 3.5 is now available. The latest release includes several long-awaited features such as improved logging, file snapshotting, on-wire compression, and at-rest encryption.
You can download GlusterFS 3.5 now.
Here are the topics this blog is going to cover. Samba Server Samba VFS Libgfapi GlusterFS VFS plugin for Samba and libgfapi Without GlusterFS VFS plugin FUSE mount vs VFS plugin About Samba Server: Samba server runs on Unix and … Continue reading →
Deploying an open source enterprise cloud just got a little bit easier yesterday with the release of the newest version of the OpenStack platform: Icehouse. To quote an email from OpenStack rele…
Setting up an application server in the cloud isn’t that hard if you’re familiar with the tools and your application’s requirements. But what if you needed to do it dozens or hundreds of times, …