The Gluster Blog

GlusterFS on ARM — Adding Lower Cost and Lower Power Consumption to Your Storage


Back in February 2011, when I joined what ultimately became part of the GlusterFS development team at Red Hat, I had already been interested in low power — as in low power consumption — computing for a long time. For most of my earlier explorations I had used a Linksys WRT54G[1] router — which uses a MIPS-based SoC — and the OpenWRT[2] Linux distribution. My primary focus back then was to see if I could shoehorn onto it the important bits of the software that my then-employer was shipping on its Intel/Linux-based product. As you might guess, the constraints of the platform were severely limiting — not enough memory, little to no storage, slow CPU, slow network, etc., etc.

I was excited to discover, on my arrival, that the Fedora Project was spearheading a new effort to focus on ARM[3] in general, and was working on cleaning up and rationalizing all the various sources for ARM devices in the Linux kernel source. As icing on the cake, they were targeting several modern and affordable devices, including the BeagleBoard xM[4][5], various Dreamplug, Guruplug, Sheevaplug, and Pandaboards, and the TrimSlice[6][7], to name a few. What’s appealing about these devices, among other things, is that they have 1GHz CPUs, hardware floating point, 512MB RAM, 100baseT ethernet, and USB ports. The TrimSlice has a dual-core CPU and SATA onboard too, but it’s substantially more expensive.

I ordered a BeagleBoard xM and attempted to install the preliminary versions of Fedora that were available then. I had some issues with a variety of things and eventually decided to abandon the BeagleBoard and buy a TrimSlice H instead. Fedora ARM support had matured somewhat and I sailed through the install with no problems; a much better experience. In the meantime, the Raspberry Pi[8] was announced and I put my name on the waiting list to get one.

One thing to note: currently all ARM CPUs are 32-bit, and most are Little-Endian. All the devices I’ve mentioned so far are Little-Endian. While there are Big-Endian ARM CPUs, they seem to be rare; I’m not aware that anyone has ever run GlusterFS on a Big-Endian machine, even though there are PowerPC builds of GlusterFS for RHEL. And the first 64-bit ARM CPUs are in the works — IIRC, we’ll start seeing some of the first ones around the end of 2013.
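If you’re curious which byte order your own board uses, it’s easy to check from a shell prompt. This is just an illustrative one-liner using `od`, nothing GlusterFS-specific:

```shell
# Write the two bytes 0x00 0x01 and read them back as one 16-bit
# decimal integer. A big-endian machine sees 1; a little-endian
# machine sees 256 (0x0100).
val=$(printf '\0\1' | od -An -td2 | tr -d ' ')
if [ "$val" -eq 1 ]; then
    echo "big-endian"
else
    echo "little-endian"
fi
```

On any of the ARM boards mentioned above, this should print `little-endian`.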

As it happens, I’m one of the maintainers of GlusterFS for Fedora. For the past year or more, I’ve been doing all the GlusterFS builds in Fedora’s Koji[9] build system. As a result I’ve almost come to prefer rpmbuild-ing GlusterFS over running make. (There’s probably a pill for that.) In addition to Fedora’s yum repository I also maintain my own yum repository[10] of GlusterFS-3.3.x for Fedora 17 and earlier, and for RHEL (including CentOS), where Fedora/EPEL continue to ship GlusterFS-3.2.x for a number of reasons that I won’t go into here. Since I prefer to build GlusterFS with rpmbuild, it was a natural choice to take a GlusterFS source RPM[11] and install it on my TrimSlice. Source RPMs are installed like any other RPM, and the contents land in your ~/rpmbuild/… directory. Then it’s a simple matter of running `rpmbuild -bb ~/rpmbuild/SPECS/glusterfs.spec`, waiting a few minutes, then installing the newly made GlusterFS RPMs with `cd ~/rpmbuild/RPMS/armhfp; yum localinstall glusterfs*`.
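Pulled together in one place, the whole build-from-source-RPM loop looks something like this. It’s a sketch, not a script: the SRPM filename and the arch subdirectory under RPMS/ will vary with the GlusterFS version and your board, so substitute whatever your system actually produces.

```shell
# Fetch the GlusterFS source RPM (yumdownloader is in yum-utils);
# installing it unpacks the spec and sources under ~/rpmbuild/
yumdownloader --source glusterfs
rpm -ivh glusterfs-*.src.rpm

# Build the binary RPMs from the spec file -- this is the step
# that takes a few minutes on these little ARM boards
rpmbuild -bb ~/rpmbuild/SPECS/glusterfs.spec

# Install everything that was just built (the arch directory name
# depends on your platform)
cd ~/rpmbuild/RPMS/*/
yum localinstall glusterfs*.rpm
```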

The TrimSlice H has room in its case for a 2.5″ laptop drive. I had a 320GB drive lying around after upgrading the drive in my work laptop, so I plugged that into the on-board SATA connector. (That was actually part of the install. There are two install options for the TrimSlice: one is to install on and run from an SD card, the other is to install on and run from the SATA drive. I chose the latter.) When I originally installed I left room on the disk for extra partitions. Now, to use GlusterFS, I added two more partitions to fill up the rest of the drive. Then I made a btrfs file system on one, and an xfs file system on the other. (We recommend that you create larger inodes on xfs volumes with `-i size=512`.) These will be my GlusterFS bricks. I’ve mounted those volumes at /bricks/btrfs and /bricks/xfs. Enable and start glusterd with `systemctl enable glusterd.service; systemctl start glusterd.service`, create the volumes with `gluster volume create btrfs $brickhostname:/bricks/btrfs; gluster volume create xfs $brickhostname:/bricks/xfs`, start the volumes with `gluster volume start btrfs; gluster volume start xfs`, et voila, I’m done. You can mount these volumes on your clients with NFS (use `-o tcp,vers=3` on most Linux) or through Gluster native FUSE with `mount -t glusterfs $brickhostname:btrfs /mnt/btrfs` (substituting your own mount point). Remember, you’ll need to install the GlusterFS RPMs on your clients to use the Gluster native FUSE option.
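Condensed into a single sequence, the brick and volume setup looks like this. Device names, the hostname variable, and the client mount points are all placeholders — use your own:

```shell
# Make file systems on the two spare partitions (example devices).
# The xfs brick gets larger inodes, as recommended above.
mkfs.btrfs /dev/sda3
mkfs.xfs -i size=512 /dev/sda4

# Mount them where the bricks will live
mkdir -p /bricks/btrfs /bricks/xfs
mount /dev/sda3 /bricks/btrfs
mount /dev/sda4 /bricks/xfs

# Enable and start the Gluster management daemon
systemctl enable glusterd.service
systemctl start glusterd.service

# One single-brick volume per file system
gluster volume create btrfs $brickhostname:/bricks/btrfs
gluster volume create xfs   $brickhostname:/bricks/xfs
gluster volume start btrfs
gluster volume start xfs

# On a client: NFS, or Gluster native FUSE (the latter needs the
# GlusterFS RPMs installed on the client)
mount -t nfs -o tcp,vers=3 $brickhostname:/btrfs /mnt/btrfs
# ...or...
mount -t glusterfs $brickhostname:btrfs /mnt/btrfs
```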

Then, out of the blue, the Raspberry Pi I’d ordered so long ago finally arrived. This little board is about 5cm square and has 100baseT ethernet, an HDMI port, and two USB ports. It’s a bit underpowered, with only a 700MHz CPU and no hardware floating point. The Fedora Project offers a “remix” of Fedora 17 for the Pi[12]. That’s because all the kernel bits haven’t made it into the official kernel source yet, so this remix is Fedora 17, but with a one-off kernel. Similar to setting up the TrimSlice, I borrowed a 1TB WD Caviar Blue drive and, coupled with a $15 USB/SATA drive “dock”[13], created three partitions on the drive: one swap and two file systems. I needed the swap space on the drive because I found I ran out of memory when compiling. I hadn’t noticed the memory issue on the TrimSlice because I had created a swap device on that drive as a matter of routine. The Pi doesn’t have the option of booting from the drive; it runs strictly from the SD card. With swap space on the drive, though, there’s more than enough memory to compile. As before, I created a btrfs file system on one of the remaining partitions, and xfs on the other. Build and install GlusterFS, start glusterd, create and start the volumes as before, et voila, done.
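The swap-on-the-docked-drive arrangement that keeps the compile from running out of memory is quick to set up. Again, the partition layout here is just an example of the one-swap-plus-two-bricks scheme described above:

```shell
# Partition 1 of the docked drive becomes swap; 2 and 3 become bricks
mkswap /dev/sda1
swapon /dev/sda1

# Confirm the kernel picked up the new swap space
swapon -s        # or: free -m

# Then the bricks, same as on the TrimSlice
mkfs.btrfs /dev/sda2
mkfs.xfs -i size=512 /dev/sda3
```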

When I thought the Raspberry Pi was never going to arrive, someone here at Red Hat arranged a bulk order of Gooseberry boards. These are $45-ish boards, originally intended for an inexpensive tablet, that somehow made their way out into the world sans the rest of the tablet. They have an SD slot, a mini USB port, and wireless networking. Fedora doesn’t run on them yet; I need to track down the Ubuntu release for this board and get it set up. More on that in another blog entry later.

And I was able to resurrect my BeagleBoard. This time around I had a much better experience getting it set up. I have another drive dock on order, and I will soon have another pair of bricks in my GlusterFS storage cluster.

Finally, HP and Calxeda are making server-class hardware you can buy today.


