Brick Multiplexing in Gluster 3.10

March 15, 2017

One of the salient features in Gluster 3.10 goes by the rather boring – and slightly opaque – name of brick multiplexing.  To understand what it is, and why it’s a good thing, read on.

First, let’s review some relevant parts of how Gluster works already.  All storage in Gluster is managed as bricks, which are just directories on servers.  Often they’re whole disks or volumes, but that doesn’t have to be the case.  It has always been possible to have multiple bricks per server, or even per disk, though carving things up into too many pieces can have unpleasant effects on several kinds of resources:

  • Ports.  Each brick has its own port, which means hundreds of bricks could use up hundreds of ports – and hundreds of firewall rules to manage.
  • Memory.  Some data structures in Gluster are global, associated with the process, while others are associated with the translators within a brick.  Replicating these global parts for hundreds of processes can mean a lot of wasted space.
  • CPU.  Like global memory, each process has a global pool of threads – for handling network I/O, disk I/O, and various “housekeeping” purposes.  Replicating these across hundreds of processes can result in many more threads system-wide, and thus more context switching.
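
To make this concrete, here’s roughly what creating a volume with two bricks per server looks like; the hostnames and brick paths are made up for illustration:

    # Hypothetical hosts and paths; each brick is just a directory on a server.
    gluster volume create demo replica 3 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 \
        server1:/data/brick2 server2:/data/brick2 server3:/data/brick2
    gluster volume start demo

    # Without multiplexing, "gluster volume status" lists a separate
    # glusterfsd process and TCP port for every one of those bricks.
    gluster volume status demo

Scale that up to hundreds of bricks per server and these per-process costs start to dominate.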

Brick multiplexing is just a term for putting multiple bricks into one process.  Therefore, many bricks can consume *one* port, *one* set of global data structures, and *one* pool of global threads.  This reduces resource consumption, allowing us to run more bricks than before – up to three times as many in some tests with the very large brick counts typical of a container/hyperconverged configuration.
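
If you want to try it, multiplexing is off by default in 3.10 and is controlled by a single cluster-wide option; bricks pick up the change when their volumes are restarted:

    # Enable brick multiplexing for the whole cluster (off by default).
    gluster volume set all cluster.brick-multiplex on

    # To return to one process per brick, turn it back off.
    gluster volume set all cluster.brick-multiplex off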

In addition to reducing overall contention for these resources, brick multiplexing also brings that contention under more direct control.  Previously, we were at the mercy of the operating system’s scheduler and paging system to manage this contention.  They’d have to make many guesses about what we need, and often they’d guess wrong.  We *know*.  Now that multiple bricks can run in one process, we can manage contention more carefully to match our priorities and policies.  Some day, this will even be the lever we can use to provide multi-tenant isolation and quality of service.

It’s important to note that multiplexing is *not* primarily a performance enhancer.  At low brick counts – e.g. fewer than the number of CPU cores on a system – you’re probably better off not multiplexing, both for performance reasons and to keep failure domains smaller.  In the mid range (hundreds of bricks), multiplexing might or might not outperform process-per-brick, depending on workload.  Mostly this is not because of multiplexing itself but because of other changes – such as a much more scalable memory-pool implementation – that were developed along with it.  There’s still some untapped potential here, so over time multiplexing is likely to improve performance in more cases.  At the high end (thousands of bricks), multiplexing is the only option; that many separate brick processes eventually consume so many resources and thrash so much that no useful work gets done.

In short, multiplexing is already helping us address the particular needs of container and hyperconverged workloads.  It also provides the infrastructure on which other enhancements can be built, which will provide greater benefits to even more users.
