One of the salient features in Gluster 3.10 goes by the rather boring – and slightly opaque – name of brick multiplexing. To understand what it is, and why it’s a good thing, read on.
First, let’s review some relevant parts of how Gluster works already. All storage in Gluster is managed as bricks, which are just directories on servers. Often they’re whole disks or volumes on servers, but that doesn’t have to be the case. It has always been possible to have multiple bricks per server, or even per disk, though carving things up into too many pieces could have some unpleasant effects involving various kinds of resources.
Brick multiplexing is just a term for putting multiple bricks into one process. Therefore, many bricks can consume *one* port, *one* set of global data structures, and *one* pool of global threads. This reduces resource consumption, allowing us to run more bricks than before – up to three times as many in some tests involving the very large numbers of bricks that might be involved in a container/hyperconverged kind of configuration.
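If you want to try it yourself, multiplexing is controlled by a cluster-wide volume option. Here’s a minimal sketch, assuming a 3.10 installation where the option is named `cluster.brick-multiplex`:

```sh
# Enable brick multiplexing cluster-wide; because it applies to every volume,
# it is set on the special "all" target rather than on a single volume.
gluster volume set all cluster.brick-multiplex on

# Turn it back off; bricks go back to one process each as they are restarted.
gluster volume set all cluster.brick-multiplex off
```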
In addition to reducing overall contention for these resources, brick multiplexing also brings that contention under more direct control. Previously, we were at the mercy of the operating system’s scheduler and paging system to manage this contention. They’d have to make many guesses about what we need, and often they’d guess wrong. We *know*. Now that multiple bricks can run in one process, we can manage contention more carefully to match our priorities and policies. Some day, this will even be the lever we can use to provide multi-tenant isolation and quality of service.
It’s important to note that multiplexing is *not* primarily a performance enhancer. At low brick counts – e.g. fewer than the number of CPU cores on a system – you’re probably better off not multiplexing, both for performance and to keep failure domains smaller. In the mid range (hundreds of bricks), multiplexing might or might not outperform process-per-brick, depending on workload. Mostly this is not because of multiplexing itself but because of other changes – such as a much more scalable memory-pool implementation – that were developed along with it. There’s still some untapped potential here, so over time multiplexing is likely to improve performance in more cases. At the high end (thousands of bricks), multiplexing is the only option; that many separate brick processes eventually consume so many resources and thrash so much that no useful work gets done.
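A quick way to see the effect on your own cluster (a rough check, with `myvol` standing in for whatever test volume you have) is that `gluster volume status` reports the same PID and port for every multiplexed brick on a node, and the process count drops to roughly one glusterfsd per node instead of one per brick:

```sh
# "myvol" is a placeholder volume name; with multiplexing on, its bricks on
# a given node share a single PID and TCP port in the status output.
gluster volume status myvol

# Count brick processes on this node; expect far fewer than the brick count.
pgrep -c glusterfsd
```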
In short, multiplexing is already helping us address the particular needs of container and hyperconverged workloads. It also provides the infrastructure on which other enhancements can be built, bringing greater benefits to even more users.