Gluster blog stories provide high-level spotlights on our users all over the world
(This was originally posted on our Q&A site at community.gluster.org)
Problem: VERY slow performance when using ‘bedtools’ and other apps that write zillions of small output chunks.
If this were a self-written app or an infrequently used one, I wouldn’t bother writing this up, but ‘bedtools’ is a fairly popular genomics suite, and since many installations use gluster to host next-gen sequencing data and analysis, I thought I’d document it a bit more.
A sophisticated user complained that when using the gluster filesystem, his ‘bedtools’ performance decreased horribly relative to using a local filesystem (fs) or an NFS-mounted fs. The ‘bedtools’ utility ‘genomeCoverageBed’ reads in several GB of ‘bam’ file, compares it to a reference genome, and then writes out millions of small coordinates. I originally thought this was due to reads and writes on the gluster fs conflicting at some point, because if the output was directed to another fs, it ran normally.
It turned out that it’s not that simultaneous reads and writes dramatically decrease performance, but that the /type of writes/ being done by ‘bedtools’ kills performance.
Solution: the short version:
Insert ‘gzip’ to compress and stream the data before sending it to the gluster fs. The improvement in IO (and application) performance is dramatic.
i.e. (all files on a gluster fs):

genomeCoverageBed -ibam RS_11261.bam -g ref/dmel-all-chromosome-r5.1.fasta \
  -d | gzip > output.cov.gz
Inserting the ‘| gzip’ increased the overall app speed by more than 30X relative to not using it on a gluster fs. It even improved the wall-clock time of the app by ~1/3 relative to running on a local filesystem, decreased gluster’s CPU utilization by ~99%, and reduced the output size by 80%. So, wins all round.
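The size reduction, at least, is easy to reproduce on synthetic data of the same shape (tab-separated per-base coordinate lines). The file names and record format below are illustrative, not from the original run:

```shell
# Generate 100k tiny coordinate-style records, plain and gzipped
# (synthetic stand-in for genomeCoverageBed -d output).
seq 1 100000 | awk '{print "chr1\t" $1 "\t1"}' > plain.cov
gzip -c plain.cov > plain.cov.gz

# Highly repetitive coordinate text compresses very well:
wc -c < plain.cov
wc -c < plain.cov.gz
```

Real per-base coverage output is even more repetitive than this, which is consistent with the ~80% size reduction seen above.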
Solution: the long version:
The type of writes that bedtools does is also fairly common in bioinformatics, especially in self-written scripts and logs – lots of writes of tiny amounts of data.
As I understand it (which may be wrong; please correct me), the gluster native client which we’re using does not buffer IO as well as the NFS client, which is why we frequently see complaints about gluster vs NFS performance. The apparent problem for bedtools is that these zillions of tiny writes are being handled separately, or at least not cached well enough to be consolidated into large writes. To present the data to gluster as a continuous stream instead of these tiny writes, they have to be converted into such a stream, and ‘gzip’ is a nice solution because it compresses as it converts.
Apparently anything that takes STDIN, buffers it appropriately, and then spits it out on STDOUT will work. Even piping the data through ‘cat’ allows bedtools to continue to run at 100%, though it causes gluster’s CPU use to increase to ~90%. ‘cat’ of course uses less CPU itself (~14%) while gzip uses more (~60%), though gzip decreases gluster’s use enormously.
Using ‘gzip’ seems to provide the better tradeoff, since it decreases gluster’s CPU use to ~1% and dramatically reduces the output size as well. Note that this will obviously increase the total CPU use to more than one CPU, so you may have to modify your scheduler/CPU allocation to accommodate this.
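The pattern above can be sketched with any tiny-record generator in place of bedtools. The generator and file names here are illustrative; the point is that both a plain ‘cat’ stage and a ‘gzip’ stage act as write-coalescing buffers and preserve the record stream byte for byte:

```shell
# Synthetic stand-in for an app emitting zillions of tiny records.
gen() { seq 1 100000 | awk '{print "chr1\t" $1 "\t1"}'; }

gen | cat  > via_cat.cov       # buffered via the pipe + cat, same bytes out
gen | gzip > via_gzip.cov.gz   # buffered and compressed

# Both stages preserve the data exactly:
gen | cmp - via_cat.cov && echo same
gzip -dc via_gzip.cov.gz | cmp - via_cat.cov && echo same
```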
Gluster does provide some performance options that seem to address this problem, and I did try setting them. However, they did not seem to help at all, and I’d still like an explanation of what they’re supposed to do.
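The specific option settings didn’t survive in this copy of the post. Purely for illustration – and assuming the write-behind translator is the relevant knob for this workload, which matches the symptoms but is not confirmed above – per-volume tuning of that family looks like this (the volume name ‘myvol’ and the values shown are hypothetical examples):

```shell
# Hypothetical volume name and example values, not the settings from the post.
gluster volume set myvol performance.write-behind on
gluster volume set myvol performance.write-behind-window-size 4MB
gluster volume set myvol performance.flush-behind on
```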
The upshot is that this seems like, if not a gluster bug, then at least an
opportunity to improve gluster performance considerably by consolidating small writes into larger ones.