The Gluster Blog

Gluster blog stories provide high-level spotlights on our users all over the world

Community Survey Feedback, 2019

Amye Scavarda

In this year’s survey, we asked quite a few questions about how people are using Gluster, how much storage they’re managing, their primary use for Gluster, and what they’d like to see added.

Here are some of the highlights from this year!

How long have you been using Gluster?
Less than one year – 18.75%
One to two years – 37.50%
Two to four years – 22.92%
More than four years – 18.75%

How much storage is Gluster managing?
Less than 50 terabytes – 68.75% (33 responses)
50 to 150 terabytes – 10.42% (5 responses)
150 to 500 terabytes – 8.33% (4 responses)
500 terabytes to 1 petabyte – 4.17% (2 responses)
Over 1 petabyte – 4.17% (2 responses)
Don’t know – 2.08% (1 response)
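The response counts alongside each percentage imply roughly 48 respondents for this question (33 ÷ 68.75% = 48). A quick sketch, assuming that inferred total, reproduces the published percentages from the raw counts:

```python
# Sketch: reproduce the survey percentages from the raw response counts.
# The total of 48 respondents is inferred from the table (33 / 68.75% = 48);
# it is not stated explicitly in the post.
counts = {
    "Less than 50 terabytes": 33,
    "50 to 150 terabytes": 5,
    "150 to 500 terabytes": 4,
    "500 terabytes to 1 petabyte": 2,
    "Over 1 petabyte": 2,
    "Don't know": 1,
}
total = 48
percentages = {k: round(v / total * 100, 2) for k, v in counts.items()}
print(percentages["Less than 50 terabytes"])  # 68.75
```

Note that the counts sum to 47, not 48, which suggests one respondent skipped this question while still being counted in the percentage base.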

What is your primary use for Gluster?
43.75% – Containers
35.42% – Backup
37.50% – File Sync and Share
50.00% – Virtual Infrastructure and/or App Data
8.33% – On Demand Media Store
10.42% – Media Production
22.92% – Web Content Delivery
4.17% – Hadoop File Store

How do you access Gluster?
Native client 25.00%
NFS 4.17%
QEMU 6.25%
Object/HTTP 6.25%
libgfapi-based methods 10.42%
SMB 2.08%
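The “Native client” in the list above is Gluster’s FUSE mount. As a sketch of what two of these access methods look like in practice, here are hypothetical /etc/fstab entries (the server name gluster1 and volume name gv0 are placeholders, not taken from the survey):

```
# Native (FUSE) client mount of a Gluster volume -- server and volume
# names are placeholders.
gluster1:/gv0  /mnt/gluster      glusterfs  defaults,_netdev          0 0

# The same volume accessed over NFSv3 (e.g. via an NFS-Ganesha export):
gluster1:/gv0  /mnt/gluster-nfs  nfs        defaults,vers=3,_netdev   0 0
```

The `_netdev` option tells the init system to defer these mounts until networking is up.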

How else would you like to access Gluster?
– Fibre Channel
– S3
– internet
– Native Client, NFS, Object Interfaces
– will be adding Docker/k8s methods for OpenStack in the future; maybe as a block device/iSCSI
– Web-based management
– Tested GFAPI, but it doesn’t work in an HA environment with oVirt
– object, block
– FCoE
– Built-in iSCSI
– libgfapi
– API

Where do you run Gluster?
On Premise Baremetal 68.75%
On Premise Private Cloud 16.67%
Hybrid (i.e., On Premise and on Cloud) 2.08%
Other 4.17%

On what hardware platforms are you running Gluster?
SuperMicro 25.00%
Dell 4.17%
Other 20.83%
HP 25.00%

How happy are you with your Gluster installation?
Very happy – 20.83%
Somewhat happy – 54.17%
Somewhat unhappy – 20.83%
Very unhappy – 2.08%

What feature or features would make Gluster even more useful?

– Automatic provisioning of smb/nfs shares already in corosync HA between nodes
– Better performance with Samba and small files
– RESTful management API
– NFS HA (Storhaug?)
– We’d love to see GD2 and RIO.
– Better unit- and integration-test coverage. More robustness.
– Async replication inside a GlusterFS cluster (without Geo-Replication)
– Need iSCSI and block support
– Too often new releases contain bugs that break existing features. This is probably a sign of lack of testing. In general the stability is not great.
– Bitrot repair
– Better support of small files
– Audit Log support
– Web management interface
– Automated Bitrot Repair
– Xlator to realize an Audit Trail
– WORM Appendable File
– Improvement for Small File Performance
– Performance is poor for virtual machines
– Performance guidelines, gluster for windows
– IO perf are slow with oVirt 4.3
– Network migration: switch all peers/bricks to a new network
– ZFS integration, at least at the snapshot level.
– Documentation
– Recommended settings for different workloads (e.g., small files, R/O, R/W, etc.)
– More integrations with ovirt and ansible
– Active-standby 2 nodes cluster
– Stability. We are moving to Lustre for that reason.
– Live movement of a brick to another server or failover (unmount on one server, mount on another with access to block devices).
– Sane operation when a brick is down (block until it is back).
– A tool like the Linux tuned daemon with predefined profiles for different workloads.
– Better performance, especially with small files and a good web interface
– Web Console administration

Some longer answers:

Better monitoring/introspection tools (for example, when volume heal is not converging, need to easily find out exactly why).
Would love to treat a gluster cluster as a “pool” of storage and describe the failure zones in terms of nodes and bricks, then have gluster assign replicas across failure zones automatically and rebalance/migrate replicas as cluster grows/changes.

Stability and fewer regressions/problems in new releases. And a saner, less Mozilla-like release schedule/versioning. I’m using it with oVirt, and it appears that the new oVirt 4.3 release with Gluster 5.3 is extremely unstable. It feels like you’re forcing less well-tested and unstable software on folks with the current release schedule. As a network storage system, stability and performance are my primary concerns, not new features.

We’re using it with oVirt, so the following isn’t a critique of Gluster itself. libgfapi support is incomplete in oVirt 4. Gluster reliability has varied (not suggesting this is a fault with Gluster; it’s likely to be interop). Note, our experience is with v3 (looking forward to v4). Also, better monitoring (i.e., gluster-prometheus) is welcome and will be well received by users!

