
Further notes on Gluster 3.10 and the direction for Gluster

Gluster
February 28, 2017

This release of Gluster ushers in improvements for container storage, hyperconverged storage, and scale-out Network Attached Storage (NAS) use cases. These use cases have been our primary focus areas over the last 12-18 months and will remain so for the next three planned releases.

One of the things we’re really focused on as a project is persistent storage for containerized microservices. Part of this effort has been working with heketi and gluster-kubernetes to enhance our integration with containers. Continuing in the same vein, 3.10 brings about the following key improvements for container storage:


  • Brick multiplexing: Provides the ability to scale the number of exports and volumes per node. This is useful in container storage, where a large number of shared (Read Write Many) volumes is needed. Brick multiplexing also provides the infrastructure needed to implement Quality of Service in Gluster for multi-tenant container deployments. Enabling it is a single volume-set call, as sketched after this list.
  • gluster-block: Alongside 3.10, we are also releasing gluster-block v0.1. gluster-block provides an intuitive lifecycle-management interface for block devices in a Gluster volume, and this release configures block devices to be accessed from initiators through iSCSI. Work on integrating gluster-block with Heketi to support Read Write Once volumes in Kubernetes is in progress.
  • S3 access for Gluster: We are also releasing an Amazon S3 compatible object storage container, based on Swift3 and gluster-swift, on Gluster's Docker Hub. S3 access for Gluster will be useful for application developers who leverage the S3 API for storage.
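
For a feel of how these features are driven from the command line, here is a minimal sketch. The volume, block, and host names are placeholders, and exact option and command spellings should be checked against the 3.10 release notes and the gluster-block v0.1 documentation.

    # Enable brick multiplexing cluster-wide (off by default); bricks on a
    # node then share a single glusterfsd process instead of one process each.
    gluster volume set all cluster.brick-multiplex on

    # Create a 1 GiB block device backed by the (placeholder) Gluster volume
    # "myvol" and export it over iSCSI from one host (ha count of 1).
    gluster-block create myvol/block0 ha 1 node1.example.com 1GiB

Once created, the block device can be attached from any iSCSI initiator pointed at the exporting host.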


Deployment of hyperconverged storage for containers and virtualization is also a focus area for 3.10. gdeploy provides an improved Ansible playbook for deploying hyperconvergence with oVirt, and cockpit-gluster provides a wizard that makes deploying with this playbook from oVirt easy. gk-deploy makes it easy to deploy Heketi and Gluster in hyperconverged container deployments, as sketched below.
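
As a rough sketch of the container path, gk-deploy from the gluster-kubernetes project drives the whole setup; the topology file name below is a placeholder for a file describing your nodes and their raw block devices.

    # Fetch the deployment tooling from the gluster-kubernetes project.
    git clone https://github.com/gluster/gluster-kubernetes.git
    cd gluster-kubernetes/deploy

    # Deploy GlusterFS and Heketi into the current Kubernetes cluster;
    # -g tells gk-deploy to also deploy the GlusterFS pods themselves.
    ./gk-deploy -g topology.json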


There have been various improvements for scale-out NAS deployments in 3.10. Some of them include:


  • Parallelized readdirp: Improves the performance of operations that crawl directories. This, in conjunction with the metadata caching introduced in 3.9, provides a nice performance boost for small-file operations.
  • Statedump support for libgfapi: Improves debuggability and supportability for projects like NFS-Ganesha and Samba that integrate with libgfapi.
  • Estimate for rebalance completion: Helps administrators understand when a rebalance operation will complete. A sketch of these improvements in action follows this list.
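
A quick sketch of how an administrator might exercise these; the volume name, hostname, and PID below are placeholders, and the exact CLI spellings should be verified against the 3.10 release notes.

    # Enable parallelized readdirp on a volume (builds on readdir-ahead).
    gluster volume set myvol performance.readdir-ahead on
    gluster volume set myvol performance.parallel-readdir on

    # Request a statedump from a libgfapi client (for example an
    # NFS-Ganesha process) identified by hostname and PID.
    gluster volume statedump myvol client nas1.example.com:4312

    # Rebalance status now reports an estimate of the time remaining.
    gluster volume rebalance myvol status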


Needless to say, none of this would have been possible without the support of our contributors and maintainers. Thank you to all who made it happen! We are excited to deliver features that enhance the user experience in our key focus areas of container storage, hyperconvergence, and scale-out NAS, and we intend to build on this momentum with further improvements. Stay tuned and get involved as we progress along this journey with future Gluster releases!

