Gluster 4.0 & Kubernetes

vbellur
February 27, 2018

In the recent past, the Gluster community has been focusing on persistent storage for containers as a key use case for the project, and Gluster has been making rapid strides in its integration with Kubernetes. The release of 4.0 will deepen that integration and provide a foundation for building more functionality using these two popular open source ecosystems.

If you have not been following our progress in this space, Gluster has been integrated with Kubernetes by using an intelligent middleware known as Heketi. Heketi manages storage on Gluster servers and exposes high level RESTful APIs for consumers to dynamically provision storage from multiple Gluster trusted storage pools. This allows users to create a new volume by simply specifying the desired size and the replication factor. Upon receiving such a request, Heketi figures out the nodes on which bricks have to be placed and creates bricks in a way that Gluster expects. Not only does Heketi provide interfaces for managing the lifecycle of file-based GlusterFS volumes, it also provides lifecycle management for block devices created using gluster-block. Additionally, Heketi provides interfaces for day-2 operations like volume expansion, disk replacement, and node replacement.
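
As a rough illustration of that flow, here is a minimal Python sketch of a provisioning request to Heketi. It follows the shape of Heketi’s documented /volumes interface, but the server URL is a placeholder and authentication (Heketi uses JWT tokens) is omitted, so treat it as a sketch rather than a drop-in client:

```python
import requests

# Placeholder endpoint; a real deployment also requires JWT-based
# authentication, which is omitted here for brevity.
HEKETI_URL = "http://heketi.example.com:8080"

# Ask for a 10 GiB volume replicated across three bricks. Heketi decides
# which trusted storage pool and which nodes receive the bricks.
payload = {
    "size": 10,
    "durability": {"type": "replicate", "replica": 3},
}

resp = requests.post(f"{HEKETI_URL}/volumes", json=payload)
resp.raise_for_status()

# Heketi processes volume creation asynchronously; the response typically
# points at a pending-operation URL that can be polled until the volume
# is ready.
print(resp.status_code, resp.headers.get("Location"))
```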

Gluster 4.0 introduces glusterd2, the next-generation management engine for Gluster. Glusterd2 scales to a larger number of servers in a trusted storage pool, exposes RESTful interfaces for volume & membership operations, and achieves a high degree of consistency for state management within the pool by integrating with an embedded etcd store. The RESTful interface from Glusterd2 is useful for projects like Heketi, which currently have to invoke Gluster’s CLI through ssh or kubectl exec. All such invocations can be replaced with RESTful calls to Glusterd2, yielding a complete service-oriented architecture.
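
As a rough sketch of that replacement, the snippet below contrasts a CLI invocation over ssh with a direct REST call. The /v1/volumes path, the port, and the response fields are assumptions about Glusterd2’s evolving API rather than a stable contract:

```python
import subprocess

import requests

# Today: management tooling shells out to the Gluster CLI on a storage
# node, for example over ssh (or `kubectl exec` inside Kubernetes).
subprocess.run(
    ["ssh", "gluster-node-1", "gluster", "volume", "info"],
    check=True,
)

# With Glusterd2: the same information is available as JSON over HTTP.
# Endpoint path and port are assumptions for illustration only.
GD2_URL = "http://gluster-node-1:24007"
resp = requests.get(f"{GD2_URL}/v1/volumes")
resp.raise_for_status()
for vol in resp.json():
    print(vol.get("name"), vol.get("state"))
```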

For the future, Glusterd2 aims to provide capabilities to manage storage devices, expose higher-level APIs for volume management, and make it easier for operators to monitor & manage storage. Heketi was originally implemented to address these gaps in Glusterd and already has these key capabilities, so we will be incorporating most of Heketi’s key logic into Glusterd2 in the near future. This will result in Glusterd2 exposing both the high-level and the classical volume interfaces behind the same consistent API and CLI. As a consequence, the cluster state that Heketi maintains in its database will be greatly simplified and will only contain minimal information about multiple Gluster trusted storage pools.

Heketi has evolved significantly with the recent release of v6.0.0. As a consequence, the integration with Kubernetes has been vastly improved with the introduction of the following features:

  • Support for provisioning gluster-block-backed persistent volumes with the new external gluster-block-provisioner
  • Support for expanding persistent volumes (via Heketi; see the sketch after this list)
  • Custom volume names for persistent volumes (via Heketi)
  • Prometheus metrics collection for gluster volumes (directly in Kubernetes)
  • Improved device management with resync API
  • Enhanced robustness for Heketi’s database
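
As one concrete example, persistent volume expansion is driven entirely from the Kubernetes side: raising the requested size on a claim causes the provisioner (Heketi, in this setup) to grow the backing Gluster volume, provided the StorageClass allows expansion. Below is a minimal sketch using the official Kubernetes Python client; the claim name and namespace are placeholders:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config()
# instead when running inside a pod).
config.load_kube_config()
core = client.CoreV1Api()

# Raising spec.resources.requests.storage on a bound PVC asks the
# provisioner to expand the underlying volume. The StorageClass must be
# created with allowVolumeExpansion: true for this to be accepted.
patch = {"spec": {"resources": {"requests": {"storage": "20Gi"}}}}
core.patch_namespaced_persistent_volume_claim(
    name="my-claim",       # placeholder PVC name
    namespace="default",   # placeholder namespace
    body=patch,
)
```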

Future releases of Heketi are expected to add support for arbiter-volumes, complement features being developed in upstream Kubernetes, and expose interfaces for upcoming features in Kubernetes like Snapshots & Cloning. There are ongoing efforts to provide an enhanced disaster recovery mechanism for persistent volumes using the geo-replication feature in Gluster.

Gluster-block has also seen a slew of improvements in the recent past. The upcoming release of gluster-block will contain:

  • Ability to migrate configured block-devices across nodes
  • Creation of new block devices with existing backing files
  • Configuration option for deletion of backing files

Further releases of gluster-block are expected to add support for loopback devices and provide mechanisms to snapshot and clone block devices.

With work ongoing in projects like gluster-kubernetes, gluster-subvol, gluster-csi-driver & gluster-s3, it has been an exciting phase for evolving Gluster into a robust & flexible storage backend for containers. Stay tuned as we build out more features on the foundation that Gluster 4.0 provides for container storage and help the broader cause of making application deployment simpler with microservices!
