In the recent past, the Gluster community has been focusing on persistent storage for containers as a key use case for the project and Gluster has been making rapid strides in its integration with Kubernetes. The release of 4.0 will deepen that integration and provide a foundation for building more functionality using these two popular open source ecosystems.
If you have not been following our progress in this space, Gluster has been integrated with Kubernetes by using an intelligent middleware known as Heketi. Heketi manages storage on Gluster servers and exposes high level RESTful APIs for consumers to dynamically provision storage from multiple Gluster trusted storage pools. This allows users to create a new volume by simply specifying the desired size and the replication factor. Upon receiving such a request, Heketi figures out the nodes on which bricks have to be placed and creates bricks in a way that Gluster expects. Not only does Heketi provide interfaces for managing the lifecycle of file-based GlusterFS volumes, it also provides lifecycle management for block devices created using gluster-block. Additionally, Heketi provides interfaces for day-2 operations like volume expansion, disk replacement, and node replacement.
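To make the provisioning flow concrete, here is a minimal sketch of a client driving Heketi's volume-create endpoint. The request shape (`POST /volumes` with a size and a durability spec) follows Heketi's documented API, but the server URL is a placeholder and authentication (Heketi uses JWT) is omitted for brevity:

```python
import json
from urllib import request

HEKETI_URL = "http://heketi.example.com:8080"  # hypothetical endpoint

def build_volume_request(size_gb, replica=3):
    """Build the JSON body for Heketi's POST /volumes endpoint.

    The caller specifies only the desired size and replication
    factor; Heketi itself decides which nodes get the bricks.
    """
    return {
        "size": size_gb,
        "durability": {
            "type": "replicate",
            "replicate": {"replica": replica},
        },
    }

def create_volume(size_gb, replica=3):
    """Send the volume-create request to Heketi.

    Note: a real deployment would also attach a JWT in the
    Authorization header; that is left out of this sketch.
    """
    body = json.dumps(build_volume_request(size_gb, replica)).encode()
    req = request.Request(
        f"{HEKETI_URL}/volumes",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)
```

On success Heketi responds asynchronously with the new volume's details, including the mount information consumers need.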
Gluster 4.0 introduces glusterd2, the next generation management engine for Gluster. Glusterd2 provides better scale for membership of servers in a trusted storage pool. It provides RESTful interfaces for volume & membership operations, and achieves a high degree of consistency for state management within a trusted storage pool by integrating with an embedded etcd store. The RESTful interface from Glusterd2 is useful for projects like Heketi, which currently have to invoke Gluster’s CLI through ssh or kubectl exec. All such invocations can be replaced with RESTful calls to Glusterd2, yielding a complete service-oriented architecture.
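The shift described above can be sketched as follows. The ssh-based function shows the old pattern; the REST-based one shows the new pattern against glusterd2's HTTP interface. The endpoint path and body schema here are illustrative assumptions rather than a verbatim reference, so consult the glusterd2 API documentation for the exact shape:

```python
import json
import subprocess
from urllib import request

GD2_URL = "http://gd2.example.com:24007"  # hypothetical glusterd2 address

def create_volume_via_ssh(node, name, bricks):
    """Old pattern: drive the Gluster CLI remotely over ssh."""
    cmd = ["ssh", node, "gluster", "volume", "create", name] + bricks
    return subprocess.run(cmd, capture_output=True, text=True)

def build_volume_create_body(name, bricks, replica=3):
    """Assemble an illustrative volume-create request body.

    Field names are assumptions modeled on glusterd2's /v1 API;
    they are not guaranteed to match the released schema exactly.
    """
    return {
        "name": name,
        "subvols": [
            {"type": "replicate", "replica": replica, "bricks": bricks}
        ],
    }

def create_volume_via_rest(name, bricks, replica=3):
    """New pattern: a single REST call to glusterd2, no ssh needed."""
    body = json.dumps(build_volume_create_body(name, bricks, replica)).encode()
    req = request.Request(
        f"{GD2_URL}/v1/volumes",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)
```

Beyond removing the ssh dependency, the REST path gives callers structured error responses instead of CLI output that must be screen-scraped.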
For the future, Glusterd2 aims to provide capabilities to manage storage devices, expose higher level APIs for volume management, and make it easier for operators to monitor & manage storage. Heketi was originally implemented to address these gaps in Glusterd and already provides these capabilities; hence, most of Heketi’s core logic will be incorporated into Glusterd2 in the near future. This will result in Glusterd2 exposing both the high level and the classical volume interfaces behind the same consistent API and CLI. As a consequence, the cluster state that Heketi maintains in its database will be greatly simplified and will only contain minimal information about multiple Gluster trusted storage pools.
Heketi has evolved significantly with the recent release of v6.0.0, which vastly improves the integration with Kubernetes through a number of new features.
Future releases of Heketi are expected to add support for arbiter volumes, complement features being developed in upstream Kubernetes, and expose interfaces for upcoming Kubernetes features such as snapshots and cloning. There are also ongoing efforts to provide an enhanced disaster recovery mechanism for persistent volumes using the geo-replication feature in Gluster.
Gluster-block has also seen a slew of improvements in the recent past, with further enhancements arriving in its upcoming release.
Further releases of gluster-block are expected to add support for loopback devices and provide mechanisms to snapshot and clone block devices.
With work ongoing in projects like gluster-kubernetes, gluster-subvol, gluster-csi-driver & gluster-s3, it has been an exciting phase in evolving Gluster as a robust & flexible storage backend for containers. Stay tuned as we build out more features on the foundation that Gluster 4.0 provides for container storage and help the broader cause of making application deployment simpler with microservices!