Containers are designed to run applications and to be stateless in nature. This requires containerized applications to store their data externally on persistent storage. Since applications can be launched at any point in time in a container cloud, the persistent storage shares also need to be provisioned dynamically, without administrative intervention. Gluster has been taking big strides toward this form of container storage by introducing new features and deepening integration with other projects in the container ecosystem.
We have introduced two deployment models for addressing persistent storage with Gluster: Container-Native Storage (CNS), where Gluster itself runs containerized alongside applications in the same cluster, and Container-Ready Storage (CRS), where a dedicated Gluster cluster outside the container platform serves storage to containerized applications.
A lot of our integration focus for persistent storage has been on Kubernetes. Kubernetes provides multiple access modes for persistent storage: Read Write Many (RWM), Read Write Once (RWO), and Read Only Many (ROM). Gluster's native file-based access has proven to be an apt match for RWM and ROM workloads, while block devices backed by Gluster volumes are suitable for RWO workloads.
For RWM scenarios with both CNS and CRS, we recommend mapping a Kubernetes persistent volume claim to a Gluster volume. This approach provides isolation, reduces the likelihood of noisy neighbors, and enables data services such as geo-replication and snapshotting to be applied separately to different persistent volumes.
To enable dynamic provisioning of Gluster volumes, REST-based volume management operations have been introduced via Heketi. Heketi can manage multiple trusted storage pools and has the intelligence to carve out a volume in a trusted storage pool with minimal input from users. The GlusterFS provisioner in Kubernetes leverages the capabilities exposed by Heketi and creates volumes on the fly to satisfy persistent volume claims made by users. You can find our work to bring together all these projects in the gluster-kubernetes project on GitHub. With support for StorageClasses and DaemonSets, we have eased storage setup and dynamic provisioning even further.
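As a sketch of how these pieces fit together, a StorageClass pointing at a Heketi endpoint and a claim against it might look like the following. The `resturl`, credentials, and names are placeholders for illustration, not a real deployment, and the exact API versions available depend on your Kubernetes release:

```yaml
# Hypothetical StorageClass wired to a Heketi REST endpoint.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-dynamic
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # placeholder Heketi URL
  restuser: "admin"                           # placeholder credentials
  restuserkey: "secret"
---
# A claim against that class; the provisioner asks Heketi to
# carve out a matching Gluster volume on the fly.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs-dynamic
  resources:
    requests:
      storage: 10Gi
```

Once the claim is bound, any pod referencing `app-data` mounts the dynamically provisioned Gluster volume.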
Along with dynamic provisioning, a key requirement in container storage environments is the ability to scale and address a large number of persistent volume claims. To get to this level of scale, Gluster has evolved significantly in the recent 3.10 release. Key features that enable scale include:
Brick multiplexing introduces the capability of aggregating bricks belonging to several volumes into a single glusterfsd process. This vastly reduces the memory footprint of Gluster when serving multiple brick directories from the same node. In addition to consuming less memory, a multiplexed brick also consumes far fewer network ports than the non-multiplexed model. In hyperconverged CNS deployments, where resources need to be shared between compute and storage, brick multiplexing optimizes Gluster to scale to a larger number of volumes.
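Brick multiplexing is exposed as a cluster-wide volume option in 3.10. Assuming an existing trusted storage pool, enabling it looks roughly like this (a sketch; run against a real cluster to verify):

```shell
# Enable brick multiplexing cluster-wide (Gluster 3.10+).
gluster volume set all cluster.brick-multiplex on

# After (re)starting volumes, bricks on a node share one glusterfsd
# process instead of spawning one process (and port) per brick.
```

Volumes started after the option is set pick up the multiplexed model.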
gluster-block provides a management framework for exposing block devices, backed by files in a volume, through iSCSI. Going forward, we intend to use this block interface for scalable RWO persistent volumes. We already have an external provisioner that integrates Kubernetes, Heketi, and gluster-block to dynamically provision RWO persistent volumes.
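To illustrate the management framework, creating a block device on a block-hosting volume with the gluster-block CLI looks roughly like the following. The volume name, block name, host addresses, and size are placeholders:

```shell
# Create a 10 GiB block device (an iSCSI target backed by a file
# in the "block-hosting-vol" volume), with 3-way high availability
# across the listed hosts.
gluster-block create block-hosting-vol/pv-block \
    ha 3 192.168.10.11,192.168.10.12,192.168.10.13 10GiB

# List and inspect block devices on that volume.
gluster-block list block-hosting-vol
gluster-block info block-hosting-vol/pv-block
```

The resulting iSCSI LUN can then be attached to a single pod, matching the RWO access mode.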
Along with file and block access, we have envisioned the need for an Amazon S3 compatible object store in containerized environments. Many containerized applications look for RESTful access to persist data. To address that, we recently announced the availability of a gluster-object container that enables accessing a Gluster volume through S3 APIs.
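Because the interface is S3-compatible, standard S3 tooling can target it. As a rough sketch, assuming a gluster-object container reachable at a placeholder endpoint with placeholder credentials (the credential format depends on how the object service is configured):

```shell
# Placeholder endpoint and credentials for a gluster-object deployment.
export AWS_ACCESS_KEY_ID='account:user'
export AWS_SECRET_ACCESS_KEY='password'

# Point the standard AWS CLI at the gluster-object endpoint:
# create a bucket, then upload an object into it.
aws s3 --endpoint-url http://gluster-object.example.com:8080 mb s3://mybucket
aws s3 --endpoint-url http://gluster-object.example.com:8080 cp ./data.txt s3://mybucket/
```

Behind the endpoint, buckets and objects map onto directories and files in the underlying Gluster volume.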
We are excited about these innovations in file, block, and object access for Gluster to address container storage needs. Do let us know if our vision matches your container storage requirements, and look forward to more details about our onward journey in the container world here!