We are very pleased to inform you that GlusterFS 3.4 has now hit GA! This marks an incredible milestone for the Gluster community, and pushes GlusterFS into exciting new directions, including virtual block storage, OpenStack integration and a lot more.
A number of improvements have been made to let Gluster volumes provide storage for virtual machine images. Some of them include:
The above results in significant improvements in performance for VM image hosting.
GlusterFS 3.4 features significant improvements in performance for the replication (AFR) translator. This is in addition to bug fixes for volumes that used replica 3.
Resource Agents (RA) plug glusterd into Open Cluster Framework (OCF) compliant cluster resource managers, like Pacemaker.
The glusterd RA manages the glusterd daemon like any upstart or systemd job would, except that Pacemaker can do it in a cluster-aware fashion.
The volume RA starts a volume and monitors individual brick daemons in a cluster-aware fashion, recovering bricks when their processes fail.
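A minimal sketch of wiring these RAs into Pacemaker, assuming the resource agents are registered under a "glusterfs" OCF provider and that `pcs` is the cluster shell; the volume name "myvol" and the `volname` parameter are illustrative and should be checked against your installed agents:

```shell
# Run glusterd on every cluster node as a cloned resource,
# so Pacemaker supervises the daemon cluster-wide.
pcs resource create glusterd ocf:glusterfs:glusterd --clone

# Start and monitor a volume (hypothetical name "myvol"); the RA
# watches the brick processes and recovers them on failure.
pcs resource create myvol-storage ocf:glusterfs:volume volname=myvol --clone

# Only attempt to start the volume after glusterd is up.
pcs constraint order glusterd-clone then myvol-storage-clone
```

The clone makes both resources run on all nodes rather than failing over a single instance, which matches how glusterd and bricks normally operate.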
The setfacl and getfacl commands can now be used on an NFS mount that exports a Gluster volume to set or read POSIX ACLs.
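A short sketch of what this enables, assuming a volume exported as "server:/myvol" and a user "alice" (both hypothetical), mounted over NFSv3 with ACL support:

```shell
# Mount the Gluster volume over NFS with ACL support enabled.
mount -t nfs -o vers=3,acl server:/myvol /mnt/myvol

# Grant user "alice" read/write access to a file via a POSIX ACL.
setfacl -m u:alice:rw /mnt/myvol/shared.txt

# Read the ACL entries back to verify.
getfacl /mnt/myvol/shared.txt
```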
The new op-version infrastructure provides compatibility with the 3.3.x releases of GlusterFS: 3.3.x clients can talk to 3.4.x servers, and vice versa.
However, if a volume option specific to 3.4 is enabled, 3.3 clients can no longer mount the volume.
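One way to see this mechanism in action is to inspect the operating version glusterd records on disk; the path below is typical but may vary by distribution, so treat it as a sketch:

```shell
# glusterd stores the cluster's current op-version in its info file.
# Enabling a 3.4-only volume option raises this value, after which
# older (3.3) clients are refused the mount.
grep operating-version /var/lib/glusterd/glusterd.info
```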
New RPMs for libgfapi and OCF RA are present with 3.4.0.
As experimental features, we don’t expect them to work perfectly for this release, but you can expect them to improve dramatically as we make successive 3.4.x releases.
Download GlusterFS 3.4 and get crackalackin’!
And that’s not all. We have also released a new version of the 3.3 branch, GlusterFS 3.3.2. This is a maintenance release and features a vast number of bug fixes and performance enhancements over its 3.3.1 predecessor.
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.5. Highlights and updates since v0.4:
- GCS environment updated to kube 1.13
- CSI deployment moved to 1.0
- Integrated Anthill deployment
- Kube & etcd metrics added to Prometheus
- Tuning of etcd to increase stability
- GD2 bug fixes from scale testing
...
See you at FOSDEM! We have a jam-packed Software Defined Storage day on Sunday, Feb 3rd (with a few sessions on the previous day): https://fosdem.org/2019/schedule/track/software_defined_storage/ We also have a shared stand with Ceph, so come find us! Gluster 6: we're in planning for our Gluster 6 release, currently scheduled for...
Today, we are announcing the availability of GCS (Gluster Container Storage) 0.4. The release was a bit delayed to address some of the critical issues identified. This release brings a good number of bug fixes along with some key feature enhancements in GlusterD2. We'd ask all of you to try...