The Gluster Blog


GlusterFS 3.4 is Here!

Gluster
July 15, 2013

We are very pleased to announce that GlusterFS 3.4 has now hit GA! This marks an incredible milestone for the Gluster community and pushes GlusterFS in exciting new directions, including virtual block storage, OpenStack integration, and a lot more.

-> Download here.

Changes Of Note

Improvements for Virtual Machine Image Storage

A number of improvements have been made so that Gluster volumes can provide storage for virtual machine images. These include:

  • qemu – libgfapi integration.
  • Causal ordering in write-behind translator.
  • Tunables for a gluster volume in group-virt.example.

Together, these changes significantly improve performance for VM image hosting.
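As a quick sketch of how these pieces fit together, the following CLI fragment applies the virt tunable group and creates a VM image over libgfapi. The host `gfs1.example.com`, the volume `myvol`, and the image name are placeholders, and applying the group assumes the example tunables file has been installed as a `virt` group on the servers:

```shell
# Apply the recommended VM-hosting tunables from group-virt.example
# (assumes the file is installed as the "virt" option group):
gluster volume set myvol group virt

# Create a qcow2 image directly on the volume via libgfapi,
# with no FUSE mount in the data path:
qemu-img create -f qcow2 gluster://gfs1.example.com/myvol/vm01.qcow2 20G

# Boot a guest against the image over libgfapi:
qemu-system-x86_64 -drive file=gluster://gfs1.example.com/myvol/vm01.qcow2,if=virtio
```

Because qemu talks to the volume through libgfapi, the FUSE context switches that previously dominated VM I/O are avoided entirely.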

Synchronous Replication Improvements

GlusterFS 3.4 features significant improvements in performance for the replication (AFR) translator. This is in addition to bug fixes for volumes that used replica 3.

Open Cluster Framework compliant Resource Agents

Resource Agents (RAs) plug glusterd into Open Cluster Framework (OCF) compliant cluster resource managers, such as Pacemaker.

The glusterd RA manages the glusterd daemon like any Upstart or systemd job would, except that Pacemaker can do so in a cluster-aware fashion.

The volume RA starts a volume and monitors individual brick daemons in a cluster-aware fashion, recovering bricks when their processes fail.
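A minimal Pacemaker configuration using the two agents might look like the crm-shell fragment below. This is a sketch, not a tested configuration: the resource IDs and the volume name `myvol` are placeholders, and it assumes the agents install under the `glusterfs` OCF provider with a `volname` parameter on the volume RA:

```shell
# Run glusterd on every node, cluster-aware (placeholder resource IDs):
crm configure primitive p_glusterd ocf:glusterfs:glusterd \
    op monitor interval=10s
crm configure clone c_glusterd p_glusterd

# Start and monitor the volume's bricks, restarting them on failure:
crm configure primitive p_myvol ocf:glusterfs:volume \
    params volname=myvol \
    op monitor interval=10s
crm configure clone c_myvol p_myvol

# Bricks can only run once glusterd is up:
crm configure order o_glusterd_before_vol inf: c_glusterd c_myvol
```

The ordering constraint mirrors the daemon dependency: brick processes are children of glusterd's management, so the volume resource is started only after the glusterd clone.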

POSIX ACL support over NFSv3

The setfacl and getfacl commands can now be used on an NFS mount that exports a Gluster volume to set or read POSIX ACLs.
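For example, assuming Gluster's built-in NFSv3 server is exporting the volume with ACL support enabled, a client could do the following (the hostname, volume, mount point, and the user `alice` are all placeholders):

```shell
# Mount the exported volume over NFSv3:
mount -t nfs -o vers=3 gfs1.example.com:/myvol /mnt/myvol

# Grant user "alice" read/write access on a file via a POSIX ACL:
setfacl -m u:alice:rw /mnt/myvol/shared.txt

# Read the ACL back to confirm the entry:
getfacl /mnt/myvol/shared.txt
```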

3.3.x compatibility

The new op-version infrastructure provides compatibility with the 3.3.x releases of GlusterFS: 3.3.x clients can talk to 3.4.x servers, and vice versa.

However, if a volume option introduced in 3.4 is enabled on a volume, 3.3 clients can no longer mount that volume.

Packaging changes

New RPMs for libgfapi and the OCF Resource Agents are available with 3.4.0.

Experimental Features

  • RDMA-connection manager (RDMA-CM)
  • New Block Device translator
  • Support for NUFA (Non-Uniform File Access)

As these are experimental features, we don’t expect them to work perfectly in this release, but you can expect them to improve dramatically over successive 3.4.x releases.

Minor Improvements:

  • The ext4 file system change that affected readdir workloads on Gluster volumes has been addressed.
  • More options for selecting the read-child in AFR are now available.
  • Custom layouts are now possible with the distribute translator.
  • The 32-group limit on auxiliary GIDs has been removed.
  • SSL support for socket connections.
  • Known issues with replica counts greater than 2 have been addressed.
  • The quick-read and md-cache translators have been refactored.
  • A new open-behind translator has been introduced.
  • glusterfs can now be prevented from binding to reserved ports.
  • Statedumps are now created in /var/run/gluster instead of /tmp by default.
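Two of these improvements can be illustrated with the gluster CLI. This is a sketch with `myvol` as a placeholder volume name, and it assumes the SSL feature is driven by the `client.ssl` and `server.ssl` volume options:

```shell
# Enable TLS on the volume's data path (assumed option names):
gluster volume set myvol server.ssl on
gluster volume set myvol client.ssl on

# Trigger a statedump for debugging; with 3.4 the dump files land
# in /var/run/gluster by default instead of /tmp:
gluster volume statedump myvol
ls /var/run/gluster
```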

Download GlusterFS 3.4 and get crackalackin’!

Also New – 3.3.2

And that’s not all. We have also released a new version of the 3.3 branch, GlusterFS 3.3.2. This is a maintenance release featuring numerous bug fixes and performance enhancements over its 3.3.1 predecessor.
