[Gluster-users] Upgrading gluster installation -- best practices guide?

Barry Jaspan barry.jaspan at acquia.com
Mon Jan 18 19:21:21 UTC 2010


We're using glusterfs for a web hosting environment. We perform
upgrades within a major release (e.g. 2.x) with no downtime.

To upgrade the client glusterfs software, we just take one web node at
a time out of rotation, stop the app server, umount glusterfs, upgrade
glusterfs, remount glusterfs, restart the app server, and put it back
into rotation. Our load balancers (nginx) also happen to be glusterfs
clients (to eliminate extra hops for static files). For those, we just
umount glusterfs during the upgrade, and since the balancer cannot
then find any files to serve, it passes those requests back to any one
of the app servers, until we upgrade and re-mount glusterfs.
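The per-node sequence above can be sketched as a small shell function. This is only a sketch of the steps described, not our actual script: pool_remove/pool_add are hypothetical hooks for whatever takes a node in and out of load-balancer rotation in your environment, and the mount point, init-script name, and package command are all assumptions.

```shell
#!/bin/sh
# Rolling client upgrade for one web node (run on one node at a time).
# pool_remove/pool_add, the "appserver" service, and the package name
# are placeholders -- substitute your own environment's equivalents.

upgrade_client_node() {
    mnt=${1:-/mnt/glusterfs}
    run=${DRYRUN:+echo}               # set DRYRUN=1 to print commands instead of running them

    $run pool_remove "$(hostname)"    # take this node out of rotation
    $run service appserver stop       # stop the app server
    $run umount "$mnt"                # unmount the gluster volume
    $run apt-get -y install glusterfs-client   # upgrade the client package
    $run mount "$mnt"                 # remount (an /etc/fstab entry is assumed)
    $run service appserver start      # restart the app server
    $run pool_add "$(hostname)"       # put it back into rotation
}
```

Running it with DRYRUN=1 prints the command sequence so you can review it before doing the real thing on a live node.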

Our glusterfs volume uses cluster/replicate across two servers. To
upgrade the servers, we just take them down and upgrade them one at a
time; the clients handle it fine. Note, however, that prior to 2.0.8,
when you take down the *first* subvolume in a cluster/replicate volume,
all directory listings disappear ("ls" returns nothing), even though
opening an existing file by name still works. We're told this is fixed
in 2.0.9 but have not upgraded yet.
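The server side of the rolling upgrade is even simpler and can be sketched the same way. Again this is a sketch under assumptions: the init-script and package names are placeholders, and if you are on a pre-2.0.8 release you would want to upgrade the *second* subvolume's server first to avoid the directory-listing bug described above.

```shell
#!/bin/sh
# Rolling upgrade of one server in a two-server cluster/replicate pair.
# Service and package names are assumptions about the environment.

upgrade_gluster_server() {
    run=${DRYRUN:+echo}               # set DRYRUN=1 to print commands instead of running them

    $run service glusterfsd stop      # clients fail over to the other replica
    $run apt-get -y install glusterfs-server   # upgrade the server package
    $run service glusterfsd start     # rejoin the pair; replicate heals files as they are accessed
}
```

Run it on one server, confirm the clients are happy, then repeat on the other.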

We're told that upgrading to glusterfs 3.0 requires a complete
simultaneous shutdown. We do not want to subject our customers to this
more than necessary, so we're letting 3.0 cook a little longer to gain
more confidence in its stability before we try it.

Barry


On Fri, Jan 15, 2010 at 2:32 PM, Paul <pkoelle at gmail.com> wrote:
> Hi all,
>
> We run glusterFS in our testing lab (since 2.0rc1). We are currently using
> client-side AFR (mirror) with two server and two clients over GigE.
>
> Testing is going well except one important point: How do you upgrade with
> minimal/zero downtime? Here I have several questions:
>
> 1. Is the wire protocol stable within a major release? Can I mix and match
> all 2.0.x clients/servers? If not, how do I find out which ones are compatible?
>
> 2. Can I export one directory on the servers through multiple instances of
> glusterfsd (running on different ports)? This would let us run the old and new
> versions in parallel for a short time and test from the client.
>
> 3. How do I restart clients without shutting down all services accessing the
> mountpoint? Will glusterfs re-read config via signals (HUP)? Or is it OK to
> kill/restart in one go?
>
> How do YOU handle upgrades, especially wrt downtime and rolling back to a
> known good configuration?
>
>
> cheers and thanks in advance
>  Paul
>
> BTW: I have automated most of the compile/install/confgen/start/stop steps on
> multiple clients/servers via bash/ssh. If someone is interested I could share
> it here.
>
> PS: Sorry to the list moderator, first try with wrong FROM-address
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
