[Gluster-users] trouble combining nufa, distribute and replicate

Matthias Munnich matthias.muennich at env.ethz.ch
Wed Jul 7 11:13:11 UTC 2010


Dear Jeff,

thanks a lot for your comment! It sounds like I've hit a bug here.  Would
you still feel queasy if I added "option lookup-unhashed yes" to work
around the dht problem?
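
(To make my question concrete: I assume the option would just go into the
nufa/distribute volume stanza of my client volfile, roughly like this --
the volume names here are only placeholders:)

  volume nufa0
    type cluster/nufa
    option local-volume-name repl-a   # prefer the local subvolume for new files
    option lookup-unhashed yes        # look up files on all subvolumes, not only the hashed one
    subvolumes repl-a repl-b
  end-volume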

What I reported here was some initial testing.  In the end I would like to
use glusterfs to provide a uniform namespace for our O(20) workstations
with local storage of 4 to 16 TB.  The data should be mirrored (once)
for reliability and stored locally where possible for speed.  I would also
prefer not to glue the local disks together with LVM or software RAID, so
that disks can be added or removed without having to grow a filesystem.

Any hints on how to set this up with glusterfs?  A rough sketch of what I
have in mind is below.
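
Here is a rough, untested sketch of the kind of client volfile I imagine for
one workstation: nufa on top of two replicate pairs, with storage/posix for
the local brick as you suggest below.  All directory, host and volume names
are placeholders:

  # local brick, served in-process via storage/posix
  volume local-brick
    type storage/posix
    option directory /data/brick        # placeholder path
  end-volume

  # mirror partner for the local brick
  volume mentha-brick
    type protocol/client
    option transport-type tcp
    option remote-host mentha
    option remote-subvolume brick       # placeholder exported volume name
  end-volume

  # replica pair: this host + mentha
  volume repl-a
    type cluster/replicate
    subvolumes local-brick mentha-brick
  end-volume

  # the other replica pair: dahlia + salvia
  volume dahlia-brick
    type protocol/client
    option transport-type tcp
    option remote-host dahlia
    option remote-subvolume brick
  end-volume

  volume salvia-brick
    type protocol/client
    option transport-type tcp
    option remote-host salvia
    option remote-subvolume brick
  end-volume

  volume repl-b
    type cluster/replicate
    subvolumes dahlia-brick salvia-brick
  end-volume

  # nufa distributes across the pairs but creates new files locally
  volume nufa0
    type cluster/nufa
    option local-volume-name repl-a     # the pair this workstation belongs to
    subvolumes repl-a repl-b
  end-volume

Presumably mentha would use much the same volfile, and dahlia/salvia would use
one with local-volume-name pointing at repl-b instead.  Each host would still
need to export its brick via a glusterfsd so that its mirror partner can reach
it, as in the single-process NUFA page you point to.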

... Matt


On Tuesday 06 July 2010 06:30:35 pm Jeff Darcy wrote:
> On 07/06/2010 11:19 AM, Matthias Munnich wrote:
> > Hi!
> >
> > I am trying to combine nufa, distribute and replicate, but I am running
> > into messages like
> >
> > ls: cannot open directory .: Stale NFS file handle
> >
> > This happens when I try to list the mounted directory.  I don't use NFS at
> > all and am puzzled as to what is going on.  Attached you will find my client
> > config file.  The comments marked "ok" are setups which work.  However, more
> > than one disk is local, which led me to use three layers:
> > 1: replicate, 2: distribute, 3: nufa
> > but somehow this is not working.  Does anybody spot what is wrong?
> > Any help is appreciated.
> 
> First, you can pretty much ignore the reference to NFS.  It's just a bad
> errno-to-string conversion.
> 
> Second, it seems like there are several places where we treat ESTALE
> specially, but only one in the I/O path where we generate it.  That one
> is in dht-common.c, which is shared between distribute and nufa.  The
> function is dht_revalidate_cbk, and the ESTALE comes from detecting that
> the dht "layout" structure is inconsistent.  This leads me to wonder
> whether the problem has to do with the fact that distribute/nufa both
> use this code and the same set of extended attributes, and might be
> stepping on each other.  In general, having explored in some depth how
> these translators work, the idea of stacking nufa/distribute on top of
> one another (or themselves) makes me a bit queasy.
> 
> From your volfile, it looks like you want to create files on one of two
> filesystems replicated between localhost and mentha, and look for files
> created elsewhere on dahlia and salvia.  Assuming the four nodes are
> similar, you might want to consider using nufa with local-volume-name
> set to one of the two replicated subvolumes, and let mentha use the
> other replicated subvolume for the other direction.  Also, you should be
> able to use the localhost filesystems with just storage/posix instead of
> protocol/client (I assume you must have a separate glusterfsd running
> for this setup to work), which would eliminate some context switches and
> another layer of translator hierarchy.  See
> http://www.gluster.com/community/documentation/index.php/NUFA_with_single_process
> for further examples and explanation, and good luck.


