<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 07/27/2015 08:30 PM, Glomski,
Patrick wrote:<br>
</div>
<blockquote
cite="mid:CALkMjdDXL3D0JwyWuvnN0cXVCWw=+utgq_s_gOy=qUPyoguSzw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>I built a patched version of 3.6.4 and the problem does
seem to be fixed on a test server/client when I mounted with
those flags (acl, resolve-gids, and gid-timeout). Seeing as it
was a test system, I can't really provide anything meaningful
as to the performance hit seen without the gid-timeout option.
Thank you for implementing it so quickly, though! <br>
<br>
Is there any chance of getting this fix incorporated in the
upcoming 3.6.5 release?<br>
<br>
</div>
Patrick<br>
</div>
</blockquote>
<br>
I am planning to include this fix in 3.6.5. The fix is still under
review; once it is accepted in master, it can be backported to the
release-3.6 branch. I will wait till then and make 3.6.5.<br>
<br>
Regards,<br>
Raghavendra Bhat<br>
<br>
<blockquote
cite="mid:CALkMjdDXL3D0JwyWuvnN0cXVCWw=+utgq_s_gOy=qUPyoguSzw@mail.gmail.com"
type="cite">
<div dir="ltr"><br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Jul 23, 2015 at 6:27 PM, Niels
de Vos <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb">
<div class="h5">On Tue, Jul 21, 2015 at 10:30:04PM +0200,
Niels de Vos wrote:<br>
> On Wed, Jul 08, 2015 at 03:20:41PM -0400, Glomski,
Patrick wrote:<br>
> > Gluster devs,<br>
> ><br>
> > I'm running gluster v3.6.3 (both server and
client side). Since my<br>
> > application requires more than 32 groups, I
don't mount with ACLs on the<br>
> > client. If I mount with ACLs between the
bricks and set a default ACL on<br>
> > the server, I think I'm right in stating that
the server should respect<br>
> > that ACL whenever a new file or folder is
made.<br>
><br>
> I would expect that the ACL gets inherited on the
brick. When a new<br>
> file is created without the default ACL, things
seem to be wrong. You<br>
> mention that creating the file directly on the
brick has the correct<br>
> ACL, so there must be some Gluster component
interfering.<br>
><br>
> You reminded me on IRC about this email, and that
helped a lot. It's very<br>
> easy to get distracted when trying to investigate
things from the<br>
> mailing lists.<br>
><br>
> I had a brief look, and I think we could reach a
solution. An ugly patch<br>
> for initial testing is ready. Well... it compiles.
I'll try to run some<br>
> basic tests tomorrow and see if it improves things
and does not crash<br>
> immediately.<br>
><br>
> The change can be found here:<br>
> <a moz-do-not-send="true"
href="http://review.gluster.org/11732"
rel="noreferrer" target="_blank">http://review.gluster.org/11732</a><br>
><br>
> It basically adds a "resolve-gids" mount option for
the FUSE client.<br>
> This causes the fuse daemon to call getgrouplist()
and retrieve all the<br>
> groups for the UID that accesses the mountpoint.
Without this option,<br>
> the behavior is not changed, and /proc/$PID/status
is used to get up to<br>
> 32 groups (the $PID is the process that accesses
the mountpoint).<br>
><br>
> You probably want to also mount with
"gid-timeout=N" where N is seconds<br>
> that the group cache is valid. In the current
master branch this is set<br>
> to 300 seconds (like the sssd default), but if the
groups of a user<br>
> rarely change, this value can be increased.
Previous versions had a<br>
> lower timeout which could cause resolving the
groups on almost each<br>
> network packet that arrives (HUGE performance
impact).<br>
><br>
> When using this option, you may also need to enable
server.manage-gids.<br>
> This option allows using more than ~93 groups on
the bricks. The network<br>
> packets can only contain ~93 groups; when
server.manage-gids is enabled,<br>
> the groups are not sent in the network packets, but
are resolved on the<br>
> bricks with getgrouplist().<br>
<br>
</div>
</div>
The patch linked above has been tested, corrected, and
updated. The<br>
change works for me on a test-system.<br>
<br>
A backport that you should be able to include in a package
for 3.6 can<br>
be found here: <a moz-do-not-send="true"
href="http://termbin.com/f3cj" rel="noreferrer"
target="_blank">http://termbin.com/f3cj</a><br>
Let me know if you are not familiar with rebuilding patched
packages,<br>
and I can build a test-version for you tomorrow.<br>
<br>
On glusterfs-3.6, you will want to pass a gid-timeout mount
option too.<br>
The option enables caching of the resolved groups that the<br>
uid belongs<br>
to. If caching is not enabled (or expires quickly), you<br>
will probably<br>
notice a performance hit. Newer versions of GlusterFS set the<br>
timeout to<br>
300 seconds (like the default timeout sssd uses).<br>
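For reference, a sketch of a mount invocation along the lines described above (the server, volume name, and mount point are placeholders; resolve-gids needs the patched client):<br>

```shell
# Mount with the patched FUSE client: resolve all groups of the
# accessing uid via getgrouplist() and cache them for 300 seconds.
# "server:/volname" and "/mnt/gluster" are placeholders.
mount -t glusterfs -o acl,resolve-gids,gid-timeout=300 \
    server:/volname /mnt/gluster

# To allow more than ~93 groups on the bricks as well,
# enable the option mentioned earlier in the thread:
gluster volume set volname server.manage-gids on
```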
<br>
Please test and let me know if this fixes your use case.<br>
<br>
Thanks,<br>
Niels<br>
<div class="HOEnZb">
<div class="h5"><br>
<br>
><br>
> Cheers,<br>
> Niels<br>
><br>
> > Maybe an example is in order:<br>
> ><br>
> > We first set up a test directory with setgid
bit so that our new<br>
> > subdirectories inherit the group.<br>
> > [root@gfs01a hpc_shared]# mkdir test; cd test;
chown pglomski.users .;<br>
> > chmod 2770 .; getfacl .<br>
> > # file: .<br>
> > # owner: pglomski<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::rwx<br>
> > other::---<br>
> ><br>
> > New subdirectories share the group, but the
umask leads to them being group<br>
> > read-only.<br>
> > [root@gfs01a test]# mkdir a; getfacl a<br>
> > # file: a<br>
> > # owner: root<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::r-x<br>
> > other::r-x<br>
> ><br>
> > Setting default ACLs on the server allows
group write to new directories<br>
> > made on the server.<br>
> > [root@gfs01a test]# setfacl -m d:g::rwX ./;
mkdir b; getfacl b<br>
> > # file: b<br>
> > # owner: root<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::rwx<br>
> > other::---<br>
> > default:user::rwx<br>
> > default:group::rwx<br>
> > default:other::---<br>
> ><br>
> > The respect for ACLs is (correctly) shared
across bricks.<br>
> > [root@gfs02a test]# getfacl b<br>
> > # file: b<br>
> > # owner: root<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::rwx<br>
> > other::---<br>
> > default:user::rwx<br>
> > default:group::rwx<br>
> > default:other::---<br>
> ><br>
> > [root@gfs02a test]# mkdir c; getfacl c<br>
> > # file: c<br>
> > # owner: root<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::rwx<br>
> > other::---<br>
> > default:user::rwx<br>
> > default:group::rwx<br>
> > default:other::---<br>
> ><br>
> > However, when folders are created client-side,
the default ACLs appear on<br>
> > the server, but don't seem to be correctly
applied.<br>
> > [root@client test]# mkdir d; getfacl d<br>
> > # file: d<br>
> > # owner: root<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::r-x<br>
> > other::---<br>
> ><br>
> > [root@gfs01a test]# getfacl d<br>
> > # file: d<br>
> > # owner: root<br>
> > # group: users<br>
> > # flags: -s-<br>
> > user::rwx<br>
> > group::r-x<br>
> > other::---<br>
> > default:user::rwx<br>
> > default:group::rwx<br>
> > default:other::---<br>
> ><br>
> > As no groups or users were specified, I
shouldn't need to specify a mask<br>
> > for the ACL and, indeed, specifying a mask
doesn't help.<br>
> ><br>
> > If it helps diagnose the problem, the volume
options are as follows:<br>
> > Options Reconfigured:<br>
> > performance.io-thread-count: 32<br>
> > performance.cache-size: 128MB<br>
> > performance.write-behind-window-size: 128MB<br>
> > server.allow-insecure: on<br>
> > network.ping-timeout: 10<br>
> > storage.owner-gid: 100<br>
> > geo-replication.indexing: off<br>
> > geo-replication.ignore-pid-check: on<br>
> > changelog.changelog: on<br>
> > changelog.fsync-interval: 3<br>
> > changelog.rollover-time: 15<br>
> > server.manage-gids: on<br>
> ><br>
> > This approach to server-side ACLs worked
properly with previous versions of<br>
> > gluster. Can anyone assess the situation for
me, confirm/deny that<br>
> > something changed, and possibly suggest how I
can achieve inherited groups<br>
> > with write permission for new subdirectories
in a >32-group environment?<br>
> ><br>
> > Thanks for your time,<br>
> ><br>
> > Patrick<br>
><br>
> >
_______________________________________________<br>
> > Gluster-devel mailing list<br>
> > <a moz-do-not-send="true"
href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> > <a moz-do-not-send="true"
href="http://www.gluster.org/mailman/listinfo/gluster-devel"
rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
><br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
<br>
</blockquote>
<br>
</body>
</html>