<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi Atin/Kaushal,<br>
I am interested in taking up the "selective read-only mode" feature
(Bug #829042).<br>
I will look into it and follow up with you.<br>
<br>
Thanks,<br>
Saravana<br>
<br>
<div class="moz-cite-prefix">On 08/13/2015 08:58 PM, Atin Mukherjee
wrote:<br>
</div>
<blockquote
cite="mid:CAGkR8FPvVcCvCUn5Sb+mdwJ0jQBeOcZfUsWrAvY3EjnGEAc2RQ@mail.gmail.com"
type="cite">
<p dir="ltr">Can we have some volunteers of these BZs?</p>
<p dir="ltr">-Atin<br>
Sent from one plus one</p>
<div class="gmail_quote">On Aug 12, 2015 12:34 PM, "Kaushal M"
<<a moz-do-not-send="true" href="mailto:kshlmster@gmail.com">kshlmster@gmail.com</a>>
wrote:<br type="attribution">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Csaba,<br>
<br>
These are the updates regarding the requirements, after our
meeting<br>
last week. The specific updates on the requirements are
inline.<br>
<br>
        In general, we feel that the requirements for selective read-only<br>
        mode and immediate disconnection of clients on access revocation<br>
        are doable for GlusterFS-3.8. The only problem right now is that<br>
        we do not have any volunteers for them.<br>
<br>
> 1. Bug 829042 - [FEAT] selective read-only mode<br>
> <a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=829042"
rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=829042</a><br>
><br>
> absolutely necessary for not getting tarred &
feathered in Tokyo ;)<br>
> either resurrect <a moz-do-not-send="true"
href="http://review.gluster.org/3526" rel="noreferrer"
target="_blank">http://review.gluster.org/3526</a><br>
> and _find out integration with auth mechanism for
special<br>
> mounts_, or come up with a completely different
concept<br>
><br>
<br>
        With the availability of client_t, implementing this should become<br>
        easier. The server xlator would store the incoming connection's<br>
        common name or address in the client_t associated with the<br>
        connection. The read-only xlator could then use this information<br>
        to selectively grant clients read-only access. It would need a<br>
        new option for selective read-only mode, populated with the<br>
        common names and addresses of the clients that should get<br>
        read-only access.<br>
<br>
> 2. Bug 1245380 - [RFE] Render all mounts of a volume
defunct upon access revocation<br>
> <a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=1245380"
rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1245380</a><br>
><br>
> necessary to let us enable a watershed scalability<br>
> enhancement<br>
><br>
<br>
        Currently, when the auth.allow/reject and auth.ssl-allow options<br>
        are changed, the server xlator does a reconfigure to reload its<br>
        access list. This only reloads the list and doesn't affect any<br>
        existing connections. To support this feature, the server xlator<br>
        would need to iterate through its xprt_list on a reconfigure,<br>
        re-check every connection for authorization, and disconnect those<br>
        connections that have lost authorization.<br>
<br>
> 3. Bug 1226776 – [RFE] volume capability query<br>
> <a moz-do-not-send="true"
href="https://bugzilla.redhat.com/show_bug.cgi?id=1226776"
rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1226776</a><br>
><br>
> eventually we'll be choking in spaghetti if we
don't get<br>
> this feature. The ugly version checks we need to do
against<br>
> GlusterFS as in<br>
><br>
> <a moz-do-not-send="true"
href="https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3"
rel="noreferrer" target="_blank">https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3</a><br>
><br>
> will proliferate and eat the guts of the code out
of its<br>
> living body if this is not addressed.<br>
><br>
<br>
        This requires some more thought to figure out the correct<br>
        solution. One possible way to get the capabilities of the cluster<br>
        would be to look at the cluster's running op-version. This can be<br>
        obtained using `gluster volume get all cluster.op-version` (the<br>
        volume get command is available in glusterfs-3.6 and above). But<br>
        this doesn't provide much improvement over the version checks<br>
        already being done in the driver.<br>
_______________________________________________<br>
Gluster-devel mailing list<br>
<a moz-do-not-send="true"
href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
<a moz-do-not-send="true"
href="http://www.gluster.org/mailman/listinfo/gluster-devel"
rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
</blockquote>
</div>
<br>
</blockquote>
<br>
</body>
</html>