Ideally you want the clients to coordinate among themselves. Note that, in theory, this feature cannot be implemented in a foolproof way on a system that also supports NFSv3, since NFSv3 clients hold no open state on the server.<br><br><div class="gmail_quote">On Thu Jan 08 2015 at 8:57:48 AM Harmeet Kalsi &lt;<a href="mailto:kharmeet@hotmail.com">kharmeet@hotmail.com</a>&gt; wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><div dir="ltr">Hi Anand, that was spot on. Any idea if there will be development on this side in the near future? Multiple clients writing to the same file can cause issues. <br><br>Regards<br><br><div><hr>From: <a href="mailto:avati@gluster.org" target="_blank">avati@gluster.org</a><br>Date: Thu, 8 Jan 2015 16:07:50 +0000</div></div></div><div><div dir="ltr"><div><br>Subject: Re: [Gluster-devel] mandatory lock<br></div></div></div><div><div dir="ltr"><div>To: <a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>; <a href="mailto:kharmeet@hotmail.com" target="_blank">kharmeet@hotmail.com</a><br>CC: <a href="mailto:gluster-devel@gluster.org" target="_blank">gluster-devel@gluster.org</a></div></div></div><div><div dir="ltr"><div><br><br>Note that the mandatory locks available in the locks translator are just the mandatory extensions to POSIX locks - at least one of the apps must be using locks to begin with. What Harmeet is asking for is something different - automatic exclusive access to edit files. I.e., if one app has opened a file for editing, other apps which attempt an open must either fail (EBUSY) or block until the first app closes it. We would need to treat open(O_RDONLY) as a read-lock request and open(O_RDWR|O_WRONLY) as a write-lock request (essentially an auto-applied oplock). This is something Gluster does not yet have.<br><div><br></div><div>Thanks</div><br><div>On Thu Jan 08 2015 at 2:49:29 AM Raghavendra Gowdappa &lt;<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>&gt; wrote:<br><blockquote style="border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
----- Original Message -----<br>
> From: "Raghavendra Gowdappa" <<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>><br>
> To: "Harmeet Kalsi" <<a href="mailto:kharmeet@hotmail.com" target="_blank">kharmeet@hotmail.com</a>><br>
> Cc: "<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a>" <<a href="mailto:gluster-devel@gluster.org" target="_blank">gluster-devel@gluster.org</a>><br>
> Sent: Thursday, January 8, 2015 4:12:44 PM<br>
> Subject: Re: [Gluster-devel] mandatory lock<br>
><br>
><br>
><br>
> ----- Original Message -----<br>
> > From: "Harmeet Kalsi" <<a href="mailto:kharmeet@hotmail.com" target="_blank">kharmeet@hotmail.com</a>><br>
> > To: "<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a>" <<a href="mailto:gluster-devel@gluster.org" target="_blank">gluster-devel@gluster.org</a>><br>
> > Sent: Wednesday, January 7, 2015 5:55:43 PM<br>
> > Subject: [Gluster-devel] mandatory lock<br>
> ><br>
> > Dear All,<br>
> > Would it be possible for someone to guide me in the right direction to<br>
> > enable mandatory locking on a volume, please?<br>
> > At the moment two clients can edit the same file at the same time, which is<br>
> > causing issues.<br>
><br>
> I see code related to mandatory locking in the posix-locks xlator (pl_writev,<br>
> pl_truncate etc.). To enable it you have to set "option mandatory-locks yes" in<br>
> the posix-locks xlator loaded on the bricks<br>
> (/var/lib/glusterd/vols/&lt;volname&gt;/*.vol). There is no way to set this option<br>
> through the gluster CLI. Also, I am not sure to what extent this feature has<br>
> been tested or used so far. You can try it out and please let us know whether<br>
> it worked for you :).<br>
<br>
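As a rough sketch, the hand edit to the brick volfile described above might look like the following; the volume and subvolume names (myvol-locks, myvol-access-control) are illustrative placeholders, not taken from a real deployment, and the option carries the "untested" caveat mentioned above:

```
volume myvol-locks
    type features/locks
    option mandatory-locks yes    # the option discussed above
    subvolumes myvol-access-control
end-volume
```

Each brick loads its own copy of the volfile, so an edit like this would need to be applied on every brick and the brick processes restarted for it to take effect.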
If mandatory locking doesn't work for you, can you modify your application to use advisory locking instead? Advisory locking is well tested and has been in use for a long time.<br>
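A minimal sketch of the advisory-locking approach suggested here, using POSIX fcntl() record locks (the same locks the mandatory-locks option builds on). The function names are illustrative; the key point is that advisory locks only help if every cooperating client follows the same lock-before-edit protocol:

```c
/* Advisory whole-file locking with fcntl() record locks.
   Sketch only: every client must use the same locking protocol,
   because advisory locks do not block non-cooperating writers. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Block until an exclusive (write) lock on the whole file is granted.
   Returns 0 on success, -1 on error. */
int lock_file(int fd)
{
    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type   = F_WRLCK;   /* exclusive; readers would use F_RDLCK */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;         /* 0 means "to end of file" */
    return fcntl(fd, F_SETLKW, &fl);  /* F_SETLKW waits; F_SETLK fails fast */
}

/* Release the lock taken by lock_file(). */
int unlock_file(int fd)
{
    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type   = F_UNLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;
    return fcntl(fd, F_SETLK, &fl);
}
```

A client would open the file, call lock_file(fd), edit, then unlock_file(fd) and close; a second client calling lock_file() on the same file blocks until the first releases.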
<br>
><br>
> > Many thanks in advance<br>
> > Kind Regards<br>
> ><br>
> > _______________________________________________<br>
> > Gluster-devel mailing list<br>
> > <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
> > <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
> ><br>
</blockquote></div></div></div></div></blockquote></div>