<div dir="ltr"><div><br>Thanks Atin, I had three merge conflicts in the third patch.. I've attached the files with the conflicts. Would any of the intervening commits be needed as well?<br><br></div>The conflicts were in:<br><br> both modified: libglusterfs/src/mem-types.h<br> both modified: xlators/mgmt/glusterd/src/glusterd-utils.c<br> both modified: xlators/mgmt/glusterd/src/glusterd-utils.h<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 17, 2016 at 2:17 PM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 06/17/2016 12:44 PM, B.K.Raghuram wrote:<br>
> Thanks Atin.. I'm not familiar with pulling patches from the review system<br>
> but will try :)<br>
<br>
</span>It's not that difficult. Open the Gerrit review link, go to the download<br>
drop-down at the top right corner, click on it, and you will see a<br>
cherry-pick option; copy that command and run it in the source code repo<br>
you host. If there are no merge conflicts, it should apply cleanly;<br>
otherwise you'd need to fix them manually.<br>
<br>
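For reference, the flow on the command line might look something like the sketch below. This is only an illustration: the patch-set number (`4`), the fetch URL, and the conflicted file shown are assumptions, so copy the exact command from the review page's download drop-down rather than from this example. (Gerrit change refs follow the pattern refs/changes/&lt;last two digits&gt;/&lt;change number&gt;/&lt;patch set&gt;.)<br>

```shell
# Fetch one change from Gerrit and cherry-pick it; for change 10023 the
# ref is refs/changes/23/10023/<patchset> -- patch set 4 is a guess here.
git fetch http://review.gluster.org/glusterfs refs/changes/23/10023/4 \
    && git cherry-pick FETCH_HEAD

# On a conflict the pick stops; the "both modified" files show up here:
git status

# Edit each conflicted file, remove the <<<<<<< / ======= / >>>>>>> markers,
# then stage it and resume the pick:
git add libglusterfs/src/mem-types.h
git cherry-pick --continue
```
<br>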
HTH.<br>
<span class="HOEnZb"><font color="#888888">Atin<br>
</font></span><span class="im HOEnZb"><br>
><br>
> On Fri, Jun 17, 2016 at 12:35 PM, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a><br>
</span><span class="im HOEnZb">> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>>> wrote:<br>
><br>
><br>
><br>
> On 06/16/2016 06:17 PM, Atin Mukherjee wrote:<br>
> ><br>
> ><br>
> > On 06/16/2016 01:32 PM, B.K.Raghuram wrote:<br>
> >> Thanks a lot Atin,<br>
> >><br>
> >> The problem is that we are using a forked version of 3.6.1 which has<br>
> >> been modified to work with ZFS (for snapshots) but we do not have the<br>
> >> resources to port that over to the later versions of gluster.<br>
> >><br>
> >> Would you know of anyone who would be willing to take this on?!<br>
> ><br>
> > If you can cherry-pick the patches and apply them on your source and<br>
> > rebuild it, I can point you to the patches, but you'd need to give me a<br>
> > day's time as I have some other items on my plate to finish.<br>
><br>
><br>
> Here is the list of patches that need to be applied in the following<br>
> order:<br>
><br>
> <a href="http://review.gluster.org/9328" rel="noreferrer" target="_blank">http://review.gluster.org/9328</a><br>
> <a href="http://review.gluster.org/9393" rel="noreferrer" target="_blank">http://review.gluster.org/9393</a><br>
> <a href="http://review.gluster.org/10023" rel="noreferrer" target="_blank">http://review.gluster.org/10023</a><br>
><br>
> ><br>
> > ~Atin<br>
> >><br>
> >> Regards,<br>
> >> -Ram<br>
> >><br>
> >> On Thu, Jun 16, 2016 at 11:02 AM, Atin Mukherjee<br>
> <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
</span><div class="HOEnZb"><div class="h5">> >> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>>>> wrote:<br>
> >><br>
> >><br>
> >><br>
> >> On 06/16/2016 10:49 AM, B.K.Raghuram wrote:<br>
> >> ><br>
> >> ><br>
> >> > On Wed, Jun 15, 2016 at 5:01 PM, Atin Mukherjee<br>
> <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>>><br>
> >> > <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>>>>> wrote:<br>
> >> ><br>
> >> ><br>
> >> ><br>
> >> > On 06/15/2016 04:24 PM, B.K.Raghuram wrote:<br>
> >> > > Hi,<br>
> >> > ><br>
> >> > > We're using gluster 3.6.1 and we periodically find<br>
> that gluster commands<br>
> >> > > fail, saying that it could not get the lock on one of<br>
> the brick machines.<br>
> >> > > The logs on that machine then say something like :<br>
> >> > ><br>
> >> > > [2016-06-15 08:17:03.076119] E<br>
> >> > > [glusterd-op-sm.c:3058:glusterd_op_ac_lock]<br>
> 0-management: Unable to<br>
> >> > > acquire lock for vol2<br>
> >> ><br>
> >> > This is a possible case if concurrent volume operations<br>
> are run. Do you<br>
> >> > have any script which checks for volume status on an<br>
> interval from all<br>
> the nodes? If so, this is expected behavior.<br>
> >> ><br>
> >> ><br>
> >> > Yes, I do have a couple of scripts that check on volume and<br>
> quota<br>
> >> > status.. Given this, I do get a "Another transaction is in<br>
> progress.."<br>
> >> > message which is ok. The problem is that sometimes I get<br>
> the volume lock<br>
> >> > held message which never goes away. This sometimes results<br>
> in glusterd<br>
> >> > consuming a lot of memory and CPU and the problem can only<br>
> be fixed with<br>
> >> > a reboot. The log files are huge so I'm not sure if it's ok<br>
> to attach<br>
> >> > them to an email.<br>
> >><br>
> >> Ok, so this is known. We have fixed lots of stale lock issues<br>
> in the 3.7<br>
> >> branch, and some of them, if not all, were also backported to<br>
> the 3.6 branch.<br>
> >> The issue is that you are using 3.6.1, which is quite old. If you<br>
> can upgrade<br>
> >> to the latest version of 3.7, or at worst of 3.6, I am confident<br>
> that this<br>
> >> will go away.<br>
> >><br>
> >> ~Atin<br>
> >> ><br>
> >> > ><br>
> >> > > After sometime, glusterd then seems to give up and die..<br>
> >> ><br>
> >> > Do you mean glusterd shuts down or segfaults? If so, I<br>
> am more<br>
> >> interested<br>
> >> > in analyzing this part. Could you provide us the<br>
> glusterd log and<br>
> >> > cmd_history log file, along with the core (in case of SEGV), from<br>
> >> all the<br>
> >> > nodes for further analysis?<br>
> >> ><br>
> >> ><br>
> >> > There is no segfault. glusterd just shuts down. As I said<br>
> above,<br>
> >> > sometimes this happens and sometimes it just continues to<br>
> hog a lot of<br>
> >> > memory and CPU..<br>
> >> ><br>
> >> ><br>
> >> > ><br>
> >> > > Interestingly, I also find the following line in the<br>
> >> beginning of<br>
> >> > > etc-glusterfs-glusterd.vol.log and I dont know if<br>
> this has any<br>
> >> > > significance to the issue :<br>
> >> > ><br>
> >> > > [2016-06-14 06:48:57.282290] I<br>
> >> > > [glusterd-store.c:2063:glusterd_restore_op_version]<br>
> >> 0-management:<br>
> >> > > Detected new install. Setting op-version to maximum :<br>
> 30600<br>
> >> > ><br>
> >> ><br>
> >> ><br>
> >> > What does this line signify?<br>
> >><br>
> >><br>
><br>
><br>
</div></div></blockquote></div><br></div>