<div dir="ltr">I'd tried that some time back but ran into some merge conflicts and wasn't sure whom to turn to :) May I come to you for help with that?!<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 17, 2016 at 3:29 PM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 06/17/2016 03:21 PM, B.K.Raghuram wrote:<br>
> Thanks a ton Atin. That fixed the cherry-pick. Will build it and let you<br>
> know how it goes. Does it make sense to try and merge the whole upstream<br>
> glusterfs repo for the 3.6 branch in order to get all the other bug<br>
> fixes? That may bring in many more merge conflicts though..<br>
<br>
</span>Yup, I'd not recommend that. Applying your local changes on the latest<br>
version is a much easier option :)<br>
<span class="im HOEnZb"><br>
><br>
> On Fri, Jun 17, 2016 at 3:07 PM, Atin Mukherjee<br>
</span><span class="im HOEnZb">> <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br>
><br>
> I've resolved the merge conflicts and the files are attached. Copy these<br>
> files and follow the instructions from the cherry-pick command which<br>
> failed.<br>
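For anyone following along, the "copy the fixed files, then follow the cherry-pick instructions" flow can be sketched in a scratch repo; the file name, branch names, and commit messages below are made up purely for illustration:

```shell
# Scratch-repo sketch of finishing a cherry-pick after resolving
# conflicts by copying in fixed files (all names here are illustrative).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q . && git config user.email you@example.com && git config user.name you
echo base > file.c && git add file.c && git commit -qm 'base'
main=$(git symbolic-ref --short HEAD)
git checkout -qb backport && echo backported-fix > file.c && git commit -qam 'fix'
git checkout -q "$main" && echo local-change > file.c && git commit -qam 'local'
git cherry-pick backport || true    # conflicts, as in the thread
echo resolved > file.c              # "copy these files" over the conflicted ones
git add file.c                      # mark the conflicts as resolved, then:
git -c core.editor=true cherry-pick --continue
git log --oneline                   # the backported commit is now on top
```

The last two git commands are exactly what the failed cherry-pick's own error message instructs you to run once the files are fixed.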
><br>
> ~Atin<br>
><br>
> On 06/17/2016 02:55 PM, B.K.Raghuram wrote:<br>
> ><br>
> > Thanks Atin, I had three merge conflicts in the third patch.. I've<br>
> > attached the files with the conflicts. Would any of the intervening<br>
> > commits be needed as well?<br>
> ><br>
> > The conflicts were in :<br>
> ><br>
> > both modified: libglusterfs/src/mem-types.h<br>
> > both modified: xlators/mgmt/glusterd/src/glusterd-utils.c<br>
> > both modified: xlators/mgmt/glusterd/src/glusterd-utils.h<br>
> ><br>
> ><br>
> > > On Fri, Jun 17, 2016 at 2:17 PM, Atin Mukherjee<br>
</span><span class="im HOEnZb">> > > <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br>
> ><br>
> ><br>
> ><br>
> > On 06/17/2016 12:44 PM, B.K.Raghuram wrote:<br>
> > > Thanks Atin.. I'm not familiar with pulling patches from the review<br>
> > > system but will try :)<br>
> ><br>
> > It's not that difficult. Open the gerrit review link, go to the download<br>
> > drop-down at the top right corner, click on it, and you will see a<br>
> > cherry-pick option; copy that command and run it in the source code repo<br>
> > you host. If there are no merge conflicts, it should apply automatically;<br>
> > otherwise you'd need to fix them manually.<br>
> ><br>
> > HTH.<br>
> > Atin<br>
> ><br>
> > ><br>
> > > On Fri, Jun 17, 2016 at 12:35 PM, Atin Mukherjee<br>
> > > <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
</span><div class="HOEnZb"><div class="h5">> > > wrote:<br>
> > ><br>
> > ><br>
> > ><br>
> > > On 06/16/2016 06:17 PM, Atin Mukherjee wrote:<br>
> > > ><br>
> > > ><br>
> > > > On 06/16/2016 01:32 PM, B.K.Raghuram wrote:<br>
> > > >> Thanks a lot Atin,<br>
> > > >><br>
> > > >> The problem is that we are using a forked version of 3.6.1 which has<br>
> > > >> been modified to work with ZFS (for snapshots) but we do not have the<br>
> > > >> resources to port that over to the later versions of gluster.<br>
> > > >><br>
> > > >> Would you know of anyone who would be willing to take this on?!<br>
> > > ><br>
> > > > If you can cherry-pick the patches, apply them on your source, and<br>
> > > > rebuild it, I can point you to the patches, but you'd need to give me<br>
> > > > a day's time as I have some other items on my plate to finish.<br>
> > ><br>
> > ><br>
> > > Here is the list of patches that need to be applied in the following<br>
> > > order:<br>
> > ><br>
> > > <a href="http://review.gluster.org/9328" rel="noreferrer" target="_blank">http://review.gluster.org/9328</a><br>
> > > <a href="http://review.gluster.org/9393" rel="noreferrer" target="_blank">http://review.gluster.org/9393</a><br>
> > > <a href="http://review.gluster.org/10023" rel="noreferrer" target="_blank">http://review.gluster.org/10023</a><br>
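The "Cherry Pick" entry in Gerrit's download box expands, for each change, to a fetch of the change ref followed by a cherry-pick. A sketch of what those commands look like for the three reviews above; the `refs/changes/<last two digits>/<change>/<patch-set>` layout is standard Gerrit, but the patch-set number (the trailing `/1`) is a placeholder here, so take the exact line from each review's download box:

```shell
# Print the fetch + cherry-pick pair for each of the three changes, in the
# order listed above. The trailing /1 patch-set number is a placeholder --
# copy the real command from each review page's download box.
for change in 9328 9393 10023; do
  shard=$(printf '%02d' $((change % 100)))
  echo "git fetch http://review.gluster.org/glusterfs" \
       "refs/changes/${shard}/${change}/1 && git cherry-pick FETCH_HEAD"
done
```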
> > ><br>
> > > ><br>
> > > > ~Atin<br>
> > > >><br>
> > > >> Regards,<br>
> > > >> -Ram<br>
> > > >><br>
> > > >> On Thu, Jun 16, 2016 at 11:02 AM, Atin Mukherjee<br>
> > > >> <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>><br>
</div></div><div class="HOEnZb"><div class="h5">> > > >> wrote:<br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >> On 06/16/2016 10:49 AM, B.K.Raghuram wrote:<br>
> > > >> ><br>
> > > >> ><br>
> > > >> > On Wed, Jun 15, 2016 at 5:01 PM, Atin Mukherjee<br>
> > > >> > <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br>
> > > >> ><br>
> > > >> ><br>
> > > >> ><br>
> > > >> > On 06/15/2016 04:24 PM, B.K.Raghuram wrote:<br>
> > > >> > > Hi,<br>
> > > >> > ><br>
> > > >> > > We're using gluster 3.6.1 and we periodically find that gluster<br>
> > > >> > > commands fail, saying that they could not get the lock on one of<br>
> > > >> > > the brick machines. The logs on that machine then say something<br>
> > > >> > > like:<br>
> > > >> > ><br>
> > > >> > > [2016-06-15 08:17:03.076119] E<br>
> > > >> > > [glusterd-op-sm.c:3058:glusterd_op_ac_lock] 0-management:<br>
> > > >> > > Unable to acquire lock for vol2<br>
> > > >> ><br>
> > > >> > This is a possible case if concurrent volume operations are run.<br>
> > > >> > Do you have any script which checks volume status at an interval<br>
> > > >> > from all the nodes? If so, this is expected behavior.<br>
> > > >> ><br>
> > > >> ><br>
> > > >> > Yes, I do have a couple of scripts that check on volume and quota<br>
> > > >> > status. Given this, I do get an "Another transaction is in<br>
> > > >> > progress.." message, which is ok. The problem is that sometimes I<br>
> > > >> > get the volume-lock-held message, which never goes away. This<br>
> > > >> > sometimes results in glusterd consuming a lot of memory and CPU,<br>
> > > >> > and the problem can only be fixed with a reboot. The log files are<br>
> > > >> > huge, so I'm not sure if it's ok to attach them to an email.<br>
> > > >><br>
> > > >> Ok, so this is known. We have fixed lots of stale-lock issues in<br>
> > > >> the 3.7 branch, and some of them, if not all, were also backported<br>
> > > >> to the 3.6 branch. The issue is that you are using 3.6.1, which is<br>
> > > >> quite old. If you can upgrade to the latest version of 3.7, or at<br>
> > > >> worst of 3.6, I am confident that this will go away.<br>
> > > >> ~Atin<br>
> > > >> ><br>
> > > >> > ><br>
> > > >> > > After some time, glusterd then seems to give up and die..<br>
> > > >> ><br>
> > > >> > Do you mean glusterd shuts down or segfaults? If so, I am more<br>
> > > >> > interested in analyzing this part. Could you provide us the<br>
> > > >> > glusterd log and cmd_history log file, along with the core (in<br>
> > > >> > case of SEGV), from all the nodes for further analysis?<br>
> > > >> ><br>
> > > >> ><br>
> > > >> > There is no segfault; glusterd just shuts down. As I said above,<br>
> > > >> > sometimes this happens, and sometimes it just continues to hog a<br>
> > > >> > lot of memory and CPU..<br>
> > > >> ><br>
> > > >> ><br>
> > > >> > ><br>
> > > >> > > Interestingly, I also find the following line in the beginning<br>
> > > >> > > of etc-glusterfs-glusterd.vol.log, and I don't know if this has<br>
> > > >> > > any significance to the issue:<br>
> > > >> > ><br>
> > > >> > > [2016-06-14 06:48:57.282290] I<br>
> > > >> > > [glusterd-store.c:2063:glusterd_restore_op_version] 0-management:<br>
> > > >> > > Detected new install. Setting op-version to maximum : 30600<br>
> > > >> > ><br>
> > > >> ><br>
> > > >> ><br>
> > > >> > What does this line signify?<br>
> > > >><br>
> > > >><br>
> > ><br>
> > ><br>
> ><br>
> ><br>
><br>
><br>
</div></div></blockquote></div><br></div>