<div dir="ltr"><div><div><div><div><div>Hi Gaurav,<br><br></div>Please find attached the logs from the boards for the remove-brick failure.<br></div>These logs do not include cmd_history.log or etc-glusterfs-glusterd.vol.log for the second board.<br></div><div><br></div>We may need some more time to collect those.<br><br></div><div><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 22, 2016 at 10:18 AM, Gaurav Garg <span dir="ltr"><<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Abhishek,<br>
<span class=""><br>
>> I'll provide the required log to you.<br>
<br>
</span>sure<br>
<br>
On both nodes, do "pkill glusterd" and then start the glusterd service.<br>
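After restarting, you can check whether the stale peer entry is gone. Here is a minimal sketch, assuming a POSIX shell with awk; the sample text below just replays the duplicate output from earlier in this thread, whereas in practice you would pipe the real "gluster peer status" output:<br>

```shell
# Flag any UUID that appears more than once in peer-status output
# (a duplicate indicates a stale entry under /var/lib/glusterd/peers).
# The sample text replays the duplicate entry seen in this thread;
# in practice, replace the printf with:  gluster peer status | awk ...
sample='Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)'

# Count occurrences of each Uuid line and report any seen more than once.
dupes=$(printf '%s\n' "$sample" |
    awk '/^Uuid:/ {count[$2]++} END {for (u in count) if (count[u] > 1) print "stale:", u}')
echo "$dupes"
```

If this prints anything after glusterd has been restarted on both nodes, the stale peer file is still present on disk.<br>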
<span class="im HOEnZb"><br>
Thanks,<br>
<br>
~Gaurav<br>
<br>
----- Original Message -----<br>
From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
</span><span class="im HOEnZb">To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
</span><div class="HOEnZb"><div class="h5">Sent: Monday, February 22, 2016 10:11:48 AM<br>
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
<br>
Hi Gaurav,<br>
<br>
Thanks for your prompt reply.<br>
<br>
I'll provide the required log to you.<br>
<br>
As a workaround you suggested restarting the glusterd service. Could you<br>
please tell me at which point I should do this?<br>
<br>
Regards,<br>
Abhishek<br>
<br>
On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
<br>
> Hi Abhishek,<br>
><br>
> Peer status output looks interesting: it has a stale entry, which<br>
> technically should not happen. A few things I need to ask:<br>
><br>
> Did you perform any manual operation on the GlusterFS configuration files<br>
> that reside in the /var/lib/glusterd/* folder?<br>
><br>
> Can you provide the output of "ls /var/lib/glusterd/peers" from both of<br>
> your nodes?<br>
><br>
> Could you provide the output of the #gluster peer status command when the<br>
> 2nd node is down?<br>
><br>
> Can you provide the output of the #gluster volume info command?<br>
><br>
> Can you provide the full logs of cmd_history.log and<br>
> etc-glusterfs-glusterd.vol.log from both nodes?<br>
><br>
><br>
> You can restart your glusterd for now as a workaround, but we need to<br>
> analyze this issue further.<br>
><br>
> Thanks,<br>
> Gaurav<br>
><br>
> ----- Original Message -----<br>
> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> Sent: Friday, February 19, 2016 5:27:21 PM<br>
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
><br>
> Hi Gaurav,<br>
><br>
> After the add-brick failure, the following is the output of the "gluster<br>
> peer status" command:<br>
><br>
> Number of Peers: 2<br>
><br>
> Hostname: 10.32.1.144<br>
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
> State: Peer in Cluster (Connected)<br>
><br>
> Hostname: 10.32.1.144<br>
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
> State: Peer in Cluster (Connected)<br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
> On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> ><br>
> wrote:<br>
><br>
> > Hi Gaurav,<br>
> ><br>
> > Both boards are connected through the backplane using Ethernet.<br>
> ><br>
> > This inconsistency also occurs when I am bringing the node back into the<br>
> > slot. Sometimes add-brick executes without failure, but sometimes the<br>
> > following error occurs.<br>
> ><br>
> > volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick<br>
> > force : FAILED : Another transaction is in progress for c_glusterfs.<br>
> > Please try again after sometime.<br>
> ><br>
> ><br>
> > You can also see the attached logs for add-brick failure scenario.<br>
> ><br>
> > Please let me know if you need more logs.<br>
> ><br>
> > Regards,<br>
> > Abhishek<br>
> ><br>
> ><br>
> > On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
> ><br>
> >> Hi Abhishek,<br>
> >><br>
> >> How are you connecting the two boards, and how are you removing one<br>
> >> manually? I need to know this because if you are removing your 2nd board<br>
> >> from the cluster (abrupt shutdown), then you shouldn't be able to perform<br>
> >> a remove-brick operation for the 2nd node from the first node, yet it is<br>
> >> happening successfully in your case. Could you verify your network<br>
> >> connection once again while removing and bringing back your node?<br>
> >><br>
> >> Thanks,<br>
> >> Gaurav<br>
> >><br>
> >> ------------------------------<br>
> >> *From: *"ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> >> *To: *"Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> >> *Cc: *<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> >> *Sent: *Friday, February 19, 2016 3:36:21 PM<br>
> >><br>
> >> *Subject: *Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> >><br>
> >> Hi Gaurav,<br>
> >><br>
> >> Thanks for reply<br>
> >><br>
> >> 1. Here, I removed the board manually, but this time it worked fine:<br>
> >><br>
> >> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1<br>
> >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> >> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
> >><br>
> >> Yes, this time the board is reachable, but how? I don't know, because<br>
> >> the board is detached.<br>
> >><br>
> >> 2. Here, I attached the board; this time add-brick works fine:<br>
> >><br>
> >> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
> >> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2<br>
> >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> >><br>
> >> 3. Here, I removed the board again; this time a failure occurred:<br>
> >><br>
> >> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1<br>
> >> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick<br>
> >> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> >><br>
> >> But here the board is not reachable.<br>
> >><br>
> >> Why is there this inconsistency when performing the same steps multiple times?<br>
> >><br>
> >> Hope you are getting my point.<br>
> >><br>
> >> Regards,<br>
> >> Abhishek<br>
> >><br>
> >> On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
> >><br>
> >>> Abhishek,<br>
> >>><br>
> >>> When it sometimes works fine, that means the 2nd board's network<br>
> >>> connection is reachable from the first node. You can confirm this by<br>
> >>> executing the same #gluster peer status command.<br>
> >>><br>
> >>> Thanks,<br>
> >>> Gaurav<br>
> >>><br>
> >>> ----- Original Message -----<br>
> >>> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> >>> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> >>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> >>> Sent: Friday, February 19, 2016 3:12:22 PM<br>
> >>> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> >>><br>
> >>> Hi Gaurav,<br>
> >>><br>
> >>> Yes, you are right; actually I am forcefully detaching the node from<br>
> >>> the slave, and when we removed the board it disconnected from the other<br>
> >>> board.<br>
> >>><br>
> >>> But my question is: I am doing this process multiple times; sometimes<br>
> >>> it works fine, but sometimes it gives these errors.<br>
> >>><br>
> >>><br>
> >>> You can see the following logs from the cmd_history.log file:<br>
> >>><br>
> >>> [2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable on :<br>
> >>> SUCCESS<br>
> >>> [2016-02-18 10:03:34.915036] : volume start c_glusterfs force : SUCCESS<br>
> >>> [2016-02-18 10:03:40.250326] : volume status : SUCCESS<br>
> >>> [2016-02-18 10:03:40.273275] : volume status : SUCCESS<br>
> >>> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1<br>
> >>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> >>> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
> >>> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
> >>> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2<br>
> >>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> >>> [2016-02-18 10:30:53.297415] : volume status : SUCCESS<br>
> >>> [2016-02-18 10:30:53.313096] : volume status : SUCCESS<br>
> >>> [2016-02-18 10:37:02.748714] : volume status : SUCCESS<br>
> >>> [2016-02-18 10:37:02.762091] : volume status : SUCCESS<br>
> >>> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1<br>
> >>> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick<br>
> >>> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> >>><br>
> >>><br>
> >>> On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
> >>><br>
> >>> > Hi Abhishek,<br>
> >>> ><br>
> >>> > It seems your peer 10.32.1.144 disconnected while doing remove-brick;<br>
> >>> > see the logs below from glusterd:<br>
> >>> ><br>
> >>> > [2016-02-18 10:37:02.816009] E [MSGID: 106256]<br>
> >>> > [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management:<br>
> >>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> >>> > [Invalid argument]<br>
> >>> > [2016-02-18 10:37:02.816061] E [MSGID: 106265]<br>
> >>> > [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management:<br>
> >>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> >>> > The message "I [MSGID: 106004]<br>
> >>> > [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer<br>
> >>> > <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in<br>
> >>> > Cluster>, has disconnected from glusterd." repeated 25 times between<br>
> >>> > [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]<br>
> >>> ><br>
> >>> ><br>
> >>> ><br>
> >>> > If you are facing the same issue now, could you paste your #gluster<br>
> >>> > peer status command output here?<br>
> >>> ><br>
> >>> > Thanks,<br>
> >>> > ~Gaurav<br>
> >>> ><br>
> >>> > ----- Original Message -----<br>
> >>> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> >>> > To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> >>> > Sent: Friday, February 19, 2016 2:46:35 PM<br>
> >>> > Subject: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> >>> ><br>
> >>> > Hi,<br>
> >>> ><br>
> >>> ><br>
> >>> > I am working on a two-board setup, with the boards connected to each<br>
> >>> > other. Gluster version 3.7.6 is running with two bricks added in<br>
> >>> > replica 2 mode, but when I manually removed (detached) one board from<br>
> >>> > the setup I got the following error.<br>
> >>> ><br>
> >>> > volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick<br>
> >>> > force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for<br>
> >>> > volume c_glusterfs<br>
> >>> ><br>
> >>> > Please find the logs file as an attachment.<br>
> >>> ><br>
> >>> ><br>
> >>> > Regards,<br>
> >>> > Abhishek<br>
> >>> ><br>
> >>> ><br>
> >>> > _______________________________________________<br>
> >>> > Gluster-users mailing list<br>
> >>> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> >>> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> >>> ><br>
> >>><br>
> >>><br>
> >>><br>
> >>> --<br>
> >>><br>
> >>><br>
> >>><br>
> >>><br>
> >>> Regards<br>
> >>> Abhishek Paliwal<br>
> >>><br>
> >><br>
> >><br>
> >><br>
> >> --<br>
> >><br>
> >><br>
> >><br>
> >><br>
> >> Regards<br>
> >> Abhishek Paliwal<br>
> >><br>
> >><br>
> ><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>