<div dir="ltr"><div><div><div>Hi Gaurav,<br><br></div>Please find the vol.tar file attached.<br><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 23, 2016 at 2:37 PM, Gaurav Garg <span dir="ltr"><<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi abhishek,<br>
<span class=""><br>
>> But after analyzing the following logs from the 1st board, it seems that the<br>
process which updates the second brick in the output of "# gluster volume<br>
status c_glusterfs" takes some time to update this table, and remove-brick is<br>
executed before this table is updated; that is why it is<br>
failing.<br>
<br>
</span>It should not take that much time. If your peer probe is successful and you are able to<br>
see the 2nd board's peer entry in the #gluster peer status command, then it has updated all the<br>
volume information internally.<br>
<br>
Your gluster volume status is showing the 2nd board's entry:<br>
<span class=""><br>
Brick 10.32.0.48:/opt/lvmdir/c2/brick       49153     0          Y       2537<br>
Self-heal Daemon on localhost               N/A       N/A        Y       5577<br>
Self-heal Daemon on 10.32.1.144             N/A       N/A        Y       3850<br>
<br>
</span>but it's not showing the 2nd board's brick entry.<br>
<br>
<br>
Did you perform any manual operation on the configuration files which reside in /var/lib/glusterd/*?<br>
<br>
Could you attach/paste the /var/lib/glusterd/vols/c_glusterfs/trusted-*.tcp-fuse.vol file?<br>
<br>
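A minimal way to grab it, as a sketch (run on the 1st board; the path is built from the volume name above):<br>
<br>
# print the generated client volfile so it can be pasted into the reply<br>
cat /var/lib/glusterd/vols/c_glusterfs/trusted-*.tcp-fuse.vol<br>
<br>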
<br>
Thanks,<br>
<br>
Regards,<br>
<span class="">Gaurav<br>
<br>
----- Original Message -----<br>
From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
</span><span class="">Sent: Tuesday, February 23, 2016 1:33:30 PM<br>
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
<br>
Hi Gaurav,<br>
<br>
</span><span class="">To verify network connectivity I am doing a peer probe to 10.32.1.144, i.e. the<br>
2nd board. That works fine, which means connectivity is there.<br>
<br>
# gluster peer probe 10.32.1.144<br>
<br>
If the above command succeeds,<br>
<br>
I execute the remove-brick command, which is failing.<br>
<br>
So now it seems that the peer probe does not give the correct connectivity<br>
status for executing the remove-brick command.<br>
<br>
But after analyzing the following logs from the 1st board, it seems that the<br>
process which updates the second brick in the output of "# gluster volume<br>
status c_glusterfs" takes some time to update this table, and remove-brick is<br>
executed before this table is updated; that is why it is<br>
failing.<br>
<br>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
<br>
</span>*1st board:*<br>
<div><div class="h5"># gluster volume info<br>
status<br>
gluster volume status c_glusterfs<br>
Volume Name: c_glusterfs<br>
Type: Replicate<br>
Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
Options Reconfigured:<br>
nfs.disable: on<br>
network.ping-timeout: 4<br>
performance.readdir-ahead: on<br>
# gluster peer status<br>
Number of Peers: 1<br>
<br>
Hostname: 10.32.1.144<br>
Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
State: Peer in Cluster (Connected)<br>
# gluster volume status c_glusterfs<br>
Status of volume: c_glusterfs<br>
Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick 10.32.0.48:/opt/lvmdir/c2/brick       49153     0          Y       2537<br>
Self-heal Daemon on localhost               N/A       N/A        Y       5577<br>
Self-heal Daemon on 10.32.1.144             N/A       N/A        Y       3850<br>
<br>
Task Status of Volume c_glusterfs<br>
------------------------------------------------------------------------------<br>
<br>
There are no active volume tasks<br>
<br>
+++++++++++++++++++++++++++++++++++++++++++++++<br>
<br>
I'll try this with some delay, i.e. wait to run remove-brick until the # gluster<br>
volume status c_glusterfs command shows the second brick in the list.<br>
<br>
Maybe this approach will resolve the issue.<br>
<br>
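A minimal sketch of that wait (the 60-attempt / 2-second numbers below are my own assumption, not anything gluster requires):<br>
<br>
# wait (up to ~2 minutes) until the 2nd board's brick shows up in volume status<br>
for i in $(seq 1 60); do<br>
    if gluster volume status c_glusterfs | grep -q "10.32.1.144:/opt/lvmdir/c2/brick"; then<br>
        break<br>
    fi<br>
    sleep 2<br>
done<br>
# only then run the removal<br>
gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force<br>
<br>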
Please comment if you agree with my observation.<br>
<br>
Regards,<br>
Abhishek<br>
<br>
On Tue, Feb 23, 2016 at 1:10 PM, ABHISHEK PALIWAL <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
wrote:<br>
<br>
> Hi Gaurav,<br>
><br>
> In my case we are removing the brick in the offline state with the force<br>
> option, in the following way:<br>
><br>
><br>
><br>
</div></div>> *gluster volume remove-brick %s replica 1 %s:%s force --mode=script*<br>
<span class="">> but we are still getting the failure on remove-brick.<br>
><br>
> It seems that the brick which we are trying to remove is not present. Here are<br>
> the log snippets from both of the boards:<br>
><br>
><br>
</span>> *1st board:*<br>
<div><div class="h5">> # gluster volume info<br>
> status<br>
> gluster volume status c_glusterfs<br>
> Volume Name: c_glusterfs<br>
> Type: Replicate<br>
> Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99<br>
> Status: Started<br>
> Number of Bricks: 1 x 2 = 2<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
> Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
> Options Reconfigured:<br>
> nfs.disable: on<br>
> network.ping-timeout: 4<br>
> performance.readdir-ahead: on<br>
> # gluster peer status<br>
> Number of Peers: 1<br>
><br>
> Hostname: 10.32.1.144<br>
> Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
> State: Peer in Cluster (Connected)<br>
> # gluster volume status c_glusterfs<br>
> Status of volume: c_glusterfs<br>
> Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick 10.32.0.48:/opt/lvmdir/c2/brick       49153     0          Y       2537<br>
> Self-heal Daemon on localhost               N/A       N/A        Y       5577<br>
> Self-heal Daemon on 10.32.1.144             N/A       N/A        Y       3850<br>
><br>
> Task Status of Volume c_glusterfs<br>
> ------------------------------------------------------------------------------<br>
><br>
> There are no active volume tasks<br>
><br>
</div></div>> *2nd Board*:<br>
<div class="HOEnZb"><div class="h5">><br>
> # gluster volume info<br>
> status<br>
> gluster volume status c_glusterfs<br>
> gluster volume heal c_glusterfs info<br>
><br>
> Volume Name: c_glusterfs<br>
> Type: Replicate<br>
> Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99<br>
> Status: Started<br>
> Number of Bricks: 1 x 2 = 2<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
> Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
> Options Reconfigured:<br>
> performance.readdir-ahead: on<br>
> network.ping-timeout: 4<br>
> nfs.disable: on<br>
> # gluster peer status<br>
> Number of Peers: 1<br>
><br>
> Hostname: 10.32.0.48<br>
> Uuid: e7c4494e-aa04-4909-81c9-27a462f6f9e7<br>
> State: Peer in Cluster (Connected)<br>
> # gluster volume status c_glusterfs<br>
> Status of volume: c_glusterfs<br>
> Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick 10.32.0.48:/opt/lvmdir/c2/brick       49153     0          Y       2537<br>
> Self-heal Daemon on localhost               N/A       N/A        Y       3850<br>
> Self-heal Daemon on 10.32.0.48              N/A       N/A        Y       5577<br>
><br>
> Task Status of Volume c_glusterfs<br>
> ------------------------------------------------------------------------------<br>
><br>
> There are no active volume tasks<br>
><br>
> Do you know why these logs are not showing the brick info in the output of<br>
> gluster volume status?<br>
> Also, we are not able to collect the cmd_history.log file from<br>
> the 2nd board.<br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
><br>
> On Tue, Feb 23, 2016 at 12:02 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
><br>
>> Hi abhishek,<br>
>><br>
>> >> Can we perform a remove-brick operation on an offline brick? What is the<br>
>> meaning of an offline and online brick?<br>
>><br>
>> No, you can't perform a remove-brick operation on an offline brick. A brick<br>
>> being offline means the brick process is not running. You can see it by executing<br>
>> #gluster volume status: if a brick is offline, the respective brick will show<br>
>> an "N" entry in the Online column of the #gluster volume status output. Alternatively,<br>
>> you can check whether the glusterfsd process for that brick is running or<br>
>> not by executing #ps aux | grep glusterfsd; this command will list all<br>
>> the brick processes, and you can filter out which one is online and which<br>
>> one is not.<br>
>><br>
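>> A quick sketch of that check (the volume and brick path below are just the ones from this thread, used as an example):<br>
>><br>
>> # does volume status report the brick as online ("Y")?<br>
>> gluster volume status c_glusterfs | grep "Brick 10.32.0.48:/opt/lvmdir/c2/brick"<br>
>> # is a glusterfsd process serving that brick path?<br>
>> ps aux | grep "[g]lusterfsd" | grep "/opt/lvmdir/c2/brick"<br>
>><br>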
>> But if you want to perform a remove-brick operation on the offline brick,<br>
>> then you need to execute it with the force option: #gluster volume remove-brick<br>
>> <volname> hostname:/brick_name force. This might lead to data loss.<br>
>><br>
>><br>
>><br>
>> >> Also, is there any logic in gluster through which we can check whether the<br>
>> connectivity of a node is established or not before performing any<br>
>> operation on a brick?<br>
>><br>
>> Yes, you can check it by executing the #gluster peer status command.<br>
>><br>
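>> For example (a sketch; the hostname is just an example, and the pipeline exits non-zero if that peer is not connected):<br>
>><br>
>> gluster peer status | grep -A2 "Hostname: 10.32.1.144" | grep -q "(Connected)"<br>
>><br>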
>><br>
>> Thanks,<br>
>><br>
>> ~Gaurav<br>
>><br>
>><br>
>> ----- Original Message -----<br>
>> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> Sent: Tuesday, February 23, 2016 11:50:43 AM<br>
>> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
>><br>
>> Hi Gaurav,<br>
>><br>
>> one general question related to gluster bricks.<br>
>><br>
>> Can we perform a remove-brick operation on an offline brick? What is the<br>
>> meaning of an offline and online brick?<br>
>> Also, is there any logic in gluster through which we can check whether the<br>
>> connectivity of a node is established or not before performing any<br>
>> operation on a brick?<br>
>><br>
>> Regards,<br>
>> Abhishek<br>
>><br>
>> On Mon, Feb 22, 2016 at 2:42 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
>><br>
>> > Hi abhishek,<br>
>> ><br>
>> > I went through your logs from node 1, and looking at the glusterd logs it<br>
>> > clearly indicates that your 2nd node (10.32.1.144) has disconnected from<br>
>> > the cluster; because of that the remove-brick operation failed. I think you<br>
>> > need to check your network interface.<br>
>> ><br>
>> > But the surprising thing is that I did not see a duplicate peer entry in<br>
>> > the #gluster peer status command output.<br>
>> ><br>
>> > Maybe I will get some more information from your (10.32.1.144) 2nd node<br>
>> > logs. Could you also attach your 2nd node's logs?<br>
>> ><br>
>> > After restarting glusterd, are you seeing a duplicate peer entry in the<br>
>> > #gluster peer status command output?<br>
>> ><br>
>> > I will wait for the 2nd node's logs to further analyze the duplicate peer<br>
>> > entry problem.<br>
>> ><br>
>> > Thanks,<br>
>> ><br>
>> > ~Gaurav<br>
>> ><br>
>> > ----- Original Message -----<br>
>> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> > Sent: Monday, February 22, 2016 12:48:55 PM<br>
>> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
>> ><br>
>> > Hi Gaurav,<br>
>> ><br>
>> > Here you can find the attached logs for the boards in the remove-brick<br>
>> > failure case.<br>
>> > In these logs we do not have the cmd_history and<br>
>> > etc-glusterfs-glusterd.vol.log for the second board.<br>
>> ><br>
>> > Maybe we need some more time to get those.<br>
>> ><br>
>> ><br>
>> > Regards,<br>
>> > Abhishek<br>
>> ><br>
>> > On Mon, Feb 22, 2016 at 10:18 AM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
>> ><br>
>> > > Hi Abhishek,<br>
>> > ><br>
>> > > >> I'll provide the required log to you.<br>
>> > ><br>
>> > > sure<br>
>> > ><br>
>> > > On both nodes, do "pkill glusterd" and then start the glusterd service.<br>
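>> > ><br>
>> > > Something like this on each node (a sketch; how glusterd is started back<br>
>> > > up depends on your init system, so the second line is an assumption):<br>
>> > ><br>
>> > > pkill glusterd<br>
>> > > glusterd        # or e.g. /etc/init.d/glusterd start / systemctl start glusterd<br>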
>> > ><br>
>> > > Thanks,<br>
>> > ><br>
>> > > ~Gaurav<br>
>> > ><br>
>> > > ----- Original Message -----<br>
>> > > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> > > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> > > Sent: Monday, February 22, 2016 10:11:48 AM<br>
>> > > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
>> > ><br>
>> > > Hi Gaurav,<br>
>> > ><br>
>> > > Thanks for your prompt reply.<br>
>> > ><br>
>> > > I'll provide the required log to you.<br>
>> > ><br>
>> > > As a workaround you suggested restarting the glusterd service. Could you<br>
>> > > please tell me at what point I should do this?<br>
>> > ><br>
>> > > Regards,<br>
>> > > Abhishek<br>
>> > ><br>
>> > > On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> wrote:<br>
>> > ><br>
>> > > > Hi Abhishek,<br>
>> > > ><br>
>> > > > The peer status output looks interesting in that it has a stale entry;<br>
>> > > > technically that should not happen. A few things I need to ask:<br>
>> > > ><br>
>> > > > Did you perform any manual operation on the GlusterFS configuration<br>
>> > > > files which reside in the /var/lib/glusterd/* folder?<br>
>> > > ><br>
>> > > > Can you provide the output of "ls /var/lib/glusterd/peers" from both of<br>
>> > > > your nodes?<br>
>> > > ><br>
>> > > > Could you provide the output of the #gluster peer status command when<br>
>> > > > the 2nd node is down?<br>
>> > > ><br>
>> > > > Can you provide the output of the #gluster volume info command?<br>
>> > > ><br>
>> > > > Can you provide full log details of cmd_history.log and<br>
>> > > > etc-glusterfs-glusterd.vol.log from both nodes?<br>
>> > > ><br>
>> > > ><br>
>> > > > You can restart your glusterd for now as a workaround, but we need to<br>
>> > > > analyze this issue further.<br>
>> > > ><br>
>> > > > Thanks,<br>
>> > > > Gaurav<br>
>> > > ><br>
>> > > > ----- Original Message -----<br>
>> > > > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> > > > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > > > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> > > > Sent: Friday, February 19, 2016 5:27:21 PM<br>
>> > > > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster<br>
>> node<br>
>> > > ><br>
>> > > > Hi Gaurav,<br>
>> > > ><br>
>> > > > After the failure of add-brick, the following is the output of the<br>
>> > > > "gluster peer status" command:<br>
>> > > ><br>
>> > > > Number of Peers: 2<br>
>> > > ><br>
>> > > > Hostname: 10.32.1.144<br>
>> > > > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
>> > > > State: Peer in Cluster (Connected)<br>
>> > > ><br>
>> > > > Hostname: 10.32.1.144<br>
>> > > > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
>> > > > State: Peer in Cluster (Connected)<br>
>> > > ><br>
>> > > > Regards,<br>
>> > > > Abhishek<br>
>> > > ><br>
>> > > > On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <<br>
>> > > <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
>> > > > ><br>
>> > > > wrote:<br>
>> > > ><br>
>> > > > > Hi Gaurav,<br>
>> > > > ><br>
>> > > > > Both boards are connected through the backplane using Ethernet.<br>
>> > > > ><br>
>> > > > > This inconsistency also occurs when I am trying to bring the node back<br>
>> > > > > into the slot. That means sometimes add-brick executes without failure,<br>
>> > > > > but sometimes the following error occurs:<br>
>> > > > ><br>
>> > > > > volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick<br>
>> > > > > force : FAILED : Another transaction is in progress for c_glusterfs.<br>
>> > > > > Please try again after sometime.<br>
>> > > > ><br>
>> > > > ><br>
>> > > > > You can also see the attached logs for add-brick failure scenario.<br>
>> > > > ><br>
>> > > > > Please let me know if you need more logs.<br>
>> > > > ><br>
>> > > > > Regards,<br>
>> > > > > Abhishek<br>
>> > > > ><br>
>> > > > ><br>
>> > > > > On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > wrote:<br>
>> > > > ><br>
>> > > > >> Hi Abhishek,<br>
>> > > > >><br>
>> > > > >> How are you connecting the two boards, and how are you removing one<br>
>> > > > >> manually? I need to know because if you are removing your 2nd board<br>
>> > > > >> from the cluster (abrupt shutdown), then you shouldn't be able to<br>
>> > > > >> perform a remove-brick operation for the 2nd node from the first node,<br>
>> > > > >> yet it is happening successfully in your case. Could you check your<br>
>> > > > >> network connection once again while removing and bringing back your<br>
>> > > > >> node?<br>
>> > > > >><br>
>> > > > >> Thanks,<br>
>> > > > >> Gaurav<br>
>> > > > >><br>
>> > > > >> ------------------------------<br>
>> > > > >> *From: *"ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> > > > >> *To: *"Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > > > >> *Cc: *<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> > > > >> *Sent: *Friday, February 19, 2016 3:36:21 PM<br>
>> > > > >><br>
>> > > > >> *Subject: *Re: [Gluster-users] Issue in Adding/Removing the<br>
>> gluster<br>
>> > > node<br>
>> > > > >><br>
>> > > > >> Hi Gaurav,<br>
>> > > > >><br>
>> > > > >> Thanks for reply<br>
>> > > > >><br>
>> > > > >> 1. Here I removed the board manually, but this time it works fine:<br>
>> > > > >><br>
>> > > > >> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1<br>
>> > > > >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
>> > > > >> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
>> > > > >><br>
>> > > > >> Yes, this time the board is reachable, but how? I don't know, because<br>
>> > > > >> the board is detached.<br>
>> > > > >><br>
>> > > > >> 2. Here I attached the board; this time add-brick works fine:<br>
>> > > > >><br>
>> > > > >> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
>> > > > >> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2<br>
>> > > > >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
>> > > > >><br>
>> > > > >> 3. Here I removed the board again; this time a failure occurs:<br>
>> > > > >><br>
>> > > > >> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1<br>
>> > > > >> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick<br>
>> > > > >> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
>> > > > >><br>
>> > > > >> But here the board is not reachable.<br>
>> > > > >><br>
>> > > > >> Why is this inconsistency there while doing the same steps multiple<br>
>> > > > >> times?<br>
>> > > > >><br>
>> > > > >> Hope you are getting my point.<br>
>> > > > >><br>
>> > > > >> Regards,<br>
>> > > > >> Abhishek<br>
>> > > > >><br>
>> > > > >> On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > > wrote:<br>
>> > > > >><br>
>> > > > >>> Abhishek,<br>
>> > > > >>><br>
>> > > > >>> When it sometimes works fine, that means the 2nd board's network<br>
>> > > > >>> connection is reachable from the first node. You can confirm this by<br>
>> > > > >>> executing the same #gluster peer status command.<br>
>> > > > >>><br>
>> > > > >>> Thanks,<br>
>> > > > >>> Gaurav<br>
>> > > > >>><br>
>> > > > >>> ----- Original Message -----<br>
>> > > > >>> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> > > > >>> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > > > >>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> > > > >>> Sent: Friday, February 19, 2016 3:12:22 PM<br>
>> > > > >>> Subject: Re: [Gluster-users] Issue in Adding/Removing the<br>
>> gluster<br>
>> > > node<br>
>> > > > >>><br>
>> > > > >>> Hi Gaurav,<br>
>> > > > >>><br>
>> > > > >>> Yes, you are right; actually I am forcefully detaching the node from<br>
>> > > > >>> the slave, and when we remove the board it gets disconnected from the<br>
>> > > > >>> other board.<br>
>> > > > >>><br>
>> > > > >>> But my question is: I am doing this process multiple times; sometimes<br>
>> > > > >>> it works fine, but sometimes it gives these errors.<br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>> You can see the following logs from the cmd_history.log file:<br>
>> > > > >>><br>
>> > > > >>> [2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable on : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:34.915036] : volume start c_glusterfs force : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:40.250326] : volume status : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:40.273275] : volume status : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1<br>
>> > > > >>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
>> > > > >>> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2<br>
>> > > > >>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
>> > > > >>> [2016-02-18 10:30:53.297415] : volume status : SUCCESS<br>
>> > > > >>> [2016-02-18 10:30:53.313096] : volume status : SUCCESS<br>
>> > > > >>> [2016-02-18 10:37:02.748714] : volume status : SUCCESS<br>
>> > > > >>> [2016-02-18 10:37:02.762091] : volume status : SUCCESS<br>
>> > > > >>> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1<br>
>> > > > >>> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick<br>
>> > > > >>> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>> On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
>> > > wrote:<br>
>> > > > >>><br>
>> > > > >>> > Hi Abhishek,<br>
>> > > > >>> ><br>
>> > > > >>> > It seems your peer 10.32.1.144 disconnected while doing remove-brick;<br>
>> > > > >>> > see the logs below from glusterd:<br>
>> > > > >>> ><br>
>> > > > >>> > [2016-02-18 10:37:02.816009] E [MSGID: 106256]<br>
>> > > > >>> > [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management:<br>
>> > > > >>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
>> > > > >>> > [Invalid argument]<br>
>> > > > >>> > [2016-02-18 10:37:02.816061] E [MSGID: 106265]<br>
>> > > > >>> > [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management:<br>
>> > > > >>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
>> > > > >>> > The message "I [MSGID: 106004]<br>
>> > > > >>> > [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer<br>
>> > > > >>> > <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in<br>
>> > > > >>> > Cluster>, has disconnected from glusterd." repeated 25 times between<br>
>> > > > >>> > [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]<br>
>> > > > >>> ><br>
>> > > > >>> ><br>
>> > > > >>> ><br>
>> > > > >>> > If you are facing the same issue now, could you paste your<br>
>> > > > >>> > #gluster peer status command output here?<br>
>> > > > >>> ><br>
>> > > > >>> > Thanks,<br>
>> > > > >>> > ~Gaurav<br>
>> > > > >>> ><br>
>> > > > >>> > ----- Original Message -----<br>
>> > > > >>> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
>> > > > >>> > To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
>> > > > >>> > Sent: Friday, February 19, 2016 2:46:35 PM<br>
>> > > > >>> > Subject: [Gluster-users] Issue in Adding/Removing the gluster<br>
>> > node<br>
>> > > > >>> ><br>
>> > > > >>> > Hi,<br>
>> > > > >>> ><br>
>> > > > >>> ><br>
>> > > > >>> > I am working on a two-board setup where the boards are connected to<br>
>> > > > >>> > each other. Gluster version 3.7.6 is running, and I added two bricks<br>
>> > > > >>> > in replica 2 mode, but when I manually removed (detached) one board<br>
>> > > > >>> > from the setup I got the following error:<br>
>> > > > >>> ><br>
>> > > > >>> > volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick<br>
>> > > > >>> > force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for<br>
>> > > > >>> > volume c_glusterfs<br>
>> > > > >>> ><br>
>> > > > >>> > Please find the logs file as an attachment.<br>
>> > > > >>> ><br>
>> > > > >>> ><br>
>> > > > >>> > Regards,<br>
>> > > > >>> > Abhishek<br>
>> > > > >>> ><br>
>> > > > >>> ><br>
>> > > > >>> > _______________________________________________<br>
>> > > > >>> > Gluster-users mailing list<br>
>> > > > >>> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> > > > >>> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>> > > > >>> ><br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>> --<br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>><br>
>> > > > >>> Regards<br>
>> > > > >>> Abhishek Paliwal<br>
>> > > > >>><br>
>> > > > >><br>
>> > > > >><br>
>> > > > >><br>
>> > > > >> --<br>
>> > > > >><br>
>> > > > >><br>
>> > > > >><br>
>> > > > >><br>
>> > > > >> Regards<br>
>> > > > >> Abhishek Paliwal<br>
>> > > > >><br>
>> > > > >><br>
>> > > > ><br>
>> > > > ><br>
>> > > > ><br>
>> > > > ><br>
>> > > ><br>
>> > > ><br>
>> > > > --<br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > > Regards<br>
>> > > > Abhishek Paliwal<br>
>> > > ><br>
>> > ><br>
>> ><br>
>> ><br>
>> ><br>
>> > --<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > Regards<br>
>> > Abhishek Paliwal<br>
>> ><br>
>><br>
>><br>
>><br>
>> --<br>
>><br>
>><br>
>><br>
>><br>
>> Regards<br>
>> Abhishek Paliwal<br>
>><br>
><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
><br>
<br>
<br>
<br>
--<br>
<br>
<br>
<br>
<br>
Regards<br>
Abhishek Paliwal<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>