<div dir="ltr"><div><div><div><div><div><div>Hi Gaurav,<br><br></div>To check network connectivity I run a peer probe against the <font face="sans-serif" size="2">10.32.1.144 address, i.e. the 2nd board; that works fine, which means connectivity is there.<br></font></div><font face="sans-serif" size="2"><br># peer probe </font><font face="sans-serif" size="2">10.32.1.144<br></font></div><font face="sans-serif" size="2"><br>If the above command succeeds,<br></font></div><font face="sans-serif" size="2"><br>I execute the remove-brick command, which fails.<br><br></font></div><font face="sans-serif" size="2">So it now seems that the peer probe does not give a connectivity status accurate enough for executing the remove-brick command.<br><br></font></div><div><font face="sans-serif" size="2">But after analyzing the following logs from the 1st board, it seems that the process that adds the second brick to the output of "</font><font face="sans-serif" size="2"><font face="sans-serif" size="2"># gluster volume status c_glusterfs</font>" takes some time to update this table, and remove-brick is executed before the table is updated; that is why it fails.<br><br></font>++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br><font face="sans-serif" size="2"><b>1st board:<br></b><br># gluster volume info
<br>status
<br>gluster volume status c_glusterfs
<br>Volume Name: c_glusterfs
<br>Type: Replicate
<br>Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99
<br>Status: Started
<br>Number of Bricks: 1 x 2 = 2
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
<br>Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
<br>Options Reconfigured:
<br>nfs.disable: on
<br>network.ping-timeout: 4
<br>performance.readdir-ahead: on
<br># gluster peer status
<br>Number of Peers: 1
<br> <br>Hostname: 10.32.1.144
<br>Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1
<br><span class="im">State: Peer in Cluster (Connected)
<br></span># gluster volume status c_glusterfs
<br>Status of volume: c_glusterfs
<br>Gluster process TCP Port RDMA Port Online Pid
<br>------------------------------------------------------------------------------
<br>Brick 10.32.0.48:/opt/lvmdir/c2/brick 49153 0 Y 2537
<br>Self-heal Daemon on localhost N/A N/A Y 5577
<br>Self-heal Daemon on 10.32.1.144 N/A N/A Y 3850
<br>
<br>Task Status of Volume c_glusterfs
<br>------------------------------------------------------------------------------
<br>There are no active volume tasks<br><br>+++++++++++++++++++++++++++++++++++++++++++++++<br><br></font></div><div><font face="sans-serif" size="2">I'll try this with some delay, i.e. wait to run remove-brick until the </font><font face="sans-serif" size="2"><font face="sans-serif" size="2"># gluster volume status c_glusterfs command shows the second brick in the list.<br><br></font></font></div><div><font face="sans-serif" size="2"><font face="sans-serif" size="2">Maybe this approach will resolve the issue.<br><br></font></font></div><div><font face="sans-serif" size="2"><font face="sans-serif" size="2">Please comment if you agree with my observation.<br></font></font></div><div><font face="sans-serif" size="2"><font face="sans-serif" size="2"><br></font></font></div><div><font face="sans-serif" size="2"><font face="sans-serif" size="2">Regards,<br></font></font></div><div><font face="sans-serif" size="2"><font face="sans-serif" size="2">Abhishek</font><br></font></div><font face="sans-serif" size="2"></font></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 23, 2016 at 1:10 PM, ABHISHEK PALIWAL <span dir="ltr"><<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi Gaurav,<br><br></div>In my case we are removing the brick in the offline state with the force option, like in the following way:<b><br><br><font face="sans-serif" size="2">gluster volume remove-brick %s replica 1 %s:%s force --mode=script<br><br></font></b></div><font face="sans-serif" size="2">but we are still getting the </font><font face="sans-serif" size="2">failure of remove-brick.<br><br></font></div><font face="sans-serif" size="2">It seems that the brick we are trying to remove is not present; here are the log snippets from both boards.<br><br></font></div><font face="sans-serif" 
size="2"><b>1st board:<br></b><br># gluster volume info
<br>status
<br>gluster volume status c_glusterfs
<br>Volume Name: c_glusterfs
<br>Type: Replicate
<br>Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99
<br>Status: Started
<br>Number of Bricks: 1 x 2 = 2
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
<br>Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
<br>Options Reconfigured:
<br>nfs.disable: on
<br>network.ping-timeout: 4
<br>performance.readdir-ahead: on
<br># gluster peer status
<br>Number of Peers: 1
<br> <br>Hostname: 10.32.1.144
<br>Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1
<br><span class="">State: Peer in Cluster (Connected)
<br></span># gluster volume status c_glusterfs
<br>Status of volume: c_glusterfs
<br>Gluster process TCP Port RDMA Port Online Pid
<br>------------------------------------------------------------------------------
<br>Brick 10.32.0.48:/opt/lvmdir/c2/brick 49153 0 Y 2537
<br>Self-heal Daemon on localhost N/A N/A Y 5577
<br>Self-heal Daemon on 10.32.1.144 N/A N/A Y 3850
<br>
<br>Task Status of Volume c_glusterfs
<br>------------------------------------------------------------------------------
<br>There are no active volume tasks<br><br></font></div><font face="sans-serif" size="2"><b>2nd Board</b>:<br><br># gluster volume info
<br>status
<br>gluster volume status c_glusterfs
<br>gluster volume heal c_glusterfs info
<br>
<br>Volume Name: c_glusterfs
<br>Type: Replicate
<br>Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99
<br>Status: Started
<br>Number of Bricks: 1 x 2 = 2
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
<br>Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
<br>Options Reconfigured:
<br>performance.readdir-ahead: on
<br>network.ping-timeout: 4
<br>nfs.disable: on
<br># gluster peer status
<br>Number of Peers: 1
<br> <br>Hostname: 10.32.0.48
<br>Uuid: e7c4494e-aa04-4909-81c9-27a462f6f9e7
<br><span class="">State: Peer in Cluster (Connected)
<br></span># gluster volume status c_glusterfs
<br>Status of volume: c_glusterfs
<br>Gluster process TCP Port RDMA Port Online Pid
<br>------------------------------------------------------------------------------
<br>Brick 10.32.0.48:/opt/lvmdir/c2/brick 49153 0 Y 2537
<br>Self-heal Daemon on localhost N/A N/A Y 3850
<br>Self-heal Daemon on 10.32.0.48 N/A N/A Y 5577
<br>
<br>Task Status of Volume c_glusterfs
<br>------------------------------------------------------------------------------
<br>There are no active volume tasks<br><br></font></div><div><font face="sans-serif" size="2">Do you know why this gluster volume status output is not showing the second brick's info?<br></font></div><div><font face="sans-serif" size="2">We are also not able to collect the cmd_history.log file from the 2nd board.<br><br></font></div><font face="sans-serif" size="2">Regards,<br></font></div><div><font face="sans-serif" size="2">Abhishek<br></font></div><font face="sans-serif" size="2"><br></font></div><div class="gmail_extra"><div><div class="h5"><br><div class="gmail_quote">On Tue, Feb 23, 2016 at 12:02 PM, Gaurav Garg <span dir="ltr"><<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi abhishek,<br>
<span><br>
>> Can we perform remove-brick operation on the offline brick? what is the<br>
meaning of offline and online brick?<br>
<br>
</span>No, you can't perform a remove-brick operation on an offline brick. A brick being offline means its brick process is not running; you can see this by executing #gluster volume status. If a brick is offline, the respective brick will show an "N" entry in the Online column of the #gluster volume status output. Alternatively, you can check whether the glusterfsd process for that brick is running by executing #ps aux | grep glusterfsd; this command lists all the brick processes, and you can filter out which ones are online and which are not.<br>
<br>
But if you want to perform a remove-brick operation on an offline brick, then you need to execute it with the force option: #gluster volume remove-brick <volname> hostname:/brick_name force. This might lead to data loss.<br>
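Putting the two checks above together, a minimal sketch of the "wait until the brick is listed, then remove it" flow discussed in this thread could look like the following. This is a hypothetical helper, not part of GlusterFS; the GLUSTER variable is an assumption added so the polling logic can be exercised without a live cluster, and the volume/brick names are the ones from this thread.

```shell
#!/bin/sh
# Hypothetical guard: poll `gluster volume status` until the brick appears,
# and only then run remove-brick. GLUSTER is overridable for offline testing.
GLUSTER="${GLUSTER:-gluster}"

# wait_for_brick VOLNAME BRICK TIMEOUT_SECS
# Returns 0 as soon as BRICK is listed in the volume status output, 1 on timeout.
wait_for_brick() {
    volname=$1 brick=$2 timeout=$3 elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if $GLUSTER volume status "$volname" | grep -q "Brick $brick"; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}

# Usage sketch: fire remove-brick only once the brick is actually listed.
# if wait_for_brick c_glusterfs 10.32.1.144:/opt/lvmdir/c2/brick 30; then
#     $GLUSTER volume remove-brick c_glusterfs replica 1 \
#         10.32.1.144:/opt/lvmdir/c2/brick force --mode=script
# fi
```

If the brick never shows up within the timeout, remove-brick is simply not attempted, which avoids the "Incorrect brick ... for volume c_glusterfs" failure seen in the logs.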
<span><br>
<br>
<br>
>> Also, Is there any logic in gluster through which we can check the<br>
connectivity of node established or not before performing the any operation<br>
on brick?<br>
<br>
</span>Yes, you can check it by executing the #gluster peer status command.<br>
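As a sketch of that check, the "State: Peer in Cluster (Connected)" line from the peer status output quoted earlier in this thread can be parsed in a script before any brick operation. This is a hypothetical helper, not a GlusterFS feature; GLUSTER is an assumed override so the parsing can be tried without a live cluster.

```shell
#!/bin/sh
# Hypothetical check: confirm a given peer is Connected in `gluster peer status`
# before performing brick operations. GLUSTER is overridable for offline testing.
GLUSTER="${GLUSTER:-gluster}"

# peer_connected HOSTNAME -> 0 if that peer is listed as Connected, else 1.
peer_connected() {
    $GLUSTER peer status | awk -v h="$1" '
        $1 == "Hostname:" { cur = $2 }            # remember which peer block we are in
        /State:.*Connected/ && cur == h { ok = 1 }
        END { exit ok ? 0 : 1 }'
}

# Usage sketch:
# peer_connected 10.32.1.144 && echo "peer reachable, safe to continue"
```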
<br>
<br>
Thanks,<br>
<span><br>
~Gaurav<br>
<br>
<br>
----- Original Message -----<br>
From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
Cc: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
</span><div><div>Sent: Tuesday, February 23, 2016 11:50:43 AM<br>
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
<br>
Hi Gaurav,<br>
<br>
one general question related to gluster bricks.<br>
<br>
Can we perform remove-brick operation on the offline brick? what is the<br>
meaning of offline and online brick?<br>
Also, Is there any logic in gluster through which we can check the<br>
connectivity of node established or not before performing the any operation<br>
on brick?<br>
<br>
Regards,<br>
Abhishek<br>
<br>
On Mon, Feb 22, 2016 at 2:42 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>> wrote:<br>
<br>
> Hi abhishek,<br>
><br>
> I went through your logs from node 1, and the glusterd logs clearly<br>
> indicate that your 2nd node (10.32.1.144) disconnected from the cluster;<br>
> because of that, the remove-brick operation failed. I think you need to<br>
> check your network interface.<br>
><br>
> But the surprising thing is that I did not see a duplicate peer entry in<br>
> the #gluster peer status command output.<br>
><br>
> Maybe I will get some more information from your 2nd node's (10.32.1.144)<br>
> logs. Could you also attach your 2nd node logs?<br>
><br>
> After restarting glusterd, are you seeing a duplicate peer entry in the<br>
> #gluster peer status command output?<br>
><br>
> I will wait for the 2nd node logs to further analyze the duplicate peer<br>
> entry problem.<br>
><br>
> Thanks,<br>
><br>
> ~Gaurav<br>
><br>
> ----- Original Message -----<br>
> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> Cc: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
> Sent: Monday, February 22, 2016 12:48:55 PM<br>
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
><br>
> Hi Gaurav,<br>
><br>
> Here you can find the attached logs for the boards in the remove-brick<br>
> failure case.<br>
> In these logs we do not have the cmd_history and<br>
> etc-glusterfs-glusterd.vol.log for the second board.<br>
><br>
> Maybe we need some more time for that.<br>
><br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
> On Mon, Feb 22, 2016 at 10:18 AM, Gaurav Garg <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>> wrote:<br>
><br>
> > Hi Abhishek,<br>
> ><br>
> > >> I'll provide the required log to you.<br>
> ><br>
> > sure<br>
> ><br>
> > on both node. do "pkill glusterd" and then start glusterd services.<br>
> ><br>
> > Thanks,<br>
> ><br>
> > ~Gaurav<br>
> ><br>
> > ----- Original Message -----<br>
> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
> > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> > Cc: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
> > Sent: Monday, February 22, 2016 10:11:48 AM<br>
> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> ><br>
> > Hi Gaurav,<br>
> ><br>
> > Thanks for your prompt reply.<br>
> ><br>
> > I'll provide the required log to you.<br>
> ><br>
> > As a workaround you suggested that restart the glusterd service. Could<br>
> you<br>
> > please tell me the point where I can do this?<br>
> ><br>
> > Regards,<br>
> > Abhishek<br>
> ><br>
> > On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>> wrote:<br>
> ><br>
> > > Hi Abhishek,<br>
> > ><br>
> > > Peer status output looks interesting where it have stale entry,<br>
> > > technically it should not happen. Here few thing need to ask<br>
> > ><br>
> > > Did you perform any manual operation with GlusterFS configuration file<br>
> > > which resides in /var/lib/glusterd/* folder.<br>
> > ><br>
> > > Can you provide output of "ls /var/lib/glusterd/peers" from both of<br>
> your<br>
> > > nodes.<br>
> > ><br>
> > > Could you provide output of #gluster peer status command when 2nd node<br>
> is<br>
> > > down<br>
> > ><br>
> > > Can you provide output of #gluster volume info command<br>
> > ><br>
> > > Can you provide full logs details of cmd_history.log and<br>
> > > etc-glusterfs-glusterd.vol.log from both the nodes.<br>
> > ><br>
> > ><br>
> > > You can restart your glusterd as of now as a workaround but we need to<br>
> > > analysis this issue further.<br>
> > ><br>
> > > Thanks,<br>
> > > Gaurav<br>
> > ><br>
> > > ----- Original Message -----<br>
> > > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
> > > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> > > Cc: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
> > > Sent: Friday, February 19, 2016 5:27:21 PM<br>
> > > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> > ><br>
> > > Hi Gaurav,<br>
> > ><br>
> > > After the failure of add-brick following is outcome "gluster peer<br>
> status"<br>
> > > command<br>
> > ><br>
> > > Number of Peers: 2<br>
> > ><br>
> > > Hostname: 10.32.1.144<br>
> > > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
> > > State: Peer in Cluster (Connected)<br>
> > ><br>
> > > Hostname: 10.32.1.144<br>
> > > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
> > > State: Peer in Cluster (Connected)<br>
> > ><br>
> > > Regards,<br>
> > > Abhishek<br>
> > ><br>
> > > On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <<br>
> > <a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a><br>
> > > ><br>
> > > wrote:<br>
> > ><br>
> > > > Hi Gaurav,<br>
> > > ><br>
> > > > Both are the board connect through the backplane using ethernet.<br>
> > > ><br>
> > > > Even this inconsistency also occurs when I am trying to bringing back<br>
> > the<br>
> > > > node in slot. Means some time add-brick executes without failure but<br>
> > some<br>
> > > > time following error occurs.<br>
> > > ><br>
> > > > volume add-brick c_glusterfs replica 2 <a href="http://10.32.1.144" rel="noreferrer" target="_blank">10.32.1.144</a>:<br>
> > /opt/lvmdir/c2/brick<br>
> > > > force : FAILED : Another transaction is in progress for c_glusterfs.<br>
> > > Please<br>
> > > > try again after sometime.<br>
> > > ><br>
> > > ><br>
> > > > You can also see the attached logs for add-brick failure scenario.<br>
> > > ><br>
> > > > Please let me know if you need more logs.<br>
> > > ><br>
> > > > Regards,<br>
> > > > Abhishek<br>
> > > ><br>
> > > ><br>
> > > > On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> wrote:<br>
> > > ><br>
> > > >> Hi Abhishek,<br>
> > > >><br>
> > > >> How do you connect the two boards, and how are you removing one<br>
> > > >> manually? That needs to be known, because if you are removing your<br>
> > > >> 2nd board from the cluster (abrupt shutdown), then you can't perform<br>
> > > >> a remove-brick operation on the 2nd node from the first node, yet it<br>
> > > >> is happening successfully in your case. Could you check your network<br>
> > > >> connection once again while removing and bringing back your node?<br>
> > > >><br>
> > > >> Thanks,<br>
> > > >> Gaurav<br>
> > > >><br>
> > > >> ------------------------------<br>
> > > >> *From: *"ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
> > > >> *To: *"Gaurav Garg" <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> > > >> *Cc: *<a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
> > > >> *Sent: *Friday, February 19, 2016 3:36:21 PM<br>
> > > >><br>
> > > >> *Subject: *Re: [Gluster-users] Issue in Adding/Removing the gluster<br>
> > node<br>
> > > >><br>
> > > >> Hi Gaurav,<br>
> > > >><br>
> > > >> Thanks for reply<br>
> > > >><br>
> > > >> 1. Here, I removed the board manually here but this time it works<br>
> fine<br>
> > > >><br>
> > > >> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs<br>
> > replica<br>
> > > 1<br>
> > > >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > > >> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
> > > >><br>
> > > >> Yes this time board is reachable but how? don't know because board<br>
> is<br>
> > > >> detached.<br>
> > > >><br>
> > > >> 2. Here, I attached the board this time its works fine in add-bricks<br>
> > > >><br>
> > > >> 2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
> > > >> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs<br>
> replica 2<br>
> > > >> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > > >><br>
> > > >> 3.Here, again I removed the board this time failed occur<br>
> > > >><br>
> > > >> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs<br>
> > replica<br>
> > > 1<br>
> > > >> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick<br>
> > > >> 10.32.1.144:/opt<br>
> > > >> /lvmdir/c2/brick for volume c_glusterfs<br>
> > > >><br>
> > > >> but here board is not reachable.<br>
> > > >><br>
> > > >> why this inconsistency is there while doing the same step multiple<br>
> > time.<br>
> > > >><br>
> > > >> Hope you are getting my point.<br>
> > > >><br>
> > > >> Regards,<br>
> > > >> Abhishek<br>
> > > >><br>
> > > >> On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> > wrote:<br>
> > > >><br>
> > > >>> Abhishek,<br>
> > > >>><br>
> > > >>> when sometime its working fine means 2nd board network connection<br>
> is<br>
> > > >>> reachable to first node. you can conform this by executing same<br>
> > > #gluster<br>
> > > >>> peer status command.<br>
> > > >>><br>
> > > >>> Thanks,<br>
> > > >>> Gaurav<br>
> > > >>><br>
> > > >>> ----- Original Message -----<br>
> > > >>> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
> > > >>> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> > > >>> Cc: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
> > > >>> Sent: Friday, February 19, 2016 3:12:22 PM<br>
> > > >>> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster<br>
> > node<br>
> > > >>><br>
> > > >>> Hi Gaurav,<br>
> > > >>><br>
> > > >>> Yes, you are right actually I am force fully detaching the node<br>
> from<br>
> > > the<br>
> > > >>> slave and when we removed the board it disconnected from the<br>
> another<br>
> > > >>> board.<br>
> > > >>><br>
> > > >>> but my question is I am doing this process multiple time some time<br>
> it<br>
> > > >>> works<br>
> > > >>> fine but some time it gave these errors.<br>
> > > >>><br>
> > > >>><br>
> > > >>> you can see the following logs from cmd_history.log file<br>
> > > >>><br>
> > > >>> [2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable<br>
> > on :<br>
> > > >>> SUCCESS<br>
> > > >>> [2016-02-18 10:03:34.915036] : volume start c_glusterfs force :<br>
> > > SUCCESS<br>
> > > >>> [2016-02-18 10:03:40.250326] : volume status : SUCCESS<br>
> > > >>> [2016-02-18 10:03:40.273275] : volume status : SUCCESS<br>
> > > >>> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs<br>
> > > replica 1<br>
> > > >>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > > >>> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
> > > >>> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
> > > >>> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs<br>
> replica<br>
> > 2<br>
> > > >>> 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > > >>> [2016-02-18 10:30:53.297415] : volume status : SUCCESS<br>
> > > >>> [2016-02-18 10:30:53.313096] : volume status : SUCCESS<br>
> > > >>> [2016-02-18 10:37:02.748714] : volume status : SUCCESS<br>
> > > >>> [2016-02-18 10:37:02.762091] : volume status : SUCCESS<br>
> > > >>> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs<br>
> > > replica 1<br>
> > > >>> 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick<br>
> > > >>> 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> > > >>><br>
> > > >>><br>
> > > >>> On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>><br>
> > wrote:<br>
> > > >>><br>
> > > >>> > Hi Abhishek,<br>
> > > >>> ><br>
> > > >>> > Seems your peer 10.32.1.144 have disconnected while doing remove<br>
> > > brick.<br>
> > > >>> > see the below logs in glusterd:<br>
> > > >>> ><br>
> > > >>> > [2016-02-18 10:37:02.816009] E [MSGID: 106256]<br>
> > > >>> > [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick]<br>
> > > >>> 0-management:<br>
> > > >>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume<br>
> > > >>> c_glusterfs<br>
> > > >>> > [Invalid argument]<br>
> > > >>> > [2016-02-18 10:37:02.816061] E [MSGID: 106265]<br>
> > > >>> > [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick]<br>
> > > >>> 0-management:<br>
> > > >>> > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume<br>
> > > >>> c_glusterfs<br>
> > > >>> > The message "I [MSGID: 106004]<br>
> > > >>> > [glusterd-handler.c:5065:__glusterd_peer_rpc_notify]<br>
> 0-management:<br>
> > > Peer<br>
> > > >>> > <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state<br>
> > > <Peer<br>
> > > >>> in<br>
> > > >>> > Cluster>, has disconnected from glusterd." repeated 25 times<br>
> > between<br>
> > > >>> > [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]<br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> > If you are facing the same issue now, could you paste your #<br>
> > gluster<br>
> > > >>> peer<br>
> > > >>> > status command output here.<br>
> > > >>> ><br>
> > > >>> > Thanks,<br>
> > > >>> > ~Gaurav<br>
> > > >>> ><br>
> > > >>> > ----- Original Message -----<br>
> > > >>> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>><br>
> > > >>> > To: <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
> > > >>> > Sent: Friday, February 19, 2016 2:46:35 PM<br>
> > > >>> > Subject: [Gluster-users] Issue in Adding/Removing the gluster<br>
> node<br>
> > > >>> ><br>
> > > >>> > Hi,<br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> > I am working on two board setup connecting to each other. Gluster<br>
> > > >>> version<br>
> > > >>> > 3.7.6 is running and added two bricks in replica 2 mode but when<br>
> I<br>
> > > >>> manually<br>
> > > >>> > removed (detach) the one board from the setup I am getting the<br>
> > > >>> following<br>
> > > >>> > error.<br>
> > > >>> ><br>
> > > >>> > volume remove-brick c_glusterfs replica 1 <a href="http://10.32.1.144" rel="noreferrer" target="_blank">10.32.1.144</a>:<br>
> > > >>> /opt/lvmdir/c2/brick<br>
> > > >>> > force : FAILED : Incorrect brick <a href="http://10.32.1.144" rel="noreferrer" target="_blank">10.32.1.144</a>:<br>
> /opt/lvmdir/c2/brick<br>
> > > for<br>
> > > >>> > volume c_glusterfs<br>
> > > >>> ><br>
> > > >>> > Please find the logs file as an attachment.<br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> > Regards,<br>
> > > >>> > Abhishek<br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> > _______________________________________________<br>
> > > >>> > Gluster-users mailing list<br>
> > > >>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> > > >>> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> > > >>> ><br>
> > > >>><br>
> > > >>><br>
> > > >>><br>
> > > >>> --<br>
> > > >>><br>
> > > >>><br>
> > > >>><br>
> > > >>><br>
> > > >>> Regards<br>
> > > >>> Abhishek Paliwal<br>
> > > >>><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >> --<br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >> Regards<br>
> > > >> Abhishek Paliwal<br>
> > > >><br>
> > > >><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > Regards<br>
> > > Abhishek Paliwal<br>
> > ><br>
> ><br>
><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
><br>
<br>
<br>
<br>
--<br>
<br>
<br>
<br>
<br>
Regards<br>
Abhishek Paliwal<br>
</div></div></blockquote></div><br><br clear="all"><br></div></div><span class="HOEnZb"><font color="#888888">-- <br><div><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</font></span></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>