<div dir="ltr"><div><div><div><div><div><div><div><div><div><div>Hi Team,<br><br></div>I am facing an issue with peer status, and because of it remove-brick on a replicated volume is failing.<br><br></div>Here is the scenario of what I am doing with gluster:<br><br></div>1. I have two boards, A and B, and gluster is running on both of them.<br></div>2. On the boards I have created a replicated volume with one brick on each board.<br></div>3. Created a glusterfs mount point where both bricks are mounted.<br></div>4. Started the volume with nfs.disable=true.<br></div>5. Up to this point everything is in sync between the two bricks.<br><br></div>Then I manually plug board B out of its slot and plug it back in.<br><br></div>1. After board B boots up, I start glusterd on it.<br><br></div>Following is some gluster command output on board B after step 1.<br clear="all"><div><div><div><div><div><div><div><div><div><div><div><div><div><div><div><div><br># gluster peer status
<br>Number of Peers: 2
<br> <br>Hostname: 10.32.0.48
<br>Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf
<br>State: Accepted peer request (Connected)
<br> <br>Hostname: 10.32.0.48
<br>Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7
<br>State: Peer is connected and Accepted (Connected)
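To see where these entries come from, the daemon identity can be read straight off disk. This is a sketch assuming the default glusterd working directory /var/lib/glusterd (the helper name is mine): glusterd generates glusterd.info on first start, and if that file is lost across a reboot a fresh UUID is generated, which can leave the other peer holding two entries for the same host.

```shell
# print_glusterd_uuid DIR — read the local daemon UUID from
# DIR/glusterd.info (a key=value file written by glusterd).
print_glusterd_uuid() {
    awk -F= '$1 == "UUID" { print $2 }' "$1/glusterd.info"
}

# On a live board (default working directory assumed):
# print_glusterd_uuid /var/lib/glusterd
```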
<br><br>Why is the peer status showing two entries for the same host with different UUIDs?<br><br># gluster volume info
<br>
<br>Volume Name: c_glusterfs
<br>Type: Replicate
<br>Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d
<br>Status: Started
<br>Number of Bricks: 1 x 2 = 2
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
<br>Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
<br>Options Reconfigured:
<br>performance.readdir-ahead: on
<br>network.ping-timeout: 4
<br>nfs.disable: on
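For reference, the volume above was created along these lines (reconstructed from the volume info output, not a verbatim transcript):

```shell
gluster volume create c_glusterfs replica 2 \
    10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
gluster volume set c_glusterfs network.ping-timeout 4
gluster volume set c_glusterfs nfs.disable on
gluster volume start c_glusterfs
```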
<br># gluster volume heal c_glusterfs info
<br>c_glusterfs: Not able to fetch volfile from glusterd
<br>Volume heal failed.
<br># gluster volume status c_glusterfs
<br>Status of volume: c_glusterfs
<br>Gluster process TCP Port RDMA Port Online Pid
<br>------------------------------------------------------------------------------
<br>Brick 10.32.1.144:/opt/lvmdir/c2/brick N/A N/A N N/A
<br>Self-heal Daemon on localhost N/A N/A Y 3922
<br>
<br>Task Status of Volume c_glusterfs
<br>------------------------------------------------------------------------------
<br>There are no active volume tasks<br><div class="gmail_signature"><div dir="ltr"><br></div><div>At the same time, board A has the following gluster command output:<br><br># gluster peer status
<br>Number of Peers: 1
<br> <br>Hostname: 10.32.1.144
<br>Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006
<br>State: Peer in Cluster (Connected)
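The stale entry can be cross-checked against what glusterd has stored on disk. A sketch assuming the default layout of one file per peer under /var/lib/glusterd/peers, each file named after the peer's UUID (the helper name is mine):

```shell
# list_peers DIR — print hostname=uuid for each stored peer entry.
# Each file under DIR/peers holds uuid=, state= and hostname1= lines.
list_peers() {
    for f in "$1"/peers/*; do
        awk -F= '$1 == "uuid"      { u = $2 }
                 $1 == "hostname1" { h = $2 }
                 END { print h "=" u }' "$f"
    done
}

# On board A (default working directory assumed):
# list_peers /var/lib/glusterd
```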
<br><br>Why is it showing the older UUID of host 10.32.1.144 when that UUID has changed and the new UUID is 267a92c3-fd28-4811-903c-c1d54854bda9?<br><br><br># gluster volume heal c_glusterfs info
<br>c_glusterfs: Not able to fetch volfile from glusterd
<br>Volume heal failed.
<br># gluster volume status c_glusterfs
<br>Status of volume: c_glusterfs
<br>Gluster process TCP Port RDMA Port Online Pid
<br>------------------------------------------------------------------------------
<br>Brick 10.32.0.48:/opt/lvmdir/c2/brick 49169 0 Y 2427
<br>Brick 10.32.1.144:/opt/lvmdir/c2/brick N/A N/A N N/A
<br>Self-heal Daemon on localhost N/A N/A Y 3388
<br>Self-heal Daemon on 10.32.1.144 N/A N/A Y 3922
<br>
<br>Task Status of Volume c_glusterfs
<br>------------------------------------------------------------------------------
<br>There are no active volume tasks
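Since the 10.32.1.144 brick shows offline, the exact removal attempt we then make on board A is:

```shell
# Reduce the volume to a single brick, dropping the offline one:
gluster volume remove-brick c_glusterfs replica 1 \
    10.32.1.144:/opt/lvmdir/c2/brick force
```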
<br> <br></div><div>As you can see, "gluster volume status" shows that brick "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we tried to remove it, but on board A we get the error "volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs".<br><br></div><div>Please reply to this post, because I always get this error in this scenario.<br><br></div><div>For more detail, I am also attaching the logs of both boards, which include some manually created files containing the output of gluster commands from both boards.<br><br></div><div>In the logs:<br>00030 is board A<br></div><div>00250 is board B.<br><br></div><div>Thanks in advance; awaiting your reply.<br></div><div><br></div><div>Regards,<br></div><div>Abhishek<br></div><div> <br></div><div dir="ltr"><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>