<div dir="ltr"><div><div>I have only these glusterd files available on the nodes<br><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>

On 03/14/2016 02:18 PM, ABHISHEK PALIWAL wrote:
>
>
> On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
>
>
>
> > On 03/14/2016 10:52 AM, ABHISHEK PALIWAL wrote:
> > Hi Team,
> >
> > I am facing an issue with peer status, and because of that remove-brick
> > on a replica volume is failing.
> >
> > Here is the scenario of what I am doing with gluster (a rough sketch of the
> > commands is included below the list):
> >
> > 1. I have two boards, A and B, and gluster is running on both boards.
> > 2. I have created a replicated volume with one brick on each board.
> > 3. Created one glusterfs mount point where both bricks are mounted.
> > 4. Started the volume with nfs.disable=true.
> > 5. Till now everything is in sync between both bricks.
> >
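> > (For reference, a minimal sketch of the commands behind steps 1-5, using the
> > brick paths and IPs shown in the volume info below; the mount point
> > /mnt/c_glusterfs is just an example and the exact options I use may differ:)
> >
> > # gluster peer probe 10.32.1.144                 (run on board A, 10.32.0.48)
> > # gluster volume create c_glusterfs replica 2 \
> >       10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
> > # gluster volume set c_glusterfs nfs.disable on
> > # gluster volume start c_glusterfs
> > # mount -t glusterfs 10.32.0.48:/c_glusterfs /mnt/c_glusterfs   (on each board)
> >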
> > Now I manually plug out board B from the slot and plug it in again.
> >
> > 1. After board B boots up, I start glusterd on board B.
> >
> > Following is the output of some gluster commands on board B after step 1.
> >
> > # gluster peer status
> > Number of Peers: 2
> >
> > Hostname: 10.32.0.48
> > Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf
> > State: Accepted peer request (Connected)
> >
> > Hostname: 10.32.0.48
> > Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7
> > State: Peer is connected and Accepted (Connected)
> >
> > Why is the peer status showing two peers with different UUIDs?
> GlusterD doesn't generate a new UUID on init if it has already generated
> a UUID earlier. This clearly indicates that on reboot of board B the
> contents of /var/lib/glusterd were wiped out. I've asked you this question
> multiple times: is that the case?
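>
> (One quick way to check, as far as I recall: glusterd persists its identity in
> /var/lib/glusterd/glusterd.info, so if that file survives the reboot the UUID
> stays the same, e.g.:
>
> # cat /var/lib/glusterd/glusterd.info
> UUID=<uuid-of-this-node>
> operating-version=<op-version>
>
> If the whole directory is wiped, a fresh UUID is generated on the next start.)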
>
>
> Yes, I am following the same procedure that is mentioned in the link:
>
> http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
>
> but why is it showing two peer entries?
>
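> (From memory, the steps on that page are roughly: on the rejected node, stop
> glusterd, remove everything under /var/lib/glusterd except glusterd.info,
> start glusterd again, probe the good peer, and restart glusterd once more,
> i.e. something like:)
>
> # service glusterd stop              (or your init system's equivalent)
> # cp /var/lib/glusterd/glusterd.info /tmp/
> # rm -rf /var/lib/glusterd/*
> # cp /tmp/glusterd.info /var/lib/glusterd/
> # service glusterd start
> # gluster peer probe 10.32.0.48
> # service glusterd restart
>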
> >
> > # gluster volume info
> >
> > Volume Name: c_glusterfs
> > Type: Replicate
> > Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d
> > Status: Started
> > Number of Bricks: 1 x 2 = 2
> > Transport-type: tcp
> > Bricks:
> > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> > Options Reconfigured:
> > performance.readdir-ahead: on
> > network.ping-timeout: 4
> > nfs.disable: on
> >
> > # gluster volume heal c_glusterfs info
> > c_glusterfs: Not able to fetch volfile from glusterd
> > Volume heal failed.
> > # gluster volume status c_glusterfs
> > Status of volume: c_glusterfs
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.32.1.144:/opt/lvmdir/c2/brick      N/A       N/A        N       N/A
> > Self-heal Daemon on localhost               N/A       N/A        Y       3922
> >
> > Task Status of Volume c_glusterfs
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> > --
> >
> > At the same time, board A shows the following gluster command output:
> >
> > # gluster peer status
> > Number of Peers: 1
> >
> > Hostname: 10.32.1.144
> > Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006
> > State: Peer in Cluster (Connected)
> >
> > Why is it showing the older UUID of host 10.32.1.144 when this UUID has
> > been changed and the new UUID is 267a92c3-fd28-4811-903c-c1d54854bda9?
> >
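> > (For what it's worth, as I understand it each node keeps one file per known
> > peer, named by that peer's UUID, under /var/lib/glusterd/peers/, e.g.:
> >
> > # ls /var/lib/glusterd/peers/
> > c6b64e36-76da-4e98-a616-48e0e52c7006
> >
> > so board A keeps reporting whatever UUID it last stored for 10.32.1.144.)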
> >
> > # gluster volume heal c_glusterfs info
> > c_glusterfs: Not able to fetch volfile from glusterd
> > Volume heal failed.
> > # gluster volume status c_glusterfs
> > Status of volume: c_glusterfs
> > Gluster process                             TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Brick 10.32.0.48:/opt/lvmdir/c2/brick       49169     0          Y       2427
> > Brick 10.32.1.144:/opt/lvmdir/c2/brick      N/A       N/A        N       N/A
> > Self-heal Daemon on localhost               N/A       N/A        Y       3388
> > Self-heal Daemon on 10.32.1.144             N/A       N/A        Y       3922
> >
> > Task Status of Volume c_glusterfs
> > ------------------------------------------------------------------------------
> > There are no active volume tasks
> >
> > As you can see, "gluster volume status" shows that the brick
> > "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we have tried to
> > remove it but get "volume remove-brick c_glusterfs replica 1
> > 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick
> > 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs" as the error on
> > board A.
> >
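> > (In other words, the exact command that fails, reconstructed from the error
> > message above, is:)
> >
> > # gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force
> >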
> > Please reply on this post, because I am always getting this error in this
> > scenario.
> >
> > For more detail I am also attaching the logs of both boards, which contain
> > some manually created files in which you can find the output of gluster
> > commands from both boards.
> >
> > In the logs,
> > 00030 is board A
> > 00250 is board B.
> This attachment doesn't help much. Could you attach the full glusterd log
> files from both the nodes?
> >
>
> Inside this attachment you will find the full glusterd log files:
> 00300/glusterd/ and 002500/glusterd/
No, that contains the configuration files.
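
(The glusterd log itself lives under /var/log/glusterfs/ rather than
/var/lib/glusterd/; on a default install it is usually something like:

# ls /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

/var/lib/glusterd/ holds only the configuration.)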
<span class="">><br>
> > Thanks in advance waiting for the reply.<br>
> ><br>
> > Regards,<br>
> > Abhishek<br>
> ><br>
> >
> > Regards
> > Abhishek Paliwal
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
>
>
>
>
> --
>
> Regards
> Abhishek Paliwal


--

Regards
Abhishek Paliwal