<div dir="ltr"><div><div><div>Hi Atin,<br><br></div><span style="color:rgb(0,0,0)">Please point me to the line number in the Board B log file where you see that glusterd has restored values from the on-disk files.<br><br></span></div><span style="color:rgb(0,0,0)">Regards,<br></span></div><div><span style="color:rgb(0,0,0)">Abhishek<br></span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Mar 15, 2016 at 11:31 AM, ABHISHEK PALIWAL <span dir="ltr"><<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Tue, Mar 15, 2016 at 11:10 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span><br>
<br>
On 03/15/2016 10:54 AM, ABHISHEK PALIWAL wrote:<br>
> Hi Atin,<br>
><br>
> Are these files OK, or do you need some other files?<br>
</span>I just started going through the log files you shared. I have a few<br>
questions for you after looking at the logs:<br>
1. Are you sure the log you provided from board B is from after a<br>
reboot? If you claim that a reboot wipes out /var/lib/glusterd/, then why<br>
am I seeing that glusterd has restored values from the on-disk files?<br></blockquote></span><div><br>Yes, these logs are from Board B after the reboot. Could you please point me to the line number where you see that glusterd has restored values from the disk files?</div><span class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
2. From the content of the glusterd configurations which you shared earlier,<br>
the peer UUIDs are 4bf982c0-b21b-415c-b870-e72f36c7f2e7,<br>
4bf982c0-b21b-415c-b870-e72f36c7f2e7 from 002500/glusterd/peers &<br>
c6b64e36-76da-4e98-a616-48e0e52c7006 from 000300/glusterd/peers. They<br>
don't even exist in glusterd.log.<br>
<br>
Somehow I have a feeling that the sequence of log and configuration<br>
files you shared doesn't match!<br></blockquote><div><br></div></span><div>There are two UUID files present in 002500/glusterd/peers:<br>1. 4bf982c0-b21b-415c-b870-e72f36c7f2e7<br>Content of this file is:<br>uuid=4bf982c0-b21b-415c-b870-e72f36c7f2e7<br>state=10<br>hostname1=10.32.0.48<br>I have a question: where is this UUID coming from?<br><br>2. 98a28041-f853-48ac-bee0-34c592eeb827<br>Content of this file is:<br>uuid=f4ebe3c5-b6a4-4795-98e0-732337f76faf   // This UUID belongs to the 000300 (10.32.0.48) board; you can check this in both of the glusterd log files<br>state=4   // What does this state field indicate in this file?<br>hostname1=10.32.0.48<br><br><br></div><div>There is only one UUID file present in 000300/glusterd/peers:<br><br>c6b64e36-76da-4e98-a616-48e0e52c7006   // This is the old UUID of the 002500 board from before the reboot<br><br></div><div>Content of this file is:<br><br>uuid=267a92c3-fd28-4811-903c-c1d54854bda9   // This is the new UUID generated by the 002500 board after the reboot; you can check this as well in the glusterd log file of the 000300 board.<br>state=3<br>hostname1=10.32.1.144<br></div><div><div class="h5"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
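(As an aside, a minimal way to cross-check these identities on both boards, assuming the default /var/lib/glusterd layout, is to compare the local UUID with the peer files; the commands below are only a sketch, not part of the original report.)<br>
<br>
# The local daemon identity (UUID=...) lives in glusterd.info;<br>
# one file per known peer lives under peers/ with uuid=, state= and hostname1= fields.<br>
cat /var/lib/glusterd/glusterd.info<br>
grep -H . /var/lib/glusterd/peers/*<br>
<br>
(The uuid= value in each peers/ file on one board should match the UUID= from glusterd.info on the other board; a mismatch like the one described above usually means one side has regenerated its identity.)<br>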
<span><font color="#888888"><br>
~Atin<br>
</font></span><span><br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
> On Mon, Mar 14, 2016 at 6:12 PM, ABHISHEK PALIWAL<br>
</span><span>> <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>> wrote:<br>
><br>
> > You mean the etc-*-glusterd*.log file from both of the boards?<br>
><br>
> > If yes, please find the attachment for the same.<br>
><br>
> > On Mon, Mar 14, 2016 at 5:27 PM, Atin Mukherjee<br>
</span><span>> > <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br>
><br>
><br>
><br>
> On 03/14/2016 05:09 PM, ABHISHEK PALIWAL wrote:<br>
> > I am not sure which glusterd directory you are asking about. If you are<br>
> > asking about the /var/lib/glusterd directory, then what I shared earlier<br>
> > is the same.<br>
> 1. Go to the /var/log/glusterfs directory.<br>
> 2. Look for the glusterd log file.<br>
> 3. Attach the log.<br>
> Do it for both the boards.<br>
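(For example, collecting these on each board might look something like the sketch below; the exact file name can vary, but with the default /etc/glusterfs/glusterd.vol volfile the glusterd log is usually named as shown.)<br>
<br>
# Run on board A and on board B, then attach the copied files.<br>
ls -l /var/log/glusterfs/<br>
cp /var/log/glusterfs/etc-glusterfs-glusterd.vol.log /tmp/glusterd-$(hostname).log<br>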
> ><br>
> > I have two directories related to gluster<br>
> ><br>
> > 1. /var/log/glusterfs<br>
> > 2./var/lib/glusterd<br>
> ><br>
> > On Mon, Mar 14, 2016 at 4:12 PM, Atin Mukherjee<br>
</span><span>> > <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br>
> ><br>
> ><br>
> ><br>
> > On 03/14/2016 03:59 PM, ABHISHEK PALIWAL wrote:<br>
> > > I have only these glusterd files available on the nodes<br>
> > Look for etc-*-glusterd*.log in /var/log/glusterfs; that is the<br>
> > glusterd log file.<br>
> > ><br>
> > > Regards,<br>
> > > Abhishek<br>
> > ><br>
> > > On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee<br>
</span><div><div>> > > <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br>
> > ><br>
> > ><br>
> > ><br>
> > > On 03/14/2016 02:18 PM, ABHISHEK PALIWAL wrote:<br>
> > > ><br>
> > > ><br>
> > > > On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee<br>
> > > > <<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>> wrote:<br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > On 03/14/2016 10:52 AM, ABHISHEK PALIWAL wrote:<br>
> > > > > Hi Team,<br>
> > > > ><br>
> > > > > I am facing an issue with peer status, and because of that,<br>
> > > > > remove-brick on a replica volume is failing.<br>
> > > > ><br>
> > > > > Here is the scenario of what I am doing with gluster:<br>
> > > > ><br>
> > > > > 1. I have two boards, A and B, and gluster is running on both of the boards.<br>
> > > > > 2. I have created a replicated volume with one brick on each board.<br>
> > > > > 3. Created one glusterfs mount point where both of the bricks are mounted.<br>
> > > > > 4. Started the volume with nfs.disable=true.<br>
> > > > > 5. Till now everything is in sync between both of the bricks.<br>
> > > > ><br>
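(For reference, the setup steps above presumably correspond to something like the following commands, reconstructed from the volume info shown further below; the mount point used here is only an assumed example.)<br>
<br>
# On board A (10.32.0.48): probe board B and create the 1x2 replicated volume.<br>
gluster peer probe 10.32.1.144<br>
gluster volume create c_glusterfs replica 2 transport tcp 10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick<br>
gluster volume set c_glusterfs nfs.disable on<br>
gluster volume set c_glusterfs network.ping-timeout 4<br>
gluster volume start c_glusterfs<br>
# Mount the volume on each board (example mount point).<br>
mount -t glusterfs 10.32.0.48:/c_glusterfs /mnt/c_glusterfs<br>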
> > > > > Now, when I manually plug out board B from the slot and plug it in again:<br>
> > > > ><br>
> > > > > 1. After board B boots up, I start glusterd on board B.<br>
> > > > ><br>
> > > > > The following is some gluster command output on Board B after step 1.<br>
> > > > ><br>
> > > > > # gluster peer status<br>
> > > > > Number of Peers: 2<br>
> > > > ><br>
> > > > > Hostname: 10.32.0.48<br>
> > > > > Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf<br>
> > > > > State: Accepted peer request (Connected)<br>
> > > > ><br>
> > > > > Hostname: 10.32.0.48<br>
> > > > > Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7<br>
> > > > > State: Peer is connected and Accepted (Connected)<br>
> > > > ><br>
> > > > > Why is this peer status showing two peers with different UUIDs?<br>
> > > > GlusterD doesn't generate a new UUID on init if it has already<br>
> > > > generated a UUID earlier. This clearly indicates that on reboot of<br>
> > > > board B the contents of /var/lib/glusterd were wiped off. I've asked<br>
> > > > you this question multiple times: is that the case?<br>
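(A quick way to confirm whether /var/lib/glusterd actually survives a reboot, sketched here under the assumption that parts of /var may sit on volatile storage on these boards, would be:)<br>
<br>
# On board B, before and after the reboot:<br>
df -h /var/lib/glusterd               # persistent disk, or tmpfs/ramfs?<br>
cat /var/lib/glusterd/glusterd.info   # the UUID= line should be identical across reboots<br>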
> > > ><br>
> > > ><br>
> > > > Yes, I am following the same procedure which is mentioned in the link:<br>
> > > ><br>
> > > > <a href="http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected" rel="noreferrer" target="_blank">http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected</a><br>
> > > ><br>
> > > > but why is it showing two peer entries?<br>
> > > ><br>
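(From memory, the procedure on that wiki page is roughly the sketch below, run on the rejected peer; treat it as an outline rather than the authoritative steps, and adapt the stop/start commands to however glusterd is managed on the board.)<br>
<br>
# Stop glusterd, keep only the local identity file, then restart and re-probe.<br>
pkill glusterd<br>
cd /var/lib/glusterd<br>
find . -mindepth 1 ! -name glusterd.info -delete   # keep only glusterd.info<br>
glusterd<br>
gluster peer probe 10.32.0.48<br>
gluster peer status   # repeat restart/probe until the peer shows "Peer in Cluster"<br>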
> > > > ><br>
> > > > > # gluster volume info<br>
> > > > ><br>
> > > > > Volume Name: c_glusterfs<br>
> > > > > Type: Replicate<br>
> > > > > Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d<br>
> > > > > Status: Started<br>
> > > > > Number of Bricks: 1 x 2 = 2<br>
> > > > > Transport-type: tcp<br>
> > > > > Bricks:<br>
> > > > > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
> > > > > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
> > > > > Options Reconfigured:<br>
> > > > > performance.readdir-ahead: on<br>
> > > > > network.ping-timeout: 4<br>
> > > > > nfs.disable: on<br>
> > > > > # gluster volume heal c_glusterfs info<br>
> > > > > c_glusterfs: Not able to fetch volfile from glusterd<br>
> > > > > Volume heal failed.<br>
> > > > > # gluster volume status c_glusterfs<br>
> > > > > Status of volume: c_glusterfs<br>
> > > > > Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
> > > > > ------------------------------------------------------------------------------<br>
> > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick      N/A       N/A        N       N/A<br>
> > > > > Self-heal Daemon on localhost               N/A       N/A        Y       3922<br>
> > > > ><br>
> > > > > Task Status of Volume c_glusterfs<br>
> > > > > ------------------------------------------------------------------------------<br>
> > > > > There are no active volume tasks<br>
> > > > > --<br>
> > > > ><br>
> > > > > At the same time, board A has the following gluster command output:<br>
> > > > ><br>
> > > > > # gluster peer status<br>
> > > > > Number of Peers: 1<br>
> > > > ><br>
> > > > > Hostname: 10.32.1.144<br>
> > > > > Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006<br>
> > > > > State: Peer in Cluster (Connected)<br>
> > > > ><br>
> > > > > Why is it showing the older UUID of host 10.32.1.144 when this UUID<br>
> > > > > has changed and the new UUID is 267a92c3-fd28-4811-903c-c1d54854bda9?<br>
> > > > ><br>
> > > > ><br>
> > > > > # gluster volume heal c_glusterfs info<br>
> > > > > c_glusterfs: Not able to fetch volfile from glusterd<br>
> > > > > Volume heal failed.<br>
> > > > > # gluster volume status c_glusterfs<br>
> > > > > Status of volume: c_glusterfs<br>
> > > > > Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
> > > > > ------------------------------------------------------------------------------<br>
> > > > > Brick 10.32.0.48:/opt/lvmdir/c2/brick       49169     0          Y       2427<br>
> > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick      N/A       N/A        N       N/A<br>
> > > > > Self-heal Daemon on localhost               N/A       N/A        Y       3388<br>
> > > > > Self-heal Daemon on 10.32.1.144             N/A       N/A        Y       3922<br>
> > > > ><br>
> > > > > Task Status of Volume c_glusterfs<br>
> > > > > ------------------------------------------------------------------------------<br>
> > > > > There are no active volume tasks<br>
> > > > ><br>
> > > > > As you can see, "gluster volume status" shows that brick<br>
> > > > > "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we have tried to<br>
> > > > > remove it but are getting the error "volume remove-brick c_glusterfs<br>
> > > > > replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect<br>
> > > > > brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs" on<br>
> > > > > Board A.<br>
> > > > ><br>
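(For what it is worth, shrinking a 1x2 replica down to a single brick is normally done with exactly the command attempted above; a sketch, assuming the surviving brick on board A is healthy and the peer list is consistent:)<br>
<br>
# On board A: verify the peer list first, then drop the dead brick and reduce the replica count.<br>
gluster peer status<br>
gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force<br>
gluster volume info c_glusterfs<br>
<br>
(The "Incorrect brick" failure reported above suggests that glusterd on board A no longer matches that brick to the volume, which may well be tied to the stale/changed peer UUIDs discussed earlier.)<br>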
> > > > > Please reply to this post, because I always get this error in this scenario.<br>
> > > > ><br>
> > > > > For more detail I am also attaching the logs of both of the boards,<br>
> > > > > which include some manually created files in which you can find the<br>
> > > > > output of gluster commands from both of the boards.<br>
> > > > ><br>
> > > > > In the logs,<br>
> > > > > 00030 is board A<br>
> > > > > 00250 is board B.<br>
> > > > This attachment doesn't help much. Could you attach the full<br>
> > > > glusterd log files from both the nodes?<br>
> > > > ><br>
> > > ><br>
> > > > Inside this attachment you will find the full glusterd log files:<br>
> > > > 00300/glusterd/ and 002500/glusterd/<br>
> > > No, that contains the configuration files.<br>
> > > ><br>
> > > > > Thanks in advance; waiting for the reply.<br>
> > > > ><br>
> > > > > Regards,<br>
> > > > > Abhishek<br>
> > > > ><br>
> > > > ><br>
> > > > > Regards<br>
> > > > > Abhishek Paliwal<br>
> > > > ><br>
> > > > ><br>
> > > > > _______________________________________________<br>
> > > > > Gluster-devel mailing list<br>
> > > > > <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
> > > > > <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
> > > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > --<br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > Regards<br>
> > > > Abhishek Paliwal<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > Regards<br>
> > > Abhishek Paliwal<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > Regards<br>
> > Abhishek Paliwal<br>
><br>
><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
><br>
><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
</div></div></blockquote></div></div></div><span class="HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</font></span></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>