<div dir="ltr"><div><div><div>Hi Atin,<br><br></div>Is these files are ok? or you need some other files.<br><br></div>Regards,<br></div><div>Abhishek<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 14, 2016 at 6:12 PM, ABHISHEK PALIWAL <span dir="ltr">&lt;<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>You mean etc*-glusterd-*.log file from both of the boards?<br><br></div>if yes please find the attachment for the same.<br></div><div class="gmail_extra"><div><div class="h5"><br><div class="gmail_quote">On Mon, Mar 14, 2016 at 5:27 PM, Atin Mukherjee <span dir="ltr">&lt;<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
<br>
On 03/14/2016 05:09 PM, ABHISHEK PALIWAL wrote:
> I am not getting which glusterd directory you are asking about. If you
> mean the /var/lib/glusterd directory, then it is the same one I shared
> earlier.

1. Go to the /var/log/glusterfs directory.
2. Look for the glusterd log file.
3. Attach the log.

Do this for both of the boards.
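
A minimal sketch of what that looks like on each board (the exact file
name depends on how glusterd was started; on most installs it is
etc-glusterfs-glusterd.vol.log):

    cd /var/log/glusterfs
    ls -l *glusterd*.log                    # locate the glusterd log
    # stage a copy for attaching (destination name is only a suggestion)
    cp etc-glusterfs-glusterd.vol.log /tmp/glusterd-$(hostname).log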
>
> I have two directories related to gluster:
>
> 1. /var/log/glusterfs
> 2. /var/lib/glusterd
>
> On Mon, Mar 14, 2016 at 4:12 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
>
>     On 03/14/2016 03:59 PM, ABHISHEK PALIWAL wrote:
>     > I have only these glusterd files available on the nodes.
>     Look for etc-*-glusterd*.log in /var/log/glusterfs; that is the
>     glusterd log file.
>     >
>     > Regards,
>     > Abhishek
>     >
>     > On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
>     >
>     >     On 03/14/2016 02:18 PM, ABHISHEK PALIWAL wrote:
>     >     >
>     >     > On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
>     >     >
>     >     >     On 03/14/2016 10:52 AM, ABHISHEK PALIWAL wrote:
>     >     >     > Hi Team,
>     >     >     >
>     >     >     > I am facing some issue with peer status, and because of that
>     >     >     > remove-brick on a replica volume is failing.
>     >     >     >
>     >     >     > Here is the scenario of what I am doing with gluster (a rough
>     >     >     > command-level sketch follows this list):
>     >     >     >
>     >     >     > 1. I have two boards, A and B, and gluster is running on both
>     >     >     >    of the boards.
>     >     >     > 2. On one board I have created a replicated volume with one
>     >     >     >    brick on each board.
>     >     >     > 3. Created one glusterfs mount point where both of the bricks
>     >     >     >    are mounted.
>     >     >     > 4. Started the volume with nfs.disable=true.
>     >     >     > 5. Up to now everything is in sync between both of the bricks.
>     >     >     >
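
(For concreteness, steps 1-5 map roughly onto the commands below. This is
only a sketch: the addresses and brick paths are taken from the volume
info later in this mail, while the mount point /mnt/c_glusterfs is
assumed.)

    # run from board A (10.32.0.48); board B is 10.32.1.144
    gluster peer probe 10.32.1.144
    gluster volume create c_glusterfs replica 2 \
        10.32.0.48:/opt/lvmdir/c2/brick 10.32.1.144:/opt/lvmdir/c2/brick
    gluster volume set c_glusterfs nfs.disable on
    gluster volume start c_glusterfs
    # fuse-mount the volume (mount point assumed)
    mount -t glusterfs 10.32.0.48:/c_glusterfs /mnt/c_glusterfs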
>     >     >     > Now, when I manually plug board B out of the slot and plug it
>     >     >     > in again:
>     >     >     >
>     >     >     > 1. After board B boots up, I start glusterd on board B.
>     >     >     >
>     >     >     > Following is some gluster command output on board B after
>     >     >     > step 1:
>     >     >     >
>     >     >     > # gluster peer status
>     >     >     > Number of Peers: 2
>     >     >     >
>     >     >     > Hostname: 10.32.0.48
>     >     >     > Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf
>     >     >     > State: Accepted peer request (Connected)
>     >     >     >
>     >     >     > Hostname: 10.32.0.48
>     >     >     > Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7
>     >     >     > State: Peer is connected and Accepted (Connected)
>     >     >     >
>     >     >     > Why is the peer status showing two peers with different
>     >     >     > UUIDs?
>     >     >     GlusterD doesn't generate a new UUID on init if it has
>     >     >     already generated a UUID earlier. This clearly indicates that
>     >     >     on reboot of board B the contents of /var/lib/glusterd were
>     >     >     wiped out. I've asked you this question multiple times: is
>     >     >     that the case?
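
(GlusterD's persisted identity can be checked directly; a quick sketch,
assuming the default location:)

    # glusterd stores its UUID in /var/lib/glusterd/glusterd.info;
    # if that file survives the reboot, the UUID cannot change
    cat /var/lib/glusterd/glusterd.info
    # UUID=267a92c3-fd28-4811-903c-c1d54854bda9    (illustrative output)
    # operating-version=30706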
>     >     >
>     >     > Yes, I am following the same procedure that is mentioned in the
>     >     > link:
>     >     >
>     >     > http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
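
(For readers without the link handy, the procedure on that page is
roughly the following, run on the rejected node; the key step is
preserving glusterd.info, which is exactly what keeps the UUID stable:)

    /etc/init.d/glusterd stop            # or: systemctl stop glusterd
    cd /var/lib/glusterd
    # remove everything EXCEPT glusterd.info so the node keeps its UUID
    find . -mindepth 1 ! -name glusterd.info -delete
    /etc/init.d/glusterd start
    gluster peer probe 10.32.0.48        # re-probe a healthy peer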
>     >     >
>     >     > But why is it showing two peer entries?
>     >     >
>     >     >     >
>     >     >     > # gluster volume info
>     >     >     >
>     >     >     > Volume Name: c_glusterfs
>     >     >     > Type: Replicate
>     >     >     > Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d
>     >     >     > Status: Started
>     >     >     > Number of Bricks: 1 x 2 = 2
>     >     >     > Transport-type: tcp
>     >     >     > Bricks:
>     >     >     > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>     >     >     > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
>     >     >     > Options Reconfigured:
>     >     >     > performance.readdir-ahead: on
>     >     >     > network.ping-timeout: 4
>     >     >     > nfs.disable: on
>     >     >     >
>     >     >     > # gluster volume heal c_glusterfs info
>     >     >     > c_glusterfs: Not able to fetch volfile from glusterd
>     >     >     > Volume heal failed.
>     >     >     >
>     >     >     > # gluster volume status c_glusterfs
>     >     >     > Status of volume: c_glusterfs
>     >     >     > Gluster process                             TCP Port  RDMA Port  Online  Pid
>     >     >     > ------------------------------------------------------------------------------
>     >     >     > Brick 10.32.1.144:/opt/lvmdir/c2/brick      N/A       N/A        N       N/A
>     >     >     > Self-heal Daemon on localhost               N/A       N/A        Y       3922
>     >     >     >
>     >     >     > Task Status of Volume c_glusterfs
>     >     >     > ------------------------------------------------------------------------------
>     >     >     > There are no active volume tasks
>     >     >     >
>     >     >     > At the same time, board A has the following gluster command
>     >     >     > output:
>     >     >     >
>     >     >     > # gluster peer status
>     >     >     > Number of Peers: 1
>     >     >     >
>     >     >     > Hostname: 10.32.1.144
>     >     >     > Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006
>     >     >     > State: Peer in Cluster (Connected)
>     >     >     >
>     >     >     > Why is it showing the older UUID of host 10.32.1.144 when
>     >     >     > this UUID has been changed and the new UUID is
>     >     >     > 267a92c3-fd28-4811-903c-c1d54854bda9?
>     >     >     >
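
(Board A's knowledge of its peers is also persisted on disk, one file per
peer named by the peer's UUID, so a stale entry is easy to spot; a
sketch, assuming default paths:)

    ls /var/lib/glusterd/peers/
    # c6b64e36-76da-4e98-a616-48e0e52c7006        (illustrative listing)
    cat /var/lib/glusterd/peers/*
    # uuid=c6b64e36-76da-4e98-a616-48e0e52c7006
    # state=3
    # hostname1=10.32.1.144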
>     >     >     >
>     >     >     > # gluster volume heal c_glusterfs info
>     >     >     > c_glusterfs: Not able to fetch volfile from glusterd
>     >     >     > Volume heal failed.
>     >     >     >
>     >     >     > # gluster volume status c_glusterfs
>     >     >     > Status of volume: c_glusterfs
>     >     >     > Gluster process                             TCP Port  RDMA Port  Online  Pid
>     >     >     > ------------------------------------------------------------------------------
>     >     >     > Brick 10.32.0.48:/opt/lvmdir/c2/brick       49169     0          Y       2427
>     >     >     > Brick 10.32.1.144:/opt/lvmdir/c2/brick      N/A       N/A        N       N/A
>     >     >     > Self-heal Daemon on localhost               N/A       N/A        Y       3388
>     >     >     > Self-heal Daemon on 10.32.1.144             N/A       N/A        Y       3922
>     >     >     >
>     >     >     > Task Status of Volume c_glusterfs
>     >     >     > ------------------------------------------------------------------------------
>     >     >     > There are no active volume tasks
>     >     >     >
>     >     >     > As you can see, "gluster volume status" shows that brick
>     >     >     > "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we tried to
>     >     >     > remove it but got the error "volume remove-brick c_glusterfs
>     >     >     > replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED :
>     >     >     > Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume
>     >     >     > c_glusterfs" on board A.
>     >     >     >
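
(For reference, the failing command reduces the replica count from 2 to 1
while dropping the dead brick; the brick argument must match the volume's
brick definition exactly, so comparing against volume info first is the
usual sanity check:)

    gluster volume info c_glusterfs | grep '^Brick'
    gluster volume remove-brick c_glusterfs replica 1 \
        10.32.1.144:/opt/lvmdir/c2/brick force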
>     >     >     > Please reply on this post, because I always get this error
>     >     >     > in this scenario.
>     >     >     >
>     >     >     > For more detail I am also attaching the logs of both of the
>     >     >     > boards, which include some manually created files in which
>     >     >     > you can find the output of gluster commands from both of the
>     >     >     > boards.
>     >     >     >
>     >     >     > In the logs,
>     >     >     > 00030 is board A
>     >     >     > 00250 is board B.
>     >     >     This attachment doesn't help much. Could you attach the full
>     >     >     glusterd log files from both the nodes?
>     >     >
>     >     > Inside this attachment you will find the full glusterd log files:
>     >     > 00300/glusterd/ and 002500/glusterd/
>     >     No, that contains the configuration files.
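
(The two trees are easy to confuse; a sketch of the distinction, assuming
default paths:)

    ls /var/lib/glusterd
    # glusterd.info  options  peers  vols ...      (configuration: what was attached)
    ls /var/log/glusterfs
    # cli.log  etc-glusterfs-glusterd.vol.log ...  (the logs being requested)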
>     >     >     > Thanks in advance; waiting for the reply.
>     >     >     >
>     >     >     > Regards,
>     >     >     > Abhishek
>     >     >     >
>     >     >     > _______________________________________________
>     >     >     > Gluster-devel mailing list
>     >     >     > Gluster-devel@gluster.org
>     >     >     > http://www.gluster.org/mailman/listinfo/gluster-devel
>
> --
> Regards
> Abhishek Paliwal

--
Regards
Abhishek Paliwal