<div dir="ltr"><div><div>Hi Atin,<br><br>I have the board in the faulty state; can we set up a live session to debug it?<br><br></div><div>Please provide the steps to set up a Hangouts session.<br></div><div><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 16, 2016 at 11:23 AM, Atin Mukherjee <span dir="ltr">&lt;<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">[1970-01-01 00:02:05.860202] D [MSGID: 0]<br>
[store.c:501:gf_store_iter_new] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860518] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860545] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= type value = 2<br>
[1970-01-01 00:02:05.860583] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860609] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= count value = 2<br>
[1970-01-01 00:02:05.860650] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860676] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= status value = 1<br>
[1970-01-01 00:02:05.860717] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860743] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= sub_count value = 2<br>
[1970-01-01 00:02:05.860780] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860806] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= stripe_count value = 1<br>
[1970-01-01 00:02:05.860842] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860868] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= replica_count value = 2<br>
[1970-01-01 00:02:05.860905] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860931] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= disperse_count value = 0<br>
[1970-01-01 00:02:05.860967] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860994] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= redundancy_count value = 0<br>
[1970-01-01 00:02:05.861030] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861056] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= version value = 42<br>
[1970-01-01 00:02:05.861093] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861118] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= transport-type value = 0<br>
[1970-01-01 00:02:05.861155] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861182] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= volume-id value = d86e215c-1710-4b33-8076-fbf8e075d3e7<br>
[1970-01-01 00:02:05.861290] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861317] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= username value = db1d21cb-3feb-41da-88d0-2fc7a34cdb3a<br>
[1970-01-01 00:02:05.861361] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861387] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= password value = df5bf0b7-34dd-4f0d-a01b-62d2b67aa8b0<br>
[1970-01-01 00:02:05.861426] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861455] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= op-version value = 3<br>
[1970-01-01 00:02:05.861503] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861530] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= client-op-version value = 3<br>
[1970-01-01 00:02:05.861568] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861594] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= quota-version value = 0<br>
[1970-01-01 00:02:05.861632] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861658] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= parent_volname value = N/A<br>
[1970-01-01 00:02:05.861696] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861722] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= restored_from_snap value = 00000000-0000-0000-0000-000000000000<br>
[1970-01-01 00:02:05.861762] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861788] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= snap-max-hard-limit value = 256<br>
[1970-01-01 00:02:05.861825] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861851] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= nfs.disable value = on<br>
[1970-01-01 00:02:05.861940] D [MSGID: 0]<br>
[glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:<br>
Parsed as Volume-set:key=nfs.disable,value:on<br>
[1970-01-01 00:02:05.861978] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862004] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= network.ping-timeout value = 4<br>
[1970-01-01 00:02:05.862039] D [MSGID: 0]<br>
[glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:<br>
Parsed as Volume-set:key=network.ping-timeout,value:4<br>
[1970-01-01 00:02:05.862077] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862104] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= performance.readdir-ahead value = on<br>
[1970-01-01 00:02:05.862140] D [MSGID: 0]<br>
[glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:<br>
Parsed as Volume-set:key=performance.readdir-ahead,value:on<br>
[1970-01-01 00:02:05.862178] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862217] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= brick-0 value = 10.32.0.48:-opt-lvmdir-c2-brick<br>
[1970-01-01 00:02:05.862257] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862283] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= brick-1 value = 10.32.1.144:-opt-lvmdir-c2-brick<br>
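[Editor's note] The glusterd_store_update_volinfo lines above are glusterd reading key=value pairs back from its on-disk volume store. As a hedged sketch (assuming the default /var/lib/glusterd layout and that the info file is plain key=value lines, as the keys in the log suggest), the restored values can be checked directly on the board:

```shell
# Print one key's value from a glusterd vols/<volname>/info file.
# The file is assumed to be plain key=value lines (type=2, count=2, ...),
# matching the keys seen in the debug log above.
volinfo_get() {
    # $1: key name, $2: path to the info file
    awk -F= -v k="$1" '$1 == k {print substr($0, length(k) + 2)}' "$2"
}
```

For example, `volinfo_get volume-id /var/lib/glusterd/vols/c_glusterfs/info` should print the d86e215c-... ID seen in the log, if the store survived the reboot.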
<span class=""><br>
On 03/16/2016 11:04 AM, ABHISHEK PALIWAL wrote:<br>
&gt; Hi Atin,<br>
&gt;<br>
</span>&gt; Please tell me the line number where you are seeing that glusterd has<br>
<span class="">&gt; restored value from the disk files in Board B file.<br>
&gt;<br>
&gt; Regards,<br>
&gt; Abhishek<br>
&gt;<br>
&gt; On Tue, Mar 15, 2016 at 11:31 AM, ABHISHEK PALIWAL<br>
</span><span class="">&gt; &lt;<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a> &lt;mailto:<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>&gt;&gt; wrote:<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt;     On Tue, Mar 15, 2016 at 11:10 AM, Atin Mukherjee<br>
</span><div><div class="h5">&gt;     &lt;<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt; wrote:<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt;         On 03/15/2016 10:54 AM, ABHISHEK PALIWAL wrote:<br>
&gt;         &gt; Hi Atin,<br>
&gt;         &gt;<br>
&gt;         &gt; Are these files OK, or do you need some other files?<br>
&gt;         I just started going through the log files you shared. I&#39;ve a few<br>
&gt;         questions for you looking at the log:<br>
&gt;         1. Are you sure the log you have provided from board B is from<br>
&gt;         after a reboot? If you claim that a reboot wipes out<br>
&gt;         /var/lib/glusterd/ then why am I seeing that glusterd has restored<br>
&gt;         values from the disk files?<br>
&gt;<br>
&gt;<br>
&gt;     Yes, these logs are from Board B after the reboot. Could you please<br>
&gt;     tell me the line number where you are seeing that glusterd has<br>
&gt;     restored values from the disk files.<br>
&gt;<br>
&gt;<br>
&gt;         2. From the content of the glusterd configurations which you<br>
&gt;         shared earlier, the peer UUIDs are<br>
&gt;         4bf982c0-b21b-415c-b870-e72f36c7f2e7 from 002500/glusterd/peers &amp;<br>
&gt;         c6b64e36-76da-4e98-a616-48e0e52c7006 from 000300/glusterd/peers.<br>
&gt;         They don&#39;t even exist in glusterd.log.<br>
&gt;<br>
&gt;         Somehow I have a feeling that the sequence of log and configuration<br>
&gt;         files you shared don&#39;t match!<br>
&gt;<br>
&gt;<br>
&gt;     There are two UUID files present in 002500/glusterd/peers:<br>
&gt;     1. 4bf982c0-b21b-415c-b870-e72f36c7f2e7<br>
&gt;     Content of this file is:<br>
&gt;     uuid=4bf982c0-b21b-415c-b870-e72f36c7f2e7<br>
&gt;     state=10<br>
&gt;     hostname1=10.32.0.48<br>
&gt;     I have a question: where is this UUID coming from?<br>
&gt;<br>
&gt;     2. 98a28041-f853-48ac-bee0-34c592eeb827<br>
&gt;     Content of this file is:<br>
&gt;     uuid=f4ebe3c5-b6a4-4795-98e0-732337f76faf //This uuid belongs to the<br>
&gt;     000300 (10.32.0.48) board; you can check this in both of the glusterd<br>
&gt;     log files<br>
&gt;     state=4 //what does this state field indicate in this file?<br>
&gt;     hostname1=10.32.0.48<br>
&gt;<br>
&gt;<br>
&gt;     There is only one UUID file present in 000300/glusterd/peers:<br>
&gt;<br>
&gt;     c6b64e36-76da-4e98-a616-48e0e52c7006 //This is the old UUID of the<br>
&gt;     002500 board before reboot<br>
&gt;<br>
&gt;     content of this file is:<br>
&gt;<br>
&gt;     uuid=267a92c3-fd28-4811-903c-c1d54854bda9 //This is the new UUID<br>
&gt;     generated by the 002500 board after reboot; you can check this as<br>
&gt;     well in the glusterd log file of the 000300 board.<br>
&gt;     state=3<br>
&gt;     hostname1=10.32.1.144<br>
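[Editor's note] The file-name-versus-contents mismatch described above can be cross-checked directly. A hedged sketch, assuming the standard /var/lib/glusterd/peers layout where each file is named after a peer UUID and contains uuid=/state=/hostname1= lines:

```shell
# List each peer file name (the UUID glusterd knew the peer by) next to
# the uuid= line stored inside it; a mismatch like the one above (file
# named with the old UUID, uuid= holding the new one) shows up at once.
list_peers() {
    # $1: path to a glusterd peers directory
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        printf '%s -> %s\n' "$(basename "$f")" "$(grep '^uuid=' "$f")"
    done
}
```

Run as e.g. `list_peers /var/lib/glusterd/peers` on each board and compare the output.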
&gt;<br>
&gt;<br>
&gt;         ~Atin<br>
&gt;<br>
&gt;         &gt;<br>
&gt;         &gt; Regards,<br>
&gt;         &gt; Abhishek<br>
&gt;         &gt;<br>
&gt;         &gt; On Mon, Mar 14, 2016 at 6:12 PM, ABHISHEK PALIWAL<br>
&gt;         &gt; &lt;<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a> &lt;mailto:<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>&gt;<br>
</div></div>&gt;         &lt;mailto:<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
<span class="im HOEnZb">&gt;         &lt;mailto:<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>&gt;&gt;&gt; wrote:<br>
&gt;         &gt;<br>
&gt;         &gt;     You mean etc*-glusterd-*.log file from both of the boards?<br>
&gt;         &gt;<br>
&gt;         &gt;     If yes, please find the attachment for the same.<br>
&gt;         &gt;<br>
&gt;         &gt;     On Mon, Mar 14, 2016 at 5:27 PM, Atin Mukherjee &lt;<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;<br>
</span><span class="im HOEnZb">&gt;         &gt;     &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;&gt; wrote:<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         &gt;         On 03/14/2016 05:09 PM, ABHISHEK PALIWAL wrote:<br>
&gt;         &gt;         &gt; I am not getting which glusterd directory you are asking about. If you are<br>
&gt;         &gt;         &gt; asking about the /var/lib/glusterd directory, then what I shared earlier<br>
&gt;         &gt;         &gt; is the same.<br>
&gt;         &gt;         1. Go to /var/log/glusterfs directory<br>
&gt;         &gt;         2. Look for glusterd log file<br>
&gt;         &gt;         3. Attach the log<br>
&gt;         &gt;         Do it for both the boards.<br>
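[Editor's note] The three steps above can be sketched as follows (the exact log file name varies with the volfile path, hence the glob; the default log directory is an assumption):

```shell
# Locate the glusterd log file on a board using the etc-*-glusterd*.log
# pattern mentioned later in this thread.
find_glusterd_log() {
    # $1: log directory (defaults to /var/log/glusterfs)
    ls "${1:-/var/log/glusterfs}"/etc-*-glusterd*.log 2>/dev/null
}
```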
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt; I have two directories related to gluster<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt; 1. /var/log/glusterfs<br>
&gt;         &gt;         &gt; 2./var/lib/glusterd<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt; On Mon, Mar 14, 2016 at 4:12 PM, Atin Mukherjee &lt;<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;<br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;<br>
</span><span class="im HOEnZb">&gt;         &gt;         &gt; &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;<br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;&gt;&gt; wrote:<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;         &gt;     On 03/14/2016 03:59 PM, ABHISHEK PALIWAL wrote:<br>
&gt;         &gt;         &gt;     &gt; I have only these glusterd files available on the nodes<br>
&gt;         &gt;         &gt;     Look for etc-*-glusterd*.log in /var/log/glusterfs, that represents the<br>
&gt;         &gt;         &gt;     glusterd log file.<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt; Regards,<br>
&gt;         &gt;         &gt;     &gt; Abhishek<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt; On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee &lt;<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;<br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;<br>
&gt;         &gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;<br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;&gt;<br>
</span><div class="HOEnZb"><div class="h5">&gt;         &gt;         &gt;     &gt; &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a><br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt; &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a><br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;<br>
&gt;         &gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a><br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt; &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a><br>
&gt;         &lt;mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt;&gt;&gt;&gt;&gt; wrote:<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     On 03/14/2016 02:18 PM, ABHISHEK PALIWAL<br>
&gt;         wrote:<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt; On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee &lt;<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>&gt; wrote:<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     On 03/14/2016 10:52 AM, ABHISHEK<br>
&gt;         PALIWAL wrote:<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Hi Team,<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; I am facing some issue with peer<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; status, and because of that<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; remove-brick on a replica volume<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; is failing.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Here is the scenario of what I am<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; doing with gluster:<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 1. I have two boards A &amp; B and<br>
&gt;         gluster is<br>
&gt;         &gt;         running on<br>
&gt;         &gt;         &gt;     both of<br>
&gt;         &gt;         &gt;     &gt;     the boards.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 2. On  board I have created a<br>
&gt;         replicated<br>
&gt;         &gt;         volume with one<br>
&gt;         &gt;         &gt;     &gt;     brick on each<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; board.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 3. Created one glusterfs mount<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; point where both of the bricks<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; are mounted.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 4. Started the volume with<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; nfs.disable=true.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 5. Till now everything is in sync<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; between both bricks.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Now I manually plug out board B<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; from the slot and plug it in again.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 1. After bootup of board B, I have<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; started glusterd on board B.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Following is some gluster command<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; output on Board B after step 1.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster peer status<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Number of Peers: 2<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Hostname: 10.32.0.48<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Uuid:<br>
&gt;         f4ebe3c5-b6a4-4795-98e0-732337f76faf<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; State: Accepted peer request<br>
&gt;         (Connected)<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Hostname: 10.32.0.48<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Uuid:<br>
&gt;         4bf982c0-b21b-415c-b870-e72f36c7f2e7<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; State: Peer is connected and Accepted<br>
&gt;         &gt;         (Connected)<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Why is this peer status showing<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; two peers with different UUIDs?<br>
&gt;         &gt;         &gt;     &gt;     &gt;     GlusterD doesn&#39;t generate a new UUID<br>
&gt;         &gt;         &gt;     &gt;     &gt;     on init if it has already generated a<br>
&gt;         &gt;         &gt;     &gt;     &gt;     UUID earlier. This clearly indicates<br>
&gt;         &gt;         &gt;     &gt;     &gt;     that on reboot of board B the contents<br>
&gt;         &gt;         &gt;     &gt;     &gt;     of /var/lib/glusterd were wiped off.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     I&#39;ve asked you this question multiple<br>
&gt;         &gt;         &gt;     &gt;     &gt;     times: is that the case?<br>
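[Editor's note] Whether /var/lib/glusterd really gets wiped can be verified directly: glusterd keeps its own identity in glusterd.info. A hedged sketch (assuming the standard UUID= line in that file): record the value before pulling the board and compare after the reboot.

```shell
# Read glusterd's own UUID from its glusterd.info file; if this value
# changes across a reboot, the store was wiped and regenerated.
own_uuid() {
    # $1: path to glusterd.info (defaults to the standard location)
    awk -F= '$1 == "UUID" {print $2}' "${1:-/var/lib/glusterd/glusterd.info}"
}
```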
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt; Yes, I am following the same steps<br>
&gt;         &gt;         &gt;     &gt;     &gt; mentioned in the link:<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;<br>
&gt;          <a href="http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected" rel="noreferrer" target="_blank">http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected</a><br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt; but why is it showing two peer entries?<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster volume info<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Volume Name: c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Type: Replicate<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Volume ID:<br>
&gt;         c11f1f13-64a0-4aca-98b5-91d609a4a18d<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Status: Started<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Number of Bricks: 1 x 2 = 2<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Transport-type: tcp<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Bricks:<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Brick1:<br>
&gt;         10.32.0.48:/opt/lvmdir/c2/brick<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Brick2:<br>
&gt;         10.32.1.144:/opt/lvmdir/c2/brick<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Options Reconfigured:<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; performance.readdir-ahead: on<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; network.ping-timeout: 4<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; nfs.disable: on<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster volume heal c_glusterfs info<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; c_glusterfs: Not able to fetch<br>
&gt;         volfile from<br>
&gt;         &gt;         glusterd<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Volume heal failed.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster volume status c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Status of volume: c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Gluster process<br>
&gt;         &gt;          TCP Port<br>
&gt;         &gt;         &gt;     RDMA Port<br>
&gt;         &gt;         &gt;     &gt;     &gt;     Online<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Pid<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         ------------------------------------------------------------------------------<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Brick 10.32.1.144:/opt/lvmdir/c2/brick<br>
&gt;         &gt;         N/A       N/A<br>
&gt;         &gt;         &gt;     &gt;         N<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; N/A<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Self-heal Daemon on localhost<br>
&gt;         &gt;          N/A       N/A<br>
&gt;         &gt;         &gt;     &gt;         Y<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 3922<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Task Status of Volume c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;<br>
&gt;         &gt;         &gt;<br>
&gt;         &gt;<br>
&gt;         ------------------------------------------------------------------------------<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; There are no active volume tasks<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; --<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; At the same time Board A has the<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; following gluster command output:<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster peer status<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Number of Peers: 1<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Hostname: 10.32.1.144<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Uuid:<br>
&gt;         c6b64e36-76da-4e98-a616-48e0e52c7006<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; State: Peer in Cluster (Connected)<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Why is it showing the older UUID of<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; host 10.32.1.144 when this UUID has<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; been changed and the new UUID is<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 267a92c3-fd28-4811-903c-c1d54854bda9?<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster volume heal c_glusterfs info<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; c_glusterfs: Not able to fetch<br>
&gt;         volfile from<br>
&gt;         &gt;         glusterd<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Volume heal failed.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; # gluster volume status c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Status of volume: c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Gluster process                          TCP Port  RDMA Port  Online  Pid<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; ------------------------------------------------------------------------------<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Brick 10.32.0.48:/opt/lvmdir/c2/brick    49169     0          Y       2427<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Brick 10.32.1.144:/opt/lvmdir/c2/brick   N/A       N/A        N       N/A<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Self-heal Daemon on localhost            N/A       N/A        Y       3388<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Self-heal Daemon on 10.32.1.144          N/A       N/A        Y       3922<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Task Status of Volume c_glusterfs<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; ------------------------------------------------------------------------------<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; There are no active volume tasks<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
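[The offline brick can also be picked out mechanically; a small sketch that filters a pasted sample of the status table above (real use would pipe `gluster volume status` into the same filter):]<br>

```shell
# Print brick rows whose Online column is "N". The sample lines below are
# pasted from the status table; field positions are taken from that layout.
awk '$1 == "Brick" && $(NF-1) == "N" { print $2, "is offline" }' <<'EOF'
Brick 10.32.0.48:/opt/lvmdir/c2/brick    49169  0    Y  2427
Brick 10.32.1.144:/opt/lvmdir/c2/brick   N/A    N/A  N  N/A
EOF
```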
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; As you can see, &quot;gluster volume status&quot; shows that brick<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; &quot;10.32.1.144:/opt/lvmdir/c2/brick&quot; is offline, so we tried to<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; remove it, but on board A we get the error: &quot;volume remove-brick<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force :<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; volume c_glusterfs&quot;.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Please reply, because I always get this error in this scenario.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
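[On the &quot;Incorrect brick&quot; failure: glusterd rejects remove-brick when the requested host:path does not exactly match a brick it has recorded for the volume, so any drift (a hostname that now resolves differently, a changed peer identity, even a stray trailing space) can trigger it. An illustrative string comparison, not glusterd's actual code:]<br>

```shell
# Illustrative only (not glusterd source): the requested brick must match
# a recorded brick exactly; here a trailing space is enough to fail.
recorded="10.32.1.144:/opt/lvmdir/c2/brick"
requested="10.32.1.144:/opt/lvmdir/c2/brick "   # note the trailing space
if [ "$requested" = "$recorded" ]; then
  echo "brick accepted"
else
  echo "Incorrect brick $requested for volume c_glusterfs"
fi
```

[Comparing the brick path in `gluster volume info` character-for-character against the one passed to remove-brick would rule this class of mismatch in or out.]<br>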
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; For more detail, I am also attaching the logs of both boards,<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; including a manually created file with the output of the gluster<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; commands from both boards.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; In the logs:<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 00030 is board A<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; 00250 is board B.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     This attachment doesn&#39;t help much. Could you attach the full<br>
&gt;         &gt;         &gt;     &gt;     &gt;     glusterd log files from both nodes?<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt; Inside this attachment you will find the full glusterd log files<br>
&gt;         &gt;         &gt;     &gt;     &gt; under 00300/glusterd/ and 002500/glusterd/<br>
&gt;         &gt;         &gt;     &gt;     No, that contains the configuration files.<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Thanks in advance; waiting for your reply.<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Regards,<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Abhishek<br>
&gt;         _______________________________________________<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; Gluster-devel mailing list<br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt; <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
&gt;         &gt;         <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
&gt;         &gt;         &gt;     &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt;<br>
&gt;         &gt;         &gt;     &gt;     &gt; --<br>
&gt;         &gt;         &gt;     &gt;     &gt; Regards<br>
&gt;         &gt;         &gt;     &gt;     &gt; Abhishek Paliwal<br>
&gt;<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>