<div dir="ltr"><div><div>Hi Atin,<br><br>I have the board present in faulty state can we setup the live session to debug it?<br><br></div><div>Please provide the steps to setup hangout session.<br></div><div><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 16, 2016 at 11:23 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">[1970-01-01 00:02:05.860202] D [MSGID: 0]<br>
[store.c:501:gf_store_iter_new] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860518] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860545] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= type value = 2<br>
[1970-01-01 00:02:05.860583] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860609] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= count value = 2<br>
[1970-01-01 00:02:05.860650] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860676] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= status value = 1<br>
[1970-01-01 00:02:05.860717] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860743] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= sub_count value = 2<br>
[1970-01-01 00:02:05.860780] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860806] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= stripe_count value = 1<br>
[1970-01-01 00:02:05.860842] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860868] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= replica_count value = 2<br>
[1970-01-01 00:02:05.860905] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860931] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= disperse_count value = 0<br>
[1970-01-01 00:02:05.860967] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.860994] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= redundancy_count value = 0<br>
[1970-01-01 00:02:05.861030] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861056] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= version value = 42<br>
[1970-01-01 00:02:05.861093] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861118] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= transport-type value = 0<br>
[1970-01-01 00:02:05.861155] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861182] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= volume-id value = d86e215c-1710-4b33-8076-fbf8e075d3e7<br>
[1970-01-01 00:02:05.861290] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861317] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= username value = db1d21cb-3feb-41da-88d0-2fc7a34cdb3a<br>
[1970-01-01 00:02:05.861361] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861387] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= password value = df5bf0b7-34dd-4f0d-a01b-62d2b67aa8b0<br>
[1970-01-01 00:02:05.861426] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861455] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= op-version value = 3<br>
[1970-01-01 00:02:05.861503] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861530] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= client-op-version value = 3<br>
[1970-01-01 00:02:05.861568] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861594] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= quota-version value = 0<br>
[1970-01-01 00:02:05.861632] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861658] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= parent_volname value = N/A<br>
[1970-01-01 00:02:05.861696] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861722] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= restored_from_snap value = 00000000-0000-0000-0000-000000000000<br>
[1970-01-01 00:02:05.861762] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861788] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= snap-max-hard-limit value = 256<br>
[1970-01-01 00:02:05.861825] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.861851] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= nfs.disable value = on<br>
[1970-01-01 00:02:05.861940] D [MSGID: 0]<br>
[glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:<br>
Parsed as Volume-set:key=nfs.disable,value:on<br>
[1970-01-01 00:02:05.861978] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862004] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= network.ping-timeout value = 4<br>
[1970-01-01 00:02:05.862039] D [MSGID: 0]<br>
[glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:<br>
Parsed as Volume-set:key=network.ping-timeout,value:4<br>
[1970-01-01 00:02:05.862077] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862104] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= performance.readdir-ahead value = on<br>
[1970-01-01 00:02:05.862140] D [MSGID: 0]<br>
[glusterd-store.c:2725:glusterd_store_update_volinfo] 0-management:<br>
Parsed as Volume-set:key=performance.readdir-ahead,value:on<br>
[1970-01-01 00:02:05.862178] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862217] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= brick-0 value = 10.32.0.48:-opt-lvmdir-c2-brick<br>
[1970-01-01 00:02:05.862257] D [MSGID: 0]<br>
[store.c:613:gf_store_iter_get_next] 0-: Returning with 0<br>
[1970-01-01 00:02:05.862283] D [MSGID: 0]<br>
[glusterd-store.c:2567:glusterd_store_update_volinfo] 0-management: key<br>
= brick-1 value = 10.32.1.144:-opt-lvmdir-c2-brick<br>
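
(For context: the entries above are glusterd_store_update_volinfo() reading the persisted volume store back from disk. On a default install those key/value pairs live in /var/lib/glusterd/vols/<volname>/info. A minimal sketch of how to confirm what survives a reboot, assuming the volume name c_glusterfs from this thread:

# cat /var/lib/glusterd/vols/c_glusterfs/info
type=2
count=2
status=1
...

If that file is still populated after the reboot, /var/lib/glusterd was not fully wiped.)
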
<span class=""><br>
On 03/16/2016 11:04 AM, ABHISHEK PALIWAL wrote:<br>
> Hi Atin,<br>
><br>
</span>> Please tell me the line number where you areseeing that glusterd has<br>
<span class="">> restored value from the disk files in Board B file.<br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
> On Tue, Mar 15, 2016 at 11:31 AM, ABHISHEK PALIWAL<br>
</span><span class="">> <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a> <mailto:<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>>> wrote:<br>
><br>
><br>
><br>
> On Tue, Mar 15, 2016 at 11:10 AM, Atin Mukherjee<br>
</span><div><div class="h5">> <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a> <mailto:<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>>> wrote:<br>
>
>
> On 03/15/2016 10:54 AM, ABHISHEK PALIWAL wrote:
> > Hi Atin,
> >
> > Are these files OK, or do you need some other files?
> I just started going through the log files you shared. I've a few
> questions for you looking at the log:
> 1. Are you sure the log you have provided from board B is post a
> reboot? If you claim that a reboot wipes off /var/lib/glusterd/, then
> why am I seeing that glusterd has restored values from the disk files?
>
>
> Yes, these logs are from Board B after the reboot. Could you please
> point me to the line number where you are seeing that glusterd has
> restored values from the disk files.
>
>
> 2. From the content of the glusterd configurations which you shared
> earlier, the peer UUIDs are 4bf982c0-b21b-415c-b870-e72f36c7f2e7 from
> 002500/glusterd/peers & c6b64e36-76da-4e98-a616-48e0e52c7006 from
> 000300/glusterd/peers. They don't even exist in glusterd.log.
>
> Somehow I have a feeling that the sequence of log and configuration
> files you shared don't match!
>
>
> There are two UUID files present in 002500/glusterd/peers:
> 1. 4bf982c0-b21b-415c-b870-e72f36c7f2e7
> Content of this file is:
> uuid=4bf982c0-b21b-415c-b870-e72f36c7f2e7
> state=10
> hostname1=10.32.0.48
> I have a question: where is this UUID coming from?
>
> 2. 98a28041-f853-48ac-bee0-34c592eeb827
> Content of this file is:
> uuid=f4ebe3c5-b6a4-4795-98e0-732337f76faf   // This uuid belongs to the
> 000300 (10.32.0.48) board; you can check this in both of the glusterd
> log files
> state=4   // What does this state field indicate in this file?
> hostname1=10.32.0.48
>
>
> There is only one UUID file present in 00030/glusterd/peers:
>
> c6b64e36-76da-4e98-a616-48e0e52c7006   // This is the old UUID of the
> 002500 board before reboot
>
> Content of this file is:
>
> uuid=267a92c3-fd28-4811-903c-c1d54854bda9   // This is the new UUID
> generated by the 002500 board after reboot; you can check this as well
> in the glusterd log file of the 00030 board.
> state=3
> hostname1=10.32.1.144
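>
> (For reference, these entries can be inspected directly on each board;
> a minimal check, assuming the default /var/lib/glusterd layout:
>
> # ls /var/lib/glusterd/peers/
> # cat /var/lib/glusterd/peers/*
>
> Each file is named after a peer's UUID and carries that peer's
> uuid/state/hostname1 lines, so a stale file here means glusterd keeps
> advertising the old UUID after a restart.)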
>
>
> ~Atin
>
> >
> > Regards,
> > Abhishek
> >
> > On Mon, Mar 14, 2016 at 6:12 PM, ABHISHEK PALIWAL
> > <abhishpaliwal@gmail.com> wrote:
> >
> > You mean the etc*-glusterd-*.log file from both of the boards?
> >
> > If yes, please find the attachment for the same.
> >
> > On Mon, Mar 14, 2016 at 5:27 PM, Atin Mukherjee
> > <amukherj@redhat.com> wrote:
> >
> >
> > On 03/14/2016 05:09 PM, ABHISHEK PALIWAL wrote:
> > > I am not getting which glusterd directory you are asking about. If
> > > you are asking about the /var/lib/glusterd directory, then what I
> > > shared earlier is the same.
> > 1. Go to the /var/log/glusterfs directory.
> > 2. Look for the glusterd log file.
> > 3. Attach the log.
> > Do it for both the boards.
> > >
> > > I have two directories related to gluster:
> > >
> > > 1. /var/log/glusterfs
> > > 2. /var/lib/glusterd
> > >
> > > On Mon, Mar 14, 2016 at 4:12 PM, Atin Mukherjee
> > > <amukherj@redhat.com> wrote:
> > >
> > >
> > > On 03/14/2016 03:59 PM, ABHISHEK PALIWAL wrote:
> > > > I have only these glusterd files available on the nodes.
> > > Look for etc-*-glusterd*.log in /var/log/glusterfs; that represents
> > > the glusterd log file.
> > > >
> > > > Regards,
> > > > Abhishek
> > > >
> > > > On Mon, Mar 14, 2016 at 3:43 PM, Atin Mukherjee
> > > > <amukherj@redhat.com> wrote:
> > > >
> > > >
> > > > On 03/14/2016 02:18 PM, ABHISHEK PALIWAL wrote:
> > > > >
> > > > >
> > > > > On Mon, Mar 14, 2016 at 12:12 PM, Atin Mukherjee
> > > > > <amukherj@redhat.com> wrote:
> > > > >
> > > > >
> > > > > On 03/14/2016 10:52 AM, ABHISHEK PALIWAL wrote:
> > > > > > Hi Team,
> > > > > >
> > > > > > I am facing an issue with peer status, and because of that
> > > > > > remove-brick on a replica volume is failing.
> > > > > >
> > > > > > Here is the scenario of what I am doing with gluster (a sketch
> > > > > > of the setup commands follows the list):
> > > > > >
> > > > > > 1. I have two boards, A & B, and gluster is running on both of
> > > > > > the boards.
> > > > > > 2. On one board I have created a replicated volume with one
> > > > > > brick on each board.
> > > > > > 3. Created one glusterfs mount point where both of the bricks
> > > > > > are mounted.
> > > > > > 4. Started the volume with nfs.disable=true.
> > > > > > 5. Till now everything is in sync between both of the bricks.
> > > > > >
> > > > > > Now, when I manually plug out board B from the slot and plug
> > > > > > it in again:
> > > > > >
> > > > > > 1. After boot-up of board B, I have started glusterd on board B.
> > > > > >
> > > > > > Following are some gluster command outputs on board B after
> > > > > > step 1.
> > > > > >
> > > > > > # gluster peer status
> > > > > > Number of Peers: 2
> > > > > >
> > > > > > Hostname: 10.32.0.48
> > > > > > Uuid: f4ebe3c5-b6a4-4795-98e0-732337f76faf
> > > > > > State: Accepted peer request (Connected)
> > > > > >
> > > > > > Hostname: 10.32.0.48
> > > > > > Uuid: 4bf982c0-b21b-415c-b870-e72f36c7f2e7
> > > > > > State: Peer is connected and Accepted (Connected)
> > > > > >
> > > > > > Why is this peer status showing two peers with different UUIDs?
> > > > > GlusterD doesn't generate a new UUID on init if it has already
> > > > > generated a UUID earlier. This clearly indicates that on reboot
> > > > > of board B the contents of /var/lib/glusterd were wiped off.
> > > > > I've asked you this question multiple times: is that the case?
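> > > > >
> > > > > (The node's own UUID is persisted in
> > > > > /var/lib/glusterd/glusterd.info; a minimal way to verify it
> > > > > across the reboot, assuming the default store path:
> > > > >
> > > > > # cat /var/lib/glusterd/glusterd.info
> > > > > UUID=267a92c3-fd28-4811-903c-c1d54854bda9
> > > > >
> > > > > If the UUID printed after the reboot differs from the one
> > > > > before, the file was regenerated, i.e. /var/lib/glusterd did
> > > > > not survive.)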
> > > > >
> > > > >
> > > > > Yes, I am following the same procedure which is mentioned in the
> > > > > link:
> > > > >
> > > > > http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
> > > > >
> > > > > but why is it showing two peer entries?
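> > > > >
> > > > > (For reference, the procedure on that page boils down to the
> > > > > following on the rejected node; this is only a sketch, and the
> > > > > service commands may differ on this board:
> > > > >
> > > > > # service glusterd stop
> > > > > # cd /var/lib/glusterd
> > > > > # rm -rf $(ls -A | grep -v '^glusterd.info$')   # keep glusterd.info so the UUID is preserved
> > > > > # service glusterd start
> > > > > # gluster peer probe 10.32.0.48
> > > > > # service glusterd restart
> > > > >
> > > > > Wiping the whole directory instead, including glusterd.info, is
> > > > > exactly what makes a node come back with a brand-new UUID.)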
> > > > >
> > > > > > # gluster volume info
> > > > > >
> > > > > > Volume Name: c_glusterfs
> > > > > > Type: Replicate
> > > > > > Volume ID: c11f1f13-64a0-4aca-98b5-91d609a4a18d
> > > > > > Status: Started
> > > > > > Number of Bricks: 1 x 2 = 2
> > > > > > Transport-type: tcp
> > > > > > Bricks:
> > > > > > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> > > > > > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> > > > > > Options Reconfigured:
> > > > > > performance.readdir-ahead: on
> > > > > > network.ping-timeout: 4
> > > > > > nfs.disable: on
> > > > > > # gluster volume heal c_glusterfs info
> > > > > > c_glusterfs: Not able to fetch volfile from glusterd
> > > > > > Volume heal failed.
> > > > > > # gluster volume status c_glusterfs
> > > > > > Status of volume: c_glusterfs
> > > > > > Gluster process                          TCP Port  RDMA Port  Online  Pid
> > > > > > ------------------------------------------------------------------------------
> > > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick   N/A       N/A        N       N/A
> > > > > > Self-heal Daemon on localhost            N/A       N/A        Y       3922
> > > > > >
> > > > > > Task Status of Volume c_glusterfs
> > > > > > ------------------------------------------------------------------------------
> > > > > > There are no active volume tasks
> > > > > > --
> > > > > >
> > > > > > At the same time, board A has the following gluster command
> > > > > > output:
> > > > > >
> > > > > > # gluster peer status
> > > > > > Number of Peers: 1
> > > > > >
> > > > > > Hostname: 10.32.1.144
> > > > > > Uuid: c6b64e36-76da-4e98-a616-48e0e52c7006
> > > > > > State: Peer in Cluster (Connected)
> > > > > >
> > > > > > Why is it showing the older UUID of host 10.32.1.144 when this
> > > > > > UUID has been changed and the new UUID is
> > > > > > 267a92c3-fd28-4811-903c-c1d54854bda9?
> > > > > >
> > > > > >
> > > > > > # gluster volume heal c_glusterfs info
> > > > > > c_glusterfs: Not able to fetch volfile from glusterd
> > > > > > Volume heal failed.
> > > > > > # gluster volume status c_glusterfs
> > > > > > Status of volume: c_glusterfs
> > > > > > Gluster process                          TCP Port  RDMA Port  Online  Pid
> > > > > > ------------------------------------------------------------------------------
> > > > > > Brick 10.32.0.48:/opt/lvmdir/c2/brick    49169     0          Y       2427
> > > > > > Brick 10.32.1.144:/opt/lvmdir/c2/brick   N/A       N/A        N       N/A
> > > > > > Self-heal Daemon on localhost            N/A       N/A        Y       3388
> > > > > > Self-heal Daemon on 10.32.1.144          N/A       N/A        Y       3922
> > > > > >
> > > > > > Task Status of Volume c_glusterfs
> > > > > > ------------------------------------------------------------------------------
> > > > > > There are no active volume tasks
> > > > > >
> > > > > > As you can see, "gluster volume status" shows that brick
> > > > > > "10.32.1.144:/opt/lvmdir/c2/brick" is offline, so we have
> > > > > > tried to remove it, but on board A we are getting the error
> > > > > > "volume remove-brick c_glusterfs replica 1
> > > > > > 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect
> > > > > > brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs".
> > > > > >
> > > > > > Please reply on this post, because I am always getting this
> > > > > > error in this scenario.
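> > > > > >
> > > > > > (A minimal cross-check before retrying the remove, since
> > > > > > "Incorrect brick" generally means glusterd's copy of the
> > > > > > volume no longer lists that brick; the paths below assume the
> > > > > > default store layout:
> > > > > >
> > > > > > # gluster volume info c_glusterfs | grep Brick
> > > > > > # ls /var/lib/glusterd/vols/c_glusterfs/bricks/
> > > > > >
> > > > > > The bricks/ directory should contain a file named
> > > > > > 10.32.1.144:-opt-lvmdir-c2-brick, matching the log excerpt at
> > > > > > the top of this thread.)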
> > > > > >
> > > > > > For more detail, I am also adding the logs of both of the
> > > > > > boards, which have some manually created files in which you
> > > > > > can find the output of gluster commands from both of the
> > > > > > boards.
> > > > > >
> > > > > > In the logs:
> > > > > > 00030 is board A
> > > > > > 00250 is board B.
> > > > > This attachment doesn't help much. Could you attach the full
> > > > > glusterd log files from both the nodes?
> > > > > >
> > > > >
> > > > > Inside this attachment you will find the full glusterd log
> > > > > files: 00300/glusterd/ and 002500/glusterd/
> > > > No, that contains the configuration files.
> > > > >
> > > > > > Thanks in advance; waiting for the reply.
> > > > > >
> > > > > > Regards,
> > > > > > Abhishek
> > > > > >
> > > > > > Regards
> > > > > > Abhishek Paliwal
> > > > > >
> > > > > > _______________________________________________
> > > > > > Gluster-devel mailing list
> > > > > > Gluster-devel@gluster.org
> > > > > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > > > > >
> > > > >
> > > > > --
> > > > > Regards
> > > > > Abhishek Paliwal
> > > >
> > > > --
> > > > Regards
> > > > Abhishek Paliwal
> > >
> > > --
> > > Regards
> > > Abhishek Paliwal
> >
> > --
> > Regards
> > Abhishek Paliwal
> >
> > --
> > Regards
> > Abhishek Paliwal
>
> --
> Regards
> Abhishek Paliwal
>
> --
> Regards
> Abhishek Paliwal

--
Regards
Abhishek Paliwal