On 15 June 2016 at 06:48, Atin Mukherjee <amukherj@redhat.com> wrote:
>
> On 06/15/2016 11:06 AM, Gandalf Corvotempesta wrote:
> > On 15 June 2016 at 07:09, "Atin Mukherjee" <amukherj@redhat.com> wrote:
<span class="">>> To get rid of this situation you'd need to stop all the running glusterd<br>
>> instances and go into /var/lib/glusterd/peers folder on all the nodes<br>
>> and manually correct the UUID file names and their content if required.<br>
> >
> > If I understood correctly, the only way to fix this is by bringing the
> > whole cluster down? "you'd need to stop all the running glusterd instances"
> >
> > I hope you are referring to all instances on the failed node...
>
> No, since the configuration is synced across all the nodes, any
> incorrect data gets replicated throughout. So in this case, to be on the
> safe side and validate correctness, the glusterd instances on *all*
> the nodes should be brought down. Having said that, this doesn't impact
> I/O, as the management path is separate from the I/O path.

As a sanity check, one of the things I did last night, when I had downtime arranged, was to reboot the whole Gluster system. I thought this was something that would be asked, as I had seen similar requests on the mailing list previously.

Unfortunately, it didn't fix the problem.

Any other suggestions are welcome.
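In case it helps anyone checking the same thing, below is a minimal sketch of how the peer files could be validated on a single node before editing anything by hand. It assumes the usual layout on default installations: each file in /var/lib/glusterd/peers is named after the peer's UUID and contains a "uuid=" line, and the local node's UUID lives in /var/lib/glusterd/glusterd.info as "UUID=". The paths and file format may differ on your setup, so treat this as a starting point, not a tested tool. Run it with glusterd stopped.

#!/usr/bin/env python3
"""Sanity-check /var/lib/glusterd/peers on one node (assumed layout)."""
import os
import sys

GLUSTERD_DIR = "/var/lib/glusterd"  # default location; may differ per distro
PEERS_DIR = os.path.join(GLUSTERD_DIR, "peers")
INFO_FILE = os.path.join(GLUSTERD_DIR, "glusterd.info")


def read_kv(path):
    """Parse a simple key=value file into a dict."""
    kv = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                kv[key] = value
    return kv


def main():
    problems = []
    local_uuid = read_kv(INFO_FILE).get("UUID")
    if not local_uuid:
        problems.append("no UUID entry found in %s" % INFO_FILE)

    for name in sorted(os.listdir(PEERS_DIR)):
        path = os.path.join(PEERS_DIR, name)
        uuid_in_file = read_kv(path).get("uuid")
        # The file name and the uuid= entry inside it should agree.
        if uuid_in_file != name:
            problems.append("%s: file name != uuid entry (%s)" % (path, uuid_in_file))
        # A node should never list its own UUID as a peer.
        if local_uuid and uuid_in_file == local_uuid:
            problems.append("%s: contains this node's own UUID" % path)

    for p in problems:
        print("PROBLEM:", p)
    sys.exit(1 if problems else 0)


if __name__ == "__main__":
    main()

Running the same check on every node and comparing the UUID sets should show which file names or contents disagree; anything it flags would be a candidate for the manual rename/edit Atin describes above.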