<div dir="ltr"><br><div>Hi Dominique,</div><div><br></div><div>Thanks for the logs. I will go through the logs. I have also CCed Pranith who is the maintainer of the replicate feature.</div><div><br></div><div><br></div><div>Regards,</div><div>Raghavendra</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 9, 2016 at 11:45 AM, Dominique Roux <span dir="ltr"><<a href="mailto:dominique.roux@ungleich.ch" target="_blank">dominique.roux@ungleich.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">Logs are attached<br>
<br>
For clarification:<br>
vmhost1-cluster1 -> Brick 1<br>
vmhost2-cluster2 -> Brick 2<br>
entrance -> Peer<br>
<br>
Time of testing: 31.01.2016, 16:13<br>
<br>
Thanks for your help<br>
<br>
Regards,<br>
Dominique<br>
<br>
<br>
</span>Become part of the modern way of working in Glarnerland at <a href="http://www.digitalglarus.ch" rel="noreferrer" target="_blank">www.digitalglarus.ch</a>!<br>
Read the news on Twitter: <a href="http://www.twitter.com/DigitalGlarus" rel="noreferrer" target="_blank">www.twitter.com/DigitalGlarus</a><br>
Join the discussion on Facebook: <a href="http://www.facebook.com/digitalglarus" rel="noreferrer" target="_blank">www.facebook.com/digitalglarus</a><br>
<span class=""><br>
On 02/08/2016 04:40 PM, FNU Raghavendra Manjunath wrote:<br>
</span><span class="">> + Pranith<br>
><br>
> In the meantime, can you please provide the logs of all the gluster<br>
> server machines and the client machines?<br>
><br>
> Logs can be found in /var/log/glusterfs directory.<br>
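For instance, one way to bundle those logs per machine before attaching them (a sketch on my part; the archive name is just a suggestion):<br>

```shell
# Bundle the glusterfs logs on this machine into one archive, named
# after the host so the files from several machines stay distinguishable:
tar czf "gluster-logs-$(hostname).tar.gz" /var/log/glusterfs
```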
><br>
> Regards,<br>
> Raghavendra<br>
><br>
> On Mon, Feb 8, 2016 at 9:20 AM, Dominique Roux<br>
</span><div><div class="h5">> <<a href="mailto:dominique.roux@ungleich.ch">dominique.roux@ungleich.ch</a> <mailto:<a href="mailto:dominique.roux@ungleich.ch">dominique.roux@ungleich.ch</a>>> wrote:<br>
><br>
> Hi guys,<br>
><br>
> I faced a problem a week ago.<br>
> In our environment we have three servers in a quorum. The gluster volume<br>
> is spread across two bricks and is of type Replicate.<br>
><br>
> To simulate the failure of one brick, we isolated one of the two<br>
> bricks with iptables, so that it could no longer communicate with<br>
> the other two peers.<br>
> After that, VMs (OpenNebula) that had I/O going on at the time crashed.<br>
> We stopped glusterfsd hard (kill -9) and restarted it, which made<br>
> things work again (though we also had to restart the failed VMs). But I<br>
> don't think this should happen, since quorum was still met (2 of 3<br>
> hosts were still up and connected).<br>
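For reference, the isolation described above can be reproduced with something like the following (a sketch with placeholder peer addresses, not the exact rules used in the test):<br>

```shell
# Placeholder peer addresses; substitute the real ones.
PEER1=192.0.2.11
PEER2=192.0.2.12

# Cut this brick host off from the other two peers:
iptables -A INPUT -s "$PEER1" -j DROP
iptables -A INPUT -s "$PEER2" -j DROP

# Undo the isolation afterwards by deleting the same rules:
iptables -D INPUT -s "$PEER1" -j DROP
iptables -D INPUT -s "$PEER2" -j DROP
```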
><br>
> Here some infos of our system:<br>
> OS: CentOS Linux release 7.1.1503<br>
> Glusterfs version: glusterfs 3.7.3<br>
><br>
> gluster volume info:<br>
><br>
> Volume Name: cluster1<br>
> Type: Replicate<br>
> Volume ID:<br>
> Status: Started<br>
> Number of Bricks: 1 x 2 = 2<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: srv01:/home/gluster<br>
> Brick2: srv02:/home/gluster<br>
> Options Reconfigured:<br>
> cluster.self-heal-daemon: enable<br>
> cluster.server-quorum-type: server<br>
> network.remote-dio: enable<br>
> cluster.eager-lock: enable<br>
> performance.stat-prefetch: on<br>
> performance.io-cache: off<br>
> performance.read-ahead: off<br>
> performance.quick-read: off<br>
> server.allow-insecure: on<br>
> nfs.disable: 1<br>
><br>
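One thing that may be worth checking here (an assumption on my part, not a confirmed diagnosis): the options above set only server-side quorum, so client-side quorum is not enforced on this 1x2 replica. A sketch of what that would look like, using the volume name from the output above:<br>

```shell
# Enable client-side quorum as well. With a two-brick replica, "auto"
# allows writes only while the first brick is among the bricks that are
# up, so adding a third replica (or arbiter) brick is the more common
# recommendation for VM workloads.
gluster volume set cluster1 cluster.quorum-type auto

# After the isolated brick rejoins, check for pending self-heals:
gluster volume heal cluster1 info
```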
> Hope you can help us.<br>
><br>
> Thanks a lot.<br>
><br>
> Best regards<br>
> Dominique<br>
</div></div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>