<div dir="ltr">I wouldn't think you'd need any 'arbiter' nodes (in quotes because in 3.7+ there is an actual arbiter node at the volume level). You have 4 nodes, and if you lose 1, you're at 3/4, or 75%.<div><br></div><div>Personally, I've not had much luck with a 2-node setup (with or without the fake arbiter node) as storage for oVirt VMs. I ran into a slew of storage domain failure issues (no data loss), hanging VMs, etc. Instead, I went with a replica 3 volume for just VM storage (3 x 1TB SSDs), and bulk storage is distributed replica 2.</div><div><br></div><div>I found that when a node in a replica pair went down and was timing out, there was zero IO (no reads, no writes). After a timeout I ended up with a read-only filesystem for whatever data was stored on that replica pair. Not very useful for something stateful like a VM. The only way to get write access back was to get the failed node up and running, and the VMs in oVirt usually ended up in a 'paused' state that couldn't be recovered from.</div><div><br></div><div>I also tested the volume-level arbiter (replica 2 arbiter 1) with gluster 3.7.3 before going to 3.6.6 and replica 3, and found IO was too slow for my environment. The bug report I filed is here, for some write-speed references: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1255110">https://bugzilla.redhat.com/show_bug.cgi?id=1255110</a></div><div><br></div><div>In any case, I'd stick with a stable release of Gluster and try to get replica 3 for VM storage if you can.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Nov 2, 2015 at 9:54 AM, André Bauer <span dir="ltr"><<a href="mailto:abauer@magix.net" target="_blank">abauer@magix.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thanks for the hints guys :-)<br>
<br>
I think I will try to use an arbiter. As I use distributed/replicated<br>
volumes, I think I have to add 2 arbiters, right?<br>
<br>
My nodes have 10 GBit interfaces. Would 1 GBit be enough for the arbiter(s)?<br>
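[For a distributed-replicate (2 x 2) volume, the arbiter is per replica set, so that would indeed be two arbiter bricks (one per replica set), though both can live on the same extra host. A sketch with the Gluster 3.7+ CLI; hostnames and brick paths here are hypothetical:

```shell
# Hypothetical 2x2 distributed-replicate volume with an arbiter per replica
# set. Every third brick in the list becomes the arbiter for its replica set;
# both arbiter bricks can sit on one small host ("arb1"), since arbiter
# bricks store metadata only, not file data.
gluster volume create vmstore replica 3 arbiter 1 \
    server1:/bricks/vm server2:/bricks/vm arb1:/bricks/vm-arb1 \
    server3:/bricks/vm server4:/bricks/vm arb1:/bricks/vm-arb2
gluster volume start vmstore
```
]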
<br>
Regards<br>
<span class="HOEnZb"><font color="#888888">André<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
Am 28.10.2015 um 14:38 schrieb Diego Remolina:<br>
> I am running Ovirt and self-hosted engine with additional vms on a<br>
> replica two gluster volume. I have an "arbiter" node and set quorum<br>
> ratio to 51%. The arbiter node is just another machine with the<br>
> glusterfs bits installed that is part of the gluster peers but has no<br>
> bricks on it.<br>
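[The no-brick arbiter peer described above can be sketched like this; hostnames and the volume name are hypothetical, and the commands are run from an existing trusted-pool node:

```shell
# Add a third machine to the trusted pool; it runs glusterd but holds no
# bricks, so it only counts toward server quorum.
gluster peer probe arbiter1

# Server quorum at 51%: with 3 peers, one failed node leaves 2/3 (~66%),
# which is above the ratio, so the surviving brick node stays writable.
gluster volume set all cluster.server-quorum-ratio 51%
gluster volume set vmstore cluster.server-quorum-type server
```
]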
><br>
> You will have to be very careful where you put these three machines if<br>
> they are going to go in separate server rooms or buildings. There are<br>
> pros and cons to distribution of the nodes and network topology may<br>
> also influence that.<br>
><br>
> In my case, this is on a campus: I have machines in 3 separate<br>
> buildings and all machines are on the same main campus router (we have<br>
> more than one main router). All machines connected via 10 gbps. If I<br>
> had one node with bricks and the arbiter in the same building and that<br>
> building went down (power/AC/chill water/network), then the other node<br>
> with bricks would be useless. This is why I have machines in 3<br>
> different buildings. Oh, and this is because most of the client<br>
> systems are not even in the same building as the servers. If my client<br>
> machines and servers were in the same building, then doing one node<br>
> with bricks and arbiter in that same building could make sense.<br>
><br>
> HTH,<br>
><br>
> Diego<br>
><br>
><br>
><br>
><br>
> On Wed, Oct 28, 2015 at 5:25 AM, Niels de Vos <<a href="mailto:ndevos@redhat.com">ndevos@redhat.com</a>> wrote:<br>
>> On Tue, Oct 27, 2015 at 07:21:35PM +0100, André Bauer wrote:<br>
>>> -----BEGIN PGP SIGNED MESSAGE-----<br>
>>> Hash: SHA256<br>
>>><br>
>>> Hi Niels,<br>
>>><br>
>>> my network.ping-timeout was already set to 5 seconds.<br>
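[For reference, the timeout mentioned above is a per-volume Gluster option; the volume name here is hypothetical:

```shell
# Lower the client-side ping timeout to 5 seconds and verify it took effect.
gluster volume set myvol network.ping-timeout 5
gluster volume info myvol | grep ping-timeout
```
]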
>>><br>
>>> Unfortunately it seems I don't have the timeout setting in Ubuntu 14.04<br>
>>> for my vda disk.<br>
>>><br>
>>> ls -al /sys/block/vda/device/ gives me only:<br>
>>><br>
>>> drwxr-xr-x 4 root root 0 Oct 26 20:21 ./<br>
>>> drwxr-xr-x 5 root root 0 Oct 26 20:21 ../<br>
>>> drwxr-xr-x 3 root root 0 Oct 26 20:21 block/<br>
>>> - -r--r--r-- 1 root root 4096 Oct 27 18:13 device<br>
>>> lrwxrwxrwx 1 root root 0 Oct 27 18:13 driver -><br>
>>> ../../../../bus/virtio/drivers/virtio_blk/<br>
>>> - -r--r--r-- 1 root root 4096 Oct 27 18:13 features<br>
>>> - -r--r--r-- 1 root root 4096 Oct 27 18:13 modalias<br>
>>> drwxr-xr-x 2 root root 0 Oct 27 18:13 power/<br>
>>> - -r--r--r-- 1 root root 4096 Oct 27 18:13 status<br>
>>> lrwxrwxrwx 1 root root 0 Oct 26 20:21 subsystem -><br>
>>> ../../../../bus/virtio/<br>
>>> - -rw-r--r-- 1 root root 4096 Oct 26 20:21 uevent<br>
>>> - -r--r--r-- 1 root root 4096 Oct 26 20:21 vendor<br>
>>><br>
>>><br>
>>> Is the quorum setting a problem if you only have 2 replicas?<br>
>>><br>
>>> My volume has this quorum options set:<br>
>>><br>
>>> cluster.quorum-type: auto<br>
>>> cluster.server-quorum-type: server<br>
>>><br>
>>> As I understand the documentation (<br>
>>> <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html" rel="noreferrer" target="_blank">https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Quorum.html</a><br>
>>> ), cluster.server-quorum-ratio is set to "&gt; 50%" by default, which can<br>
>>> never be met if you only have 2 replicas and one node goes down, right?<br>
>>><br>
>>> Do I need cluster.server-quorum-ratio = 50% in this case?<br>
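[The arithmetic behind this question can be checked in plain shell; this is a sketch of the ratio logic as I read the docs linked above, not Gluster code:

```shell
# 2 servers, 1 alive: the active ratio is exactly 50%.
active=1; total=2
pct=$(( 100 * active / total ))   # 50
# The default requires strictly MORE than 50% of peers, so this fails:
[ "$pct" -gt 50 ] && echo "quorum met" || echo "quorum lost at default > 50%"
# With cluster.server-quorum-ratio set to exactly 50%, the comparison
# becomes greater-or-equal, so a single surviving node passes:
[ "$pct" -ge 50 ] && echo "quorum met at ratio = 50%" || echo "quorum lost"
```

So with the default, one survivor out of two is never enough; an explicit ratio of 50% keeps the surviving node writable.]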
>><br>
>> Replica 2 for VM storage is troublesome. Sahina just responded very<br>
>> nicely to a similar email:<br>
>><br>
>> <a href="http://thread.gmane.org/gmane.comp.file-systems.gluster.user/22818/focus=22823" rel="noreferrer" target="_blank">http://thread.gmane.org/gmane.comp.file-systems.gluster.user/22818/focus=22823</a><br>
>><br>
>> HTH,<br>
>> Niels<br>
>><br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
<br>
<br>
</div></div><span class="im HOEnZb">--<br>
Mit freundlichen Grüßen<br>
André Bauer<br>
<br>
MAGIX Software GmbH<br>
André Bauer<br>
Administrator<br>
August-Bebel-Straße 48<br>
01219 Dresden<br>
GERMANY<br>
<br>
tel.: 0351 41884875<br>
e-mail: <a href="mailto:abauer@magix.net">abauer@magix.net</a><br>
<a href="http://www.magix.com" rel="noreferrer" target="_blank">www.magix.com</a> <<a href="http://www.magix.com/" rel="noreferrer" target="_blank">http://www.magix.com/</a>><br>
<br>
Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt<br>
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205<br>
<br>
Find us on:<br>
<br>
<<a href="http://www.facebook.com/MAGIX" rel="noreferrer" target="_blank">http://www.facebook.com/MAGIX</a>> <<a href="http://www.twitter.com/magix_de" rel="noreferrer" target="_blank">http://www.twitter.com/magix_de</a>><br>
<<a href="http://www.youtube.com/wwwmagixcom" rel="noreferrer" target="_blank">http://www.youtube.com/wwwmagixcom</a>> <<a href="http://www.magixmagazin.de" rel="noreferrer" target="_blank">http://www.magixmagazin.de</a>><br>
</span><span class="im HOEnZb">----------------------------------------------------------------------<br>
The information in this email is intended only for the addressee named<br>
above. Access to this email by anyone else is unauthorized. If you are<br>
not the intended recipient of this message any disclosure, copying,<br>
distribution or any action taken in reliance on it is prohibited and<br>
may be unlawful. MAGIX does not warrant that any attachments are free<br>
from viruses or other defects and accepts no liability for any losses<br>
resulting from infected email transmissions. Please note that any<br>
views expressed in this email may be those of the originator and do<br>
not necessarily represent the agenda of the company.<br>
----------------------------------------------------------------------<br>
</span><div class="HOEnZb"><div class="h5"></div></div></blockquote></div><br></div>