<div dir="ltr"><div><div><div><div>Hi,<br><br></div>Why don't you connect them to LACP trunk on the same or two clustered switch?<br></div><div>It would be more easy to manage and more profesional.<br><br></div><div>Here are topologies.<br></div><div><br></div><div>SAME SWITCH<br></div>Node interfaces are LACP and connected to single switch.<br><br>Node1 Node 2 Node3<br></div>| | | | | |<br></div>| lacp| | lacp| | lacp| <br><div><div>| | | | | | <br>----------------------------------------------<br></div><div>| SWITCH |<br>----------------------------------------------<br><br></div><div>CLUSTERED SWITCHES<br></div><div><br>Node interfaces are LACP and each interface connected respectively to SW1&SW2, which are clustered.<br><br></div><div> ----------------------------------------------<br>|----| SWITCH 1 |<br>| ---------------------------------------------- <br>| | | | <br>| Node1 Node 2 Node3<br>| | | |<br>| ----------------------------------------------<br>|----| SWITCH 2 |<br> ----------------------------------------------<br><br><br><br><br></div><div>Best Regards<br></div><div>Aytac<br></div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, May 23, 2015 at 4:27 PM, Joop <span dir="ltr"><<a href="mailto:jvdwege@xs4all.nl" target="_blank">jvdwege@xs4all.nl</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 22-5-2015 18:48, Joe Julian wrote:<br>
Best Regards,
Aytac

On Sat, May 23, 2015 at 4:27 PM, Joop <jvdwege@xs4all.nl> wrote:

On 22-5-2015 18:48, Joe Julian wrote:
> This is a concept that needs some work to implement, but could
> potentially see a benefit from that topology:
> https://docs.google.com/document/d/15IiPVIPMzgGwkt1sKuIusRE2l3pQY6NnA4WmWaMsFJE/edit?usp=sharing
>
> On 05/22/2015 09:02 AM, Jeff Darcy wrote:
>> ----- Original Message -----
>>
>>> Hi there,
>>> I'm planning to set up a 3-node cluster for oVirt and would like to
>>> use 56 GbE (RoCE) exclusively for GlusterFS. Since 56 GbE switches
>>> are far too expensive, no more nodes are planned, and a switch would
>>> add a SPOF, I'd like to cross-connect the nodes as shown in the
>>> diagram below:
>>>
>>>   Node 1       Node 2       Node 3
>>>   ||_____________||_____________||
>>>   |_______________________________|
>>>
>>> This way there's a dedicated 56 Gbit connection to/from each member
>>> node. Is it possible to do this with GlusterFS?
>>> My first thought was to have different IPs in each node's /etc/hosts
>>> mapped to the node hostnames, but I'm unsure if I can force GlusterFS
>>> to use hostnames instead of IPs.
I think this will work, since the VMs will be running on the same three
Gluster hosts, making all traffic local to the cluster. It wouldn't work
without extra effort if you had a separate set of hosts acting as
clients. You can still separate rebalancing traffic onto a separate
subnet/VLAN if you wish.
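As an untested sketch of the /etc/hosts idea (the hostnames gnode1-3 and
the point-to-point subnets are made up): give each direct link its own
subnet, and let every node resolve the same peer names over its own links:

  # /etc/hosts on node1
  10.0.12.1   gnode1   # own address on the node1<->node2 link
  10.0.12.2   gnode2   # reached over the node1<->node2 link
  10.0.13.3   gnode3   # reached over the node1<->node3 link

  # /etc/hosts on node2
  10.0.12.1   gnode1   # reached over the node1<->node2 link
  10.0.23.2   gnode2   # own address on the node2<->node3 link
  10.0.23.3   gnode3   # reached over the node2<->node3 link

  # /etc/hosts on node3 is analogous:
  # gnode1 -> 10.0.13.1, gnode2 -> 10.0.23.2, gnode3 -> 10.0.13.3

  # then probe and build the volume by hostname, never by IP:
  gluster peer probe gnode2
  gluster peer probe gnode3
  gluster volume create gv0 replica 3 \
      gnode1:/bricks/gv0 gnode2:/bricks/gv0 gnode3:/bricks/gv0

Gluster records the peer under the name you probe with, so as long as
everything is created with the hostnames, each node will connect to its
peers over its own direct link.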
Regards,

Joop
<div class="HOEnZb"><div class="h5"><br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</div></div></blockquote></div><br></div>