<div dir="ltr"><div>Hi Prasanna,<br><br></div>We are already working on a unix domain socket solution to replace the TCP loopback connection for communication between the gluster client and server. The initial plan was to introduce an rpc call ( <a href="http://review.gluster.org/#/c/12489/">http://review.gluster.org/#/c/12489/</a> , <a href="http://review.gluster.org/#/c/12508/">http://review.gluster.org/#/c/12508/</a>) issued by the client to fetch the bind address from glusterd, because in some code paths we use a hardcoded 'localhost' to establish the connection, which causes issues when gluster is bound to a different address via 'transport.socket.bind-address'. <br><br><br> </div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div>--Humble<br></div><br></div></div></div>
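To illustrate the direction being discussed, here is a minimal sketch in Python of a client-side connect helper that prefers a local unix domain socket and otherwise falls back to TCP using the configured bind address rather than a hardcoded 'localhost'. The function name, socket path, and fallback logic are hypothetical, not gluster's actual API:

```python
import os
import socket

def connect_local(uds_path, bind_address, port):
    """Connect over AF_UNIX when the socket file exists;
    otherwise fall back to TCP using the configured bind
    address (never a hardcoded 'localhost')."""
    if os.path.exists(uds_path):
        # Same-host case: skip the TCP/IP stack entirely.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(uds_path)
    else:
        # Remote (or no local socket file): honor the
        # transport.socket.bind-address setting.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((bind_address, port))
    return sock
```

The point of the sketch is that the transport choice is made per connection, so only the same-host paths change behavior while remote clients keep using the configured TCP address.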
<br><div class="gmail_quote">On Fri, Nov 6, 2015 at 12:38 PM, Prasanna Kumar Kalever <span dir="ltr"><<a href="mailto:pkalever@redhat.com" target="_blank">pkalever@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>
<br>
Currently we use a TCP loopback connection for communication between the gluster client and server.<br>
Hyper-convergence may also require communication with a server running on the same hypervisor.<br>
<br>
I was initially wondering: would replacing this IPC with unix domain sockets improve performance?<br>
Or<br>
Is there any evidence that unix domain sockets give better performance than a TCP loopback connection?<br>
<br>
Finally, I have to say:<br>
<br>
Yes, local IPC over unix domain sockets will be faster than communication over loopback connections, because unix domain sockets skip the TCP/IP protocol overhead.<br>
<br>
Postgres core developer Bruce Momjian has blogged about this topic at <a href="http://momjian.us/main/blogs/pgblog/2012.html#June_6_2012" rel="noreferrer" target="_blank">http://momjian.us/main/blogs/pgblog/2012.html#June_6_2012</a>. Momjian states, "Unix-domain socket communication is measurably faster." He measured query performance and found the local domain socket 33% faster than using the TCP/IP stack.<br>
<br>
There is also a relevant paper at <a href="http://osnet.cs.binghamton.edu/publications/TR-20070820.pdf" rel="noreferrer" target="_blank">http://osnet.cs.binghamton.edu/publications/TR-20070820.pdf</a><br>
<br>
Also see:<br>
<a href="http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html" rel="noreferrer" target="_blank">http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html</a><br>
<a href="http://bhavin.directi.com/unix-domain-sockets-vs-tcp-sockets/" rel="noreferrer" target="_blank">http://bhavin.directi.com/unix-domain-sockets-vs-tcp-sockets/</a><br>
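<br>
As a rough sanity check of those numbers, a minimal echo round-trip benchmark can be sketched in Python. The socket path, payload size, and trip count are arbitrary choices, and the measured latencies will vary by kernel and hardware; this is an illustration, not gluster's measurement methodology:<br>

```python
import os
import socket
import tempfile
import threading
import time

def echo_server(listener):
    """Accept one connection and echo everything back."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def round_trip_latency(family, address, n=1000):
    """Average seconds per 64-byte echo round trip."""
    client = socket.socket(family, socket.SOCK_STREAM)
    client.connect(address)
    payload = b"x" * 64
    start = time.perf_counter()
    for _ in range(n):
        client.sendall(payload)
        received = 0
        while received < len(payload):
            received += len(client.recv(4096))
    elapsed = time.perf_counter() - start
    client.close()
    return elapsed / n

# TCP loopback endpoint on an ephemeral port.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))
tcp.listen(1)
threading.Thread(target=echo_server, args=(tcp,), daemon=True).start()
tcp_latency = round_trip_latency(socket.AF_INET, tcp.getsockname())

# Unix domain socket endpoint in a temp directory.
path = os.path.join(tempfile.mkdtemp(), "echo.sock")
uds = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
uds.bind(path)
uds.listen(1)
threading.Thread(target=echo_server, args=(uds,), daemon=True).start()
uds_latency = round_trip_latency(socket.AF_UNIX, path)

print(f"tcp loopback: {tcp_latency * 1e6:.1f} us/round-trip")
print(f"unix socket:  {uds_latency * 1e6:.1f} us/round-trip")
```

<br>On most Linux systems the unix domain socket side of such a test comes out ahead, consistent with the references above, though small-message benchmarks are noisy and worth repeating.<br>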
<br>
<br>
The whole idea is to do IPC in gluster through unix domain sockets instead of the localhost loopback!<br>
<br>
Any suggestions will be appreciated!<br>
<br>
- Prasanna <br>
<br>
<br>
_______________________________________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a></blockquote></div><br></div>