Hi,

Every resource (threads, mem pools) is associated with a glusterfs_ctx, hence as the number of ctxs in a process
grows, resource utilization grows as well (with most of it unused). This is mostly an issue for any
libgfapi application: USS, NFS-Ganesha, Samba, vdsm, qemu.
It is normal for a libgfapi application to have multiple mounts (ctxs) in the same process,
and we have seen the number of threads in these applications scale from the 10s to the 100s.

Solution:
======
Have a shared resource pool of threads and mem pools. Since these resources are shared, use a scaling
policy that scales them based on the number of ctxs (a rough sketch of this idea follows the list below).

Resources that can be shared:
        - Synctask threads
        - Timer threads, circular buffer timer threads
        - Sigwaiter thread
        - Poller threads: these are not as straightforward as the others. If we want to share the
          poll threads, we will have to reuse ports, which is a different topic. Hence poller
          threads are kept out of this mail as of now.
        - Mem pools: iobuf, dict, stub, frames and others
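To make the scaling policy concrete, here is a minimal, hypothetical sketch of the idea (not taken from
the actual patch): a single process-wide worker pool whose size grows with the number of registered ctxs
and is capped at an upper bound. All names and numbers in it (shared_pool_register_ctx, WORKERS_PER_CTX,
MAX_WORKERS) are made up for illustration.

/*
 * Illustrative sketch only: a process-wide shared thread pool whose size
 * scales with the number of registered contexts, instead of spawning a
 * fixed set of threads per glusterfs_ctx. All names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define WORKERS_PER_CTX 2   /* hypothetical scaling factor */
#define MAX_WORKERS     16  /* hypothetical upper bound    */

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static int ctx_count = 0;      /* contexts currently sharing the pool */
static int worker_count = 0;   /* workers currently running           */

static void *worker_main(void *arg)
{
    (void)arg;
    /* Real workers would pull synctask/timer work off a shared queue. */
    return NULL;
}

/* Desired pool size for the current number of contexts. */
static int target_workers(int ctxs)
{
    int n = ctxs * WORKERS_PER_CTX;
    return n > MAX_WORKERS ? MAX_WORKERS : n;
}

/* Called when a new context (e.g. a glfs mount) is created in the process. */
static void shared_pool_register_ctx(void)
{
    pthread_mutex_lock(&pool_lock);
    ctx_count++;
    int want = target_workers(ctx_count);
    while (worker_count < want) {
        pthread_t t;
        if (pthread_create(&t, NULL, worker_main, NULL) != 0)
            break;
        pthread_detach(t);
        worker_count++;
    }
    printf("ctxs=%d workers=%d\n", ctx_count, worker_count);
    pthread_mutex_unlock(&pool_lock);
}

int main(void)
{
    /* Simulate three mounts in the same process: the pool grows with the
     * ctx count but stays capped, rather than tripling every resource. */
    for (int i = 0; i < 3; i++)
        shared_pool_register_ctx();
    return 0;
}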
Once this is tried and tested in libgfapi, it can be extended to the other gluster processes.
The initial patch for this is posted at http://review.gluster.org/11101

Please provide your thoughts on the same.

Thank You,
Poornima