<div dir="ltr">There is already a patch submitted for moving TBF part to libglusterfs. It is under review.<div><a href="http://review.gluster.org/#/c/12413/">http://review.gluster.org/#/c/12413/</a><br></div><div><br></div><div><br></div><div>Regards,</div><div>Raghavendra</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jan 25, 2016 at 2:26 AM, Venky Shankar <span dir="ltr"><<a href="mailto:vshankar@redhat.com" target="_blank">vshankar@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Mon, Jan 25, 2016 at 11:06:26AM +0530, Ravishankar N wrote:<br>
> Hi,<br>
><br>
> We are planning to introduce a throttling xlator on the server (brick)<br>
> process to regulate FOPs. The main motivation is to address complaints about<br>
> AFR self-heal consuming too many CPU resources (due to the large number of<br>
> FOPs for entry self-heal, rchecksums for data self-heal, etc.).<br>
><br>
> The throttling is achieved using the Token Bucket Filter algorithm (TBF).<br>
> TBF is already used by bitrot's bitd signer (which is a client process) in<br>
> gluster to regulate the CPU-intensive checksum calculation. By putting the<br>
> logic on the brick side, multiple clients (self-heal, bitrot, rebalance, or<br>
> even the mounts themselves) can benefit from throttling.<br>
<br>
</span> [Providing current TBF implementation link for completeness]<br>
<br>
<a href="https://github.com/gluster/glusterfs/blob/master/xlators/features/bit-rot/src/bitd/bit-rot-tbf.c" rel="noreferrer" target="_blank">https://github.com/gluster/glusterfs/blob/master/xlators/features/bit-rot/src/bitd/bit-rot-tbf.c</a><br>
<br>
Also, it would be beneficial to have the core TBF implementation as part of<br>
libglusterfs so that it can be consumed by the server-side xlator component to<br>
throttle dispatched FOPs, and by daemons to throttle anything outside the<br>
"brick" boundary (such as CPU usage, etc.).<br>
<div><div class="h5"><br>
><br>
> The TBF algorithm in a nutshell is as follows: there is a bucket which is<br>
> filled with tokens at a steady (configurable) rate. Each FOP needs a fixed<br>
> number of tokens to be processed. If the bucket has that many tokens, the FOP<br>
> is allowed and that many tokens are removed from the bucket. If not, the FOP<br>
> is queued until the bucket has enough tokens.<br>
><br>
> The xlator will need to reside above io-threads and can have different<br>
> buckets, one per client. There has to be a communication mechanism between<br>
> the client and the brick (IPC?) to tell it which FOPs need to be regulated<br>
> and the number of tokens needed, etc. These need to be reconfigurable via<br>
> appropriate mechanisms.<br>
> Each bucket will have a token-filler thread which fills the tokens in it.<br>
> The main thread will enqueue heals in a list in the bucket if there aren't<br>
> enough tokens. Once the token filler detects that some FOPs can be serviced,<br>
> it will send a cond-broadcast to a dequeue thread, which will process (stack<br>
> wind) all the FOPs that have the required number of tokens, from all buckets.<br>
><br>
> This is just a high-level abstraction; we are requesting feedback on any<br>
> aspect of this feature. What kind of mechanism is best between the<br>
> client/bricks for tuning various parameters? What other requirements do you<br>
> foresee?<br>
><br>
> Thanks,<br>
> Ravi<br>
<br>
</div></div>> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
<br>
</blockquote></div><br></div>