<div dir="ltr">Thanks for all your tests and time, it looks promising :)<div><br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature">Regards,<br>Mathieu CHATEAU<br><a href="http://www.lotp.fr" target="_blank">http://www.lotp.fr</a></div></div>
<br><div class="gmail_quote">2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <span dir="ltr"><<a href="mailto:oleksandr@natalenko.name" target="_blank">oleksandr@natalenko.name</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the following<br>
patches:<br>
<br>
===<br>
Kaleb S KEITHLEY (1):<br>
fuse: use-after-free fix in fuse-bridge, revisited<br>
<br>
Pranith Kumar K (1):<br>
mount/fuse: Fix use-after-free crash<br>
<br>
Soumya Koduri (3):<br>
gfapi: Fix inode nlookup counts<br>
inode: Retire the inodes from the lru list in inode_table_destroy<br>
upcall: free the xdr* allocations<br>
===<br>
<br>
I'm running rsync from one GlusterFS volume to another. While memory usage<br>
started under 100 MiB, it has plateaued at around 600 MiB for the source volume<br>
and does not grow further. For the target volume it is ~730 MiB, which is why<br>
I'm going to do several rsync rounds to see whether it grows more (with no<br>
patches, bare 3.7.6 could consume more than 20 GiB).<br>
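To make those numbers easy to compare across rounds, the client's resident size can be sampled periodically. A minimal sketch, assuming Linux /proc and that you resolve the glusterfs client PID yourself (e.g. with pgrep); it samples its own process as a demo:

```python
import os

def rss_kib(pid):
    """Return resident set size in KiB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # the field is reported in kB
    raise ValueError(f"no VmRSS entry for pid {pid}")

# Demo: sample our own process; for the fuse client you would pass the
# PID found via e.g. `pgrep -f glusterfs` instead.
print(rss_kib(os.getpid()))
```

Logging one sample per minute during each rsync round would show whether the ~600/730 MiB plateau holds.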
<br>
No "kernel notifier loop terminated" message so far on either volume.<br>
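A quick way to keep checking for that message is to scan the client log. A hedged sketch (the log path is an assumption; GlusterFS client logs typically live under /var/log/glusterfs/):

```python
import tempfile

def notifier_loop_died(log_path):
    """Return True if the fuse client log contains the fatal message."""
    with open(log_path, errors="replace") as f:
        return any("kernel notifier loop terminated" in line for line in f)

# Demo with a synthetic log line (real path would be something like
# /var/log/glusterfs/<mountpoint>.log):
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("[2016-01-23] W [fuse-bridge.c] kernel notifier loop terminated\n")
print(notifier_loop_died(f.name))  # → True
```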
<br>
I'll report more in a few days. I hope the current patches will be incorporated<br>
into 3.7.7.<br>
<span class=""><br>
On Friday, January 22, 2016, 12:53:36 EET Kaleb S. KEITHLEY wrote:<br>
> On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:<br>
> > On Friday, January 22, 2016, 12:32:01 EET Kaleb S. KEITHLEY wrote:<br>
> >> I presume by this you mean you're not seeing the "kernel notifier loop<br>
> >> terminated" error in your logs.<br>
> ><br>
> > Correct, but only with simple traversal. I still have to test under rsync.<br>
><br>
> Without the patch I'd get "kernel notifier loop terminated" within a few<br>
> minutes of starting I/O. With the patch I haven't seen it in 24 hours<br>
> of beating on it.<br>
><br>
> >> Hmmm. My system is not leaking. Last 24 hours the RSZ and VSZ are<br>
> >> stable:<br>
> >> <a href="http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out" rel="noreferrer" target="_blank">http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out</a><br>
</span>
<div class="HOEnZb"><div class="h5">> ><br>
> > What ops do you perform on the mounted volume? Read, write, stat? Is that<br>
> > 3.7.6 + patches?<br>
><br>
> I'm running an internally developed I/O load generator written by a guy<br>
> on our perf team.<br>
><br>
> It does create, write, read, rename, stat, delete, and more.<br>
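The generator itself is internal, so everything below is an assumption; a minimal Python sketch of such a mixed-op workload (create, write, read, rename, stat, delete) against a scratch directory, useful for reproducing this kind of fuse-client load:

```python
import os
import random
import tempfile

def run_ops(root, rounds=100, seed=0):
    """Perform `rounds` randomly chosen filesystem ops under `root`;
    return a per-op counter. Falls back to create when no files exist."""
    rng = random.Random(seed)
    files = []
    counts = {"create": 0, "write": 0, "read": 0,
              "rename": 0, "stat": 0, "delete": 0}
    for i in range(rounds):
        op = rng.choice(list(counts))
        if op == "create" or not files:
            path = os.path.join(root, f"f{i}")  # unique name per round
            open(path, "w").close()
            files.append(path)
            counts["create"] += 1
            continue
        path = rng.choice(files)
        if op == "write":
            with open(path, "a") as f:
                f.write("x" * 4096)
        elif op == "read":
            with open(path) as f:
                f.read()
        elif op == "rename":
            new = path + ".r"
            os.rename(path, new)
            files[files.index(path)] = new
        elif op == "stat":
            os.stat(path)
        elif op == "delete":
            os.remove(path)
            files.remove(path)
        counts[op] += 1
    return counts

# Demo run against a temporary directory; point `root` at a GlusterFS
# mount to exercise the fuse client instead.
with tempfile.TemporaryDirectory() as d:
    print(sum(run_ops(d).values()))  # → 100
```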
<br>
<br>
</div></div></blockquote></div><br></div>