<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 1, 2016 at 2:24 PM, Soumya Koduri <span dir="ltr">&lt;<a href="mailto:skoduri@redhat.com" target="_blank">skoduri@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 02/01/2016 01:39 PM, Oleksandr Natalenko wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> Wait. It seems to be my bad.
>>
>> Before unmounting I do drop_caches (2), and the glusterfs process CPU
>> usage goes to 100% for a while. I hadn't waited for it to drop back to
>> 0% and instead performed the unmount. It seems glusterfs is purging
>> inodes, and that is why it uses 100% of the CPU. I've re-tested,
>> waiting for the CPU usage to return to normal, and got no leaks.
>>
>> Will verify this once again and report more.
>>
>> BTW, if that works, how could I limit the inode cache for the FUSE
>> client? I do not want it to go beyond 1G, for example, even if I have
>> 48G of RAM on my server.
> It's hard-coded for now. For fuse, the lru limit (of the inodes which
> are not active) is (32*1024). One of the ways to address this (which
> we were discussing earlier) is to have an option to configure the
> inode cache limit.

We cannot set a limit on the inode cache in fuse-bridge. As long as the
kernel is aware of an inode (through a lookup), the fuse client is
_forced_ to keep that inode in its inode table. The reason is that we
pass the address of the inode object as the nodeid to the kernel. We
cannot send a gfid as the nodeid, since a gfid is 128 bits and a nodeid
is only 64 bits. This is the reason behind setting an infinite lru
limit. However, this problem does not exist for inode table management
on the server, as the client can communicate with the server using
128-bit gfids.
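
To make the constraint concrete, here is a minimal sketch of the
pointer-as-nodeid scheme described above. This is not the actual
fuse-bridge code; the type and function names are made up for
illustration:

===
#include <stdint.h>

struct inode;   /* opaque; stands in for the client's inode object */

/* The kernel identifies files by a 64-bit nodeid. A 128-bit gfid
 * cannot fit, so the address of the inode object is used instead. */
uint64_t
inode_to_nodeid(struct inode *inode)
{
        return (uint64_t)(uintptr_t)inode;
}

/* The kernel echoes the nodeid back on every request, so the inode
 * must stay allocated until a FORGET arrives; freeing it earlier,
 * e.g. to enforce a cache limit, would leave a dangling pointer. */
struct inode *
nodeid_to_inode(uint64_t nodeid)
{
        return (struct inode *)(uintptr_t)nodeid;
}
===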
> If that sounds good, we can then check on whether it has to be
> global/volume-level, client/server/both.
>
> Thanks,
> Soumya
>
>> On 01.02.2016 at 09:54, Soumya Koduri wrote:
>>> On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:
>>>> Unfortunately, this patch doesn't help.
>>>>
>>>> RAM usage when "find" finishes is ~9G.
>>>>
>>>> Here is the statedump before drop_caches:
>>>> https://gist.github.com/fc1647de0982ab447e20
>>>
>>> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
>>> size=706766688
>>> num_allocs=2454051
>>>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
And after drop_caches: <a href="https://gist.github.com/5eab63bc13f78787ed19" rel="noreferrer" target="_blank">https://gist.github.com/5eab63bc13f78787ed19</a><br>
</blockquote>
<br>
>>> [mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
>>> size=550996416
>>> num_allocs=1913182
>>>
>>> There isn't a significant drop in inode contexts. One of the reasons
>>> could be dentries holding a refcount on the inodes, which results in
>>> inodes not getting purged even after fuse_forget.
>>>
>>> pool-name=fuse:dentry_t
>>> hot-count=32761
>>>
>>> If '32761' is the current active dentry count, it still doesn't seem
>>> to match up to the inode count.
>>>
>>> Thanks,
>>> Soumya
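
(For scale: both statedumps above work out to exactly 288 bytes per
allocation, since 706766688 / 2454051 = 288 and 550996416 / 1913182 =
288. So drop_caches released only ~541k of ~2.45M inode contexts, about
149 MiB, and the ~1.9M contexts that remain are far more than 32761
active dentries alone could be pinning.)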
>>>> And here is the Valgrind output:
>>>> https://gist.github.com/2490aeac448320d98596
>>>>
>>>> On Saturday, January 30, 2016 at 22:56:37 EET, Xavier Hernandez wrote:
>>>>> There's another inode leak caused by an incorrect counting of
>>>>> lookups on directory reads.
>>>>>
>>>>> Here's a patch that solves the problem for 3.7:
>>>>>
>>>>> http://review.gluster.org/13324
>>>>>
>>>>> Hopefully with this patch the memory leaks should disappear.
>>>>>
>>>>> Xavi
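
The leak class Xavi describes is an nlookup accounting bug. A hedged
sketch of the invariant involved (the names below are invented, and
this is not the code of the patch itself):

===
#include <stdint.h>

struct xinode {
        uint64_t nlookup;   /* times the kernel has been told of us */
};

/* Every reply that hands an entry to the kernel (LOOKUP, CREATE,
 * and entries returned by directory reads alike) must count as one
 * kernel reference on the inode. */
void
entry_sent_to_kernel(struct xinode *in)
{
        in->nlookup++;
}

/* FORGET(count) undoes that many references; only at zero may the
 * inode be retired to the lru list and freed. If directory reads
 * increment the count but no matching forget ever arrives, the
 * inode can never be purged -- an inode leak. */
int
forget_from_kernel(struct xinode *in, uint64_t count)
{
        in->nlookup -= count;
        return (in->nlookup == 0);
}
===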
>>>>> On 29.01.2016 19:09, Oleksandr Natalenko wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Here is intermediate summary of current memory<br>
</blockquote>
<br>
leaks in FUSE client<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
investigation.<br>
<br>
I use GlusterFS v3.7.6<br>
</blockquote>
<br>
release with the following patches:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
===<br>
</blockquote>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Kaleb S KEITHLEY (1):<br>
</blockquote>
fuse: use-after-free fix in fuse-bridge, revisited<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Pranith Kumar K<br>
</blockquote>
<br>
(1):<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
mount/fuse: Fix use-after-free crash<br>
</blockquote>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Soumya Koduri (3):<br>
</blockquote>
gfapi: Fix inode nlookup counts<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
inode: Retire the inodes from the lru<br>
</blockquote>
<br>
list in inode_table_destroy<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
upcall: free the xdr* allocations<br>
===<br>
<br>
<br>
With those patches we got API leaks fixed (I hope, brief tests show<br>
</blockquote>
<br>
that) and<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
got rid of &quot;kernel notifier loop terminated&quot; message.<br>
</blockquote>
<br>
Nevertheless, FUSE<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
client still leaks.<br>
<br>
I have several test<br>
</blockquote>
<br>
volumes with several million of small files (100K…2M in<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
average). I<br>
</blockquote>
<br>
do 2 types of FUSE client testing:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
1) find /mnt/volume -type d<br>
2)<br>
</blockquote>
<br>
rsync -av -H /mnt/source_volume/* /mnt/target_volume/<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
And most<br>
</blockquote>
<br>
up-to-date results are shown below:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
=== find /mnt/volume -type d<br>
</blockquote>
<br>
===<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Memory consumption: ~4G<br>
</blockquote>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Statedump:<br>
</blockquote>
<a href="https://gist.github.com/10cde83c63f1b4f1dd7a" rel="noreferrer" target="_blank">https://gist.github.com/10cde83c63f1b4f1dd7a</a><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Valgrind:<br>
</blockquote>
<a href="https://gist.github.com/097afb01ebb2c5e9e78d" rel="noreferrer" target="_blank">https://gist.github.com/097afb01ebb2c5e9e78d</a><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I guess,<br>
</blockquote>
<br>
fuse-bridge/fuse-resolve. related.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
=== rsync -av -H<br>
</blockquote>
<br>
/mnt/source_volume/* /mnt/target_volume/ ===<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Memory consumption:<br>
</blockquote>
~3.3...4G<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Statedump (target volume):<br>
</blockquote>
<a href="https://gist.github.com/31e43110eaa4da663435" rel="noreferrer" target="_blank">https://gist.github.com/31e43110eaa4da663435</a><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Valgrind (target volume):<br>
</blockquote>
<a href="https://gist.github.com/f8e0151a6878cacc9b1a" rel="noreferrer" target="_blank">https://gist.github.com/f8e0151a6878cacc9b1a</a><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I guess,<br>
</blockquote>
<br>
DHT-related.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Give me more patches to test :).<br>
</blockquote>

> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

--
Raghavendra G