<div class="moz-cite-prefix">Le 02/08/2016 à 05:11, Pranith Kumar
Karampuri a écrit :<br>
</div>
> On Mon, Aug 1, 2016 at 3:40 PM, Yannick Perret <yannick.perret@liris.cnrs.fr> wrote:
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class="">
<div>Le 29/07/2016 à 18:39, Pranith Kumar Karampuri a
écrit :<br>
</div>
</span>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote"><span class="">On Fri,
Jul 29, 2016 at 2:26 PM, Yannick Perret <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:yannick.perret@liris.cnrs.fr"
target="_blank">yannick.perret@liris.cnrs.fr</a>></span>
wrote:<br>
</span>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"><span
class="">Ok, last try:<br>
after investigating more versions I found
that FUSE client leaks memory on all of
them.<br>
I tested:<br>
- 3.6.7 client on debian 7 32bit and on
debian 8 64bit (with 3.6.7 serveurs on
debian 8 64bit)<br>
- 3.6.9 client on debian 7 32bit and on
debian 8 64bit (with 3.6.7 serveurs on
debian 8 64bit)<br>
</span><span class=""> - 3.7.13 client on
debian 8 64bit (with 3.8.1 serveurs on
debian 8 64bit)<br>
- 3.8.1 client on debian 8 64bit (with 3.8.1
serveurs on debian 8 64bit)<br>
In all cases compiled from sources, appart
for 3.8.1 where .deb were used (due to a
configure runtime error).<br>
For 3.7 it was compiled with
--disable-tiering. I also tried to compile
with --disable-fusermount (no change).<br>
<br>
In all of these cases the memory (resident
& virtual) of glusterfs process on
client grows on each activity and never
reach a max (and never reduce).<br>
"Activity" for these tests is cp -Rp and ls
-lR.<br>
The client I let grows the most overreached
~4Go RAM. On smaller machines it ends by OOM
killer killing glusterfs process or
glusterfs dying due to allocation error.<br>
<br>
In 3.6 mem seems to grow continusly, whereas
in 3.8.1 it grows by "steps" (430400 ko →
629144 (~1min) → 762324 (~1min) → 827860…).<br>
<br>
All tests performed on a single test volume
used only by my test client. Volume in a
basic x2 replica. The only parameters I
changed on this volume (without any effect)
are diagnostics.client-log-level set to
ERROR and network.inode-lru-limit set to
1024.<br>
</span></blockquote>
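
(As a side note, those two volume options are set through the usual gluster CLI on one of the servers; a minimal sketch, assuming the volume name SHARE taken from the mount command later in this thread:)

  # Run on a server node; SHARE is the volume name assumed here.
  gluster volume set SHARE diagnostics.client-log-level ERROR
  gluster volume set SHARE network.inode-lru-limit 1024
  # Changed options appear under "Options Reconfigured":
  gluster volume info SHARE
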
<span class="">
<div><br>
</div>
<div>Could you attach statedumps of your runs?<br>
The following link has steps to capture
this(<a moz-do-not-send="true"
href="https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/"
target="_blank">https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/</a>
). We basically need to see what are the
memory types that are increasing. If you
could help find the issue, we can send the
fixes for your workload. There is a 3.8.2
release in around 10 days I think. We can
probably target this issue for that?<br>
</div>
</span></div>
</div>
</div>
</blockquote>
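
(A minimal capture sketch, assuming a single glusterfs mount on /root/MNT and the default statedump directory /var/run/gluster described in the linked page:)

  # On the client: find the fuse process and ask it to dump its state.
  pid=$(pgrep -f 'glusterfs.*/root/MNT')
  kill -USR1 "$pid"
  # Dumps are written as glusterdump.<pid>.dump.<timestamp>:
  ls -l /var/run/gluster/glusterdump."$pid".dump.*
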
<span class=""> Here are statedumps.<br>
Steps:<br>
1. mount -t glusterfs ldap1.my.domain:SHARE /root/MNT/
(here VSZ and RSS are 381896 35828)<br>
2. take a dump with kill -USR1
<pid-of-glusterfs-process> (file
glusterdump.n1.dump.1470042769)<br>
3. perform a 'ls -lR /root/MNT | wc -l' (btw result of
wc -l is 518396 :)) and a 'cp -Rp /usr/*
/root/MNT/boo' (VSZ/RSS are 1301536/711992 at end of
these operations)<br>
4. take a dump with kill -USR1
<pid-of-glusterfs-process> (file
glusterdump.n2.dump.1470043929)<br>
5. do 'cp -Rp * /root/MNT/toto/', so on an other
directory (VSZ/RSS are 1432608/909968 at end of this
operation)<br>
6. take a dump with kill -USR1
<pid-of-glusterfs-process> (file
glusterdump.n3.dump.)<br>
</span></div>
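
(The VSZ/RSS figures in the steps above can be sampled like this, assuming glusterfs runs only for this one mount on the client; both values are in kB:)

  ps -C glusterfs -o pid=,vsz=,rss=
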
>
> Hey,
> Thanks a lot for providing this information. Looking at these steps, I don't see any problem with the increase in memory. Both the ls -lR and the cp -Rp you ran in step 3 add new inodes in memory, which increases memory usage. As long as the kernel thinks these inodes need to stay in memory, gluster keeps them in memory; once the kernel decides an inode is no longer necessary, it sends an 'inode-forget', and at that point the memory starts to shrink. So it depends on the memory pressure the kernel is under. But you said it led to OOM kills on smaller machines, which suggests there could be some leaks. Could you modify the steps as follows to confirm whether there are leaks? Please run this test on those smaller machines that hit the OOM killer.
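
(One way to separate kernel-held inodes from a real leak, following the explanation above: force the kernel to drop its dentry and inode caches, which triggers the inode-forgets, then re-check the client's memory. A sketch, assuming root on the client:)

  sync
  # 2 = reclaim dentries and inodes; the fuse client should then
  # receive forgets for inodes the kernel no longer needs.
  echo 2 > /proc/sys/vm/drop_caches
  sleep 30
  ps -C glusterfs -o pid=,vsz=,rss=
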
Thanks for your feedback. I will send these statedumps today.
--
Y.

> Steps:
> 1. mount -t glusterfs ldap1.my.domain:SHARE /root/MNT/ (here VSZ and RSS are 381896 and 35828)
> 2. perform a 'ls -lR /root/MNT | wc -l' (btw, the result of wc -l is 518396 :)) and a 'cp -Rp /usr/* /root/MNT/boo' (VSZ/RSS are 1301536/711992 at the end of these operations)
> 3. do 'cp -Rp * /root/MNT/toto/', i.e. into another directory (VSZ/RSS are 1432608/909968 at the end of this operation)
> 4. delete all the files and directories you created in steps 2 and 3 above
> 5. take a statedump with kill -USR1 <pid-of-glusterfs-process>
> 6. repeat steps 2 through 5
>
> Attach these two statedumps. I think the statedumps will be even more effective if the mount does not have any data in it when you start the experiment.
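
(If useful, the modified sequence can be scripted roughly as below; the mount point, the copy sources and the dump directory are assumptions carried over from earlier in the thread:)

  #!/bin/sh
  # Rough automation of steps 2-6 above; adjust paths to taste.
  MNT=/root/MNT
  pid=$(pgrep -f "glusterfs.*$MNT")
  for run in 1 2; do
      ls -lR "$MNT" | wc -l            # step 2
      cp -Rp /usr/* "$MNT/boo"
      mkdir -p "$MNT/toto"
      cp -Rp /usr/* "$MNT/toto/"       # step 3 (original used the cwd contents)
      rm -rf "$MNT/boo" "$MNT/toto"    # step 4: delete everything created
      kill -USR1 "$pid"                # step 5: statedump
      sleep 5                          # give the dump time to finish
  done                                 # second pass = step 6
  # The two dumps to attach, newest first:
  ls -t /var/run/gluster/glusterdump."$pid".dump.* | head -n 2
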
>
> HTH
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class=""> <br>
</span> Dump files are gzip'ed because they are very
large.<br>
Dump files are here (too big for email):<br>
<a moz-do-not-send="true"
href="http://wikisend.com/download/623430/glusterdump.n1.dump.1470042769.gz"
target="_blank">http://wikisend.com/download/623430/glusterdump.n1.dump.1470042769.gz</a><br>
<a moz-do-not-send="true"
href="http://wikisend.com/download/771220/glusterdump.n2.dump.1470043929.gz"
target="_blank">http://wikisend.com/download/771220/glusterdump.n2.dump.1470043929.gz</a><br>
<a moz-do-not-send="true"
href="http://wikisend.com/download/428752/glusterdump.n3.dump.1470045181.gz"
target="_blank">http://wikisend.com/download/428752/glusterdump.n3.dump.1470045181.gz</a><br>
(I keep the files if someone whats them in an other
format)<span class=""><br>
<br>
Client and servers are installed from .deb files
(glusterfs-client_3.8.1-1_amd64.deb and
glusterfs-common_3.8.1-1_amd64.deb on client side).<br>
They are all Debian 8 64bit. Servers are test machines
that serve only one volume to this sole client. Volume
is a simple x2 replica. I just changed for test
network.inode-lru-limit value to 1024. Mount point
/root/MNT is only used for these tests.<br>
<br>
--<br>
Y.<br>
<br>
<br>
</span></div>
</blockquote>
>
> --
> Pranith