<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Hello,<br>
      <br>
      back after the holidays. I haven't seen any new replies after this
      last mail; I hope I didn't miss any (too many mails to parse…).<br>
      <br>
      BTW it seems that my problem is very similar to this open bug:
      <a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1369364">https://bugzilla.redhat.com/show_bug.cgi?id=1369364</a><br>
      -&gt; memory usage keeps increasing for (in my case) read ops until
      all memory and swap are exhausted, when using the FUSE client.<br>
      <br>
      Regards,<br>
      --<br>
      Y.<br>
      <br>
      On 02/08/2016 at 19:15, Yannick Perret wrote:<br>
    </div>
    <blockquote
      cite="mid:6675b7b3-aa4f-dc3e-c028-2529fbb88235@liris.cnrs.fr"
      type="cite">
      <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
      <div class="moz-cite-prefix">In order to prevent too much swap
        usage I disabled swap on this machine (swapoff -a).<br>
        Memory usage kept growing.<br>
        After that I started another program that consumes memory (to
        speed things up) and triggered the OOM-killer.<br>
        <br>
        Here is the syslog:<br>
        [1246854.291996] Out of memory: Kill process 931 (glusterfs)
        score 742 or sacrifice child<br>
        [1246854.292102] Killed process 931 (glusterfs)
        total-vm:3527624kB, anon-rss:3100328kB, file-rss:0kB<br>
        <br>
        Last VSZ/RSS was: 3527624 / 3097096<br>
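As an aside, the anon-rss value can be pulled out of such an OOM-killer line with a small sed sketch (the log line below is copied from the syslog excerpt above):

```shell
# Extract the killed process's anonymous RSS (in kB) from an OOM-killer line.
line='[1246854.292102] Killed process 931 (glusterfs) total-vm:3527624kB, anon-rss:3100328kB, file-rss:0kB'
echo "$line" | sed -n 's/.*anon-rss:\([0-9]*\)kB.*/\1/p'   # prints 3100328
```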
        <br>
        <br>
        Here is the rest of the OOM-killer data:<br>
        [1246854.291847] active_anon:600785 inactive_anon:377188
        isolated_anon:0<br>
         active_file:97 inactive_file:137 isolated_file:0<br>
         unevictable:0 dirty:0 writeback:1 unstable:0<br>
         free:21740 slab_reclaimable:3309 slab_unreclaimable:3728<br>
         mapped:255 shmem:4267 pagetables:3286 bounce:0<br>
         free_cma:0<br>
        [1246854.291851] Node 0 DMA free:15876kB min:264kB low:328kB
        high:396kB active_anon:0kB inactive_anon:0kB active_file:0kB
        inactive_file:0kB unevictable:0kB
        isolated(anon):0kB isolated(file):0kB present:15992kB
        managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB
        shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB
        kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB
        free_cma:0kB writeback_tmp:0kB pages_scanned:0
        all_unreclaimable? yes<br>
        [1246854.291858] lowmem_reserve[]: 0 2980 3948 3948<br>
        [1246854.291861] Node 0 DMA32 free:54616kB min:50828kB
        low:63532kB high:76240kB active_anon:1940432kB
        inactive_anon:1020924kB active_file:248kB inactive_file:260kB
        unevictable:0kB
        isolated(anon):0kB isolated(file):0kB present:3129280kB
        managed:3054836kB mlocked:0kB dirty:0kB writeback:0kB
        mapped:760kB shmem:14616kB slab_reclaimable:9660kB
        slab_unreclaimable:8244kB kernel_stack:1456kB pagetables:10056kB
        unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
        pages_scanned:803 all_unreclaimable? yes<br>
        [1246854.291865] lowmem_reserve[]: 0 0 967 967<br>
        [1246854.291867] Node 0 Normal free:16468kB min:16488kB
        low:20608kB high:24732kB active_anon:462708kB
        inactive_anon:487828kB active_file:140kB inactive_file:288kB
        unevictable:0kB
        isolated(anon):0kB isolated(file):0kB present:1048576kB
        managed:990356kB mlocked:0kB dirty:0kB writeback:4kB
        mapped:260kB shmem:2452kB slab_reclaimable:3576kB
        slab_unreclaimable:6668kB kernel_stack:560kB pagetables:3088kB
        unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
        pages_scanned:975 all_unreclaimable? yes<br>
        [1246854.291872] lowmem_reserve[]: 0 0 0 0<br>
        [1246854.291874] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 2*32kB (U)
        3*64kB (U) 0*128kB 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (R)
        3*4096kB (EM) = 15876kB<br>
        [1246854.291882] Node 0 DMA32: 1218*4kB (UEM) 848*8kB (UE)
        621*16kB (UE) 314*32kB (UEM) 189*64kB (UEM) 49*128kB (UEM)
        2*256kB (E) 0*512kB 0*1024kB 0*2048kB 1*4096kB (R) = 54616kB<br>
        [1246854.291891] Node 0 Normal: 3117*4kB (UE) 0*8kB 0*16kB
        3*32kB (R) 1*64kB (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB
        (R) 1*2048kB (R) 0*4096kB = 16468kB<br>
        [1246854.291900] Node 0 hugepages_total=0 hugepages_free=0
        hugepages_surp=0 hugepages_size=2048kB<br>
        [1246854.291902] 4533 total pagecache pages<br>
        [1246854.291903] 0 pages in swap cache<br>
        [1246854.291905] Swap cache stats: add 343501, delete 343501,
        find 7730690/7732743<br>
        [1246854.291906] Free swap  = 0kB<br>
        [1246854.291907] Total swap = 0kB<br>
        [1246854.291908] 1048462 pages RAM<br>
        [1246854.291909] 0 pages HighMem/MovableOnly<br>
        [1246854.291909] 14555 pages reserved<br>
        [1246854.291910] 0 pages hwpoisoned<br>
        <br>
        Regards,<br>
        --<br>
        Y.<br>
        <br>
        <br>
        <br>
        On 02/08/2016 at 17:00, Yannick Perret wrote:<br>
      </div>
      <blockquote
        cite="mid:654351cf-d708-fde0-4394-d13fbeb83b2a@liris.cnrs.fr"
        type="cite">
        <meta content="text/html; charset=utf-8"
          http-equiv="Content-Type">
        <div class="moz-cite-prefix">So here are the dumps, gzip'ed.<br>
          <br>
          What I did:<br>
          1. mounting the volume, removing all its content, unmounting it<br>
          2. mounting the volume<br>
          3. performing a cp -Rp /usr/* /root/MNT<br>
          4. performing a rm -rf /root/MNT/*<br>
          5. taking a dump (glusterdump.p1.dump)<br>
          6. re-doing 3, 4 and 5 (glusterdump.p2.dump)<br>
          <br>
          VSZ/RSS are respectively:<br>
          - 381896 / 35688 just after mount<br>
          - 644040 / 309240 after 1st cp -Rp<br>
          - 644040 / 310128 after 1st rm -rf<br>
          - 709576 / 310128 after 1st kill -USR1<br>
          - 840648 / 421964 after 2nd cp -Rp<br>
          - 840648 / 422224 after 2nd rm -rf<br>
          <br>
          I created a small script that performs these actions in an
          infinite loop:<br>
          while /bin/true<br>
          do<br>
            cp -Rp /usr/* /root/MNT/<br>
            + get VSZ/RSS of glusterfs process<br>
            rm -rf /root/MNT/*<br>
            + get VSZ/RSS of glusterfs process<br>
          done<br>
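A runnable version of this loop might look like the following sketch (the pgrep pattern and paths are assumptions; adjust them to your setup, and note that it assumes a single glusterfs client process):

```shell
#!/bin/sh
# Sketch of the cp/rm loop, logging the FUSE client's VSZ/RSS after each step.
PID=$(pgrep -x glusterfs | head -n 1)
while /bin/true
do
    cp -Rp /usr/* /root/MNT/
    ps -o vsz=,rss= -p "$PID"
    rm -rf /root/MNT/*
    ps -o vsz=,rss= -p "$PID"
done
```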
          <br>
          At this time here are the values so far:<br>
          971720 533988<br>
          1037256 645500<br>
          1037256 645840<br>
          1168328 757348<br>
          1168328 757620<br>
          1299400 869128<br>
          1299400 869328<br>
          1364936 980712<br>
          1364936 980944<br>
          1496008 1092384<br>
          1496008 1092404<br>
          1627080 1203796<br>
          1627080 1203996<br>
          1692616 1315572<br>
          1692616 1315504<br>
          1823688 1426812<br>
          1823688 1427340<br>
          1954760 1538716<br>
          1954760 1538772<br>
          2085832 1647676<br>
          2085832 1647708<br>
          2151368 1750392<br>
          2151368 1750708<br>
          2282440 1853864<br>
          2282440 1853764<br>
          2413512 1952668<br>
          2413512 1952704<br>
          2479048 2056500<br>
          2479048 2056712<br>
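For what it's worth, the per-cycle growth can be read off these pairs with a one-line awk sketch (the input below is the first four "VSZ RSS" pairs from the list above, in kB):

```shell
# Print the RSS delta between successive samples: each cp adds ~110 MB of
# resident memory, while each rm releases almost nothing.
printf '971720 533988\n1037256 645500\n1037256 645840\n1168328 757348\n' |
  awk 'NR > 1 { print $2 - prev } { prev = $2 }'
# -> 111512, 340, 111508
```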
          <br>
          So at this point the glusterfs process is using close to 2 GB
          of resident memory, while only performing exactly the same
          actions ('cp -Rp /usr/* /root/MNT' + 'rm -rf /root/MNT/*')
          over and over.<br>
          <br>
          Swap usage is starting to increase a little, and I have not
          seen the memory drop at any point.<br>
          I can understand that the kernel may not release the removed
          files (after rm -rf) immediately, but the first 'rm' occurred
          at ~12:00 today and it is now ~17:00 here, so I can't
          understand why so much memory is still in use.<br>
          I would expect the memory to grow during 'cp -Rp' and then
          shrink after 'rm', but it stays the same. And even if it stays
          the same, I would expect it not to grow further while cp-ing
          again.<br>
          <br>
          I am leaving the cp/rm loop running to see what happens. Feel
          free to ask for other data if it may help.<br>
          <br>
          Please note that I'll be on holiday for 3 weeks starting at
          the end of this week, so I will mostly be unable to perform
          tests during that time (the network connection is too poor
          where I am going).<br>
          <br>
          Regards,<br>
          --<br>
          Y.<br>
          <br>
          On 02/08/2016 at 05:11, Pranith Kumar Karampuri wrote:<br>
        </div>
        <blockquote
cite="mid:CAOgeEnZebt2Hu9V_Ubii8hWW_snd93yNhFdPLC-QMDamZ8pEyQ@mail.gmail.com"
          type="cite">
          <div dir="ltr"><br>
            <div class="gmail_extra"><br>
              <div class="gmail_quote">On Mon, Aug 1, 2016 at 3:40 PM,
                Yannick Perret <span dir="ltr">&lt;<a
                    moz-do-not-send="true"
                    href="mailto:yannick.perret@liris.cnrs.fr"
                    target="_blank">yannick.perret@liris.cnrs.fr</a>&gt;</span>
                wrote:<br>
                <blockquote class="gmail_quote" style="margin:0px 0px
                  0px 0.8ex;border-left:1px solid
                  rgb(204,204,204);padding-left:1ex">
                  <div bgcolor="#FFFFFF" text="#000000"><span class="">
                      <div>On 29/07/2016 at 18:39, Pranith Kumar
                        Karampuri wrote:<br>
                      </div>
                    </span>
                    <blockquote type="cite">
                      <div dir="ltr"><br>
                        <div class="gmail_extra"><br>
                          <div class="gmail_quote"><span class="">On
                              Fri, Jul 29, 2016 at 2:26 PM, Yannick
                              Perret <span dir="ltr">&lt;<a
                                  moz-do-not-send="true"
                                  href="mailto:yannick.perret@liris.cnrs.fr"
                                  target="_blank">yannick.perret@liris.cnrs.fr</a>&gt;</span>
                              wrote:<br>
                            </span>
                            <blockquote class="gmail_quote"
                              style="margin:0px 0px 0px
                              0.8ex;border-left:1px solid
                              rgb(204,204,204);padding-left:1ex"><span
                                class="">Ok, last try:<br>
                                after investigating more versions I
                                found that the FUSE client leaks memory
                                on all of them.<br>
                                I tested:<br>
                                - 3.6.7 client on Debian 7 32-bit and on
                                Debian 8 64-bit (with 3.6.7 servers on
                                Debian 8 64-bit)<br>
                                - 3.6.9 client on Debian 7 32-bit and on
                                Debian 8 64-bit (with 3.6.7 servers on
                                Debian 8 64-bit)<br>
                              </span><span class=""> - 3.7.13 client on
                                Debian 8 64-bit (with 3.8.1 servers on
                                Debian 8 64-bit)<br>
                                - 3.8.1 client on Debian 8 64-bit (with
                                3.8.1 servers on Debian 8 64-bit)<br>
                                In all cases they were compiled from
                                source, apart from 3.8.1 where .deb
                                packages were used (due to a configure
                                runtime error).<br>
                                For 3.7 it was compiled with
                                --disable-tiering. I also tried
                                compiling with --disable-fusermount (no
                                change).<br>
                                <br>
                                In all of these cases the memory
                                (resident &amp; virtual) of the
                                glusterfs process on the client grows
                                with each activity and never reaches a
                                maximum (and never shrinks).<br>
                                "Activity" for these tests is cp -Rp and
                                ls -lR.<br>
                                The client I let grow the longest
                                reached ~4 GB of RAM. On smaller
                                machines it ends with the OOM killer
                                killing the glusterfs process, or with
                                glusterfs dying due to an allocation
                                error.<br>
                                <br>
                                In 3.6 memory seems to grow
                                continuously, whereas in 3.8.1 it grows
                                in "steps" (430400 kB → 629144 (~1min) →
                                762324 (~1min) → 827860…).<br>
                                <br>
                                All tests were performed on a single
                                test volume used only by my test client.
                                The volume is a basic x2 replica. The
                                only parameters I changed on this volume
                                (without any effect) are
                                diagnostics.client-log-level set to
                                ERROR and network.inode-lru-limit set to
                                1024.<br>
                              </span></blockquote>
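For reference, the two volume options mentioned above are set on the servers with the gluster CLI; the volume name testvol below is a placeholder:

```shell
# Placeholder volume name; substitute your own volume.
gluster volume set testvol diagnostics.client-log-level ERROR
gluster volume set testvol network.inode-lru-limit 1024
```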
                            <span class="">
                              <div><br>
                              </div>
                              <div>Could you attach statedumps of your
                                runs?<br>
                                The following link has steps to capture
                                this(<a moz-do-not-send="true"
href="https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/"
                                  target="_blank">https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/</a>
                                ). We basically need to see what are the
                                memory types that are increasing. If you
                                could help find the issue, we can send
                                the fixes for your workload. There is a
                                3.8.2 release in around 10 days I think.
                                We can probably target this issue for
                                that?<br>
                              </div>
                            </span></div>
                        </div>
                      </div>
                    </blockquote>
                    <span class=""> Here are statedumps.<br>
                      Steps:<br>
                      1. mount -t glusterfs ldap1.my.domain:SHARE
                      /root/MNT/ (here VSZ and RSS are 381896 35828)<br>
                      2. take a dump with kill -USR1
                      &lt;pid-of-glusterfs-process&gt; (file
                      glusterdump.n1.dump.1470042769)<br>
                      3. perform a 'ls -lR /root/MNT | wc -l' (btw
                      result of wc -l is 518396 :)) and a 'cp -Rp /usr/*
                      /root/MNT/boo' (VSZ/RSS are 1301536/711992 at end
                      of these operations)<br>
                      4. take a dump with kill -USR1
                      &lt;pid-of-glusterfs-process&gt; (file
                      glusterdump.n2.dump.1470043929)<br>
                      5. do 'cp -Rp * /root/MNT/toto/', i.e. to another
                      directory (VSZ/RSS are 1432608/909968 at the end
                      of this operation)<br>
                      6. take a dump with kill -USR1
                      &lt;pid-of-glusterfs-process&gt; (file
                      glusterdump.n3.dump.)<br>
                    </span></div>
                </blockquote>
                <div><br>
                </div>
                <div>Hey,<br>
                </div>
                <div>      Thanks a lot for providing this information.
                  Looking at these steps, I don't see any problem with
                  the increase in memory. Both the ls -lR and cp -Rp
                  commands you ran in step 3 add new inodes in memory,
                  which increases memory usage. What happens is: as long
                  as the kernel thinks these inodes need to stay in
                  memory, gluster keeps them there. Once the kernel
                  decides an inode is no longer necessary, it sends
                  'inode-forgets', at which point memory starts to
                  drop. So it depends on the memory pressure the kernel
                  is under. But you said it led to OOM-killers on
                  smaller machines, which means there could be some
                  leaks. Could you modify the steps as follows to
                  confirm whether there are leaks? Please run this test
                  on those smaller machines that hit the OOM-killer.<br>
                </div>
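One way to probe the inode-forget behaviour described above (a sketch, assuming a Linux client and root access) is to force the kernel to drop its dentry/inode caches and watch whether the client's RSS falls:

```shell
# Ask the kernel to reclaim dentries and inodes; FUSE then sends 'forget's
# to the glusterfs client, so its memory should drop if nothing is leaking.
sync
echo 2 > /proc/sys/vm/drop_caches
ps -o vsz=,rss= -p "$(pgrep -x glusterfs | head -n 1)"
```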
                <div><br>
                  <span class="">Steps:<br>
                    1. mount -t glusterfs ldap1.my.domain:SHARE
                    /root/MNT/ (here VSZ and RSS are 381896 35828)<br>
                    2. perform a 'ls -lR /root/MNT | wc -l' (btw result
                    of wc -l is 518396 :)) and a 'cp -Rp /usr/*
                    /root/MNT/boo' (VSZ/RSS are 1301536/711992 at end of
                    these operations)<br>
                    3. do 'cp -Rp * /root/MNT/toto/', so on an other
                    directory (VSZ/RSS are 1432608/909968 at end of this
                    operation)<br>
                  </span></div>
                <div><span class=""> 4. Delete all the files and
                    directories you created in steps 2, 3 above<br>
                  </span></div>
                <div><span class="">5. Take statedump with kill -USR1
                    &lt;pid-of-glusterfs-process&gt;<br>
                  </span></div>
                <div><span class="">6. Repeat steps from 2-5<br>
                    <br>
                  </span></div>
                <div><span class="">Attach these two statedumps. I think
                    the statedumps will be even more effective if the
                    mount does not have any data when you start the
                    experiment.<br>
                  </span></div>
                <div><br>
                </div>
                <div>HTH<br>
                   <br>
                </div>
                <blockquote class="gmail_quote" style="margin:0px 0px
                  0px 0.8ex;border-left:1px solid
                  rgb(204,204,204);padding-left:1ex">
                  <div bgcolor="#FFFFFF" text="#000000"><span class="">
                      <br>
                    </span> Dump files are gzip'ed because they are very
                    large.<br>
                    Dump files are here (too big for email):<br>
                    <a moz-do-not-send="true"
href="http://wikisend.com/download/623430/glusterdump.n1.dump.1470042769.gz"
                      target="_blank">http://wikisend.com/download/623430/glusterdump.n1.dump.1470042769.gz</a><br>
                    <a moz-do-not-send="true"
href="http://wikisend.com/download/771220/glusterdump.n2.dump.1470043929.gz"
                      target="_blank">http://wikisend.com/download/771220/glusterdump.n2.dump.1470043929.gz</a><br>
                    <a moz-do-not-send="true"
href="http://wikisend.com/download/428752/glusterdump.n3.dump.1470045181.gz"
                      target="_blank">http://wikisend.com/download/428752/glusterdump.n3.dump.1470045181.gz</a><br>
                    (I am keeping the files in case someone wants them
                    in another format)<span class=""><br>
                      <br>
                      Client and servers are installed from .deb files
                      (glusterfs-client_3.8.1-1_amd64.deb and
                      glusterfs-common_3.8.1-1_amd64.deb on client
                      side).<br>
                      They are all Debian 8 64-bit. The servers are test
                      machines that serve only one volume, to this
                      single client. The volume is a simple x2 replica.
                      For testing I only changed the
                      network.inode-lru-limit value to 1024. The mount
                      point /root/MNT is used only for these tests.<br>
                      <br>
                      --<br>
                      Y.<br>
                      <br>
                      <br>
                    </span></div>
                </blockquote>
              </div>
              <br>
              <br clear="all">
              <br>
              -- <br>
              <div class="gmail_signature"
                data-smartmail="gmail_signature">
                <div dir="ltr">Pranith<br>
                </div>
              </div>
            </div>
          </div>
        </blockquote>
        <p><br>
        </p>
        <br>
        <fieldset class="mimeAttachmentHeader"></fieldset>
        <br>
        <pre wrap="">_______________________________________________
Gluster-users mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
      </blockquote>
      <p><br>
      </p>
      <br>
    </blockquote>
    <p><br>
    </p>
  </body>
</html>