<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Le 29/07/2016 20:27, Pranith Kumar
      Karampuri a écrit :<br>
    </div>
    <blockquote
cite="mid:CAOgeEnZB5LmD_F8BaDtdvMOykaJudA3irNeMvGSm_Pr_KyfzgQ@mail.gmail.com"
      type="cite">
      <div dir="ltr"><br>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Fri, Jul 29, 2016 at 10:09 PM,
            Pranith Kumar Karampuri <span dir="ltr">&lt;<a
                moz-do-not-send="true" href="mailto:pkarampu@redhat.com"
                target="_blank">pkarampu@redhat.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div dir="ltr"><br>
                <div class="gmail_extra"><br>
                  <div class="gmail_quote"><span class="">On Fri, Jul
                      29, 2016 at 2:26 PM, Yannick Perret <span
                        dir="ltr">&lt;<a moz-do-not-send="true"
                          href="mailto:yannick.perret@liris.cnrs.fr"
                          target="_blank">yannick.perret@liris.cnrs.fr</a>&gt;</span>
                      wrote:<br>
                      <blockquote class="gmail_quote" style="margin:0px
                        0px 0px 0.8ex;border-left:1px solid
                        rgb(204,204,204);padding-left:1ex">OK, last try:<br>
                        after investigating more versions, I found that
                        the FUSE client leaks memory on all of them.<br>
                        I tested:<br>
                        - 3.6.7 client on Debian 7 32-bit and on Debian 8
                        64-bit (with 3.6.7 servers on Debian 8 64-bit)<br>
                        - 3.6.9 client on Debian 7 32-bit and on Debian 8
                        64-bit (with 3.6.7 servers on Debian 8 64-bit)<br>
                        - 3.7.13 client on Debian 8 64-bit (with 3.8.1
                        servers on Debian 8 64-bit)<br>
                        - 3.8.1 client on Debian 8 64-bit (with 3.8.1
                        servers on Debian 8 64-bit)<br>
                        In all cases they were compiled from source,
                        apart from 3.8.1, where the .deb packages were
                        used (due to a configure runtime error).<br>
                        For 3.7 it was compiled with --disable-tiering.
                        I also tried compiling with
                        --disable-fusermount (no change).<br>
                        <br>
                        In all of these cases the memory (resident and
                        virtual) of the glusterfs process on the client
                        grows with each burst of activity, never reaches
                        a maximum, and never decreases.<br>
                        "Activity" for these tests is cp -Rp and ls -lR.<br>
                        The client I let grow the longest reached ~4 GB
                        of RAM. On smaller machines it ends with the OOM
                        killer killing the glusterfs process, or with
                        glusterfs dying from an allocation error.<br>
                        <br>
                        In 3.6 memory seems to grow continuously, whereas
                        in 3.8.1 it grows in "steps" (430400 KB → 629144
                        (~1 min) → 762324 (~1 min) → 827860…).<br>
                        <br>
                        All tests were performed on a single test volume
                        used only by my test client, set up as a basic
                        2-way replica. The only parameters I changed on
                        this volume (with no effect) are
                        diagnostics.client-log-level set to ERROR and
                        network.inode-lru-limit set to 1024.<br>
                      </blockquote>
                      <div><br>
                      </div>
                    </span>
                    <div>Could you attach statedumps of your runs?<br>
                      The following link has the steps to capture them
                      (<a moz-do-not-send="true"
                        href="https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/"
                        target="_blank">https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/</a>).
                      We basically need to see which memory types are
                      increasing. If you can help us find the issue, we
                      can send fixes for your workload. There is a 3.8.2
                      release in around 10 days, I think; we could
                      probably target this issue for that release.<br>
                    </div>
                  </div>
                </div>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>Hi,<br>
            </div>
            <div> We found a problem here: <a
                moz-do-not-send="true"
                href="https://bugzilla.redhat.com/show_bug.cgi?id=1361681#c0">https://bugzilla.redhat.com/show_bug.cgi?id=1361681#c0</a>.
              Based on git blame, this bug has existed since August
              2012, maybe even longer. I am wondering if you are
              running into it. Would you be willing to help test a fix
              if we provide one? I don't think many others have run
              into this problem.<br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    Yes, I saw that this seems to be a long-standing bug.<br>
    I'm surprised it doesn't hit many other people, because I'm
    really using a very simple and basic configuration (replica x2
    servers + FUSE clients, which is a basic tutorial setup in the
    glusterfs docs). Maybe few people use the FUSE client, or maybe
    only in a mount-use-umount manner.<br>
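    For context, the setup is essentially the tutorial one; roughly
    the following, where the hostnames, volume name, and brick paths
    are placeholders rather than my real ones:<br>
    <br>

    ```shell
    # Basic 2-way replicated volume, as in the glusterfs docs
    # (server1/server2, testvol and the brick paths are placeholders):
    gluster volume create testvol replica 2 \
        server1:/data/brick1/testvol server2:/data/brick1/testvol
    gluster volume start testvol

    # FUSE mount on the client:
    mount -t glusterfs server1:/testvol /mnt/testvol
    ```
    <br>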
    <br>
    I will send reports as explained in your previous mail.<br>
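    For reference, capturing the dumps boils down to something like
    this (the dump directory may vary with the build; /var/run/gluster
    is the default, and "testvol" is a placeholder volume name):<br>
    <br>

    ```shell
    # Client side: SIGUSR1 asks the FUSE client to write a statedump
    # (files appear under /var/run/gluster by default):
    kill -USR1 "$(pidof glusterfs)"

    # Server side: dump the volume's brick processes:
    gluster volume statedump testvol
    ```
    <br>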
    I have 2 servers and 1 client that are test machines, so I can do
    what I want on them. I can also apply patches, as I use
    servers/clients built from source (and the memory leak is quick
    and easy to check: with intensive activity I can go from ~140 MB
    to &gt;2 GB in less than 2 hours).<br>
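    The check itself is just watching the client process's resident
    size grow; a minimal sketch, assuming a single glusterfs process
    on the client:<br>
    <br>

    ```shell
    # Print the resident set size (in kB) of the given PID,
    # read from /proc/<pid>/status.
    rss_kb() {
        awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
    }

    # Sample the FUSE client once a minute, for example:
    #   while sleep 60; do rss_kb "$(pidof glusterfs)"; done
    ```
    <br>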
    <br>
    Note: I had a problem with the 3.8.1 sources → running ./configure
    complains:<br>
    configure: WARNING: cache variable ac_cv_build contains a newline<br>
    configure: WARNING: cache variable ac_cv_host contains a newline<br>
    and calling 'make' then fails with:<br>
    Makefile:90: *** missing separator (did you mean TAB instead of 8
    spaces?). Stop.<br>
    That's why I used the .deb packages from the glusterfs downloads
    instead of the sources for this version.<br>
    <br>
    --<br>
    Y.<br>
    <blockquote
cite="mid:CAOgeEnZB5LmD_F8BaDtdvMOykaJudA3irNeMvGSm_Pr_KyfzgQ@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div> <br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div dir="ltr">
                <div class="gmail_extra">
                  <div class="gmail_quote">
                    <div><br>
                    </div>
                    <blockquote class="gmail_quote" style="margin:0px
                      0px 0px 0.8ex;border-left:1px solid
                      rgb(204,204,204);padding-left:1ex"><span class="">
                        <br>
                        <br>
                        This clearly prevents us from using glusterfs
                        on our clients. Is there any way to stop this
                        from happening? For now I have switched back to
                        NFS mounts, but that is not what we're looking
                        for.<br>
                        <br>
                        Regards,<br>
                        --<br>
                        Y.<br>
                        <br>
                        <br>
                        <br>
                      </span>_______________________________________________<br>
                      Gluster-users mailing list<br>
                      <a moz-do-not-send="true"
                        href="mailto:Gluster-users@gluster.org"
                        target="_blank">Gluster-users@gluster.org</a><br>
                      <a moz-do-not-send="true"
                        href="http://www.gluster.org/mailman/listinfo/gluster-users"
                        rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><span
                        class=""><font color="#888888"><br>
                        </font></span></blockquote>
                  </div>
                  <span class=""><font color="#888888"><br>
                      <br clear="all">
                      <br>
                      -- <br>
                      <div data-smartmail="gmail_signature">
                        <div dir="ltr">Pranith<br>
                        </div>
                      </div>
                    </font></span></div>
              </div>
            </blockquote>
          </div>
          <br>
          <br clear="all">
          <br>
          -- <br>
          <div class="gmail_signature" data-smartmail="gmail_signature">
            <div dir="ltr">Pranith<br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>