<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">There have been a lot of fixes since
      3.6.9. <br>
      <br>
      Specifically, <a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/1311377">https://bugzilla.redhat.com/1311377</a> was fixed in
      3.7.9; see the release notes:
<a class="moz-txt-link-freetext" href="https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.9.md">https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.9.md</a><br>
      <br>
      And another memory bug was fixed in 3.7.10. <br>
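      <br>
      A quick way to confirm which release each brick server is actually
      running before planning an upgrade (the rpm package names below are
      only an example and depend on the distribution): <br>
      <pre wrap=""># version reported by the CLI
gluster --version

# rpm-based example; package names vary per distribution
rpm -q glusterfs-server glusterfs-fuse</pre>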
      <br>
      <br>
      On 26/04/16 22:14, Alessandro Ipe wrote:<br>
    </div>
    <blockquote cite="mid:2812629.D21ix5EE1b@snow" type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=iso-8859-15">
      <meta name="qrichtext" content="1">
      <p>Hi,</p>
      <p> </p>
      <p> </p>
      <p>Apparently, version 3.6.9 is suffering from a SERIOUS memory
        leak as illustrated in the following logs:</p>
      <pre wrap="">2016-04-26T11:54:27.971564+00:00 tsunami1 kernel: [698635.210069] glusterfsd invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
2016-04-26T11:54:27.974133+00:00 tsunami1 kernel: [698635.210076] Pid: 28111, comm: glusterfsd Tainted: G W O 3.7.10-1.1-desktop #1
2016-04-26T11:54:27.974136+00:00 tsunami1 kernel: [698635.210077] Call Trace:
2016-04-26T11:54:27.974137+00:00 tsunami1 kernel: [698635.210090] [&lt;ffffffff81004818&gt;] dump_trace+0x88/0x300
2016-04-26T11:54:27.974137+00:00 tsunami1 kernel: [698635.210096] [&lt;ffffffff8158b033&gt;] dump_stack+0x69/0x6f
2016-04-26T11:54:27.974138+00:00 tsunami1 kernel: [698635.210101] [&lt;ffffffff8158db39&gt;] dump_header+0x70/0x200
2016-04-26T11:54:27.974139+00:00 tsunami1 kernel: [698635.210105] [&lt;ffffffff81112ad4&gt;] oom_kill_process+0x244/0x390
2016-04-26T11:54:28.113125+00:00 tsunami1 kernel: [698635.210111] [&lt;ffffffff81113211&gt;] out_of_memory+0x451/0x490
2016-04-26T11:54:28.113142+00:00 tsunami1 kernel: [698635.210116] [&lt;ffffffff81118afe&gt;] __alloc_pages_nodemask+0x8ae/0x9f0
2016-04-26T11:54:28.113143+00:00 tsunami1 kernel: [698635.210122] [&lt;ffffffff81152fb7&gt;] alloc_pages_current+0xb7/0x130
2016-04-26T11:54:28.113144+00:00 tsunami1 kernel: [698635.210127] [&lt;ffffffff81111673&gt;] filemap_fault+0x283/0x440
2016-04-26T11:54:28.113144+00:00 tsunami1 kernel: [698635.210131] [&lt;ffffffff811345ee&gt;] __do_fault+0x6e/0x560
2016-04-26T11:54:28.113145+00:00 tsunami1 kernel: [698635.210136] [&lt;ffffffff81137cf7&gt;] handle_pte_fault+0x97/0x490
2016-04-26T11:54:28.113145+00:00 tsunami1 kernel: [698635.210141] [&lt;ffffffff8159af8b&gt;] __do_page_fault+0x16b/0x4c0
2016-04-26T11:54:28.113562+00:00 tsunami1 kernel: [698635.210145] [&lt;ffffffff815982f8&gt;] page_fault+0x28/0x30
2016-04-26T11:54:28.113565+00:00 tsunami1 kernel: [698635.210158] [&lt;00007fa9d8a8292b&gt;] 0x7fa9d8a8292a
2016-04-26T11:54:28.120811+00:00 tsunami1 kernel: [698635.226243] Out of memory: Kill process 17144 (glusterfsd) score 694 or sacrifice child
2016-04-26T11:54:28.120811+00:00 tsunami1 kernel: [698635.226251] Killed process 17144 (glusterfsd) total-vm:8956384kB, anon-rss:6670900kB, file-rss:0kB</pre>
      <p> </p>
      <p>It makes this version completely unusable in production. The
        brick servers have 8 GB of RAM (but will be upgraded to 16 GB).</p>
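      <p> </p>
      <p>For reference, the growth can be followed on the brick servers,
        and a per-translator memory statedump captured while it happens,
        along these lines (the dump location below is the usual default
        and may differ on your systems):</p>
      <pre wrap=""># resident memory of every brick process, refreshed every 30 s
watch -n 30 'ps -C glusterfsd -o pid,rss,vsz,args'

# write per-translator memory accounting for the "home" volume;
# dumps normally land under /var/run/gluster/
gluster volume statedump home</pre>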
      <p> </p>
      <p>gluster volume info &lt;VOLUME&gt; returns:</p>
      <p>Volume Name: home</p>
      <p>Type: Distributed-Replicate</p>
      <p>Volume ID: 501741ed-4146-4022-af0b-41f5b1297766</p>
      <p>Status: Started</p>
      <p>Number of Bricks: 14 x 2 = 28</p>
      <p>Transport-type: tcp</p>
      <p>Bricks:</p>
      <p>Brick1: tsunami1:/data/glusterfs/home/brick1</p>
      <p>Brick2: tsunami2:/data/glusterfs/home/brick1</p>
      <p>Brick3: tsunami1:/data/glusterfs/home/brick2</p>
      <p>Brick4: tsunami2:/data/glusterfs/home/brick2</p>
      <p>Brick5: tsunami1:/data/glusterfs/home/brick3</p>
      <p>Brick6: tsunami2:/data/glusterfs/home/brick3</p>
      <p>Brick7: tsunami1:/data/glusterfs/home/brick4</p>
      <p>Brick8: tsunami2:/data/glusterfs/home/brick4</p>
      <p>Brick9: tsunami3:/data/glusterfs/home/brick1</p>
      <p>Brick10: tsunami4:/data/glusterfs/home/brick1</p>
      <p>Brick11: tsunami3:/data/glusterfs/home/brick2</p>
      <p>Brick12: tsunami4:/data/glusterfs/home/brick2</p>
      <p>Brick13: tsunami3:/data/glusterfs/home/brick3</p>
      <p>Brick14: tsunami4:/data/glusterfs/home/brick3</p>
      <p>Brick15: tsunami3:/data/glusterfs/home/brick4</p>
      <p>Brick16: tsunami4:/data/glusterfs/home/brick4</p>
      <p>Brick17: tsunami5:/data/glusterfs/home/brick1</p>
      <p>Brick18: tsunami6:/data/glusterfs/home/brick1</p>
      <p>Brick19: tsunami5:/data/glusterfs/home/brick2</p>
      <p>Brick20: tsunami6:/data/glusterfs/home/brick2</p>
      <p>Brick21: tsunami5:/data/glusterfs/home/brick3</p>
      <p>Brick22: tsunami6:/data/glusterfs/home/brick3</p>
      <p>Brick23: tsunami5:/data/glusterfs/home/brick4</p>
      <p>Brick24: tsunami6:/data/glusterfs/home/brick4</p>
      <p>Brick25: tsunami7:/data/glusterfs/home/brick1</p>
      <p>Brick26: tsunami8:/data/glusterfs/home/brick1</p>
      <p>Brick27: tsunami7:/data/glusterfs/home/brick2</p>
      <p>Brick28: tsunami8:/data/glusterfs/home/brick2</p>
      <p>Options Reconfigured:</p>
      <p>nfs.export-dir: /gerb-reproc/Archive</p>
      <p>nfs.volume-access: read-only</p>
      <p>cluster.ensure-durability: on</p>
      <p>features.quota: on</p>
      <p>performance.cache-size: 512MB</p>
      <p>performance.io-thread-count: 32</p>
      <p>performance.flush-behind: off</p>
      <p>performance.write-behind-window-size: 4MB</p>
      <p>performance.write-behind: off</p>
      <p>nfs.disable: off</p>
      <p>cluster.read-hash-mode: 2</p>
      <p>diagnostics.brick-log-level: CRITICAL</p>
      <p>cluster.lookup-unhashed: on</p>
      <p>server.allow-insecure: on</p>
      <p>auth.allow: localhost, &lt;COUPLE OF IP ADDRESSES&gt;</p>
      <p>cluster.readdir-optimize: on</p>
      <p>performance.readdir-ahead: on</p>
      <p>nfs.export-volumes: off</p>
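      <p> </p>
      <p>(For completeness, the reconfigured options above were applied
        with the standard CLI; for example, the 512MB cache size listed
        corresponds to a command of the form below.)</p>
      <pre wrap=""># example only: how a reconfigured option such as the cache size is set
gluster volume set home performance.cache-size 512MB</pre>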
      <p> </p>
      <p>Are you aware of this issue?</p>
      <p> </p>
      <p> </p>
      <p>Thanks,</p>
      <p> </p>
      <p> </p>
      <p>A.</p>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
    <br>
  </body>
</html>