<div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial">Could you please tell me your glusterfs version and the mount command that you have used?? My GlusterFS version is 3.3.0, different versions may be exits different results.<br><br><br><br><br><div style="position:relative;zoom:1"></div><div id="divNeteaseMailCard"></div><br>At 2016-09-06 12:35:19, "Ravishankar N" &lt;ravishankar@redhat.com&gt; wrote:<br> <blockquote id="isReplyContent" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
    <div class="moz-cite-prefix">That is strange. I tried the experiment
      on a volume with a million files. The client node's memory usage
      did grow, as I observed from the output of free(1)&nbsp;
      <a class="moz-txt-link-freetext" href="http://paste.fedoraproject.org/422551/">http://paste.fedoraproject.org/422551/</a> when I did a `ls`.<br>
      -Ravi<br>
      &nbsp;<br>
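(A minimal way to reproduce that measurement, assuming the volume is mounted at /mnt/glusterfs as in this thread:)

    free -m                         # baseline
    ls /mnt/glusterfs > /dev/null   # walk the directory once
    free -m                         # compare used/cached memory afterwards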

On 09/02/2016 07:31 AM, Keiviw wrote:

Exactly: I mounted the volume on a node with no bricks (nodeB), and nodeA was the server. I have set different timeouts, but when I execute `ls /mnt/glusterfs` (about 3 million small files, in other words about 3 million dentries), the result is the same: memory usage on nodeB didn't change at all, while nodeA's memory usage grew by about 4 GB!
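(To capture the before/after numbers on both machines at once — a rough sketch, assuming SSH access to nodeA and nodeB:)

    for host in nodeA nodeB; do
        echo "== $host =="
        ssh "$host" free -m
    done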
        <div class="NETEASEMAILMASTERLOCALSIGNATURE"><span style="color:#888;font-size:15px;">发自 </span><a moz-do-not-send="true" href="http://u.163.com/signature" style="font-size:15px;color:#2e90eb;" target="_blank">网易邮箱大师</a></div>
        <div class="borderFixWidth iMailDoNotReScale" style="background-color:#f2f2f2;color:black;padding-top:6px;padding-bottom:6px;border-radius:3px;-moz-border-radius:3px;-webkit-border-radius:3px;margin-top:45px;margin-bottom:20px;">
          <div style="font-size:14px;line-height:1.5;word-break:break-all;margin-left:10px;margin-right:10px">On
            09/02/2016 09:45, <a moz-do-not-send="true" style="text-decoration:none;color:#2a97ff;" href="mailto:ravishankar@redhat.com">Ravishankar N</a>
            wrote:</div>
        </div>
        <blockquote id="ntes-iosmail-quote" style="margin:0">
          <div class="moz-cite-prefix">On 09/02/2016 05:42 AM, Keiviw
            wrote:<br>
          </div>
          <blockquote cite="mid:6cbf1881.1050.156e83d1e68.Coremail.keiviw@163.com" type="cite">
            <div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial">Even
              if I set the attribute-timeout and entry-timeout to
              3600s(1h), in the nodeB, it didn't cache any metadata
              because the memory usage didn't change. So I was confused
              that why did the client not cache dentries and inodes.<br>
              <br>
            </div>
          </blockquote>
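(For reference, a mount that raises both timeouts to one hour would look something like this sketch; "testvol" is a placeholder volume name:)

    # "testvol" is a placeholder; substitute the real volume name
    mount -t glusterfs \
        -o attribute-timeout=3600,entry-timeout=3600 \
        nodeA:/testvol /mnt/glusterfs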

If you only want to test fuse's caching, I would try mounting the volume on a separate machine (not on the brick node itself), disable all gluster performance xlators, do a `find . | xargs stat` on the mount two times in succession, and see what free(1) reports the 1st and 2nd time. You could do this experiment with various attr/entry timeout values. Make sure your volume has a lot of small files.
-Ravi
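(A sketch of that experiment; "testvol" is a placeholder volume name, and the list below covers the usual client-side performance translators:)

    # On a gluster server: turn off the client-side performance xlators
    for opt in quick-read io-cache read-ahead write-behind stat-prefetch; do
        gluster volume set testvol performance.$opt off
    done

    # On the client: stat everything twice, checking free(1) in between
    free -m
    find /mnt/glusterfs | xargs stat > /dev/null
    free -m
    find /mnt/glusterfs | xargs stat > /dev/null
    free -m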
          <blockquote cite="mid:6cbf1881.1050.156e83d1e68.Coremail.keiviw@163.com" type="cite">
            <div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial"><br>
              <br>
              在 2016-09-01 16:37:00,"Ravishankar N" <a moz-do-not-send="true" class="moz-txt-link-rfc2396E" href="mailto:ravishankar@redhat.com">&lt;ravishankar@redhat.com&gt;</a>
              写道:<br>
              <blockquote id="isReplyContent" style="PADDING-LEFT: 1ex;
                MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
                <div class="moz-cite-prefix">On 09/01/2016 01:04 PM,
                  Keiviw wrote:<br>
                </div>
                <blockquote cite="mid:14087c4e.3f73.156e4ab683e.Coremail.keiviw@163.com" type="cite">
                  <div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial">
                    <div>Hi,</div>
                    <div>&nbsp; &nbsp; I have found that GlusterFS client(mounted
                      by FUSE) didn't cache metadata like dentries and
                      inodes. I have installed GlusterFS 3.6.0 in nodeA
                      and nodeB, and the brick1 and brick2 was in nodeA,
                      then in nodeB, I mounted the volume to
                      /mnt/glusterfs by FUSE. From my test, I excuted
                      'ls /mnt/glusterfs' in nodeB, and found that the
                      memory didn't use at all. Here are my questions:</div>
                    <div>&nbsp; &nbsp; 1. In fuse kernel, the author set some
                      attributes to control the time-out about dentry
                      and inode, in other words, the fuse kernel
                      supports metadata cache, but in my test, dentries
                      and inodes were not cached. WHY?</div>
                    <div>&nbsp; &nbsp; 2. Were there some options in GlusterFS
                      mounted to local to enable the metadata cache in
                      fuse kernel?&nbsp;</div>
                  </div>
                  <br>
                  <br>
                </blockquote>
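(One way to check whether the kernel is actually adding dentries during the listing — a rough sketch; note that /proc/sys/fs/dentry-state counts dentries system-wide, not per mount:)

    cat /proc/sys/fs/dentry-state   # first field = total allocated dentries
    ls /mnt/glusterfs > /dev/null
    cat /proc/sys/fs/dentry-state   # compare the first field again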

You can tweak the attribute-timeout and entry-timeout seconds while mounting the volume; the default is 1 second for both. `man mount.glusterfs` lists the various mount options.
-Ravi
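(Equivalently, the timeouts can be passed when invoking the client binary directly — a sketch, with "testvol" as a placeholder volume name:)

    # "testvol" is a placeholder; substitute the real volume name
    glusterfs --volfile-server=nodeA --volfile-id=testvol \
        --attribute-timeout=600 --entry-timeout=600 /mnt/glusterfs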
                <blockquote cite="mid:14087c4e.3f73.156e4ab683e.Coremail.keiviw@163.com" type="cite"><span title="neteasefooter">
                    <p>&nbsp;</p>
                  </span><br>
                  <fieldset class="mimeAttachmentHeader"></fieldset>
                  <br>
                  <pre wrap="">_______________________________________________
Gluster-devel mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://www.gluster.org/mailman/listinfo/gluster-devel">http://www.gluster.org/mailman/listinfo/gluster-devel</a></pre>