<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
We have an open bug, 1227724, for a similar problem.<br>
    <br>
    Thanks,<br>
    Rajesh<br>
    <br>
    <div class="moz-cite-prefix">On 06/08/2015 12:08 PM, Vijaikumar M
      wrote:<br>
    </div>
    <blockquote cite="mid:55753880.7030101@redhat.com" type="cite">
      <tt>Hi Alessandro,<br>
        <br>
        Please provide a test case so that we can try to re-create this
        problem in-house.</tt><br>
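      <br>
      <tt>(Something self-contained along these lines would be ideal;
        the volume name, host names, and paths below are only
        placeholders:)</tt><br>
      <pre wrap="">
# gluster volume create testvol replica 2 host1:/bricks/b1 host2:/bricks/b2
# gluster volume start testvol
# gluster volume quota testvol enable
# gluster volume quota testvol limit-usage /user1 4GB
# mount -t glusterfs host1:/testvol /mnt
# rsync -av /some/data/ /mnt/user1/
# gluster volume quota testvol list /user1
</pre>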
      <br>
      <tt>Thanks,<br>
        Vijay</tt><br>
      <br>
      <div class="moz-cite-prefix">On Saturday 06 June 2015 05:59 AM,
        Alessandro De Salvo wrote:<br>
      </div>
      <blockquote
        cite="mid:F6674833-6151-415D-B7EE-28F6760DD48F@roma1.infn.it"
        type="cite">
        <pre wrap="">Hi,
just to answer my own question: the rsync temp files really do seem to be the culprit. Their size appears to be added to the real contents of the directories I’m synchronizing; in other words, their size is not subtracted from the used size after they are removed. I suppose this is somehow connected to the removexattr error I’m seeing. The temporary workaround I’ve found is to make rsync write its temp files to /tmp, but it would be very interesting to understand why this is happening.
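
For reference, the workaround uses rsync’s --temp-dir option (the source and destination paths here are only an example):

# rsync -av --temp-dir=/tmp /local/user1/ /storage/atlas/home/user1/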
Cheers,

        Alessandro

</pre>
        <blockquote type="cite">
          <pre wrap="">Il giorno 06/giu/2015, alle ore 01:19, Alessandro De Salvo <a moz-do-not-send="true" class="moz-txt-link-rfc2396E" href="mailto:Alessandro.DeSalvo@roma1.infn.it">&lt;Alessandro.DeSalvo@roma1.infn.it&gt;</a> ha scritto:

Hi,
I currently have two bricks with replica 2 on the same machine, pointing to different disks of a connected SAN.
The volume itself is fine:

# gluster volume info atlas-home-01

Volume Name: atlas-home-01
Type: Replicate
Volume ID: 660db960-31b8-4341-b917-e8b43070148b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: host1:/bricks/atlas/home02/data
Brick2: host2:/bricks/atlas/home01/data
Options Reconfigured:
performance.write-behind-window-size: 4MB
performance.io-thread-count: 32
performance.readdir-ahead: on
server.allow-insecure: on
nfs.disable: true
features.quota: on
features.inode-quota: on


However, when I set a quota on a directory of the volume, the size shown is twice the physical size of the actual directory:

# gluster volume quota atlas-home-01 list /user1
                 Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/user1                                    4.0GB       80%       3.2GB 853.4MB              No                   No

# du -sh /storage/atlas/home/user1
1.6G    /storage/atlas/home/user1
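
For comparison, the usage can also be checked directly on each brick, e.g.:

# du -sh /bricks/atlas/home01/data/user1
# du -sh /bricks/atlas/home02/data/user1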

If I remove one of the bricks, the quota shows the correct value.
Is there any double counting when the bricks are on the same machine?
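
For reference, the brick removal was done roughly like this, lowering the replica count to 1 (the exact syntax may vary with the gluster version):

# gluster volume remove-brick atlas-home-01 replica 1 host2:/bricks/atlas/home01/data force
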
Also, I see a lot of errors in the logs like the following:

[2015-06-05 21:59:27.450407] E [posix-handle.c:157:posix_make_ancestryfromgfid] 0-atlas-home-01-posix: could not read the link from the gfid handle /bricks/atlas/home01/data/.glusterfs/be/e5/bee5e2b8-c639-4539-a483-96c19cd889eb (No such file or directory)

and also

[2015-06-05 22:52:01.112070] E [marker-quota.c:2363:mq_mark_dirty] 0-atlas-home-01-marker: failed to get inode ctx for /user1/file1

When running rsync I also see the following errors:

[2015-06-05 23:06:22.203968] E [marker-quota.c:2601:mq_remove_contri] 0-atlas-home-01-marker: removexattr trusted.glusterfs.quota.fddf31ba-7f1d-4ba8-a5ad-2ebd6e4030f3.contri failed for /user1/..bashrc.O4kekp: No data available

Those files are rsync’s temp files; I’m not sure why they trigger errors in glusterfs.
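
If useful, the quota xattrs can also be inspected directly on the brick, e.g.:

# getfattr -d -m . -e hex /bricks/atlas/home01/data/user1
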
Any help?
Thanks,

        Alessandro


</pre>
        </blockquote>
      </blockquote>
      <br>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
    <br>
  </body>
</html>