<div dir="ltr">Hi Steve,<div><br></div><div><div>Here is how quota usage accounting works</div><div><br></div><div>For each file, below extended attributes are set:</div><div><span style="font-size:12.8px">trusted.glusterfs.quota......contri -> This value tells how much size this file/dir has contributed to its parent (key will have a gfid of parent)</span><br></div><div><br></div><div>For each directory, below extended attributes are set:</div><div><span style="font-size:12.8px">trusted.glusterfs.quota.....</span><span style="font-size:12.8px">.contri (not on root, as root doesn't have parent)</span><br></div><div><span style="font-size:12.8px">trusted.glusterfs.quota.dirty -> this attribute is used for recovery when brick crashes during metadata update<br></span></div><div><span style="font-size:12.8px">trusted.glusterfs.quota.size -> it is the total size of all the files, directories and sub-directories till the leaf node.</span><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><br></span></div><div><br></div><div>When a file "/a/b/f1" is changed, then the change in the size needs to be</div><div>updated in the extended attributes of file 'f1' and crawl upwards till the root</div><div>and update the extended attributes of the ancestors of file 'f1'. This is done independently on each brick.</div><div><br></div><div>Here is the pseudo code</div><div>1) Begin metadata update file update '/a/b/f1'</div><div>2) inode = f1</div><div>3) parent_inode = b</div><div>4) if parent_inode is root goto end</div><div>5) take lock on parent_inode</div><div>6) get new size of inode</div><div> if inode is a file get size from statbuf</div><div> if inode is a dir get size from extended attribute</div><div>7) get contribution value from inode</div><div>8) find delta value</div><div> delta = size - contri</div><div>9) if delta is zero, no update goto end</div><div>10) set dirty flag on parent_inode</div><div>11) add delta to the inode contri</div><div>12) add delta to size attribute of parent_inode</div><div>13) clear dirty flag on parent_inode</div><div>14) release lock on parent_inode</div><div>15) inode = parent_inode</div><div>16) parent_inode = parent (inode)</div><div>17) goto step 4</div><div>18) End</div><div><br></div><div><br></div><div><div>As mentioned above, if there is a change in any file we get the old metadata value and add the delta to this</div><div>value. So when quota is disable and some stale xattrs are leftover, this value will be used when adding the delta.</div><div>Quota marker cannot identify if the xattr leftover is a newly created attribute or a stale attribute.</div><div>This problem is now solved in 3.7 by using a version number as part of the xattr key. This version number is</div><div>incremented every-time quota is disabled and enabled and even if old entries are not cleaned, looking at the</div><div>version number quota marker identifies that as stale entry and creates a new xattrs with the current version number</div></div><div><br></div><div>Step 11 & 12 should be atomic, hence we use dirty flag. 

As mentioned above, when there is a change in any file we read the old metadata value and add the delta to it. So when quota is disabled and some stale xattrs are left over, those values will be used when adding the delta. The quota marker cannot tell whether a leftover xattr is a newly created attribute or a stale one. This problem is solved in 3.7 by using a version number as part of the xattr key. The version number is incremented every time quota is disabled and re-enabled, so even if old entries are not cleaned up, the quota marker identifies them as stale by their version number and creates new xattrs with the current version number.

Steps 11 and 12 must be atomic, hence the dirty flag, in case there is a crash during these steps. When the brick is back online and a lookup on a directory finds that the dirty flag is set, the operation below is performed:
1) If the dirty flag is set on an 'inode'
   a) readdir on inode
   b) get the sum of the contri attributes of all file/dir entries of inode
   c) update the size attribute of 'inode'
   d) clear the dirty flag

In our previous email we provided a workaround to fix just a selected directory from the back-end. If the volume is not stopped, make sure that no IO happens: while you are manually updating the xattrs in the back-end, the brick process can also be updating them, and you end up with inconsistent accounting again.

Thanks,
Vijay

On Fri, Feb 12, 2016 at 1:28 AM, Steve Dainard <sdainard@spd1.com> wrote:
What would happen if I:
- Did not disable quotas
- Did not stop the volume (the 140T volume takes at least 3-4 days to do
any find operations, which is too much downtime)
- Found and removed all xattrs:
trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri on
the /brick/volumename/modules
- Set the dirty bit on /brick/volumename/modules
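(Concretely, for that last step I imagine something like the following on
each brick, reusing the 0x3100 dirty value from your instructions below;
sketch only:
# setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /brick/volumename/modules
)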

As far as an upgrade to 3.7, I'm not comfortable with running the
newest release - which version is RHGS based on? I typically like to
follow a supported product version if I can, so I know most of the kinks
are worked out :)

On Wed, Feb 10, 2016 at 11:02 PM, Manikandan Selvaganesh
<mselvaga@redhat.com> wrote:
> Hi Steve,
>
> We suspect the mismatch in accounting is probably because the
> xattrs were not cleaned up properly. Please ensure you do the following
> steps and make sure the xattrs are cleaned up properly before quota
> is enabled the next time.
>
> 1) Stop the volume.
> 2) On each brick in the backend, find and remove all the quota xattrs
> and make sure they are not present:
> # find <brickpath>/module | xargs getfattr -d -m . -e hex | grep quota | grep -E 'contri|size'
> # setfattr -x <xattr-name> <path>
>
> 3) Set dirty on <brickpath>/:
> # setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 <brickpath>/
> By setting the dirty value on root to 1 (0x3100), the contri values
> will be calculated again: the tree is crawled and the proper contri
> values are updated.
>
> 4) Start the volume, and from a fuse mount:
> # stat /mountpath
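>
> Putting steps 1-4 together, a sketch of the whole cleanup as one script
> (<brickpath>, VOLNAME and /mountpath are placeholders; run the xattr
> part on every brick, and treat this as illustrative, not tested):
>
> #!/bin/bash
> vol=VOLNAME
> brickpath=/path/to/brick            # repeat on each brick host
>
> gluster volume stop "$vol"          # step 1 (will prompt to confirm)
>
> # step 2: strip every leftover quota contri/size xattr on the backend
> find "$brickpath" | while read -r p; do
>     getfattr --absolute-names -d -m 'trusted.glusterfs.quota' -e hex "$p" 2>/dev/null \
>       | awk -F= '/^trusted.*(contri|size)/ {print $1}' \
>       | while read -r key; do setfattr -x "$key" "$p"; done
> done
>
> # step 3: mark the brick root dirty so contri is recalculated on lookup
> setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 "$brickpath"
>
> gluster volume start "$vol"         # step 4a
> stat /mountpath                     # step 4b, from a fuse mount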
>
> If you have ever performed a rename, then there is a possibility of two
> contributions getting created for a single entry.
>
> We have fixed quite a few rename issues and have refactored the marker
> approach. Also, as I have mentioned already, we have done versioning of
> the xattrs, which solves the issue you are facing, in 3.7. It would be
> really helpful in a production environment if you could upgrade to 3.7.
>
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
>
> ----- Original Message -----
> From: "Steve Dainard" <sdainard@spd1.com>
> To: "Manikandan Selvaganesh" <mselvaga@redhat.com>
> Cc: "Vijaikumar Mallikarjuna" <vmallika@redhat.com>, "Gluster Devel" <gluster-devel@gluster.org>, "gluster-users@gluster.org List" <gluster-users@gluster.org>
> Sent: Thursday, February 11, 2016 1:48:19 AM
> Subject: Re: [Gluster-users] Quota list not reflecting disk usage
>
> So after waiting out the process of disabling quotas, waiting for the
> xattrs to be cleaned up, re-enabling quotas and waiting for the
> xattrs to be created, then applying quotas, I'm running into the same
> issue.
>
> Yesterday at ~2pm one of the quotas was listed as:
> /modules|100.0GB|18.3GB|81.7GB
>
> I initiated a copy from that glusterfs fuse mount to another fuse
> mount for a different volume, and now I'm seeing:
> /modules|100.0GB|27.4GB|72.6GB
>
> So an increase of 9GB usage.
>
> There were no writes at all to this directory during or after the cp.
>
> I did a bit of digging through the /modules directory on one of the
> gluster nodes and created this spreadsheet:
> https://docs.google.com/spreadsheets/d/1l_6ze68TCOcx6LEh9MFwmqPZ9bM-70CUlSM_8tpQ654/edit?usp=sharing
>
> The /modules/R/3.2.2 directory quota value doesn't come close to
> matching the du value.
>
> Funny bit, there are TWO quota contribution attributes:
> # getfattr -d -m quota -e hex 3.2.2
> # file: 3.2.2
> trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri=0x0000000009af6000
> trusted.glusterfs.quota.c890be20-1bb9-4aec-a8d0-eacab0446f16.contri=0x0000000013fda800
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.size=0x0000000013fda800
>
> For reference, another directory /modules/R/2.14.2 has only one
> contribution attribute:
> # getfattr -d -m quota -e hex 2.14.2
> # file: 2.14.2
> trusted.glusterfs.quota.c890be20-1bb9-4aec-a8d0-eacab0446f16.contri=0x0000000000692800
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.size=0x0000000000692800
>
> Questions:
> 1. Why wasn't the
> trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri=0x0000000009af6000
> cleaned up?
> 2A. How can I remove old attributes from the fs, and then force a
> re-calculation of contributions for the quota path /modules once I've
> done this on all gluster nodes?
> 2B. Or am I stuck yet again removing quotas completely: waiting for
> the automated setfattr to remove the quotas for the
> c890be20-1bb9-4aec-a8d0-eacab0446f16 ID, manually removing attrs for
> 242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3, re-enabling quotas, waiting for
> xattrs to be generated, then enabling limits?
> 3. Shouldn't there be a command to re-trigger quota accounting on a
> directory, which confirms the attrs are set correctly and checks that
> the contribution attrs actually match disk usage?
>
> On Tue, Feb 2, 2016 at 3:00 AM, Manikandan Selvaganesh
> <mselvaga@redhat.com> wrote:
>> Hi Steve,
>>
>> As you have mentioned, if you are using a glusterfs version earlier than 3.7,
>> then you are doing it right. We are sorry to say it, but unfortunately that's
>> the only way that does not mess up quota enforcing/accounting: either manually
>> clean up the xattrs before enabling quota, or wait for the cleanup process to
>> complete by itself, which can take quite some time depending on the number of
>> files. Also, we could not find anything in the logs that could help us either.
>> Thanks for raising the documentation point. We are in the process of writing
>> blogs and documenting clearly how quota works internally. There is an initial
>> blog [1] which we have written; more blogs will follow.
>>
>> With glusterfs-3.7, we have introduced something called "quota versioning".
>> Whenever you enable quota, we suffix a number (1..N) to the quota xattrs;
>> say you enable quota for the first time, then the xattr will be
>> "trusted.glusterfs.quota.size.<suffix number from 1..N>", and all the
>> quota-related xattrs will have the number suffixed to the key. With the
>> versioning patch [2], when you disable and enable quota again for the next
>> time, it will be "trusted.glusterfs.quota.size.2" (similarly for the other
>> quota-related xattrs). So quota accounting can happen independently, keyed
>> on the suffix, and the cleanup process can go on independently, which solves
>> the issue that you have.
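>>
>> For illustration, after a disable/enable cycle a directory might carry
>> xattrs like the following (gfid placeholder and values made up; the ".1"
>> keys are the stale ones, recognized by their old suffix and eventually
>> cleaned up):
>>
>> # getfattr -d -m quota -e hex <dir>
>> trusted.glusterfs.quota.<parent-gfid>.contri.1=0x0000000000692800
>> trusted.glusterfs.quota.size.1=0x0000000000692800
>> trusted.glusterfs.quota.<parent-gfid>.contri.2=0x0000000000692800
>> trusted.glusterfs.quota.size.2=0x0000000000692800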
>>
>> [1] https://manikandanselvaganesh.wordpress.com/
>>
>> [2] http://review.gluster.org/12386
>>
>> --
>> Thanks & Regards,
>> Manikandan Selvaganesh.
>>
>> ----- Original Message -----
>> From: "Vijaikumar Mallikarjuna" <vmallika@redhat.com>
>> To: "Steve Dainard" <sdainard@spd1.com>
>> Cc: "Manikandan Selvaganesh" <mselvaga@redhat.com>
>> Sent: Tuesday, February 2, 2016 10:12:51 AM
>> Subject: Re: [Gluster-users] Quota list not reflecting disk usage
>>
>> Hi Steve,
>>
>> Sorry for the delay. Mani and I were busy with something else at work;
>> we will update you on this by EOD.
>>
>> Many quota issues have been fixed in 3.7. Also, version numbers are added
>> to the quota xattrs, so when quota is disabled we don't need to clean up
>> the xattrs.
>>
>> Thanks,
>> Vijay
>>
>> On Tue, Feb 2, 2016 at 12:26 AM, Steve Dainard <sdainard@spd1.com> wrote:
>>
>>> I haven't heard anything back on this thread, so here's where I've landed:
>>>
>>> It appears that the quota xattrs are not being cleared when quotas
>>> are disabled, so when they are disabled and re-enabled, the value for
>>> size is added to the previous size, making it appear that the 'Used'
>>> space is significantly greater than it should be. This seems like a
>>> bug, but I don't know what to file it against, or whether the logs I
>>> attached prove this.
>>>
>>> Also, the documentation doesn't make mention of how the quota system
>>> works, and what happens when quotas are enabled/disabled. There seems
>>> to be a background task for both settings:
>>> On enable: "/usr/bin/find . -exec /usr/bin/stat {} \;"
>>> On disable: setfattr is removing quota xattrs
>>>
>>> The thing is, neither of these tasks is listed in 'gluster volume
>>> status <volume>', i.e.:
>>>
>>> Status of volume: storage
>>> Gluster process                               Port   Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick 10.0.231.50:/mnt/raid6-storage/storage  49156  Y       24899
>>> Brick 10.0.231.51:/mnt/raid6-storage/storage  49156  Y       2991
>>> Brick 10.0.231.52:/mnt/raid6-storage/storage  49156  Y       28853
>>> Brick 10.0.231.53:/mnt/raid6-storage/storage  49153  Y       2705
>>> NFS Server on localhost                       N/A    N       N/A
>>> Quota Daemon on localhost                     N/A    Y       30066
>>> NFS Server on 10.0.231.52                     N/A    N       N/A
>>> Quota Daemon on 10.0.231.52                   N/A    Y       24976
>>> NFS Server on 10.0.231.53                     N/A    N       N/A
>>> Quota Daemon on 10.0.231.53                   N/A    Y       30334
>>> NFS Server on 10.0.231.51                     N/A    N       N/A
>>> Quota Daemon on 10.0.231.51                   N/A    Y       15781
>>>
>>> Task Status of Volume storage
>>> ------------------------------------------------------------------------------
>>> ******There are no active volume tasks*******
>>>
>>> (I added the asterisks above)
>>> So without any visibility into these running tasks, or knowing of
>>> their existence (not documented), it becomes very difficult to know
>>> what's going on. On any reasonably large storage system these tasks
>>> take days to complete, and there should be some indication of this.
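>>>
>>> (A crude way to spot the hidden crawls, based on the find/stat and
>>> setfattr forms above; run on each gluster node, where the brackets
>>> just keep grep from matching itself:
>>> # ps aux | grep -E '[/]usr/bin/find|[s]etfattr'
>>> )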
>>>
>>> Where I'm at right now:
>>> - I disabled the quotas on volume 'storage'
>>> - I started to manually remove xattrs until I realized there is an
>>> automated task to do this
>>> - After waiting for 'ps aux | grep setfattr' to return nothing, I
>>> re-enabled quotas
>>> - I'm currently waiting for the stat tasks to complete
>>> - Once the entire filesystem has been stat'ed, I'm going to set limits
>>> again
>>>
>>> As a note, this is a pretty brutal process on a system with 140T of
>>> storage, and I can't imagine how much worse this would be if my nodes
>>> had more than 12 disks each, or if I were at PB scale.
>>>
>>> On Mon, Jan 25, 2016 at 12:31 PM, Steve Dainard <sdainard@spd1.com> wrote:
>>> > Here's a link to a tarball of one of the gluster hosts' logs:
>>> > https://dl.dropboxusercontent.com/u/21916057/gluster01.tar.gz
>>> >
>>> > I wanted to include past logs in case they were useful.
>>> >
>>> > Also, the volume I'm trying to get quotas working on is 'storage';
>>> > you'll notice I have a brick issue on a different volume, 'vm-storage'.
>>> >
>>> > In regards to the 3.7 upgrade, I'm a bit hesitant to move to the
>>> > current release; I prefer to stay on a stable release with maintenance
>>> > updates if possible.
>>> >
>>> > On Mon, Jan 25, 2016 at 12:09 PM, Manikandan Selvaganesh
>>> > <mselvaga@redhat.com> wrote:
>>> >> Hi Steve,
>>> >>
>>> >> Also, do you have any plans to upgrade to the latest version? With 3.7,
>>> >> we have refactored some approaches used in quota and marker, and that
>>> >> has fixed quite a few issues.
>>> >>
>>> >> --
>>> >> Thanks & Regards,
>>> >> Manikandan Selvaganesh.
>>> >>
>>> >> ----- Original Message -----
>>> >> From: "Manikandan Selvaganesh" <mselvaga@redhat.com>
>>> >> To: "Steve Dainard" <sdainard@spd1.com>
>>> >> Cc: "gluster-users@gluster.org List" <gluster-users@gluster.org>
>>> >> Sent: Tuesday, January 26, 2016 1:31:10 AM
>>> >> Subject: Re: [Gluster-users] Quota list not reflecting disk usage
>>> >>
>>> >> Hi Steve,
>>> >>
>>> >> Could you send us the glusterfs logs? They could help us debug the issue!
>>> >>
>>> >> --
>>> >> Thanks & Regards,
>>> >> Manikandan Selvaganesh.
>>> >>
>>> >> ----- Original Message -----
>>> >> From: "Steve Dainard" <sdainard@spd1.com>
>>> >> To: "Manikandan Selvaganesh" <mselvaga@redhat.com>
>>> >> Cc: "gluster-users@gluster.org List" <gluster-users@gluster.org>
>>> >> Sent: Tuesday, January 26, 2016 12:56:22 AM
>>> >> Subject: Re: [Gluster-users] Quota list not reflecting disk usage
>>> >>
>>> >> Something is seriously wrong with the quota output:
>>> >>
>>> >> # gluster volume quota storage list
>>> >>                   Path                   Hard-limit  Soft-limit    Used   Available  Soft-limit exceeded?  Hard-limit exceeded?
>>> >> ---------------------------------------------------------------------------------------------------------------------------
>>> >> /projects-CanSISE                          10.0TB       80%       27.8TB    0Bytes          Yes                  Yes
>>> >> /data4/climate                            105.0TB       80%      307.1TB    0Bytes          Yes                  Yes
>>> >> /data4/forestry                            50.0GB       80%       61.9GB    0Bytes          Yes                  Yes
>>> >> /data4/projects                           800.0GB       80%        2.0TB    0Bytes          Yes                  Yes
>>> >> /data4/strays                              85.0GB       80%      230.5GB    0Bytes          Yes                  Yes
>>> >> /data4/gis                                  2.2TB       80%        6.3TB    0Bytes          Yes                  Yes
>>> >> /data4/modperl                              1.0TB       80%      953.2GB    70.8GB          Yes                  No
>>> >> /data4/dem                                  1.0GB       80%       0Bytes     1.0GB          No                   No
>>> >> /projects-hydrology-archive0                5.0TB       80%       14.4TB    0Bytes          Yes                  Yes
>>> >> /climate-downscale-idf-ec                   7.5TB       80%        5.1TB     2.4TB          No                   No
>>> >> /climate-downscale-idf                      5.0TB       80%        6.1TB    0Bytes          Yes                  Yes
>>> >> /home                                       5.0TB       80%       11.8TB    0Bytes          Yes                  Yes
>>> >> /projects-hydrology-scratch0                7.0TB       80%      169.1GB     6.8TB          No                   No
>>> >> /projects-rci-scratch                      10.0TB       80%        1.9TB     8.1TB          No                   No
>>> >> /projects-dataportal                        1.0TB       80%      775.4GB   248.6GB          No                   No
>>> >> /modules                                    1.0TB       80%       36.1GB   987.9GB          No                   No
>>> >> /data4/climate/downscale/CMIP5             65.0TB       80%       56.4TB     8.6TB          Yes                  No
>>> >>
>>> >> Gluster is listing 'Used' space of over 307TB on /data4/climate, but
>>> >> the volume capacity is only 146T.
>>> >>
>>> >> This has happened after disabling quotas on the volume, re-enabling
>>> >> quotas, and then setting quotas again. There was a lot of glusterfsd
>>> >> CPU usage afterwards, and now, 3 days later, the quotas I set were all
>>> >> missing except:
>>> >>
>>> >> /data4/projects|800.0GB|2.0TB|0Bytes
>>> >>
>>> >> So I re-set the quotas, and the output above is what I have.
>>> >>
>>> >> Previous to disabling quotas, this was the output:
>>> >> # gluster volume quota storage list
>>> >>                   Path                   Hard-limit  Soft-limit    Used   Available  Soft-limit exceeded?  Hard-limit exceeded?
>>> >> ---------------------------------------------------------------------------------------------------------------------------
>>> >> /data4/climate                            105.0TB       80%      151.6TB    0Bytes          Yes                  Yes
>>> >> /data4/forestry                            50.0GB       80%       45.4GB     4.6GB          Yes                  No
>>> >> /data4/projects                           800.0GB       80%      753.1GB    46.9GB          Yes                  No
>>> >> /data4/strays                              85.0GB       80%       80.8GB     4.2GB          Yes                  No
>>> >> /data4/gis                                  2.2TB       80%        2.1TB    91.8GB          Yes                  No
>>> >> /data4/modperl                              1.0TB       80%      948.1GB    75.9GB          Yes                  No
>>> >> /data4/dem                                  1.0GB       80%       0Bytes     1.0GB          No                   No
>>> >> /projects-CanSISE                          10.0TB       80%       11.9TB    0Bytes          Yes                  Yes
>>> >> /projects-hydrology-archive0                5.0TB       80%        4.8TB   174.0GB          Yes                  No
>>> >> /climate-downscale-idf-ec                   7.5TB       80%        5.0TB     2.5TB          No                   No
>>> >> /climate-downscale-idf                      5.0TB       80%        3.8TB     1.2TB          No                   No
>>> >> /home                                       5.0TB       80%        4.7TB   283.8GB          Yes                  No
>>> >> /projects-hydrology-scratch0                7.0TB       80%       95.9GB     6.9TB          No                   No
>>> >> /projects-rci-scratch                      10.0TB       80%        1.7TB     8.3TB          No                   No
>>> >> /projects-dataportal                        1.0TB       80%      775.4GB   248.6GB          No                   No
>>> >> /modules                                    1.0TB       80%       14.6GB  1009.4GB          No                   No
>>> >> /data4/climate/downscale/CMIP5             65.0TB       80%       56.4TB     8.6TB          Yes                  No
>>> >>
>>> >> I was so focused on the /projects-CanSISE quota not being accurate
>>> >> that I missed that the 'Used' space on /data4/climate is listed higher
>>> >> than the total gluster volume capacity.
>>> >>
>>> >> On Mon, Jan 25, 2016 at 10:52 AM, Steve Dainard <sdainard@spd1.com> wrote:
>>> >>> Hi Manikandan,
>>> >>>
>>> >>> I'm using 'du', not df, in this case.
>>> >>>
>>> >>> On Thu, Jan 21, 2016 at 9:20 PM, Manikandan Selvaganesh
>>> >>> <mselvaga@redhat.com> wrote:
>>> >>>> Hi Steve,
>>> >>>>
>>> >>>> If you would like disk usage from the df utility to take quota limits
>>> >>>> into consideration, then you are expected to run the following command:
>>> >>>>
>>> >>>> 'gluster volume set VOLNAME quota-deem-statfs on'
>>> >>>>
>>> >>>> This applies to older versions, where quota-deem-statfs is OFF by
>>> >>>> default. With the latest versions, quota-deem-statfs is ON by default.
>>> >>>> In that case, the quota hard limit set on a directory of the volume is
>>> >>>> taken as the total disk space of that directory, and the disk utility
>>> >>>> displays usage accordingly. This answers why there is a mismatch in
>>> >>>> the disk utility.
>>> >>>>
>>> >>>> Next, answering the question on the quota mechanism and accuracy:
>>> >>>> there are timeouts in quota. For performance reasons, quota caches
>>> >>>> directory sizes on the client. You can set a timeout indicating the
>>> >>>> maximum duration for which directory sizes in the cache are valid,
>>> >>>> from the time they are populated. By default the hard timeout is 5s
>>> >>>> and the soft timeout is 60s. Setting a timeout of zero forces a fetch
>>> >>>> of directory sizes from the server for every operation that modifies
>>> >>>> file data, and effectively disables directory size caching on the
>>> >>>> client side. If you do not have a timeout of 0 (which we do not
>>> >>>> encourage, for performance reasons), then until you reach the soft
>>> >>>> limit, the soft timeout is in effect, so operations are only synced
>>> >>>> every 60s, and that can cause the usage to exceed the specified hard
>>> >>>> limit. If you would like quota to enforce strictly, then please run
>>> >>>> the following commands:
>>> >>>>
>>> >>>> 'gluster v quota VOLNAME hard-timeout 0s'
>>> >>>> 'gluster v quota VOLNAME soft-timeout 0s'
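>>> >>>>
>>> >>>> For example, with the 'storage' volume from this thread, that would be:
>>> >>>>
>>> >>>> # gluster v quota storage hard-timeout 0s
>>> >>>> # gluster v quota storage soft-timeout 0s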
>>> >>>>
>>> >>>> We appreciate your curiosity in exploring, and if you would like to
>>> >>>> know more about quota, please refer to [1].
>>> >>>>
>>> >>>> [1] http://gluster.readthedocs.org/en/release-3.7.0-1/Administrator%20Guide/Directory%20Quota/
>>> >>>>
>>> >>>> --
>>> >>>> Thanks & Regards,
>>> >>>> Manikandan Selvaganesh.
>>> >>>>
>>> >>>> ----- Original Message -----
>>> >>>> From: "Steve Dainard" <sdainard@spd1.com>
>>> >>>> To: "gluster-users@gluster.org List" <gluster-users@gluster.org>
>>> >>>> Sent: Friday, January 22, 2016 1:40:07 AM
>>> >>>> Subject: Re: [Gluster-users] Quota list not reflecting disk usage
>>> >>>>
>>> >>>> This is gluster 3.6.6.
>>> >>>>
>>> >>>> I've attempted to disable and re-enable quotas on the volume, but
>>> >>>> when I re-apply the quotas on each directory, the same 'Used' value is
>>> >>>> present as before.
>>> >>>>
>>> >>>> Where is quotad getting its information from, and how can I clean
>>> >>>> up/regenerate that info?
>>> >>>>
>>> >>>> On Thu, Jan 21, 2016 at 10:07 AM, Steve Dainard <sdainard@spd1.com> wrote:
>>> >>>>> I have a distributed volume with quotas enabled:
>>> >>>>>
>>> >>>>> Volume Name: storage
>>> >>>>> Type: Distribute
>>> >>>>> Volume ID: 26d355cb-c486-481f-ac16-e25390e73775
>>> >>>>> Status: Started
>>> >>>>> Number of Bricks: 4
>>> >>>>> Transport-type: tcp
>>> >>>>> Bricks:
>>> >>>>> Brick1: 10.0.231.50:/mnt/raid6-storage/storage
>>> >>>>> Brick2: 10.0.231.51:/mnt/raid6-storage/storage
>>> >>>>> Brick3: 10.0.231.52:/mnt/raid6-storage/storage
>>> >>>>> Brick4: 10.0.231.53:/mnt/raid6-storage/storage
>>> >>>>> Options Reconfigured:
>>> >>>>> performance.cache-size: 1GB
>>> >>>>> performance.readdir-ahead: on
>>> >>>>> features.quota: on
>>> >>>>> diagnostics.brick-log-level: WARNING
>>> >>>>>
>>> >>>>> Here is a partial list of quotas:
>>> >>>>> # /usr/sbin/gluster volume quota storage list
>>> >>>>>                   Path                   Hard-limit  Soft-limit    Used   Available  Soft-limit exceeded?  Hard-limit exceeded?
>>> >>>>> ---------------------------------------------------------------------------------------------------------------------------
>>> >>>>> ...
>>> >>>>> /projects-CanSISE                          10.0TB       80%       11.9TB    0Bytes          Yes                  Yes
>>> >>>>> ...
>>> >>>>>
>>> >>>>> If I du on that location, I do not get 11.9TB of space used (fuse mount point):
>>> >>>>> [root@storage projects-CanSISE]# du -hs
>>> >>>>> 9.5T .
>>> >>>>>
>>> >>>>> Can someone provide an explanation of how the quota mechanism tracks
>>> >>>>> disk usage? How often does the quota mechanism check its accuracy? And
>>> >>>>> how could it get so far off?
>>> >>>>>
>>> >>>>> Can I get gluster to rescan that location and update the quota usage?
>>> >>>>>
>>> >>>>> Thanks,
>>> >>>>> Steve
>>> >>>> _______________________________________________
>>> >>>> Gluster-users mailing list
>>> >>>> Gluster-users@gluster.org
>>> >>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>> >> _______________________________________________
>>> >> Gluster-users mailing list
>>> >> Gluster-users@gluster.org
>>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users