<div dir="ltr">Hi Kotresh,<div><br></div><div><br></div><div>Writing new file.</div><div><br></div><div><div>getfattr -m. -e hex -d /media/disk2/brick2/data/G/test58-bs10M-c100.nul</div><div>getfattr: Removing leading &#39;/&#39; from absolute path names</div><div># file: media/disk2/brick2/data/G/test58-bs10M-c100.nul</div><div>trusted.bit-rot.version=0x020000000000000057da8b23000b120e</div><div>trusted.ec.config=0x0000080501000200</div><div>trusted.ec.size=0x000000003e800000</div><div>trusted.ec.version=0x0000000000001f400000000000001f40</div><div>trusted.gfid=0x6e7c49e6094e443585bff21f99fd8764</div><div><br></div><div><br></div><div>Running ls -l in brick 2 pid </div><div><br></div><div>ls -l /proc/30162/fd</div><div><br></div><div>lr-x------ 1 root root 64 Sep 21 16:22 59 -&gt; /media/disk2/brick2/.glusterfs/quanrantine</div><div>lrwx------ 1 root root 64 Sep 21 16:22 6 -&gt; /var/lib/glusterd/vols/glsvol1/run/10.1.2.2-media-disk2-brick2.pid</div><div>lr-x------ 1 root root 64 Sep 21 16:25 60 -&gt; /media/disk2/brick2/.glusterfs/6e/7c/6e7c49e6-094e-4435-85bf-f21f99fd8764</div><div>lr-x------ 1 root root 64 Sep 21 16:22 61 -&gt; /media/disk2/brick2/.glusterfs/quanrantine</div></div><div><br></div><div><br></div><div><div>find /media/disk2/ -samefile /media/disk2/brick2/.glusterfs/6e/7c/6e7c49e6-094e-4435-85bf-f21f99fd8764</div><div>/media/disk2/brick2/.glusterfs/6e/7c/6e7c49e6-094e-4435-85bf-f21f99fd8764</div><div>/media/disk2/brick2/data/G/test58-bs10M-c100.nul</div></div><div><br></div><div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 21, 2016 at 3:28 PM, Kotresh Hiremath Ravishankar <span dir="ltr">&lt;<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Amudhan,<br>

Don't grep for the filename; glusterfs maintains a hardlink in the .glusterfs
directory for each file. Just check 'ls -l /proc/<respective brick pid>/fd' for any
fds opened on a file in .glusterfs and check whether it's the same file.
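
For example, a rough sketch (assuming the usual pid file location under
/var/lib/glusterd, and using the gfid of your test54 file from the earlier mail):

  BRICK_PID=$(cat /var/lib/glusterd/vols/glsvol1/run/*-media-disk2-brick2.pid)
  # gfid 54e018dd-... is hardlinked at <brick>/.glusterfs/54/e0/54e018dd-...
  ls -l /proc/$BRICK_PID/fd | grep 54e018dd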
<span class="im HOEnZb"><br>
Thanks and Regards,<br>
Kotresh H R<br>
<br>
----- Original Message -----
> From: "Amudhan P" <amudhan83@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khiremat@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
> Sent: Wednesday, September 21, 2016 1:33:10 PM
> Subject: Re: [Gluster-users] 3.8.3 Bitrot signature process
>
> Hi Kotresh,
>
> I have used the below command to verify whether there are any open fds for
> the file:
>
> "ls -l /proc/*/fd | grep filename"
>
> As soon as the write completes there are no open fds. If there is any
> alternate option, please let me know and I will also try that.
>
>
> Also, below is the scrub status in my test setup. The number of skipped
> files is slowly reducing day by day. I think files are skipped because the
> bitrot signature process has not completed yet.
>
> Where can I see the scrub-skipped files?
>
> Volume name : glsvol1
>
> State of scrub: Active (Idle)
>
> Scrub impact: normal
>
> Scrub frequency: daily
>
> Bitrot error log location: /var/log/glusterfs/bitd.log
>
> Scrubber error log location: /var/log/glusterfs/scrub.log
>
> =========================================================
>
> Node: localhost
>
> Number of Scrubbed files: 1644
>
> Number of Skipped files: 1001
>
> Last completed scrub time: 2016-09-20 11:59:58
>
> Duration of last scrub (D:M:H:M:S): 0:0:39:26
>
> Error count: 0
>
> =========================================================
>
> Node: 10.1.2.3
>
> Number of Scrubbed files: 1644
>
> Number of Skipped files: 1001
>
> Last completed scrub time: 2016-09-20 10:50:00
>
> Duration of last scrub (D:M:H:M:S): 0:0:38:17
>
> Error count: 0
>
> =========================================================
>
> Node: 10.1.2.4
>
> Number of Scrubbed files: 981
>
> Number of Skipped files: 1664
>
> Last completed scrub time: 2016-09-20 12:38:01
>
> Duration of last scrub (D:M:H:M:S): 0:0:35:19
>
> Error count: 0
>
> =========================================================
>
> Node: 10.1.2.1
>
> Number of Scrubbed files: 1263
>
> Number of Skipped files: 1382
>
> Last completed scrub time: 2016-09-20 11:57:21
>
> Duration of last scrub (D:M:H:M:S): 0:0:37:17
>
> Error count: 0
>
> =========================================================
>
> Node: 10.1.2.2
>
> Number of Scrubbed files: 1644
>
> Number of Skipped files: 1001
>
> Last completed scrub time: 2016-09-20 11:59:25
>
> Duration of last scrub (D:M:H:M:S): 0:0:39:18
>
> Error count: 0
>
> =========================================================
>
> Thanks
> Amudhan
>
>
> On Wed, Sep 21, 2016 at 11:45 AM, Kotresh Hiremath Ravishankar
> <khiremat@redhat.com> wrote:
>
> > Hi Amudhan,
> >
> > I don't think it's a limitation with reading data from the brick.
> > To limit CPU usage, throttling is done using a token bucket
> > algorithm. The log message you showed is related to that. But even so,
> > I think it should not take 12 minutes for the checksum calculation unless
> > there is an fd open (it might be internal). Could you please cross-verify
> > whether there are any fds opened on that file by looking into /proc? I will
> > also test it out in the meantime and get back to you.
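> >
> > For instance, a quick sketch (assuming lsof is installed; the path is the
> > .glusterfs hardlink of your test54 file on that brick):
> >
> >      # lists any process (including the brick) holding the file open
> >      lsof /media/disk2/brick2/.glusterfs/54/e0/54e018dd-8c5a-4bd7-9e03-17729d8a57c5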
> >
> > Thanks and Regards,
> > Kotresh H R
> >
> > ----- Original Message -----
> > > From: "Amudhan P" <amudhan83@gmail.com>
> > > To: "Kotresh Hiremath Ravishankar" <khiremat@redhat.com>
> > > Cc: "Gluster Users" <gluster-users@gluster.org>
> > > Sent: Tuesday, September 20, 2016 3:19:28 PM
> > > Subject: Re: [Gluster-users] 3.8.3 Bitrot signature process
> > >
> > > Hi Kotresh,
> > >
> > > Please correct me if I am wrong: once a file write completes and its fds
> > > are closed, bitrot waits for 120 seconds, then starts hashing and updates
> > > the signature for the file on the brick.
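> > >
> > > To see when signing actually lands, I poll the signature xattr on the
> > > brick (a rough sketch, run from the brick directory holding the file):
> > >
> > > while ! getfattr -n trusted.bit-rot.signature test53-bs10M-c1.nul \
> > >       >/dev/null 2>&1; do sleep 5; done; date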
> > >
> > > But what I am seeing is that bitrot takes too much time to complete the
> > > hashing.
> > >
> > > Below is a test result I would like to share.
> > >
> > > Writing data to the below path using dd:
> > >
> > > /mnt/gluster/data/G (mount point)
> > > -rw-r--r-- 1 root root  10M Sep 20 12:19 test53-bs10M-c1.nul
> > > -rw-r--r-- 1 root root 100M Sep 20 12:19 test54-bs10M-c10.nul
> > >
> > > No other write or read process is going on.
> > >
> > > Checking file data on one of the bricks:
> > >
> > > -rw-r--r-- 2 root root 2.5M Sep 20 12:23 test53-bs10M-c1.nul
> > > -rw-r--r-- 2 root root  25M Sep 20 12:23 test54-bs10M-c10.nul
> > >
> > > The file's stat and getfattr info from the brick, after the write
> > > process completed:
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ stat test53-bs10M-c1.nul
> > >   File: ‘test53-bs10M-c1.nul’
> > >   Size: 2621440         Blocks: 5120       IO Block: 4096   regular file
> > > Device: 821h/2081d      Inode: 536874168   Links: 2
> > > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2016-09-20 12:23:28.798886647 +0530
> > > Modify: 2016-09-20 12:23:28.994886646 +0530
> > > Change: 2016-09-20 12:23:28.998886646 +0530
> > >  Birth: -
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ stat test54-bs10M-c10.nul
> > >   File: ‘test54-bs10M-c10.nul’
> > >   Size: 26214400        Blocks: 51200      IO Block: 4096   regular file
> > > Device: 821h/2081d      Inode: 536874169   Links: 2
> > > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2016-09-20 12:23:42.902886624 +0530
> > > Modify: 2016-09-20 12:23:44.378886622 +0530
> > > Change: 2016-09-20 12:23:44.378886622 +0530
> > >  Birth: -
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ sudo getfattr -m. -e hex -d test53-bs10M-c1.nul
> > > # file: test53-bs10M-c1.nul
> > > trusted.bit-rot.version=0x020000000000000057daa7b50002e5b4
> > > trusted.ec.config=0x0000080501000200
> > > trusted.ec.size=0x0000000000a00000
> > > trusted.ec.version=0x00000000000000500000000000000050
> > > trusted.gfid=0xe2416bd1aae4403c88f44286273bbe99
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ sudo getfattr -m. -e hex -d test54-bs10M-c10.nul
> > > # file: test54-bs10M-c10.nul
> > > trusted.bit-rot.version=0x020000000000000057daa7b50002e5b4
> > > trusted.ec.config=0x0000080501000200
> > > trusted.ec.size=0x0000000006400000
> > > trusted.ec.version=0x00000000000003200000000000000320
> > > trusted.gfid=0x54e018dd8c5a4bd79e0317729d8a57c5
> > >
> > >
> > > The file's stat and getfattr info from the brick, after the bitrot
> > > signature was updated:
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ stat test53-bs10M-c1.nul
> > >   File: ‘test53-bs10M-c1.nul’
> > >   Size: 2621440         Blocks: 5120       IO Block: 4096   regular file
> > > Device: 821h/2081d      Inode: 536874168   Links: 2
> > > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2016-09-20 12:25:31.494886450 +0530
> > > Modify: 2016-09-20 12:23:28.994886646 +0530
> > > Change: 2016-09-20 12:27:00.994886307 +0530
> > >  Birth: -
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ sudo getfattr -m. -e hex -d test53-bs10M-c1.nul
> > > # file: test53-bs10M-c1.nul
> > > trusted.bit-rot.signature=0x0102000000000000006de7493c5c90f643357c268fbaaf461c1567e0334e4948023ce17268403aa37a
> > > trusted.bit-rot.version=0x020000000000000057daa7b50002e5b4
> > > trusted.ec.config=0x0000080501000200
> > > trusted.ec.size=0x0000000000a00000
> > > trusted.ec.version=0x00000000000000500000000000000050
> > > trusted.gfid=0xe2416bd1aae4403c88f44286273bbe99
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ stat test54-bs10M-c10.nul
> > >   File: ‘test54-bs10M-c10.nul’
> > >   Size: 26214400        Blocks: 51200      IO Block: 4096   regular file
> > > Device: 821h/2081d      Inode: 536874169   Links: 2
> > > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2016-09-20 12:25:47.510886425 +0530
> > > Modify: 2016-09-20 12:23:44.378886622 +0530
> > > Change: 2016-09-20 12:38:05.954885243 +0530
> > >  Birth: -
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ sudo getfattr -m. -e hex -d test54-bs10M-c10.nul
> > > # file: test54-bs10M-c10.nul
> > > trusted.bit-rot.signature=0x010200000000000000394c345f0b0c63ee652627a62eed069244d35c4d5134e4f07d4eabb51afda47e
> > > trusted.bit-rot.version=0x020000000000000057daa7b50002e5b4
> > > trusted.ec.config=0x0000080501000200
> > > trusted.ec.size=0x0000000006400000
> > > trusted.ec.version=0x00000000000003200000000000000320
> > > trusted.gfid=0x54e018dd8c5a4bd79e0317729d8a57c5
> > >
> > > (Actual time taken to read the file from the brick for md5sum:)
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ time md5sum test53-bs10M-c1.nul
> > > 8354dcaa18a1ecb52d0895bf00888c44  test53-bs10M-c1.nul
> > >
> > > real    0m0.045s
> > > user    0m0.007s
> > > sys     0m0.003s
> > >
> > > gfstst-node5:/media/disk2/brick2/data/G$ time md5sum test54-bs10M-c10.nul
> > > bed3c0a4a1407f584989b4009e9ce33f  test54-bs10M-c10.nul
> > >
> > > real    0m0.166s
> > > user    0m0.062s
> > > sys     0m0.011s
> > >
> > > As you can see, the file 'test54-bs10M-c10.nul' took around 12 minutes to
> > > get its bitrot signature updated (please refer to the stat output for the
> > > file).
> > >
> > > What would be the cause of such a slow read? Is there any limitation on
> > > reading data from the brick?
> > >
> > > Also, I am seeing this line in bitd.log; what does it mean?
> > > [bit-rot.c:1784:br_rate_limit_signer] 0-glsvol1-bit-rot-0: [Rate Limit
> > > Info] "tokens/sec (rate): 131072, maxlimit: 524288
> > >
> > >
> > > Thanks
> > > Amudhan P
> > >
> > >
> > > On Mon, Sep 19, 2016 at 1:00 PM, Kotresh Hiremath Ravishankar
> > > <khiremat@redhat.com> wrote:
> > >
> > > > Hi Amudhan,
> > > >
> > > > Thanks for testing out the bitrot feature and sorry for the delayed
> > > > response. Please find the answers inline.
> > > >
> > > > Thanks and Regards,
> > > > Kotresh H R
> > > >
> > > > ----- Original Message -----
> > > > > From: "Amudhan P" <amudhan83@gmail.com>
> > > > > To: "Gluster Users" <gluster-users@gluster.org>
> > > > > Sent: Friday, September 16, 2016 4:14:10 PM
> > > > > Subject: Re: [Gluster-users] 3.8.3 Bitrot signature process
> > > > >
> > > > > Hi,
> > > > >
> > > > > Can anyone reply to this mail?
> > > > >
> > > > > On Tue, Sep 13, 2016 at 12:49 PM, Amudhan P <amudhan83@gmail.com> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > I am testing the bitrot feature in Gluster 3.8.3 with a disperse EC
> > > > > volume 4+1.
> > > > >
> > > > > When I write a single small file (< 10MB), after 2 seconds I can see
> > > > > the bitrot signature in the bricks for the file, but when I write
> > > > > multiple files of different sizes (> 10MB) it takes a long time
> > > > > (> 24hrs) to see the bitrot signature on all the files.
> > > >
> > > >    The default timeout for signing to happen is 120 seconds, so the
> > > >    signing will happen 120 secs after the last fd gets closed on that
> > > >    file. So if the file is being written continuously, it will not be
> > > >    signed until 120 secs after its last fd is closed.
> > > > >
> > > > > My questions are:
> > > > > 1. I have enabled the scrub schedule as hourly and throttle as normal;
> > > > > does this have any impact in delaying the bitrot signature?
> > > >       No.
> > > > > 2. Other than "bitd.log", where else can I watch the current status of
> > > > > bitrot, like the number of files queued for signing and file status?
> > > >      Signing will happen 120 sec after the last fd closure, as said
> > > >      above. There is no status command which tracks the signature of
> > > >      the files. But there is a bitrot status command which tracks the
> > > >      number of files scrubbed.
> > > >
> > > >      #gluster vol bitrot <volname> scrub status
> > > >
> > > > > 3. Where can I confirm that all the files in the brick are bitrot
> > > > > signed?
> > > >
> > > >      As said, the signing information of all the files is not tracked.
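> > > >
> > > >      If you need to check manually, a rough sketch run against a brick
> > > >      (files still missing the signature xattr are the unsigned ones):
> > > >
> > > >      find /media/disk2/brick2 -path '*/.glusterfs' -prune -o -type f \
> > > >        -exec sh -c 'getfattr -n trusted.bit-rot.signature "$1" \
> > > >          >/dev/null 2>&1 || echo "unsigned: $1"' _ {} \;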
> > > >
> > > > > 4. Is there any file read size limit in bitrot?
> > > >
> > > >      I didn't get that. Could you please elaborate?
> > > >
> > > > > 5. Are there options for tuning bitrot for faster signing of files?
> > > >
> > > >      The bitrot feature is mainly to detect silent corruption (bitflips)
> > > >      of files due to long-term storage. Hence the signing happens, by
> > > >      default, 120 sec after the last fd closure. There is a tunable
> > > >      which can change the default 120 sec, but that's only for testing
> > > >      purposes and we don't recommend it.
> > > >
> > > >       gluster vol get <volname> features.expiry-time
> > > >
> > > >      For testing purposes, you can change this default and test.
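> > > >
> > > >      e.g., a sketch (for testing only, as said above):
> > > >
> > > >      gluster vol set <volname> features.expiry-time 20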
> > > > >
> > > > > Thanks
> > > > > Amudhan
> > > > >
> > > > > _______________________________________________
> > > > > Gluster-users mailing list
> > > > > Gluster-users@gluster.org
> > > > > http://www.gluster.org/mailman/listinfo/gluster-users