<div dir="ltr">Hi Xavier<div><br></div><div>We are facing same I/O error after upgrade into gluster 3.7.2.</div><div><br></div><div><div>Description of problem:</div><div>=======================</div><div>In a 3 x (4 + 2) = 18 distributed disperse volume, there are input/output error of some files on fuse mount after simulating the following scenario</div><div><br></div><div>1.   Simulate the disk failure by killing the disk pid and again adding the same disk after formatting the drive </div><div>2.   Try to read the recovered or healed file after 2 bricks/nodes were brought down </div><div><pre class="" style="white-space:pre-wrap;word-wrap:break-word;width:50em;color:rgb(0,0,0)">Version-Release number of selected component (if applicable):
==============================================================
admin@node001:~$ sudo gluster --version
glusterfs 3.7.2 built on Jun 19 2015 16:33:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

Steps to Reproduce:
1. Create a 3 x (4 + 2) disperse volume across nodes
2. FUSE-mount the volume on the client and start creating files/directories with mkdir and rsync/dd
3. Simulate a disk failure by killing the pid of one brick on a node, then add the same disk back after formatting the drive
4. Start the volume with force
5. Self-heal creates the file with 0 bytes on the newly formatted drive
6. Wait for self-heal to finish, but healing does not progress; the file stays at 0 bytes
7. Read the same file from the client; the 0-byte copy now gets recovered and recovery completes. Getting the md5sum of the file with all nodes up gives the correct result
8. Now bring down 2 of the nodes
9. Now try to get the md5sum of the same recovered file; the client throws an I/O error

Screen shots:

admin@node001:~$ sudo gluster volume info

Volume Name: vaulttest21
Type: Distributed-Disperse
Volume ID: ac6a374d-a0a2-405c-823d-0672fd92f0af
Status: Started
Number of Bricks: 3 x (4 + 2) = 18
Transport-type: tcp
Bricks:
Brick1: 10.1.2.1:/media/disk1
Brick2: 10.1.2.2:/media/disk1
Brick3: 10.1.2.3:/media/disk1
Brick4: 10.1.2.4:/media/disk1
Brick5: 10.1.2.5:/media/disk1
Brick6: 10.1.2.6:/media/disk1
Brick7: 10.1.2.1:/media/disk2
Brick8: 10.1.2.2:/media/disk2
Brick9: 10.1.2.3:/media/disk2
Brick10: 10.1.2.4:/media/disk2
Brick11: 10.1.2.5:/media/disk2
Brick12: 10.1.2.6:/media/disk2
Brick13: 10.1.2.1:/media/disk3
Brick14: 10.1.2.2:/media/disk3
Brick15: 10.1.2.3:/media/disk3
Brick16: 10.1.2.4:/media/disk3
Brick17: 10.1.2.5:/media/disk3
Brick18: 10.1.2.6:/media/disk3
Options Reconfigured:
performance.readdir-ahead: on
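For reference, the commands behind steps 1, 3 and 4 are sketched below. This is only an outline of what we did, not an exact transcript: the device name (/dev/sdX) and <brick-pid> are placeholders, and the mkfs options are examples.

# step 1: create the 3 x (4 + 2) distributed-disperse volume (run from any one node)
gluster volume create vaulttest21 disperse 6 redundancy 2 \
    10.1.2.{1..6}:/media/disk1 \
    10.1.2.{1..6}:/media/disk2 \
    10.1.2.{1..6}:/media/disk3
gluster volume start vaulttest21

# steps 3-4: simulate a single-disk failure on node003 (brick /media/disk2)
gluster volume status vaulttest21        # note the pid of the 10.1.2.3:/media/disk2 brick
kill -9 <brick-pid>                      # "kill the disk pid"
umount /media/disk2
mkfs.xfs -f /dev/sdX                     # reformat the underlying drive (placeholder device)
mount /dev/sdX /media/disk2
gluster volume start vaulttest21 force   # bring the re-created, now empty brick back online
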

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">Thu Jun 25 <b>16:21:58</b> IST 2015</span></p><p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN"><br></span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">admin@node003:~$ ls -l -h 
/media/disk2</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">total 1.6G</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">drwxr-xr-x 3 root root   22 Jun 25
16:18 1</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN"><b>-rw-r--r-- 2 root root    0 Jun 25
16:17 up1</b></span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN"><b>-rw-r--r-- 2 root root    0 Jun 25
16:17 up2</b></span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">-rw-r--r-- 2 root root 797M Jun 25 16:03 up3</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">-rw-r--r-- 2 root root 797M Jun 25 16:04 up4</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">--</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">admin@node003:~$ date</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">Thu Jun 25 <b>16:25:09</b> IST 2015</span></p><p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN"><br></span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">admin@node003:~$ ls
-l -h  /media/disk2</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">total 1.6G</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">drwxr-xr-x 3 root root   22 Jun 25
16:18 1</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN"><b>-rw-r--r-- 2 root root    0 Jun 25
16:17 up1</b></span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN"><b>-rw-r--r-- 2 root root    0 Jun 25
16:17 up2</b></span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">-rw-r--r-- 2 root root 797M Jun 25 16:03 up3</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">-rw-r--r-- 2 root root 797M
Jun 25 16:04 up4</span></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><br></p><p class="MsoNormal" style="margin-bottom:0.0001pt">admin@node003:~$ date</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">Thu Jun 25
<b>16:41:25</b> IST 2015</p><p class="MsoNormal" style="margin-bottom:0.0001pt"><br></p>

<p class="MsoNormal" style="margin-bottom:0.0001pt"><span lang="EN">admin@node003:~$
</span> ls -l -h  /media/disk2</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">total 1.6G</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">drwxr-xr-x 3
root root   22 Jun 25 16:18 1</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">-rw-r--r-- 2
root root    0 Jun 25 16:17 up1</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">-rw-r--r-- 2
root root    0 Jun 25 16:17 up2</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">-rw-r--r-- 2
root root 797M Jun 25 16:03 up3</p>

<p class="MsoNormal" style="margin-bottom:0.0001pt">-rw-r--r-- 2
After waiting nearly 20 minutes, self-heal has still not recovered the full data chunk. We then try to read the file using md5sum:

root@mas03:/mnt/gluster# time md5sum up1
4650543ade404ed5a1171726e76f8b7c  up1

real    1m58.010s
user    0m6.243s
sys     0m0.778s

The missing chunk now starts growing on the recreated brick:

admin@node003:~$ ls -l -h /media/disk2
total 2.6G
drwxr-xr-x 3 root root   22 Jun 25 16:18 1
-rw-r--r-- 2 root root 797M Jun 25 15:57 up1
-rw-r--r-- 2 root root    0 Jun 25 16:17 up2
-rw-r--r-- 2 root root 797M Jun 25 16:03 up3
-rw-r--r-- 2 root root 797M Jun 25 16:04 up4

To verify the healed file after two nodes (5 and 6) were taken offline:

root@mas03:/mnt/gluster# time md5sum up1
md5sum: up1: Input/output error

So the I/O error is still not rectified. Could you suggest whether anything is wrong in our testing?
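With 4 + 2 redundancy every disperse subvolume should tolerate any 2 bricks being down, so reading up1 from the remaining 4 fragments ought to succeed; the EIO therefore suggests the fragment rebuilt on node003 is not consistent with the surviving ones. The check we ran boils down to the sketch below, assuming /mnt/gluster is the FUSE mount point; the client log name follows the usual mount-path naming and may differ on other setups.

cd /mnt/gluster
md5sum up1                                       # all 18 bricks up: 4650543ade404ed5a1171726e76f8b7c
# power off (or disconnect) node005 and node006, then repeat
md5sum up1                                       # expected the same checksum; observed: Input/output error
tail -n 100 /var/log/glusterfs/mnt-gluster.log   # client log with the corresponding EIO messages
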
admin@node001:~$ sudo gluster volume get vaulttest21 all
Option                                  Value
------                                  -----
cluster.lookup-unhashed                 on
cluster.lookup-optimize                 off
cluster.min-free-disk                   10%
cluster.min-free-inodes                 5%
cluster.rebalance-stats                 off
cluster.subvols-per-directory           (null)
cluster.readdir-optimize                off
cluster.rsync-hash-regex                (null)
cluster.extra-hash-regex                (null)
cluster.dht-xattr-name                  trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid    off
cluster.rebal-throttle                  normal
cluster.local-volume-name               (null)
cluster.weighted-rebalance              on
cluster.entry-change-log                on
cluster.read-subvolume                  (null)
cluster.read-subvolume-index            -1
cluster.read-hash-mode                  1
cluster.background-self-heal-count      16
cluster.metadata-self-heal              on
cluster.data-self-heal                  on
cluster.entry-self-heal                 on
cluster.self-heal-daemon                on
cluster.heal-timeout                    600
cluster.self-heal-window-size           1
cluster.data-change-log                 on
cluster.metadata-change-log             on
cluster.data-self-heal-algorithm        (null)
cluster.eager-lock                      on
cluster.quorum-type                     none
cluster.quorum-count                    (null)
cluster.choose-local                    true
cluster.self-heal-readdir-size          1KB
cluster.post-op-delay-secs              1
cluster.ensure-durability               on
cluster.consistent-metadata             no
cluster.stripe-block-size               128KB
cluster.stripe-coalesce                 true
diagnostics.latency-measurement         off
diagnostics.dump-fd-stats               off
diagnostics.count-fop-hits              off
diagnostics.brick-log-level             INFO
diagnostics.client-log-level            INFO
diagnostics.brick-sys-log-level         CRITICAL
diagnostics.client-sys-log-level        CRITICAL
diagnostics.brick-logger                (null)
diagnostics.client-logger               (null)
diagnostics.brick-log-format            (null)
diagnostics.client-log-format           (null)
diagnostics.brick-log-buf-size          5
diagnostics.client-log-buf-size         5
diagnostics.brick-log-flush-timeout     120
diagnostics.client-log-flush-timeout    120
performance.cache-max-file-size         0
performance.cache-min-file-size         0
performance.cache-refresh-timeout       1
performance.cache-priority
performance.cache-size                  32MB
performance.io-thread-count             16
performance.high-prio-threads           16
performance.normal-prio-threads         16
performance.low-prio-threads            16
performance.least-prio-threads          1
performance.enable-least-priority       on
performance.least-rate-limit            0
performance.cache-size                  128MB
performance.flush-behind                on
performance.nfs.flush-behind            on
performance.write-behind-window-size    1MB
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct             off
performance.nfs.strict-o-direct         off
performance.strict-write-ordering       off
performance.nfs.strict-write-ordering   off
performance.lazy-open                   yes
performance.read-after-open             no
performance.read-ahead-page-count       4
performance.md-cache-timeout            1
features.encryption                     off
encryption.master-key                   (null)
encryption.data-key-size                256
encryption.block-size                   4096
network.frame-timeout                   1800
network.ping-timeout                    42
network.tcp-window-size                 (null)
features.lock-heal                      off
features.grace-timeout                  10
network.remote-dio                      disable
client.event-threads                    2
network.ping-timeout                    42
network.tcp-window-size                 (null)
network.inode-lru-limit                 16384
auth.allow                              *
auth.reject                             (null)
transport.keepalive                     (null)
server.allow-insecure                   (null)
server.root-squash                      off
server.anonuid                          65534
server.anongid                          65534
server.statedump-path                   /var/run/gluster
server.outstanding-rpc-limit            64
features.lock-heal                      off
features.grace-timeout                  (null)
server.ssl                              (null)
auth.ssl-allow                          *
server.manage-gids                      off
client.send-gids                        on
server.gid-timeout                      300
server.own-thread                       (null)
server.event-threads                    2
performance.write-behind                on
performance.read-ahead                  on
performance.readdir-ahead               on
performance.io-cache                    on
performance.quick-read                  on
performance.open-behind                 on
performance.stat-prefetch               on
performance.client-io-threads           off
performance.nfs.write-behind            on
performance.nfs.read-ahead              off
performance.nfs.io-cache                off
performance.nfs.quick-read              off
performance.nfs.stat-prefetch           off
performance.nfs.io-threads              off
performance.force-readdirp              true
features.file-snapshot                  off
features.uss                            off
features.snapshot-directory             .snaps
features.show-snapshot-directory        off
network.compression                     off
network.compression.window-size         -15
network.compression.mem-level           8
network.compression.min-size            0
network.compression.compression-level   -1
network.compression.debug               false
features.limit-usage                    (null)
features.quota-timeout                  0
features.default-soft-limit             80%
features.soft-timeout                   60
features.hard-timeout                   5
features.alert-time                     86400
features.quota-deem-statfs              off
geo-replication.indexing                off
geo-replication.indexing                off
geo-replication.ignore-pid-check        off
geo-replication.ignore-pid-check        off
features.quota                          off
features.inode-quota                    off
features.bitrot                         disable
debug.trace                             off
debug.log-history                       no
debug.log-file                          no
debug.exclude-ops                       (null)
debug.include-ops                       (null)
debug.error-gen                         off
debug.error-failure                     (null)
debug.error-number                      (null)
debug.random-failure                    off
debug.error-fops                        (null)
nfs.enable-ino32                        no
nfs.mem-factor                          15
nfs.export-dirs                         on
nfs.export-volumes                      on
nfs.addr-namelookup                     off
nfs.dynamic-volumes                     off
nfs.register-with-portmap               on
nfs.outstanding-rpc-limit               16
nfs.port                                2049
nfs.rpc-auth-unix                       on
nfs.rpc-auth-null                       on
nfs.rpc-auth-allow                      all
nfs.rpc-auth-reject                     none
nfs.ports-insecure                      off
nfs.trusted-sync                        off
nfs.trusted-write                       off
nfs.volume-access                       read-write
nfs.export-dir
nfs.disable                             false
nfs.nlm                                 on
nfs.acl                                 on
nfs.mount-udp                           off
nfs.mount-rmtab                         /var/lib/glusterd/nfs/rmtab
nfs.rpc-statd                           /sbin/rpc.statd
nfs.server-aux-gids                     off
nfs.drc                                 off
nfs.drc-size                            0x20000
nfs.read-size                           (1 * 1048576ULL)
nfs.write-size                          (1 * 1048576ULL)
nfs.readdir-size                        (1 * 1048576ULL)
nfs.exports-auth-enable                 (null)
nfs.auth-refresh-interval-sec           (null)
nfs.auth-cache-ttl-sec                  (null)
features.read-only                      off
features.worm                           off
storage.linux-aio                       off
storage.batch-fsync-mode                reverse-fsync
storage.batch-fsync-delay-usec          0
storage.owner-uid                       -1
storage.owner-gid                       -1
storage.node-uuid-pathinfo              off
storage.health-check-interval           30
storage.build-pgfid                     off
storage.bd-aio                          off
cluster.server-quorum-type              off
cluster.server-quorum-ratio             0
changelog.changelog                     off
changelog.changelog-dir                 (null)
changelog.encoding                      ascii
changelog.rollover-time                 15
changelog.fsync-interval                5
changelog.changelog-barrier-timeout     120
changelog.capture-del-path              off
features.barrier                        disable
features.barrier-timeout                120
features.trash                          off
features.trash-dir                      .trashcan
features.trash-eliminate-path           (null)
features.trash-max-filesize             5MB
features.trash-internal-op              off
cluster.enable-shared-storage           disable
features.ctr-enabled                    off
features.record-counters                off
features.ctr_link_consistency           off
locks.trace                             (null)
cluster.disperse-self-heal-daemon       enable
cluster.quorum-reads                    no
client.bind-insecure                    (null)
ganesha.enable                          off
features.shard                          off
features.shard-block-size               4MB
features.scrub-throttle                 lazy
features.scrub-freq                     biweekly
features.expiry-time                    120
features.cache-invalidation             off
features.cache-invalidation-timeout     60

Thanks & regards
Backer


On Mon, Jun 15, 2015 at 1:26 PM, Xavier Hernandez <xhernandez@datalab.es> wrote:

On 06/15/2015 09:25 AM, Mohamed Pakkeer wrote:
Hi Xavier,<br>
<br><span class="">
When can we expect the 3.7.2 release for fixing the I/O error which we<br>
discussed on this mail thread?.<br>
<br>
As per the latest meeting held last wednesday [1] it will be released this week.<br>
<br>
Xavi<br>
<br>
[1] <a href="http://meetbot.fedoraproject.org/gluster-meeting/2015-06-10/gluster-meeting.2015-06-10-12.01.html" rel="noreferrer" target="_blank">http://meetbot.fedoraproject.org/gluster-meeting/2015-06-10/gluster-meeting.2015-06-10-12.01.html</a><br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">
<br>
Thanks<br>
Backer<br>
<br>
On Wed, May 27, 2015 at 8:02 PM, Xavier Hernandez &lt;<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a><br></span><span class="">
&lt;mailto:<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a>&gt;&gt; wrote:<br>
<br>
    Hi again,<br>
<br>
    in today's gluster meeting [1] it has been decided that 3.7.1 will<br>
    be released urgently to solve a bug in glusterd. All fixes planned<br>
    for 3.7.1 will be moved to 3.7.2 which will be released soon after.<br>
<br>
    Xavi<br>
<br>
    [1]<br>
    <a href="http://meetbot.fedoraproject.org/gluster-meeting/2015-05-27/gluster-meeting.2015-05-27-12.01.html" rel="noreferrer" target="_blank">http://meetbot.fedoraproject.org/gluster-meeting/2015-05-27/gluster-meeting.2015-05-27-12.01.html</a><br>
<br>
<br>
    On 05/27/2015 12:01 PM, Xavier Hernandez wrote:<br>
<br>
        On 05/27/2015 11:26 AM, Mohamed Pakkeer wrote:<br>
<br>
            Hi Xavier,<br>
<br>
            Thanks for your reply. When can we expect the 3.7.1 release?<br>
<br>
<br>
        AFAIK a beta of 3.7.1 will be released very soon.<br>
<br>
<br>
            cheers<br>
            Backer<br>
<br>
            On Wed, May 27, 2015 at 1:22 PM, Xavier Hernandez<br>
            &lt;<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a> &lt;mailto:<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a>&gt;<br></span>
            &lt;mailto:<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a><div><div class="h5"><br>
            &lt;mailto:<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a>&gt;&gt;&gt; wrote:<br>
<br>
                 Hi,<br>
<br>
                 some Input/Output error issues have been identified and<br>
            fixed. These<br>
                 fixes will be available on 3.7.1.<br>
<br>
                 Xavi<br>
<br>
<br>
                 On 05/26/2015 10:15 AM, Mohamed Pakkeer wrote:<br>
<br>
                     Hi Glusterfs Experts,<br>
<br>
                     We are testing glusterfs 3.7.0 tarball on our 10<br>
            Node glusterfs<br>
                     cluster.<br>
                      Each node has 36 drives and please find the volume<br>
            info below<br>
<br>
                     Volume Name: vaulttest5<br>
                     Type: Distributed-Disperse<br>
                     Volume ID: 68e082a6-9819-4885-856c-1510cd201bd9<br>
                     Status: Started<br>
                     Number of Bricks: 36 x (8 + 2) = 360<br>
                     Transport-type: tcp<br>
                     Bricks:<br>
                     Brick1: 10.1.2.1:/media/disk1<br>
                     Brick2: 10.1.2.2:/media/disk1<br>
                     Brick3: 10.1.2.3:/media/disk1<br>
                     Brick4: 10.1.2.4:/media/disk1<br>
                     Brick5: 10.1.2.5:/media/disk1<br>
                     Brick6: 10.1.2.6:/media/disk1<br>
                     Brick7: 10.1.2.7:/media/disk1<br>
                     Brick8: 10.1.2.8:/media/disk1<br>
                     Brick9: 10.1.2.9:/media/disk1<br>
                     Brick10: 10.1.2.10:/media/disk1<br>
                     Brick11: 10.1.2.1:/media/disk2<br>
                     Brick12: 10.1.2.2:/media/disk2<br>
                     Brick13: 10.1.2.3:/media/disk2<br>
                     Brick14: 10.1.2.4:/media/disk2<br>
                     Brick15: 10.1.2.5:/media/disk2<br>
                     Brick16: 10.1.2.6:/media/disk2<br>
                     Brick17: 10.1.2.7:/media/disk2<br>
                     Brick18: 10.1.2.8:/media/disk2<br>
                     Brick19: 10.1.2.9:/media/disk2<br>
                     Brick20: 10.1.2.10:/media/disk2<br>
                     ...<br>
                     ....<br>
                     Brick351: 10.1.2.1:/media/disk36<br>
                     Brick352: 10.1.2.2:/media/disk36<br>
                     Brick353: 10.1.2.3:/media/disk36<br>
                     Brick354: 10.1.2.4:/media/disk36<br>
                     Brick355: 10.1.2.5:/media/disk36<br>
                     Brick356: 10.1.2.6:/media/disk36<br>
                     Brick357: 10.1.2.7:/media/disk36<br>
                     Brick358: 10.1.2.8:/media/disk36<br>
                     Brick359: 10.1.2.9:/media/disk36<br>
                     Brick360: 10.1.2.10:/media/disk36<br>
                     Options Reconfigured:<br>
                     performance.readdir-ahead: on<br>
<br>
                     We did some performance testing and simulated the<br>
            proactive self<br>
                     healing<br>
                     for Erasure coding. Disperse volume has been<br>
            created across<br>
            nodes.<br>
<br>
                     _*Description of problem*_<br>
<br>
                     I disconnected the *network of two nodes* and tried<br>
            to write<br>
                     some video<br>
                     files and *glusterfs* *wrote the video files on<br>
            balance 8 nodes<br>
                     perfectly*. I tried to download the uploaded file<br>
            and it was<br>
                     downloaded<br>
                     perfectly. Then i enabled the network of two nodes,<br>
            the pro<br>
                     active self<br>
                     healing mechanism worked perfectly and wrote the<br>
            unavailable<br>
            junk of<br>
                     data to the recently enabled node from the other 8<br>
            nodes. But<br>
            when i<br>
                     tried to download the same file node, it showed<br>
            Input/Output<br>
                     error. I<br>
             couldn't download the file. I think there is an<br>
            issue in pro<br>
                     active self<br>
                     healing.<br>
<br>
                     Also we tried the simulation with one node network<br>
            failure. We<br>
            faced<br>
                     same I/O error issue while downloading the file<br>
<br>
<br>
                     _Error while downloading file _<br>
                     _<br>
                     _<br>
<br>
                     root@master02:/home/admin# rsync -r --progress<br>
                     /mnt/gluster/file13_AN<br>
                     ./1/file13_AN-2<br>
<br>
                     sending incremental file list<br>
<br>
                     file13_AN<br>
<br>
                         3,342,355,597 100% 4.87MB/s    0:10:54 (xfr#1,<br>
            to-chk=0/1)<br>
<br>
                      rsync: read errors mapping "/mnt/gluster/file13_AN":<br>
                     Input/output error (5)<br>
<br>
                     WARNING: file13_AN failed verification -- update<br>
            discarded (will<br>
                     try again).<br>
<br>
                        root@master02:/home/admin# cp /mnt/gluster/file13_AN<br>
                     ./1/file13_AN-3<br>
<br>
                     cp: error reading ‘/mnt/gluster/file13_AN’:<br>
            Input/output error<br>
<br>
                     cp: failed to extend ‘./1/file13_AN-3’:<br>
            Input/output error_<br>
                     _<br>
<br>
<br>
                      We can't conclude the issue with glusterfs 3.7.0 or<br>
            our glusterfs<br>
                     configuration.<br>
<br>
                     Any help would be greatly appreciated<br>
<br>
                     --<br>
                     Cheers<br>
                     Backer<br>
<br>
<br>
<br>
                     _______________________________________________<br>
                     Gluster-users mailing list<br>
            <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> &lt;mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>&gt;<br></div></div>
            &lt;mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><span class=""><br>
            &lt;mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>&gt;&gt;<br>
            <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
<br>
<br>
<br>
<br>
<br>
        _______________________________________________<br>
        Gluster-users mailing list<br></span>
        <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> &lt;mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>&gt;<br>
        <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
<br>
<br>
<br>
<br>
-- 
Thanks & Regards
K. Mohamed Pakkeer
Mobile: 0091-8754410114