<div dir="ltr">Hello Ravishankar,<div>Thanks for introducing the sharding feature to me.</div><div>It does seem to resolve the problem I was encountering earlier. But I have one question: do we expect the checksum of the file to be different if I copy it from directory A to a shard-enabled volume?</div><div><br></div><div><div>[xxxxx@ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso</div><div>ea8472f6408163fa9a315d878c651a519fc3f438 /var/tmp/oVirt-Live-4.0.4.iso</div><div>[xxxxx@ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso /mnt/</div><div>sending incremental file list</div><div>oVirt-Live-4.0.4.iso</div><div><br></div><div>sent 1373802342 bytes received 31 bytes 30871963.44 bytes/sec</div><div>total size is 1373634560 speedup is 1.00</div><div>[xxxxx@ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso</div><div>14e9064857b40face90c91750d79c4d8665b9cab /mnt/oVirt-Live-4.0.4.iso</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N <span dir="ltr"><<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class="">
<div>On 09/27/2016 05:15 AM, ML Wong wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Has anyone on the list tried copying a file
that is bigger than the individual brick/replica size?
<div>Test Scenario:</div>
<div>Distributed-Replicated volume, 2 GB total, 2x2 = 4 bricks, 2 replicas</div>
<div>Each replica is 1 GB</div>
<div><br>
</div>
<div>When I tried to copy a file to this volume, over both FUSE and NFS
mounts, I got an I/O error.</div>
<div>
<div>Filesystem Size Used Avail Use% Mounted on</div>
<div>/dev/mapper/vg0-brick1 1017M 33M 985M 4% /data/brick1</div>
<div>/dev/mapper/vg0-brick2 1017M 109M 909M 11% /data/brick2</div>
<div>lbre-cloud-dev1:/sharevol1 2.0G 141M 1.9G 7% /sharevol1</div>
</div>
<div><br>
</div>
<div>
<div>[xxxxxx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso</div>
<div>1.3G<span style="white-space:pre-wrap">        </span>/var/tmp/ovirt-live-el7-3.6.2.iso</div>
<div><br>
</div>
<div>[melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso /sharevol1/</div>
<div>cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error</div>
<div>cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error</div>
<div>cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error</div>
</div>
</div>
</blockquote>
<br></span>
Does the mount log give you more information? If it were a disk-full
issue, the error you would get would be ENOSPC, not EIO. This looks
like something else.<span class=""><br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>I know we have experts on this mailing list, and I assume
this is a common situation that many Gluster users
have encountered. My worry is: what if you have a big
VM file sitting on top of a Gluster volume ...?</div>
<div><br>
</div>
</div>
</blockquote></span>
It is recommended to use sharding
(<a href="http://blog.gluster.org/2015/12/introducing-shard-translator/" target="_blank">http://blog.gluster.org/2015/12/introducing-shard-translator/</a>) for
VM workloads to alleviate these kinds of issues.<br>
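For reference, sharding is enabled per volume via the shard translator's volume options. A minimal sketch, assuming a volume named sharevol1 (the 64MB block size below is just the documented default, shown as an illustrative value):<br>
<pre>
# Enable the shard translator on the volume (affects newly created files only)
gluster volume set sharevol1 features.shard on
# Optional: set the shard block size (default is 64MB)
gluster volume set sharevol1 features.shard-block-size 64MB
</pre>
Note that sharding applies only to files created after it is enabled; files already on the volume are not retroactively split into shards.<br>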
-Ravi<br>
<br>
<blockquote type="cite"><span class="">
<div dir="ltr">
<div>Any insights will be much appreciated.<br>
</div>
<div><br>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</span><pre>_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<p><br>
</p>
</div>
</blockquote></div><br></div>