<div dir="ltr">Hello Krutika, Ravishankar,<div>Unfortunately, i deleted my previous test instance in AWS (running on EBS storage, on CentOS7 with XFS).</div><div>I was using 3.7.15 for Gluster.  It&#39;s good to know they should be the same. And, i have also set up another set of VMs quick locally, and use the same version of 3.7.15. It did return the same checksum. I will see if i have time and resources to set a test again in AWS.</div><div><br></div><div>Thank you both for the promptly reply,</div><div>Melvin</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 27, 2016 at 7:59 PM, Krutika Dhananjay <span dir="ltr">&lt;<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Worked fine for me actually.<br><br># md5sum lastlog <br>ab7557d582484a068c3478e3420693<wbr>26  lastlog<br># rsync -avH lastlog  /mnt/<br>sending incremental file list<br>lastlog<br><br>sent 364,001,522 bytes  received 35 bytes  48,533,540.93 bytes/sec<br>total size is 363,912,592  speedup is 1.00<br># cd /mnt<br># md5sum lastlog<br>ab7557d582484a068c3478e3420693<wbr>26  lastlog<span class="HOEnZb"><font color="#888888"><br><br></font></span></div><span class="HOEnZb"><font color="#888888">-Krutika<br><div><br></div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay <span dir="ltr">&lt;<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Hi,<br><br></div>What version of gluster are you using?<br>Also, could you share your volume configuration (`gluster volume info`)?<span class="m_-5875983066279508333HOEnZb"><font color="#888888"><br><br></font></span></div><span class="m_-5875983066279508333HOEnZb"><font color="#888888">-Krutika<br></font></span></div><div class="m_-5875983066279508333HOEnZb"><div class="m_-5875983066279508333h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N <span dir="ltr">&lt;<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  <div bgcolor="#FFFFFF" text="#000000"><span>
    <div>On 09/28/2016 12:16 AM, ML Wong wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">Hello Ravishankar,
        <div>Thanks for introducing the sharding feature to me.</div>
        <div>It does seems to resolve the problem i was encountering
          earlier. But I have 1 question, do we expect the checksum of
          the file to be different if i copy from directory A to a
          shard-enabled volume?</div>
      </div>
    </blockquote>
>>>
>>> No, the checksums must match. Perhaps Krutika, who works on sharding (CC'd), can help you figure out why that isn't the case here.
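>>>
>>> (Some background that may help narrow this down: with sharding, only the first shard-block-size chunk of a file is stored at the file's own path on the brick; the remaining chunks are stored in the hidden .shard directory on the bricks as <GFID>.1, <GFID>.2, and so on, and the client reassembles them transparently on reads, so a checksum taken over the mount must match the original. Something like the following shows the pieces on a brick; the brick path and GFID value below are illustrative:)
>>>
>>>   # getfattr -d -m . -e hex /data/brick1/oVirt-Live-4.0.4.iso | grep trusted.gfid
>>>   trusted.gfid=0x5f3c...
>>>   # ls /data/brick*/.shard/ | grep 5f3c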
>>> -Ravi
    <blockquote type="cite">
      <div dir="ltr">
        <div><br>
        </div>
        <div>
          <div>[xxxxx@ip-172-31-1-72 ~]$ sudo sha1sum
            /var/tmp/oVirt-Live-4.0.4.iso</div>
          <div>ea8472f6408163fa9a315d878c651a<wbr>519fc3f438
             /var/tmp/oVirt-Live-4.0.4.iso</div>
          <div>[xxxxx@ip-172-31-1-72 ~]$ sudo rsync -avH
            /var/tmp/oVirt-Live-4.0.4.iso /mnt/</div>
          <div>sending incremental file list</div>
          <div>oVirt-Live-4.0.4.iso</div>
          <div><br>
          </div>
          <div>sent 1373802342 bytes  received 31 bytes  30871963.44
            bytes/sec</div>
          <div>total size is 1373634560  speedup is 1.00</div>
          <div>[xxxxx@ip-172-31-1-72 ~]$ sudo sha1sum
            /mnt/oVirt-Live-4.0.4.iso</div>
          <div>14e9064857b40face90c91750d79c4<wbr>d8665b9cab
             /mnt/oVirt-Live-4.0.4.iso</div>
        </div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Mon, Sep 26, 2016 at 6:42 PM,
          Ravishankar N <span dir="ltr">&lt;<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"><span>
                <div>On 09/27/2016 05:15 AM, ML Wong wrote:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">Have anyone in the list who has tried
                    copying file which is bigger than the individual
                    brick/replica size?
                    <div>Test Scenario:</div>
                    <div>Distributed-Replicated volume, 2GB size, 2x2 =
                      4 bricks, 2 replicas</div>
                    <div>Each replica has 1GB</div>
                    <div><br>
                    </div>
                    <div>When i tried to copy file this volume, by both
                      fuse, or nfs mount. i get I/O error.</div>
                    <div>
                      <div>Filesystem                  Size  Used Avail
                        Use% Mounted on</div>
                      <div>/dev/mapper/vg0-brick1     1017M   33M  985M
                          4% /data/brick1</div>
                      <div>/dev/mapper/vg0-brick2     1017M  109M  909M
                         11% /data/brick2</div>
                      <div>lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G
                          7% /sharevol1</div>
                    </div>
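>>>>>>
>>>>>> (A 2 x 2 distributed-replicate volume of this shape would typically have been created along these lines; the host names below are illustrative, and consecutive bricks form the replica pairs:)
>>>>>>
>>>>>>   # gluster volume create sharevol1 replica 2 \
>>>>>>       cloud-dev1:/data/brick1 cloud-dev2:/data/brick1 \
>>>>>>       cloud-dev1:/data/brick2 cloud-dev2:/data/brick2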
>>>>>>
>>>>>> [xxxxxx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
>>>>>> 1.3G    /var/tmp/ovirt-live-el7-3.6.2.iso
>>>>>>
>>>>>> [melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso /sharevol1/
>>>>>> cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
>>>>>> cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
>>>>>> cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output error
>>>>>
>>>>> Does the mount log give you more information? If it were a disk-full issue, the error you would get is ENOSPC, not EIO. This looks like something else.
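>>>>>
>>>>> (On the client, the fuse mount log usually lives under /var/log/glusterfs/ and is named after the mount point, so for a mount at /sharevol1 something like this should show the errors around the failed write:)
>>>>>
>>>>>   # grep ' E ' /var/log/glusterfs/sharevol1.log | tail -20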
                <blockquote type="cite">
                  <div dir="ltr">
                    <div><br>
                    </div>
                    <div>I know, we have experts in this mailing list.
                      And, i assume, this is a common situation where
                      many Gluster users may have encountered.  The
                      worry i have what if you have a big VM file
                      sitting on top of Gluster volume ...?</div>
                    <div><br>
                    </div>
                  </div>
                </blockquote>
>>>>>
>>>>> It is recommended to use sharding (http://blog.gluster.org/2015/12/introducing-shard-translator/) for VM workloads to alleviate these kinds of issues.
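>>>>>
>>>>> (Enabling it is two volume-set commands; note that only files created after the option is turned on get sharded, and the 512MB block size below is just the value commonly suggested for VM images:)
>>>>>
>>>>>   # gluster volume set sharevol1 features.shard on
>>>>>   # gluster volume set sharevol1 features.shard-block-size 512MB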
>>>>> -Ravi
              <blockquote type="cite"><span>
                  <div dir="ltr">
                    <div>Any insights will be much appreciated.<br>
                    </div>
                    <div><br>
                    </div>
                  </div>
                  <br>
                  <fieldset></fieldset>
                  <br>
                </span>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users@gluster.org
>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users