<div dir="ltr"><div><div>Tried it locally on my setup. Worked fine.<br><br></div>Could you please attach the mount logs?<br><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 25, 2016 at 6:55 PM, Pranith Kumar Karampuri <span dir="ltr">&lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">+Krutika<br></div><div class="gmail_extra"><div><div class="h5"><br><div class="gmail_quote">On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei <span dir="ltr">&lt;<a href="mailto:tchengwee@gmail.com" target="_blank">tchengwee@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
I am currently running a simple gluster setup using one server node<br>
with multiple disks. I realize that if i delete away all the .shard<br>
files in one replica in the backend, my application (dd) will report<br>
Input/Output error even though i have 3 replicas.<br>
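
Roughly, the sequence looks like this (a sketch of what I did; the
exact write command and which brick I removed the shards from are
examples, paths are from my setup below):

  # write a large sharded file through the FUSE mount
  dd if=/dev/urandom of=/mnt/fuseMount/ddTest bs=16M count=20 oflag=direct

  # on the backend, delete every shard piece from one brick (example: sde)
  rm -f /mnt/sde_mssd/testHeal2/.shard/*

  # read the file back through the mount -- this is where the error shows up
  dd of=/home/test if=/mnt/fuseMount/ddTest bs=16M count=20 oflag=direct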
<br>
My gluster version is 3.7.16<br>
<br>
gluster volume file<br>
<br>
Volume Name: testHeal<br>
Type: Replicate<br>
Volume ID: 26d16d7f-bc4f-44a6-a18b-eab780<wbr>d80851<br>
Status: Started<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 192.168.123.4:/mnt/sdb_mssd/te<wbr>stHeal2<br>
Brick2: 192.168.123.4:/mnt/sde_mssd/te<wbr>stHeal2<br>
Brick3: 192.168.123.4:/mnt/sdd_mssd/te<wbr>stHeal2<br>
Options Reconfigured:<br>
cluster.self-heal-daemon: on<br>
features.shard-block-size: 16MB<br>
features.shard: on<br>
performance.readdir-ahead: on<br>
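
For reference, the volume can be created along these lines (a sketch
reconstructed from the info above, not necessarily the exact commands
I ran; the mount command is an example):

  gluster volume create testHeal replica 3 \
      192.168.123.4:/mnt/sdb_mssd/testHeal2 \
      192.168.123.4:/mnt/sde_mssd/testHeal2 \
      192.168.123.4:/mnt/sdd_mssd/testHeal2 force
  gluster volume set testHeal features.shard on
  gluster volume set testHeal features.shard-block-size 16MB
  gluster volume start testHeal
  mount -t glusterfs 192.168.123.4:/testHeal /mnt/fuseMount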
<br>
dd error<br>
<br>
[root@fujitsu05 .shard]# dd of=/home/test if=/mnt/fuseMount/ddTest<br>
bs=16M count=20 oflag=direct<br>
dd: error reading ‘/mnt/fuseMount/ddTest’: Input/output error<br>
1+0 records in<br>
1+0 records out<br>
16777216 bytes (17 MB) copied, 0.111038 s, 151 MB/s<br>
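
I did not capture the heal status at this point, but it can be queried
with something like:

  # list entries pending heal / in split-brain for this volume
  gluster volume heal testHeal info
  gluster volume heal testHeal info split-brain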
<br>
in the .shard folder where i deleted all the .shard file, i can see<br>
one .shard file is recreated<br>
<br>
getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727.1<br>
# file: 9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727.1<br>
trusted.afr.testHeal-client-0=<wbr>0x000000010000000100000000<br>
trusted.afr.testHeal-client-2=<wbr>0x000000010000000100000000<br>
trusted.gfid=0x41b653f7daa1462<wbr>7b1f91f9e8554ddde<br>
<br>
However, the gfid is not the same compare to the other replicas<br>
<br>
getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727.1<br>
# file: 9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727.1<br>
trusted.afr.dirty=0x0000000000<wbr>00000000000000<br>
trusted.afr.testHeal-client-1=<wbr>0x000000000000000000000000<br>
trusted.bit-rot.version=0x0300<wbr>000000000000580dde99000e5e5d<br>
trusted.gfid=0x9ee5c5eed7964a6<wbr>cb9ac1a1419de5a40<br>
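
To compare, the gfid of this shard can be dumped on each brick
directly, e.g. (brick paths taken from the volume info above):

  for b in sdb sde sdd; do
      getfattr -n trusted.gfid -e hex \
          /mnt/${b}_mssd/testHeal2/.shard/9061198a-eb7e-45a2-93fb-eb396d1b2727.1
  done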
<br>
Is this consider a bug?<br>
<br>
Regards,<br>
<br>
Cwtan<br>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

--
Pranith