<div dir="ltr"><div><div>Found the RC. The problem seems to be that sharding translator attempts to create<br>non-existent shards in read/write codepaths with a newly generated gfid attached<br></div>to the create request in case the shard is absent. Replicate translator, which sits below<br>sharding on the stack takes this request and plays it on all of its replicas. On two of them it<br></div><div>fails with EEXIST, and on the one where the shards were removed from the backend, the<br></div><div>shard path is created but with the newly generated gfid while the other two replicas continue to<br></div><div>hold the original gfid (the one prior to rm -rf). Although this can be fixed, it will require one<br></div><div>additional lookup for each shard for each read/write operation, causing the latency of the read/write<br></div><div>response to the application to increase by a factor of 1 network call.<br></div><div><br><div>The test you&#39;re doing is partially (but not fully) manipulating and removing data from the backend,<br></div>which is not recommended.<br><br></div><div>My question to you is this - what is the specific failure that you are trying to simulate with removal of<br></div><div>contents of .shard? Normally, the `rm -rf on backend` type of tests are performed to simulate disk<br></div><div>failure and its replacement with a brand new disk, in which case executing the replace-brick/reset-brick<br></div><div>commands should be sufficient to recover all contents from the remaining two replicas.<br><br></div><div>-Krutika<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Oct 27, 2016 at 12:49 PM, Krutika Dhananjay <span dir="ltr">&lt;<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Now it&#39;s reproducible, thanks. :)<br><br></div><div>I think I know the RC. Let me confirm it through tests and report back.<span class="HOEnZb"><font color="#888888"><br><br></font></span></div><span class="HOEnZb"><font color="#888888"><div>-Krutika<br></div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Oct 27, 2016 at 10:42 AM, qingwei wei <span dir="ltr">&lt;<a href="mailto:tchengwee@gmail.com" target="_blank">tchengwee@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>

I did a few more test runs and it seems that the problem happens with this
sequence (a rough command sketch follows the list):

1. Populate data using dd.
2. Delete ALL the shard files in the .shard folder of one brick.
3. Try to access the file using dd; no error is reported.
4. Unmount and mount again.
5. Try to access the file using dd; no error is reported.
6. Unmount and mount again.
7. Try to access the file using dd; an Input/Output error is reported.

During steps 3 and 4, no file is created under the .shard directory. At step
7, a shard file is created with the same file name but a different gfid
compared to the other good replicas.

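Roughly, the commands for this sequence are as follows (the file name ddTest,
the mount point /mnt/fuseMount and the brick path /mnt/sdc_mssd/testHeal4 are
examples from this setup, <server> is a placeholder, and this is a sketch
rather than the exact commands I ran):

dd if=/dev/urandom of=/mnt/fuseMount/ddTest bs=16M count=20 oflag=direct  # step 1
rm -f /mnt/sdc_mssd/testHeal4/.shard/*    # step 2, directly on the backend of ONE brick
dd of=/dev/null if=/mnt/fuseMount/ddTest bs=16M count=20 iflag=direct     # steps 3, 5 and 7
umount /mnt/fuseMount                     # steps 4 and 6
mount -t glusterfs <server>:/testHeal4 /mnt/fuseMount
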
Below are the client log and the brick log; more details are in the attached log.

Client log:

[2016-10-27 04:34:46.493281] D [MSGID: 0]
[shard.c:3138:shard_common_mknod_cbk] 0-testHeal4-shard: mknod of
shard 1 failed: File exists
[2016-10-27 04:34:46.493351] D [MSGID: 0]
[dht-common.c:2633:dht_lookup] 0-testHeal4-dht: Calling fresh lookup
for /.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1 on
testHeal4-replicate-0
[2016-10-27 04:34:46.494646] W [MSGID: 114031]
[client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal4-client-0:
remote operation failed. Path: (null)
(00000000-0000-0000-0000-000000000000) [Invalid argument]
[2016-10-27 04:34:46.494673] D [MSGID: 0]
[client-rpc-fops.c:2989:client3_3_lookup_cbk] 0-stack-trace:
stack-address: 0x7f9083edc1c8, testHeal4-client-0 returned -1 error:
Invalid argument [Invalid argument]
[2016-10-27 04:34:46.494705] W [MSGID: 114031]
[client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal4-client-1:
remote operation failed. Path: (null)
(00000000-0000-0000-0000-000000000000) [Invalid argument]
[2016-10-27 04:34:46.494710] W [MSGID: 114031]
[client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-testHeal4-client-2:
remote operation failed. Path: (null)
(00000000-0000-0000-0000-000000000000) [Invalid argument]
[2016-10-27 04:34:46.494730] D [MSGID: 0]
[client-rpc-fops.c:2989:client3_3_lookup_cbk] 0-stack-trace:
stack-address: 0x7f9083edc1c8, testHeal4-client-1 returned -1 error:
Invalid argument [Invalid argument]
[2016-10-27 04:34:46.494751] D [MSGID: 0]
[client-rpc-fops.c:2989:client3_3_lookup_cbk] 0-stack-trace:
stack-address: 0x7f9083edc1c8, testHeal4-client-2 returned -1 error:
Invalid argument [Invalid argument]
[2016-10-27 04:34:46.495339] D [MSGID: 0]
[afr-common.c:1986:afr_lookup_done] 0-stack-trace: stack-address:
0x7f9083edbb1c, testHeal4-replicate-0 returned -1 error: Input/output
error [Input/output error]
[2016-10-27 04:34:46.495364] D [MSGID: 0]
[dht-common.c:2220:dht_lookup_cbk] 0-testHeal4-dht: fresh_lookup
returned for /.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1 with
op_ret -1 [Input/output error]
[2016-10-27 04:34:46.495374] D [MSGID: 0]
[dht-common.c:2300:dht_lookup_cbk] 0-testHeal4-dht: Lookup of
/.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1 for subvolume
testHeal4-replicate-0 failed [Input/output error]
[2016-10-27 04:34:46.495384] D [MSGID: 0]
[dht-common.c:2363:dht_lookup_cbk] 0-stack-trace: stack-address:
0x7f9083edbb1c, testHeal4-dht returned -1 error: Input/output error
[Input/output error]
[2016-10-27 04:34:46.495395] E [MSGID: 133010]
[shard.c:1582:shard_common_lookup_shards_cbk] 0-testHeal4-shard:
Lookup on shard 1 failed. Base file gfid =
76bc4b0f-bb18-4736-8327-99098cd0d7ce [Input/output error]
[2016-10-27 04:34:46.495406] D [MSGID: 0]
[shard.c:3086:shard_post_lookup_shards_readv_handler] 0-stack-trace:
stack-address: 0x7f9083edbb1c, testHeal4-shard returned -1 error:
Input/output error [Input/output error]
[2016-10-27 04:34:46.495417] D [MSGID: 0]
[defaults.c:1010:default_readv_cbk] 0-stack-trace: stack-address:
0x7f9083edbb1c, testHeal4-write-behind returned -1 error: Input/output
error [Input/output error]
[2016-10-27 04:34:46.495428] D [MSGID: 0]
[read-ahead.c:462:ra_readv_disabled_cbk] 0-stack-trace: stack-address:
0x7f9083edbb1c, testHeal4-read-ahead returned -1 error: Input/output
error [Input/output error]
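
The differing gfids behind the Input/output error above can be checked
directly on the bricks with something like the following (only the
/mnt/sdc_mssd/testHeal4 brick path appears in the logs; the other two brick
paths are placeholders):

getfattr -n trusted.gfid -e hex /mnt/sdc_mssd/testHeal4/.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1
getfattr -n trusted.gfid -e hex <brick2-path>/.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1
getfattr -n trusted.gfid -e hex <brick3-path>/.shard/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1

The brick whose .shard contents were deleted ends up reporting a different
trusted.gfid from the other two replicas.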

Brick log:

[2016-10-27 04:34:46.492055] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: STATFS<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.492157] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 3<br>
[2016-10-27 04:34:46.492180] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: ENTRYLK<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.492239] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_statfs_cbk+0x112)<br>
[0x7efebb36f8e2]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.492271] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.492535] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_mknod+0x80)<br>
[0x7efebb37b690]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.492565] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: MKNOD<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.492843] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_mknod_cbk+0x5ad)<br>
[0x7efebb383c9d]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.492981] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_xattrop+0x86)<br>
[0x7efebb3789d6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.493056] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: XATTROP<br>
scheduled as slow fop<br>
[2016-10-27 04:34:46.493128] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 3<br>
[2016-10-27 04:34:46.493148] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: ENTRYLK<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.493214] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_xattrop_cbk+0xd9)<br>
[0x7efebb370579]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.493239] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.493490] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.493514] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: LOOKUP<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.493666] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.493782] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.493986] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.494596] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.494616] D [logging.c:1954:_gf_msg_internal]
0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
About to flush least recently used log message to disk
[2016-10-27 04:34:46.493818] D [MSGID: 0]
[io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: LOOKUP
scheduled as fast fop
[2016-10-27 04:34:46.494616] W [MSGID: 115009]
[server-resolve.c:574:server_resolve] 0-testHeal4-server: no
resolution type for (null) (LOOKUP)
[2016-10-27 04:34:46.494650] E [MSGID: 115050]
[server-rpc-fops.c:179:server_lookup_cbk] 0-testHeal4-server: 29:
LOOKUP (null) (00000000-0000-0000-0000-000000000000/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1)
==> (Invalid argument) [Invalid argument]
[2016-10-27 04:34:46.494720] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.494936] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.494967] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: LOOKUP<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.495108] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.595813] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.595915] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 3<br>
[2016-10-27 04:34:46.596054] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.596162] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.596427] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.596453] D [logging.c:1954:_gf_msg_intern<wbr>al]<br>
0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
About to flush least recently used log message to disk<br>
The message &quot;D [MSGID: 0] [io-threads.c:351:iot_schedule<wbr>]<br>
0-testHeal4-io-threads: LOOKUP scheduled as fast fop&quot; repeated 2 times<br>
between [2016-10-27 04:34:46.494967] and [2016-10-27 04:34:46.595944]<br>
[2016-10-27 04:34:46.596453] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: ENTRYLK<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.596551] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 3<br>
[2016-10-27 04:34:46.596603] D [logging.c:1954:_gf_msg_intern<wbr>al]<br>
0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
About to flush least recently used log message to disk<br>
[2016-10-27 04:34:46.596611] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.596570] D [MSGID: 0]
[io-threads.c:351:iot_schedule] 0-testHeal4-io-threads: ENTRYLK
scheduled as normal fop
[2016-10-27 04:34:46.596602] D [MSGID: 0]
[entrylk.c:701:pl_common_entrylk] 0-stack-trace: stack-address:
0x7efecd3db738, testHeal4-locks returned -1 error: Resource
temporarily unavailable [Resource temporarily unavailable]
[2016-10-27 04:34:46.596650] D [MSGID: 0]
[defaults.c:1196:default_entrylk_cbk] 0-stack-trace: stack-address:
0x7efecd3db738, testHeal4-io-threads returned -1 error: Resource
temporarily unavailable [Resource temporarily unavailable]
[2016-10-27 04:34:46.596664] D [MSGID: 0]
[io-stats.c:1811:io_stats_entrylk_cbk] 0-stack-trace: stack-address:
0x7efecd3db738, /mnt/sdc_mssd/testHeal4 returned -1 error: Resource
temporarily unavailable [Resource temporarily unavailable]
[2016-10-27 04:34:46.596676] D [MSGID: 115054]
[server-rpc-fops.c:350:server_entrylk_cbk] 0-testHeal4-server: 34:
ENTRYLK /.shard (be318638-e8a0-4c6d-977d-7a937aa84806) ==> (Resource
temporarily unavailable) [Resource temporarily unavailable]
[2016-10-27 04:34:46.596764] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.596791] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_mknod+0x80)<br>
[0x7efebb37b690]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.596813] D [MSGID: 0]
[server-resolve.c:330:resolve_entry_simple] 0-testHeal4-server: inode
(pointer: 0x7efea1aebaa0 gfid:041e3b34-14c2-4bb1-82e2-db352232c3cf
found for path ((null)) while type is RESOLVE_NOT
[2016-10-27 04:34:46.596828] D [MSGID: 115057]
[server-rpc-fops.c:563:server_mknod_cbk] 0-testHeal4-server: 35: MKNOD
(null) (be318638-e8a0-4c6d-977d-7a937aa84806/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1)
==> (File exists) [File exists]
[2016-10-27 04:34:46.596896] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_mknod_cbk+0x5ad)<br>
[0x7efebb383c9d]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.597174] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.597199] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: ENTRYLK<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.597289] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.597396] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.597571] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.597604] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.597632] D [logging.c:1954:_gf_msg_intern<wbr>al]<br>
0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
About to flush least recently used log message to disk<br>
[2016-10-27 04:34:46.597415] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: ENTRYLK<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.597632] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: LOOKUP<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.597864] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.598116] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_mknod+0x80)<br>
[0x7efebb37b690]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.598134] D [MSGID: 0]
[server-resolve.c:330:resolve_entry_simple] 0-testHeal4-server: inode
(pointer: 0x7efea1aebaa0 gfid:041e3b34-14c2-4bb1-82e2-db352232c3cf
found for path ((null)) while type is RESOLVE_NOT
[2016-10-27 04:34:46.598147] D [MSGID: 115057]
[server-rpc-fops.c:563:server_mknod_cbk] 0-testHeal4-server: 39: MKNOD
(null) (be318638-e8a0-4c6d-977d-7a937aa84806/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1)
==> (File exists) [File exists]
[2016-10-27 04:34:46.598205] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_mknod_cbk+0x5ad)<br>
[0x7efebb383c9d]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.598258] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.598301] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: LOOKUP<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.598580] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.598599] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_entrylk+0x93)<br>
[0x7efebb37d633]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.598619] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: ENTRYLK<br>
scheduled as normal fop<br>
[2016-10-27 04:34:46.598754] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_entrylk_cbk+0xa6)<br>
[0x7efebb3713a6]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.598921] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.598938] W [MSGID: 115009]
[server-resolve.c:574:server_resolve] 0-testHeal4-server: no
resolution type for (null) (LOOKUP)
[2016-10-27 04:34:46.598951] E [MSGID: 115050]
[server-rpc-fops.c:179:server_lookup_cbk] 0-testHeal4-server: 42:
LOOKUP (null) (00000000-0000-0000-0000-000000000000/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1)
==> (Invalid argument) [Invalid argument]
[2016-10-27 04:34:46.599007] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.599059] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.599081] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: LOOKUP<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.599215] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 3<br>
[2016-10-27 04:34:46.599379] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.599412] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 3<br>
[2016-10-27 04:34:46.599505] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.599584] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.599783] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.599807] D [logging.c:1954:_gf_msg_internal]
0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
About to flush least recently used log message to disk
The message "D [MSGID: 0] [io-threads.c:351:iot_schedule]
0-testHeal4-io-threads: LOOKUP scheduled as fast fop" repeated 2 times
between [2016-10-27 04:34:46.599081] and [2016-10-27 04:34:46.599439]
[2016-10-27 04:34:46.599806] W [MSGID: 115009]
[server-resolve.c:574:server_resolve] 0-testHeal4-server: no
resolution type for (null) (LOOKUP)
[2016-10-27 04:34:46.599833] E [MSGID: 115050]
[server-rpc-fops.c:179:server_lookup_cbk] 0-testHeal4-server: 46:
LOOKUP (null) (00000000-0000-0000-0000-000000000000/76bc4b0f-bb18-4736-8327-99098cd0d7ce.1)
==> (Invalid argument) [Invalid argument]
[2016-10-27 04:34:46.599893] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:46.600183] D [client_t.c:333:gf_client_ref]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver3_3_lookup+0x8b)<br>
[0x7efebb386beb]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(get<wbr>_frame_from_request+0x257)<br>
[0x7efebb36cfd7] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_ref+0x68)<br>
[0x7efecfadf608] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 2<br>
[2016-10-27 04:34:46.600206] D [MSGID: 0]<br>
[io-threads.c:351:iot_schedule<wbr>] 0-testHeal4-io-threads: LOOKUP<br>
scheduled as fast fop<br>
[2016-10-27 04:34:46.600336] D [client_t.c:417:gf_client_unre<wbr>f]<br>
(--&gt;/usr/lib64/glusterfs/3.7.1<wbr>6/xlator/protocol/server.so(se<wbr>rver_lookup_cbk+0x548)<br>
[0x7efebb3864c8]<br>
--&gt;/usr/lib64/glusterfs/3.7.16<wbr>/xlator/protocol/server.so(ser<wbr>ver_submit_reply+0x123)<br>
[0x7efebb368f13] --&gt;/lib64/libglusterfs.so.0(gf<wbr>_client_unref+0x77)<br>
[0x7efecfadf787] ) 0-client_t:<br>
fujitsu05.dctopenstack.org-606<wbr>4-2016/10/27-04:34:44:217958-<wbr>testHeal4-client-1-0-0:<br>
ref-count 1<br>
[2016-10-27 04:34:59.343124] D<br>
[logging.c:1830:gf_log_flush_t<wbr>imeout_cbk] 0-logging-infra: Log timer<br>
timed out. About to flush outstanding messages if present<br>
[2016-10-27 04:34:59.343202] D<br>
[logging.c:1792:__gf_log_injec<wbr>t_timer_event] 0-logging-infra: Starting<br>
timer now. Timeout = 120, current buf size = 5<br>


Thanks.

Regards,

Cwtan

On Wed, Oct 26, 2016 at 8:09 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
> Do you also have the brick logs? Looks like the bricks are returning EINVAL
> on lookup
> which AFR is subsequently converting into an EIO. And sharding is merely
> delivering the same error code upwards.
>
> -Krutika
>
> On Wed, Oct 26, 2016 at 6:38 AM, qingwei wei <tchengwee@gmail.com> wrote:
>>
>> Hi,
>>
>> Pls see the client log below.
>>
&gt;&gt; [2016-10-24 10:29:51.111603] I [fuse-bridge.c:5171:fuse_graph<wbr>_setup]<br>
&gt;&gt; 0-fuse: switched to graph 0<br>
&gt;&gt; [2016-10-24 10:29:51.111662] I [MSGID: 114035]<br>
&gt;&gt; [client-handshake.c:193:client<wbr>_set_lk_version_cbk]<br>
&gt;&gt; 0-testHeal-client-2: Server lk version = 1<br>
&gt;&gt; [2016-10-24 10:29:51.112371] I [fuse-bridge.c:4083:fuse_init]<br>
&gt;&gt; 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22<br>
&gt;&gt; kernel 7.22<br>
&gt;&gt; [2016-10-24 10:29:51.113563] I [MSGID: 108031]<br>
&gt;&gt; [afr-common.c:2071:afr_local_d<wbr>iscovery_cbk] 0-testHeal-replicate-0:<br>
&gt;&gt; selecting local read_child testHeal-client-2<br>
&gt;&gt; [2016-10-24 10:29:51.113604] I [MSGID: 108031]<br>
&gt;&gt; [afr-common.c:2071:afr_local_d<wbr>iscovery_cbk] 0-testHeal-replicate-0:<br>
&gt;&gt; selecting local read_child testHeal-client-0<br>
&gt;&gt; [2016-10-24 10:29:51.113630] I [MSGID: 108031]<br>
&gt;&gt; [afr-common.c:2071:afr_local_d<wbr>iscovery_cbk] 0-testHeal-replicate-0:<br>
&gt;&gt; selecting local read_child testHeal-client-1<br>
>> [2016-10-24 10:29:54.016802] W [MSGID: 108001]
>> [afr-transaction.c:789:afr_handle_quorum] 0-testHeal-replicate-0:
>> /.shard/9061198a-eb7e-45a2-93fb-eb396d1b2727.1: Failing
>> MKNOD as quorum is not met
&gt;&gt; [2016-10-24 10:29:54.019330] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-0:<br>
&gt;&gt; remote operation failed. Path: (null) (00000000-<br>
&gt;&gt; 0000-0000-0000-000000000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.019343] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-2:<br>
&gt;&gt; remote operation failed. Path: (null) (00000000-<br>
&gt;&gt; 0000-0000-0000-000000000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.019373] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-1:<br>
&gt;&gt; remote operation failed. Path: (null) (00000000-<br>
&gt;&gt; 0000-0000-0000-000000000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.019854] E [MSGID: 133010]<br>
&gt;&gt; [shard.c:1582:shard_common_loo<wbr>kup_shards_cbk] 0-testHeal-shard: Lookup<br>
&gt;&gt; on shard 1 failed. Base file gfid = 9061198a<br>
&gt;&gt; -eb7e-45a2-93fb-eb396d1b2727 [Input/output error]<br>
&gt;&gt; [2016-10-24 10:29:54.020886] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk]<br>
&gt;&gt; 0-glusterfs-fuse: 135: READ =&gt; -1<br>
&gt;&gt; gfid=9061198a-eb7e-45a2-93fb-e<wbr>b396d1b2727 fd=0x7f70c80d12dc (<br>
&gt;&gt; Input/output error)<br>
&gt;&gt; [2016-10-24 10:29:54.118264] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-0:<br>
&gt;&gt; remote operation failed. Path: (null) (00000000-<br>
&gt;&gt; 0000-0000-0000-000000000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.118308] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-2:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.118329] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-1:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.118751] E [MSGID: 133010]<br>
&gt;&gt; [shard.c:1582:shard_common_loo<wbr>kup_shards_cbk] 0-testHeal-shard: Lookup<br>
&gt;&gt; on shard 1 failed. Base file gfid =<br>
&gt;&gt; 9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727 [Input/output error]<br>
&gt;&gt; [2016-10-24 10:29:54.118787] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk]<br>
&gt;&gt; 0-glusterfs-fuse: 137: READ =&gt; -1<br>
&gt;&gt; gfid=9061198a-eb7e-45a2-93fb-e<wbr>b396d1b2727 fd=0x7f70c80d12dc<br>
&gt;&gt; (Input/output error)<br>
&gt;&gt; [2016-10-24 10:29:54.119330] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-1:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.119338] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-0:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.119368] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-2:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:29:54.119674] E [MSGID: 133010]<br>
&gt;&gt; [shard.c:1582:shard_common_loo<wbr>kup_shards_cbk] 0-testHeal-shard: Lookup<br>
&gt;&gt; on shard 1 failed. Base file gfid =<br>
&gt;&gt; 9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727 [Input/output error]<br>
&gt;&gt; [2016-10-24 10:29:54.119715] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk]<br>
&gt;&gt; 0-glusterfs-fuse: 138: READ =&gt; -1<br>
&gt;&gt; gfid=9061198a-eb7e-45a2-93fb-e<wbr>b396d1b2727 fd=0x7f70c80d12dc<br>
&gt;&gt; (Input/output error)<br>
&gt;&gt; [2016-10-24 10:36:13.140414] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-0:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:36:13.140451] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-2:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:36:13.140461] W [MSGID: 114031]<br>
&gt;&gt; [client-rpc-fops.c:2981:client<wbr>3_3_lookup_cbk] 0-testHeal-client-1:<br>
&gt;&gt; remote operation failed. Path: (null)<br>
&gt;&gt; (00000000-0000-0000-0000-00000<wbr>0000000) [Invalid argument]<br>
&gt;&gt; [2016-10-24 10:36:13.140956] E [MSGID: 133010]<br>
&gt;&gt; [shard.c:1582:shard_common_loo<wbr>kup_shards_cbk] 0-testHeal-shard: Lookup<br>
&gt;&gt; on shard 1 failed. Base file gfid =<br>
&gt;&gt; 9061198a-eb7e-45a2-93fb-eb396d<wbr>1b2727 [Input/output error]<br>
&gt;&gt; [2016-10-24 10:36:13.140995] W [fuse-bridge.c:2227:fuse_readv<wbr>_cbk]<br>
&gt;&gt; 0-glusterfs-fuse: 145: READ =&gt; -1<br>
&gt;&gt; gfid=9061198a-eb7e-45a2-93fb-e<wbr>b396d1b2727 fd=0x7f70c80d12dc<br>
&gt;&gt; (Input/output error)<br>
&gt;&gt; [2016-10-25 03:22:01.220025] I [MSGID: 100011]<br>
&gt;&gt; [glusterfsd.c:1323:reincarnate<wbr>] 0-glusterfsd: Fetching the volume file<br>
&gt;&gt; from server...<br>
&gt;&gt; [2016-10-25 03:22:01.220938] I<br>
&gt;&gt; [glusterfsd-mgmt.c:1600:mgmt_g<wbr>etspec_cbk] 0-glusterfs: No change in<br>
&gt;&gt; volfile, continuing<br>
&gt;&gt;<br>
&gt;&gt; I also attached the log in this email.<br>
&gt;&gt;<br>
&gt;&gt; Thanks.<br>
&gt;&gt;<br>
&gt;&gt; Cwtan<br>
&gt;&gt;<br>
&gt;&gt;<br>
&gt;&gt; On Wed, Oct 26, 2016 at 12:30 AM, Krutika Dhananjay &lt;<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>&gt;<br>
&gt;&gt; wrote:<br>
&gt;&gt; &gt; Tried it locally on my setup. Worked fine.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Could you please attach the mount logs?<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; -Krutika<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; On Tue, Oct 25, 2016 at 6:55 PM, Pranith Kumar Karampuri<br>
&gt;&gt; &gt; &lt;<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt; wrote:<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; +Krutika<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; On Mon, Oct 24, 2016 at 4:10 PM, qingwei wei &lt;<a href="mailto:tchengwee@gmail.com" target="_blank">tchengwee@gmail.com</a>&gt;<br>
&gt;&gt; &gt;&gt; wrote:<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; Hi,<br>
&gt;&gt; &gt;&gt;&gt;<br>
>> >>> I am currently running a simple gluster setup using one server node
>> >>> with multiple disks. I realize that if I delete all the .shard
>> >>> files in one replica on the backend, my application (dd) reports an
>> >>> Input/Output error even though I have 3 replicas.
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; My gluster version is 3.7.16<br>
&gt;&gt; &gt;&gt;&gt;<br>
>> >>> gluster volume info:
>> >>>
>> >>> Volume Name: testHeal
>> >>> Type: Replicate
>> >>> Volume ID: 26d16d7f-bc4f-44a6-a18b-eab780d80851
>> >>> Status: Started
>> >>> Number of Bricks: 1 x 3 = 3
>> >>> Transport-type: tcp
>> >>> Bricks:
>> >>> Brick1: 192.168.123.4:/mnt/sdb_mssd/testHeal2
>> >>> Brick2: 192.168.123.4:/mnt/sde_mssd/testHeal2
>> >>> Brick3: 192.168.123.4:/mnt/sdd_mssd/testHeal2
>> >>> Options Reconfigured:
>> >>> cluster.self-heal-daemon: on
>> >>> features.shard-block-size: 16MB
>> >>> features.shard: on
>> >>> performance.readdir-ahead: on
>> >>>
>> >>> dd error:
>> >>>
>> >>> [root@fujitsu05 .shard]# dd of=/home/test if=/mnt/fuseMount/ddTest
>> >>> bs=16M count=20 oflag=direct
>> >>> dd: error reading ‘/mnt/fuseMount/ddTest’: Input/output error
>> >>> 1+0 records in
>> >>> 1+0 records out
>> >>> 16777216 bytes (17 MB) copied, 0.111038 s, 151 MB/s
>> >>>
>> >>> In the .shard folder where I deleted all the shard files, I can see
>> >>> that one shard file has been recreated:
>> >>>
>> >>> getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >>> # file: 9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >>> trusted.afr.testHeal-client-0=0x000000010000000100000000
>> >>> trusted.afr.testHeal-client-2=0x000000010000000100000000
>> >>> trusted.gfid=0x41b653f7daa14627b1f91f9e8554ddde
>> >>>
>> >>> However, the gfid is not the same compared to the other replicas:
>> >>>
>> >>> getfattr -d -e hex -m.  9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >>> # file: 9061198a-eb7e-45a2-93fb-eb396d1b2727.1
>> >>> trusted.afr.dirty=0x000000000000000000000000
>> >>> trusted.afr.testHeal-client-1=0x000000000000000000000000
>> >>> trusted.bit-rot.version=0x0300000000000000580dde99000e5e5d
>> >>> trusted.gfid=0x9ee5c5eed7964a6cb9ac1a1419de5a40
>> >>>
>> >>> Is this considered a bug?
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; Regards,<br>
&gt;&gt; &gt;&gt;&gt;<br>
&gt;&gt; &gt;&gt;&gt; Cwtan<br>
&gt;&gt; &gt;&gt;&gt; ______________________________<wbr>_________________<br>
&gt;&gt; &gt;&gt;&gt; Gluster-devel mailing list<br>
&gt;&gt; &gt;&gt;&gt; <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
&gt;&gt; &gt;&gt;&gt; <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman<wbr>/listinfo/gluster-devel</a><br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt;<br>
&gt;&gt; &gt;&gt; --<br>
&gt;&gt; &gt;&gt; Pranith<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt;<br>
&gt;<br>
&gt;<br>