<div dir="ltr"><div><div>Thanks for this. The information seems sufficient at the moment.<br></div><div>Will get back to you on this if/when I find something.<br></div><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Dec 19, 2016 at 1:44 PM, qingwei wei <span dir="ltr"><<a href="mailto:tchengwee@gmail.com" target="_blank">tchengwee@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Krutika,<br>
<br>
Sorry for the delay; I have been busy with other work. Attached is the<br>
tar.gz file with the client and server logs, the gfid information on the<br>
shard folder (please look at the test.0.0 file, as the log was captured<br>
while I ran fio on this file), and the print statement I put inside the<br>
code. FYI, I did two runs this time and only the second run gave me the<br>
problem. Hope this information helps.<br>
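<br>
For reference, the gfid information was gathered along these lines on each<br>
brick (the brick path is a placeholder for the real one):<br>
<br>
# gfid of the base file, then of each shard named <base-gfid>.<block><br>
getfattr -n trusted.gfid -e hex <brick>/test.0.0<br>
getfattr -n trusted.gfid -e hex <brick>/.shard/*<br>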
<br>
Regards,<br>
<br>
Cw<br>
<div class="HOEnZb"><div class="h5"><br>
On Thu, Dec 15, 2016 at 8:02 PM, Krutika Dhananjay <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>> wrote:<br>
> Good that you asked. I'll try, but be warned this will involve me coming<br>
> back to you with a lot more questions. :)<br>
><br>
> I've been trying this for the past two days (not to mention the fio run<br>
> takes really long) and so far there has been no crash/assert failure.<br>
><br>
> If you already have the core, then in frame 1 (see the sketch after this list):<br>
> 0. print block_num<br>
> 1. get lru_inode_ctx->stat.ia_gfid<br>
> 2. convert it to hex<br>
> 3. find the file in your backend that corresponds to this gfid and share its<br>
> path in your response<br>
> 4. print priv->inode_count<br>
> 5. and of course lru_inode_ctx->block_num :)<br>
> 6. Also attach the complete brick and client logs.<br>
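><br>
> Something along these lines should work against the core (the core path and<br>
> brick path below are placeholders):<br>
><br>
> gdb -batch -ex 'frame 1' \<br>
>     -ex 'print block_num' -ex 'print priv->inode_count' \<br>
>     -ex 'print lru_inode_ctx->block_num' \<br>
>     -ex 'print/x lru_inode_ctx->stat.ia_gfid' \<br>
>     /usr/local/sbin/glusterfs /path/to/core<br>
><br>
> Writing the ia_gfid bytes in hex and grouping them 8-4-4-4-12 gives the gfid;<br>
> the matching backend entry lives at<br>
> <brick>/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full gfid>.<br>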
><br>
> -Krutika<br>
><br>
><br>
> On Thu, Dec 15, 2016 at 3:18 PM, qingwei wei <<a href="mailto:tchengwee@gmail.com">tchengwee@gmail.com</a>> wrote:<br>
>><br>
>> Hi Krutika,<br>
>><br>
>> Do you need any more information? Do let me know, as I can try things on my<br>
>> test system. Thanks.<br>
>><br>
>> Cw<br>
>><br>
>> On Tue, Dec 13, 2016 at 12:17 AM, qingwei wei <<a href="mailto:tchengwee@gmail.com">tchengwee@gmail.com</a>> wrote:<br>
>> > Hi Krutika,<br>
>> ><br>
>> > You mean the FIO command?<br>
>> ><br>
>> > Below is how I do the sequential write. In this example I am using a 400GB<br>
>> > file; for the SHARD_MAX_INODES=16 build, I use a 300MB file.<br>
>> ><br>
>> > fio -group_reporting -ioengine libaio -directory /mnt/testSF-HDD1<br>
>> > -fallocate none -direct 1 -filesize 400g -nrfiles 1 -openfiles 1 -bs<br>
>> > 256k -numjobs 1 -iodepth 2 -name test -rw write<br>
>> ><br>
>> > After FIO completes the above workload, I do the random write:<br>
>> ><br>
>> > fio -group_reporting -ioengine libaio -directory /mnt/testSF-HDD1<br>
>> > -fallocate none -direct 1 -filesize 400g -nrfiles 1 -openfiles 1 -bs<br>
>> > 8k -numjobs 1 -iodepth 2 -name test -rw randwrite<br>
>> ><br>
>> > The error (sometimes a segmentation fault) only happens during the<br>
>> > random write.<br>
>> ><br>
>> > The gluster volume is a 3-replica volume with sharding enabled and a 16MB<br>
>> > shard block size (a rough setup sketch follows below).<br>
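>> ><br>
>> > For reference, the setup is roughly equivalent to the following (hostname<br>
>> > and brick paths are placeholders, not the real ones):<br>
>> ><br>
>> > gluster volume create testSF-HDD replica 3 <host>:<brick1> <host>:<brick2> <host>:<brick3><br>
>> > gluster volume set testSF-HDD features.shard on<br>
>> > gluster volume set testSF-HDD features.shard-block-size 16MB<br>
>> > gluster volume start testSF-HDD<br>
>> > mount -t glusterfs <host>:/testSF-HDD /mnt/testSF-HDD1<br>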
>> ><br>
>> > Thanks.<br>
>> ><br>
>> > Cw<br>
>> ><br>
>> > On Tue, Dec 13, 2016 at 12:00 AM, Krutika Dhananjay<br>
>> > <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>> wrote:<br>
>> >> I tried but couldn't recreate this issue (even with SHARD_MAX_INODES<br>
>> >> being 16).<br>
>> >> Could you share the exact command you used?<br>
>> >><br>
>> >> -Krutika<br>
>> >><br>
>> >> On Mon, Dec 12, 2016 at 12:15 PM, qingwei wei <<a href="mailto:tchengwee@gmail.com">tchengwee@gmail.com</a>><br>
>> >> wrote:<br>
>> >>><br>
>> >>> Hi Krutika,<br>
>> >>><br>
>> >>> Thanks. Looking forward to your reply.<br>
>> >>><br>
>> >>> Cw<br>
>> >>><br>
>> >>> On Mon, Dec 12, 2016 at 2:27 PM, Krutika Dhananjay<br>
>> >>> <<a href="mailto:kdhananj@redhat.com">kdhananj@redhat.com</a>><br>
>> >>> wrote:<br>
>> >>> > Hi,<br>
>> >>> ><br>
>> >>> > First of all, apologies for the late reply. Couldn't find time to look<br>
>> >>> > into this until now.<br>
>> >>> ><br>
>> >>> > Changing the SHARD_MAX_INODES value from 16384 to 16 is a cool trick!<br>
>> >>> > Let me try that as well and get back to you in some time.<br>
>> >>> ><br>
>> >>> > -Krutika<br>
>> >>> ><br>
>> >>> > On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei <<a href="mailto:tchengwee@gmail.com">tchengwee@gmail.com</a>><br>
>> >>> > wrote:<br>
>> >>> >><br>
>> >>> >> Hi,<br>
>> >>> >><br>
>> >>> >> With help from my colleague, we made some changes to the code to<br>
>> >>> >> reduce SHARD_MAX_INODES (from 16384 to 16) and to print blk_num inside<br>
>> >>> >> __shard_update_shards_inode_list. We then ran fio to do a sequential<br>
>> >>> >> write of a 300MB file first. After this run completed, we used fio to<br>
>> >>> >> generate random writes (8k), and during this random-write run we found<br>
>> >>> >> a situation where blk_num is a negative number, which triggers the<br>
>> >>> >> following assertion:<br>
>> >>> >><br>
>> >>> >> GF_ASSERT (lru_inode_ctx->block_num > 0);<br>
>> >>> >><br>
>> >>> >> [2016-12-08 03:16:34.217582] E [shard.c:468:__shard_update_shards_inode_list]<br>
>> >>> >> (-->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)<br>
>> >>> >> [0x7f7300930b6d]<br>
>> >>> >> -->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)<br>
>> >>> >> [0x7f7300930b1e]<br>
>> >>> >> -->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x36b)<br>
>> >>> >> [0x7f730092bf5b] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0<br>
>> >>> >><br>
>> >>> >> Also, there is a segmentation fault shortly after this assertion, and<br>
>> >>> >> after that fio exits with an error.<br>
>> >>> >><br>
>> >>> >> frame : type(0) op(0)<br>
>> >>> >> patchset: git://<a href="http://git.gluster.com/glusterfs.git" rel="noreferrer" target="_blank">git.gluster.com/glusterfs.git</a><br>
>> >>> >> signal received: 11<br>
>> >>> >> time of crash:<br>
>> >>> >> 2016-12-08 03:16:34<br>
>> >>> >> configuration details:<br>
>> >>> >> argp 1<br>
>> >>> >> backtrace 1<br>
>> >>> >> dlfcn 1<br>
>> >>> >> libpthread 1<br>
>> >>> >> llistxattr 1<br>
>> >>> >> setfsid 1<br>
>> >>> >> spinlock 1<br>
>> >>> >> epoll.h 1<br>
>> >>> >> xattr.h 1<br>
>> >>> >> st_atim.tv_nsec 1<br>
>> >>> >> package-string: glusterfs 3.7.17<br>
>> >>> >><br>
>> >>> >> /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x92)[0x7f730e900332]<br>
>> >>> >> /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x2d5)[0x7f730e9250b5]<br>
>> >>> >> /lib64/libc.so.6(+0x35670)[0x7f730d1f1670]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x1d4)[0x7f730092bdc4]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)[0x7f7300930b1e]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)[0x7f7300930b6d]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/xlator/cluster/distribute.so(dht_lookup_cbk+0x380)[0x7f7300b8e240]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/xlator/protocol/client.so(client3_3_lookup_cbk+0x769)[0x7f7300df4989]<br>
>> >>> >> /usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f730e6ce010]<br>
>> >>> >> /usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x1df)[0x7f730e6ce2ef]<br>
>> >>> >> /usr/local/lib/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f730e6ca483]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x6344)[0x7f73034dc344]<br>
>> >>> >> /usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x8f44)[0x7f73034def44]<br>
>> >>> >> /usr/local/lib/libglusterfs.so.0(+0x925aa)[0x7f730e96c5aa]<br>
>> >>> >> /lib64/libpthread.so.0(+0x7dc5)[0x7f730d96ddc5]<br>
>> >>> >><br>
>> >>> >> Core dump:<br>
>> >>> >><br>
>> >>> >> Using host libthread_db library "/lib64/libthread_db.so.1".<br>
>> >>> >> Core was generated by `/usr/local/sbin/glusterfs<br>
>> >>> >> --volfile-server=10.217.242.32 --volfile-id=/testSF1'.<br>
>> >>> >> Program terminated with signal 11, Segmentation fault.<br>
>> >>> >> #0  list_del_init (old=0x7f72f4003de0) at ../../../../libglusterfs/src/list.h:87<br>
>> >>> >> 87          old->prev->next = old->next;<br>
>> >>> >><br>
>> >>> >> bt<br>
>> >>> >><br>
>> >>> >> #0  list_del_init (old=0x7f72f4003de0) at ../../../../libglusterfs/src/list.h:87<br>
>> >>> >> #1  __shard_update_shards_inode_list (linked_inode=linked_inode@entry=0x7f72fa7a6e48,<br>
>> >>> >>     this=this@entry=0x7f72fc0090c0, base_inode=0x7f72fa7a5108,<br>
>> >>> >>     block_num=block_num@entry=10) at shard.c:469<br>
>> >>> >> #2  0x00007f7300930b1e in shard_link_block_inode (local=local@entry=0x7f730ec4ed00,<br>
>> >>> >>     block_num=10, inode=<optimized out>, buf=buf@entry=0x7f730180c990) at shard.c:1559<br>
>> >>> >> #3  0x00007f7300930b6d in shard_common_lookup_shards_cbk (frame=0x7f730c611204,<br>
>> >>> >>     cookie=<optimized out>, this=0x7f72fc0090c0, op_ret=0, op_errno=<optimized out>,<br>
>> >>> >>     inode=<optimized out>, buf=0x7f730180c990, xdata=0x7f730c029cdc,<br>
>> >>> >>     postparent=0x7f730180ca00) at shard.c:1596<br>
>> >>> >> #4  0x00007f7300b8e240 in dht_lookup_cbk (frame=0x7f730c61dc40, cookie=<optimized out>,<br>
>> >>> >>     this=<optimized out>, op_ret=0, op_errno=22, inode=0x7f72fa7a6e48,<br>
>> >>> >>     stbuf=0x7f730180c990, xattr=0x7f730c029cdc, postparent=0x7f730180ca00)<br>
>> >>> >>     at dht-common.c:2362<br>
>> >>> >> #5  0x00007f7300df4989 in client3_3_lookup_cbk (req=<optimized out>, iov=<optimized out>,<br>
>> >>> >>     count=<optimized out>, myframe=0x7f730c616ab4) at client-rpc-fops.c:2988<br>
>> >>> >> #6  0x00007f730e6ce010 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f72fc04c040,<br>
>> >>> >>     pollin=pollin@entry=0x7f72fc079560) at rpc-clnt.c:796<br>
>> >>> >> #7  0x00007f730e6ce2ef in rpc_clnt_notify (trans=<optimized out>, mydata=0x7f72fc04c070,<br>
>> >>> >>     event=<optimized out>, data=0x7f72fc079560) at rpc-clnt.c:967<br>
>> >>> >> #8  0x00007f730e6ca483 in rpc_transport_notify (this=this@entry=0x7f72fc05bd30,<br>
>> >>> >>     event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f72fc079560)<br>
>> >>> >>     at rpc-transport.c:546<br>
>> >>> >> #9  0x00007f73034dc344 in socket_event_poll_in (this=this@entry=0x7f72fc05bd30)<br>
>> >>> >>     at socket.c:2250<br>
>> >>> >> #10 0x00007f73034def44 in socket_event_handler (fd=fd@entry=10, idx=idx@entry=2,<br>
>> >>> >>     data=0x7f72fc05bd30, poll_in=1, poll_out=0, poll_err=0) at socket.c:2363<br>
>> >>> >> #11 0x00007f730e96c5aa in event_dispatch_epoll_handler (event=0x7f730180ced0,<br>
>> >>> >>     event_pool=0xf42ee0) at event-epoll.c:575<br>
>> >>> >> #12 event_dispatch_epoll_worker (data=0xf8d650) at event-epoll.c:678<br>
>> >>> >> #13 0x00007f730d96ddc5 in start_thread () from /lib64/libpthread.so.0<br>
>> >>> >> #14 0x00007f730d2b2ced in clone () from /lib64/libc.so.6<br>
>> >>> >><br>
>> >>> >> It seems like there is some situation where the structure is not<br>
>> >>> >> initialized properly? I'd appreciate it if anyone can advise. Thanks.<br>
>> >>> >><br>
>> >>> >> Cw<br>
>> >>> >><br>
>> >>> >><br>
>> >>> >><br>
>> >>> >><br>
>> >>> >> On Wed, Dec 7, 2016 at 9:42 AM, qingwei wei <<a href="mailto:tchengwee@gmail.com">tchengwee@gmail.com</a>><br>
>> >>> >> wrote:<br>
>> >>> >> > Hi,<br>
>> >>> >> ><br>
>> >>> >> > I did another test and this time FIO fails with<br>
>> >>> >> ><br>
>> >>> >> > fio: io_u error on file /mnt/testSF-HDD1/test: Invalid argument:<br>
>> >>> >> > write offset=114423242752, buflen=8192<br>
>> >>> >> > fio: pid=10052, err=22/file:io_u.c:1582, func=io_u error,<br>
>> >>> >> > error=Invalid argument<br>
>> >>> >> ><br>
>> >>> >> > test: (groupid=0, jobs=1): err=22 (file:io_u.c:1582, func=io_u error,<br>
>> >>> >> > error=Invalid argument): pid=10052: Tue Dec 6 15:18:47 2016<br>
>> >>> >> ><br>
>> >>> >> ><br>
>> >>> >> > Below is the client log:<br>
>> >>> >> ><br>
>> >>> >> > [2016-12-06 05:19:31.261289] I [fuse-bridge.c:5171:fuse_graph_setup]<br>
>> >>> >> > 0-fuse: switched to graph 0<br>
>> >>> >> > [2016-12-06 05:19:31.261355] I [MSGID: 114035]<br>
>> >>> >> > [client-handshake.c:193:client_set_lk_version_cbk]<br>
>> >>> >> > 0-testSF-HDD-client-5: Server lk version = 1<br>
>> >>> >> > [2016-12-06 05:19:31.261404] I [fuse-bridge.c:4083:fuse_init]<br>
>> >>> >> > 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22<br>
>> >>> >> > kernel 7.22<br>
>> >>> >> > [2016-12-06 05:19:31.262901] I [MSGID: 108031]<br>
>> >>> >> > [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-0:<br>
>> >>> >> > selecting local read_child testSF-HDD-client-1<br>
>> >>> >> > [2016-12-06 05:19:31.262930] I [MSGID: 108031]<br>
>> >>> >> > [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-0:<br>
>> >>> >> > selecting local read_child testSF-HDD-client-0<br>
>> >>> >> > [2016-12-06 05:19:31.262948] I [MSGID: 108031]<br>
>> >>> >> > [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-0:<br>
>> >>> >> > selecting local read_child testSF-HDD-client-2<br>
>> >>> >> > [2016-12-06 05:19:31.269592] I [MSGID: 108031]<br>
>> >>> >> > [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-1:<br>
>> >>> >> > selecting local read_child testSF-HDD-client-3<br>
>> >>> >> > [2016-12-06 05:19:31.269795] I [MSGID: 108031]<br>
>> >>> >> > [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-1:<br>
>> >>> >> > selecting local read_child testSF-HDD-client-4<br>
>> >>> >> > [2016-12-06 05:19:31.277763] I [MSGID: 108031]<br>
>> >>> >> > [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-1:<br>
>> >>> >> > selecting local read_child testSF-HDD-client-5<br>
>> >>> >> > [2016-12-06 06:58:05.399244] W [MSGID: 101159]<br>
>> >>> >> > [inode.c:1219:__inode_unlink] 0-inode:<br>
>> >>> >> > be318638-e8a0-4c6d-977d-7a937aa84806/864c9ea1-3a7e-4d41-899b-f30604a7584e.16284:<br>
>> >>> >> > dentry not found in 63af10b7-9dac-4a53-aab1-3cc17fff3255<br>
>> >>> >> > [2016-12-06 15:17:43.311400] E [shard.c:460:__shard_update_shards_inode_list]<br>
>> >>> >> > (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)<br>
>> >>> >> > [0x7f5575680fdd]<br>
>> >>> >> > -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)<br>
>> >>> >> > [0x7f5575680f6f]<br>
>> >>> >> > -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x22e)<br>
>> >>> >> > [0x7f557567c1ce] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0<br>
>> >>> >> > [2016-12-06 15:17:43.311472] W [inode.c:1232:inode_unlink]<br>
>> >>> >> > (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)<br>
>> >>> >> > [0x7f5575680f6f]<br>
>> >>> >> > -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x14a)<br>
>> >>> >> > [0x7f557567c0ea] -->/lib64/libglusterfs.so.0(inode_unlink+0x9c)<br>
>> >>> >> > [0x7f558386ba0c] ) 0-testSF-HDD-shard: inode not found<br>
>> >>> >> > [2016-12-06 15:17:43.333456] W [inode.c:1133:inode_forget]<br>
>> >>> >> > (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)<br>
>> >>> >> > [0x7f5575680f6f]<br>
>> >>> >> > -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x154)<br>
>> >>> >> > [0x7f557567c0f4] -->/lib64/libglusterfs.so.0(inode_forget+0x90)<br>
>> >>> >> > [0x7f558386b800] ) 0-testSF-HDD-shard: inode not found<br>
>> >>> >> > [2016-12-06 15:18:47.129794] W [fuse-bridge.c:2311:fuse_writev_cbk]<br>
>> >>> >> > 0-glusterfs-fuse: 12555429: WRITE => -1<br>
>> >>> >> > gfid=864c9ea1-3a7e-4d41-899b-f30604a7584e fd=0x7f557016ae6c<br>
>> >>> >> > (Invalid argument)<br>
>> >>> >> ><br>
>> >>> >> > Below is the code; it takes the else block once inode_count + 1<br>
>> >>> >> > exceeds SHARD_MAX_INODES, which is 16384. My dataset of 400GB with a<br>
>> >>> >> > 16MB shard size has enough shard files (400GB/16MB) to reach that<br>
>> >>> >> > limit; the arithmetic is spelled out below. When I do the test with a<br>
>> >>> >> > smaller dataset, there is no such error.<br>
>> >>> >> ><br>
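>> >>> >> > Spelling out the arithmetic (shell, just for illustration):<br>
>> >>> >> ><br>
>> >>> >> > echo $(( 400 * 1024 / 16 ))   # = 25600 shards at 16MB each, above 16384<br>
>> >>> >> ><br>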
>> >>> >> > shard.c<br>
>> >>> >> ><br>
>> >>> >> > if (priv->inode_count + 1 <= SHARD_MAX_INODES) {<br>
>> >>> >> >         /* If this inode was linked here for the first time (indicated<br>
>> >>> >> >          * by empty list), and if there is still space in the priv list,<br>
>> >>> >> >          * add this ctx to the tail of the list.<br>
>> >>> >> >          */<br>
>> >>> >> >         gf_uuid_copy (ctx->base_gfid, base_inode->gfid);<br>
>> >>> >> >         ctx->block_num = block_num;<br>
>> >>> >> >         list_add_tail (&ctx->ilist, &priv->ilist_head);<br>
>> >>> >> >         priv->inode_count++;<br>
>> >>> >> > } else {<br>
>> >>> >> >         /* If on the other hand there is no available slot for this inode<br>
>> >>> >> >          * in the list, delete the lru inode from the head of the list,<br>
>> >>> >> >          * unlink it. And in its place add this new inode into the list.<br>
>> >>> >> >          */<br>
>> >>> >> >         lru_inode_ctx = list_first_entry (&priv->ilist_head,<br>
>> >>> >> >                                           shard_inode_ctx_t,<br>
>> >>> >> >                                           ilist);<br>
>> >>> >> >         /* added message for debugging */<br>
>> >>> >> >         gf_msg (THIS->name, GF_LOG_WARNING, 0, SHARD_MSG_INVALID_FOP,<br>
>> >>> >> >                 "block number = %d", lru_inode_ctx->block_num);<br>
>> >>> >> ><br>
>> >>> >> >         GF_ASSERT (lru_inode_ctx->block_num > 0);<br>
>> >>> >> ><br>
>> >>> >> > Hopefully I can get some advice from you guys on this. Thanks.<br>
>> >>> >> ><br>
>> >>> >> > Cw<br>
>> >>> >> ><br>
>> >>> >> > On Tue, Dec 6, 2016 at 9:07 AM, qingwei wei <<a href="mailto:tchengwee@gmail.com">tchengwee@gmail.com</a>><br>
>> >>> >> > wrote:<br>
>> >>> >> >> Hi,<br>
>> >>> >> >><br>
>> >>> >> >> This is a repost of my email on the gluster-users mailing list.<br>
>> >>> >> >> I'd appreciate it if anyone has any idea on the issue I am seeing.<br>
>> >>> >> >> Thanks.<br>
>> >>> >> >><br>
>> >>> >> >> I encountered this when I ran FIO random writes on the fuse-mounted<br>
>> >>> >> >> gluster volume. After this assertion happens, the client log fills up<br>
>> >>> >> >> with pending-frames messages and FIO just shows zero IO in the<br>
>> >>> >> >> progress status. As I left this test running overnight, the client<br>
>> >>> >> >> log file filled up with those pending-frame messages and hit 28GB in<br>
>> >>> >> >> around 12 hours.<br>
>> >>> >> >><br>
>> >>> >> >> The client log:<br>
>> >>> >> >><br>
>> >>> >> >> [2016-12-04 15:48:35.274208] W [MSGID: 109072]<br>
>> >>> >> >> [dht-linkfile.c:50:dht_linkfile_lookup_cbk] 0-testSF-dht: got non-linkfile<br>
>> >>> >> >> testSF-replicate-0:/.shard/21da7b64-45e5-4c6a-9244-53d0284bf7ed.7038,<br>
>> >>> >> >> gfid = 00000000-0000-0000-0000-000000000000<br>
>> >>> >> >> [2016-12-04 15:48:35.277208] W [MSGID: 109072]<br>
>> >>> >> >> [dht-linkfile.c:50:dht_linkfile_lookup_cbk] 0-testSF-dht: got non-linkfile<br>
>> >>> >> >> testSF-replicate-0:/.shard/21da7b64-45e5-4c6a-9244-53d0284bf7ed.8957,<br>
>> >>> >> >> gfid = 00000000-0000-0000-0000-000000000000<br>
>> >>> >> >> [2016-12-04 15:48:35.277588] W [MSGID: 109072]<br>
>> >>> >> >> [dht-linkfile.c:50:dht_linkfile_lookup_cbk] 0-testSF-dht: got non-linkfile<br>
>> >>> >> >> testSF-replicate-0:/.shard/21da7b64-45e5-4c6a-9244-53d0284bf7ed.11912,<br>
>> >>> >> >> gfid = 00000000-0000-0000-0000-000000000000<br>
>> >>> >> >> [2016-12-04 15:48:35.312751] E [shard.c:460:__shard_update_shards_inode_list]<br>
>> >>> >> >> (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)<br>
>> >>> >> >> [0x7f86cc42efdd]<br>
>> >>> >> >> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)<br>
>> >>> >> >> [0x7f86cc42ef6f]<br>
>> >>> >> >> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x22e)<br>
>> >>> >> >> [0x7f86cc42a1ce] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0<br>
>> >>> >> >> pending frames:<br>
>> >>> >> >> frame : type(0) op(0)<br>
>> >>> >> >> frame : type(0) op(0)<br>
>> >>> >> >> frame : type(0) op(0)<br>
>> >>> >> >> frame : type(0) op(0)<br>
>> >>> >> >> frame : type(0) op(0)<br>
>> >>> >> >> frame : type(0) op(0)<br>
>> >>> >> >><br>
>> >>> >> >> Gluster info (I am testing this on one server with each disk<br>
>> >>> >> >> representing one brick; this gluster volume is then mounted locally<br>
>> >>> >> >> via fuse):<br>
>> >>> >> >><br>
>> >>> >> >> Volume Name: testSF<br>
>> >>> >> >> Type: Distributed-Replicate<br>
>> >>> >> >> Volume ID: 3f205363-5029-40d7-b1b5-216f9639b454<br>
>> >>> >> >> Status: Started<br>
>> >>> >> >> Number of Bricks: 2 x 3 = 6<br>
>> >>> >> >> Transport-type: tcp<br>
>> >>> >> >> Bricks:<br>
>> >>> >> >> Brick1: 192.168.123.4:/mnt/sdb_mssd/testSF<br>
>> >>> >> >> Brick2: 192.168.123.4:/mnt/sdc_mssd/testSF<br>
>> >>> >> >> Brick3: 192.168.123.4:/mnt/sdd_mssd/testSF<br>
>> >>> >> >> Brick4: 192.168.123.4:/mnt/sde_mssd/testSF<br>
>> >>> >> >> Brick5: 192.168.123.4:/mnt/sdf_mssd/testSF<br>
>> >>> >> >> Brick6: 192.168.123.4:/mnt/sdg_mssd/testSF<br>
>> >>> >> >> Options Reconfigured:<br>
>> >>> >> >> features.shard-block-size: 16MB<br>
>> >>> >> >> features.shard: on<br>
>> >>> >> >> performance.readdir-ahead: on<br>
>> >>> >> >><br>
>> >>> >> >> Gluster version: 3.7.17<br>
>> >>> >> >><br>
>> >>> >> >> The actual disk usage (about 91% full):<br>
>> >>> >> >><br>
>> >>> >> >> /dev/sdb1 235G 202G 22G 91% /mnt/sdb_mssd<br>
>> >>> >> >> /dev/sdc1 235G 202G 22G 91% /mnt/sdc_mssd<br>
>> >>> >> >> /dev/sdd1 235G 202G 22G 91% /mnt/sdd_mssd<br>
>> >>> >> >> /dev/sde1 235G 200G 23G 90% /mnt/sde_mssd<br>
>> >>> >> >> /dev/sdf1 235G 200G 23G 90% /mnt/sdf_mssd<br>
>> >>> >> >> /dev/sdg1 235G 200G 23G 90% /mnt/sdg_mssd<br>
>> >>> >> >><br>
>> >>> >> >> Has anyone encountered this issue before?<br>
>> >>> >> >><br>
>> >>> >> >> Cw<br>
>> >>> >> _______________________________________________<br>
>> >>> >> Gluster-devel mailing list<br>
>> >>> >> <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
>> >>> >> <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
>> >>> ><br>
>> >>> ><br>
>> >><br>
>> >><br>
><br>
><br>
</div></div></blockquote></div><br></div>