<div dir="ltr"><div><div><div><div>Hi,<br><br></div>First of all, apologies for the late reply. Couldn&#39;t find time to look into this<br></div><div>until now.<br></div><div><br></div>Changing SHARD_MAX_INODES value from 12384 to 16 is a cool trick!<br></div>Let me try that as well and get back to you in some time.<br><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Dec 8, 2016 at 11:07 AM, qingwei wei <span dir="ltr">&lt;<a href="mailto:tchengwee@gmail.com" target="_blank">tchengwee@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>

With help from my colleague, we made some changes to the code: we reduced
SHARD_MAX_INODES (from 16384 to 16) and added printing of blk_num inside
__shard_update_shards_inode_list. We then ran fio to do a sequential write
of a 300MB file first. After this run completed, we used fio to generate
random writes (8k). During this random-write run, we found a situation where
blk_num is a negative number, which triggers the following assertion (the
log entry below shows it firing):

GF_ASSERT (lru_inode_ctx->block_num > 0);

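For reference, the SHARD_MAX_INODES part of the change is essentially the
following (a paraphrased sketch only; the exact location and wording of the
patch may differ slightly in our tree, and the blk_num logging is the gf_msg
call quoted in my earlier mail further down in this thread):

    /* in shard.h (sketch; actual patch may differ) */
    #define SHARD_MAX_INODES 16   /* was 16384; shrunk so the LRU eviction
                                   * branch in __shard_update_shards_inode_list
                                   * is hit after only 16 shard inodes */
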
[2016-12-08 03:16:34.217582] E
[shard.c:468:__shard_update_shards_inode_list]
(-->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)
[0x7f7300930b6d]
-->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)
[0x7f7300930b1e]
-->/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x36b)
[0x7f730092bf5b] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0

Also, there is a segmentation fault shortly after this assertion, and after
that fio exits with an error.

frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2016-12-08 03:16:34
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.17
/usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x92)[0x7f730e900332]
/usr/local/lib/libglusterfs.so.0(gf_print_trace+0x2d5)[0x7f730e9250b5]
/lib64/libc.so.6(+0x35670)[0x7f730d1f1670]
/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x1d4)[0x7f730092bdc4]
/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xce)[0x7f7300930b1e]
/usr/local/lib/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)[0x7f7300930b6d]
/usr/local/lib/glusterfs/3.7.17/xlator/cluster/distribute.so(dht_lookup_cbk+0x380)[0x7f7300b8e240]
/usr/local/lib/glusterfs/3.7.17/xlator/protocol/client.so(client3_3_lookup_cbk+0x769)[0x7f7300df4989]
/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f730e6ce010]
/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x1df)[0x7f730e6ce2ef]
/usr/local/lib/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f730e6ca483]
/usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x6344)[0x7f73034dc344]
/usr/local/lib/glusterfs/3.7.17/rpc-transport/socket.so(+0x8f44)[0x7f73034def44]
/usr/local/lib/libglusterfs.so.0(+0x925aa)[0x7f730e96c5aa]
/lib64/libpthread.so.0(+0x7dc5)[0x7f730d96ddc5]

Core dump:

Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/local/sbin/glusterfs
--volfile-server=10.217.242.32 --volfile-id=/testSF1'.
Program terminated with signal 11, Segmentation fault.
#0  list_del_init (old=0x7f72f4003de0) at ../../../../libglusterfs/src/list.h:87
87        old->prev->next = old->next;

bt

#0  list_del_init (old=0x7f72f4003de0) at ../../../../libglusterfs/src/list.h:87
#1  __shard_update_shards_inode_list
(linked_inode=linked_inode@entry=0x7f72fa7a6e48,
this=this@entry=0x7f72fc0090c0, base_inode=0x7f72fa7a5108,
    block_num=block_num@entry=10) at shard.c:469
#2  0x00007f7300930b1e in shard_link_block_inode
(local=local@entry=0x7f730ec4ed00, block_num=10, inode=<optimized out>,
    buf=buf@entry=0x7f730180c990) at shard.c:1559
#3  0x00007f7300930b6d in shard_common_lookup_shards_cbk
(frame=0x7f730c611204, cookie=<optimized out>, this=0x7f72fc0090c0, op_ret=0,
    op_errno=<optimized out>, inode=<optimized out>,
buf=0x7f730180c990, xdata=0x7f730c029cdc, postparent=0x7f730180ca00)
at shard.c:1596
#4  0x00007f7300b8e240 in dht_lookup_cbk (frame=0x7f730c61dc40,
cookie=<optimized out>, this=<optimized out>, op_ret=0, op_errno=22,
    inode=0x7f72fa7a6e48, stbuf=0x7f730180c990, xattr=0x7f730c029cdc,
postparent=0x7f730180ca00) at dht-common.c:2362
#5  0x00007f7300df4989 in client3_3_lookup_cbk (req=<optimized out>,
iov=<optimized out>, count=<optimized out>, myframe=0x7f730c616ab4)
    at client-rpc-fops.c:2988
#6  0x00007f730e6ce010 in rpc_clnt_handle_reply
(clnt=clnt@entry=0x7f72fc04c040, pollin=pollin@entry=0x7f72fc079560)
at rpc-clnt.c:796
#7  0x00007f730e6ce2ef in rpc_clnt_notify (trans=<optimized out>,
mydata=0x7f72fc04c070, event=<optimized out>, data=0x7f72fc079560)
    at rpc-clnt.c:967
#8  0x00007f730e6ca483 in rpc_transport_notify
(this=this@entry=0x7f72fc05bd30,
event=event@entry=RPC_TRANSPORT_MSG_RECEIVED,
    data=data@entry=0x7f72fc079560) at rpc-transport.c:546
#9  0x00007f73034dc344 in socket_event_poll_in
(this=this@entry=0x7f72fc05bd30) at socket.c:2250
#10 0x00007f73034def44 in socket_event_handler (fd=fd@entry=10,
idx=idx@entry=2, data=0x7f72fc05bd30, poll_in=1, poll_out=0, poll_err=0)
    at socket.c:2363
#11 0x00007f730e96c5aa in event_dispatch_epoll_handler
(event=0x7f730180ced0, event_pool=0xf42ee0) at event-epoll.c:575
#12 event_dispatch_epoll_worker (data=0xf8d650) at event-epoll.c:678
#13 0x00007f730d96ddc5 in start_thread () from /lib64/libpthread.so.0
#14 0x00007f730d2b2ced in clone () from /lib64/libc.so.6

It seems like there is some situation where the structure is not initialized
properly? I would appreciate it if anyone can advise. Thanks.
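
To illustrate what I think frame #0 is telling us: list_del_init() blindly
follows old->prev, so if the ctx being evicted was freed, or its list head was
never linked and is still zeroed, the very first statement dereferences
garbage. Below is a minimal standalone sketch of that failure mode (nothing
GlusterFS-specific; the list helpers are a simplified re-statement of the
list.h idiom and may differ in detail):

    #include <stdio.h>
    #include <string.h>

    /* simplified version of the doubly-linked list used in libglusterfs */
    struct list_head {
            struct list_head *next;
            struct list_head *prev;
    };

    static void
    list_del_init (struct list_head *old)
    {
            old->prev->next = old->next;   /* <-- frame #0 crashes on this line */
            old->next->prev = old->prev;
            old->next = old;
            old->prev = old;
    }

    /* stand-in for shard_inode_ctx_t, just for illustration */
    struct fake_ctx {
            int              block_num;
            struct list_head ilist;
    };

    int
    main (void)
    {
            struct fake_ctx ctx;

            /* ctx never went through list_add_tail(), or its memory was
             * reused after a free: next/prev do not point anywhere valid */
            memset (&ctx, 0, sizeof (ctx));

            printf ("block_num = %d\n", ctx.block_num);  /* fails the "> 0" check */
            list_del_init (&ctx.ilist);                  /* segfaults here */
            return 0;
    }
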

Cw

On Wed, Dec 7, 2016 at 9:42 AM, qingwei wei <tchengwee@gmail.com> wrote:
> Hi,
>
> I did another test and this time FIO fails with
>
> fio: io_u error on file /mnt/testSF-HDD1/test: Invalid argument: write
> offset=114423242752, buflen=8192
> fio: pid=10052, err=22/file:io_u.c:1582, func=io_u error, error=Invalid argument
>
> test: (groupid=0, jobs=1): err=22 (file:io_u.c:1582, func=io_u error,
> error=Invalid argument): pid=10052: Tue Dec  6 15:18:47 2016
>
>
> Below is the client log:
>
> [2016-12-06 05:19:31.261289] I [fuse-bridge.c:5171:fuse_graph_setup]
> 0-fuse: switched to graph 0
> [2016-12-06 05:19:31.261355] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk]
> 0-testSF-HDD-client-5: Server lk version = 1
> [2016-12-06 05:19:31.261404] I [fuse-bridge.c:4083:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
> [2016-12-06 05:19:31.262901] I [MSGID: 108031]
> [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-0:
> selecting local read_child testSF-HDD-client-1
> [2016-12-06 05:19:31.262930] I [MSGID: 108031]
> [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-0:
> selecting local read_child testSF-HDD-client-0
> [2016-12-06 05:19:31.262948] I [MSGID: 108031]
> [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-0:
> selecting local read_child testSF-HDD-client-2
> [2016-12-06 05:19:31.269592] I [MSGID: 108031]
> [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-1:
> selecting local read_child testSF-HDD-client-3
> [2016-12-06 05:19:31.269795] I [MSGID: 108031]
> [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-1:
> selecting local read_child testSF-HDD-client-4
> [2016-12-06 05:19:31.277763] I [MSGID: 108031]
> [afr-common.c:2071:afr_local_discovery_cbk] 0-testSF-HDD-replicate-1:
> selecting local read_child testSF-HDD-client-5
> [2016-12-06 06:58:05.399244] W [MSGID: 101159]
> [inode.c:1219:__inode_unlink] 0-inode:
> be318638-e8a0-4c6d-977d-7a937aa84806/864c9ea1-3a7e-4d41-899b-f30604a7584e.16284:
> dentry not found in 63af10b7-9dac-4a53-aab1-3cc17fff3255
> [2016-12-06 15:17:43.311400] E
> [shard.c:460:__shard_update_shards_inode_list]
> (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)
> [0x7f5575680fdd]
> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)
> [0x7f5575680f6f]
> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x22e)
> [0x7f557567c1ce] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0
> [2016-12-06 15:17:43.311472] W [inode.c:1232:inode_unlink]
> (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)
> [0x7f5575680f6f]
> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x14a)
> [0x7f557567c0ea] -->/lib64/libglusterfs.so.0(inode_unlink+0x9c)
> [0x7f558386ba0c] ) 0-testSF-HDD-shard: inode not found
> [2016-12-06 15:17:43.333456] W [inode.c:1133:inode_forget]
> (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)
> [0x7f5575680f6f]
> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x154)
> [0x7f557567c0f4] -->/lib64/libglusterfs.so.0(inode_forget+0x90)
> [0x7f558386b800] ) 0-testSF-HDD-shard: inode not found
> [2016-12-06 15:18:47.129794] W [fuse-bridge.c:2311:fuse_writev_cbk]
> 0-glusterfs-fuse: 12555429: WRITE => -1
> gfid=864c9ea1-3a7e-4d41-899b-f30604a7584e fd=0x7f557016ae6c (Invalid
> argument)
>
> Below is the code. It goes into the else block when inode_count is
> greater than SHARD_MAX_INODES, which is 16384. My dataset of 400GB with a
> 16MB shard size has enough shard files (400GB / 16MB, well above 16,384)
> to trigger this. When I run the test with a smaller dataset, there is no
> such error.
>
> shard.c
>
>                 if (priv->inode_count + 1 <= SHARD_MAX_INODES) {
>                 /* If this inode was linked here for the first time (indicated
>                  * by empty list), and if there is still space in the priv list,
>                  * add this ctx to the tail of the list.
>                  */
>                         gf_uuid_copy (ctx->base_gfid, base_inode->gfid);
>                         ctx->block_num = block_num;
>                         list_add_tail (&ctx->ilist, &priv->ilist_head);
>                         priv->inode_count++;
>                 } else {
>                 /*If on the other hand there is no available slot for this inode
>                  * in the list, delete the lru inode from the head of the list,
>                  * unlink it. And in its place add this new inode into the list.
>                  */
>                         lru_inode_ctx = list_first_entry (&priv->ilist_head,
>                                                           shard_inode_ctx_t,
>                                                           ilist);
>                         /* add in message for debug */
>                         gf_msg (THIS->name, GF_LOG_WARNING, 0,
>                                 SHARD_MSG_INVALID_FOP,
>                                 "block number = %d", lru_inode_ctx->block_num);
>
>                         GF_ASSERT (lru_inode_ctx->block_num > 0);
>
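> In case it helps to narrow things down further, the debug message above
> could also be extended to dump which base file the evicted ctx belongs to
> and whether it is actually linked into the list, along these lines (a rough,
> untested sketch only):
>
>                         gf_msg (THIS->name, GF_LOG_WARNING, 0,
>                                 SHARD_MSG_INVALID_FOP,
>                                 "evicting ctx: base_gfid=%s block_num=%d "
>                                 "ilist_empty=%d",
>                                 uuid_utoa (lru_inode_ctx->base_gfid),
>                                 lru_inode_ctx->block_num,
>                                 list_empty (&lru_inode_ctx->ilist));
>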
> Hopefully I can get some advice from you guys on this. Thanks.
>
> Cw
>
> On Tue, Dec 6, 2016 at 9:07 AM, qingwei wei <tchengwee@gmail.com> wrote:
>> Hi,
>>
>> This is a repost of my email to the gluster-users mailing list. I would
>> appreciate it if anyone has any idea about the issue I am seeing. Thanks.
>>
>> I encountered this when doing FIO random writes on a fuse-mounted
>> gluster volume. After this assertion happens, the client log fills up
>> with "pending frames" messages and FIO just shows zero IO in its progress
>> status. As I left this test running overnight, the client log file filled
>> up with those pending-frames messages and hit 28GB in around 12 hours.
>>
>> The client log:
>>
>> [2016-12-04 15:48:35.274208] W [MSGID: 109072]
>> [dht-linkfile.c:50:dht_linkfile_lookup_cbk] 0-testSF-dht: got
>> non-linkfile testSF-replicate-0:/.shard/21da7b64-45e5-4c6a-9244-53d0284bf7ed.7038,
>> gfid = 00000000-0000-0000-0000-000000000000
>> [2016-12-04 15:48:35.277208] W [MSGID: 109072]
>> [dht-linkfile.c:50:dht_linkfile_lookup_cbk] 0-testSF-dht: got
>> non-linkfile testSF-replicate-0:/.shard/21da7b64-45e5-4c6a-9244-53d0284bf7ed.8957,
>> gfid = 00000000-0000-0000-0000-000000000000
>> [2016-12-04 15:48:35.277588] W [MSGID: 109072]
>> [dht-linkfile.c:50:dht_linkfile_lookup_cbk] 0-testSF-dht: got
>> non-linkfile testSF-replicate-0:/.shard/21da7b64-45e5-4c6a-9244-53d0284bf7ed.11912,
>> gfid = 00000000-0000-0000-0000-000000000000
>> [2016-12-04 15:48:35.312751] E
>> [shard.c:460:__shard_update_shards_inode_list]
>> (-->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_common_lookup_shards_cbk+0x2d)
>> [0x7f86cc42efdd]
>> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(shard_link_block_inode+0xdf)
>> [0x7f86cc42ef6f]
>> -->/usr/lib64/glusterfs/3.7.17/xlator/features/shard.so(__shard_update_shards_inode_list+0x22e)
>> [0x7f86cc42a1ce] ) 0-: Assertion failed: lru_inode_ctx->block_num > 0
>> pending frames:
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>>
>> Gluster info (I am testing this on one server with each disk
>> representing one brick; this gluster volume is then mounted locally
>> via fuse):
>>
>> Volume Name: testSF
>> Type: Distributed-Replicate
>> Volume ID: 3f205363-5029-40d7-b1b5-216f9639b454
>> Status: Started
>> Number of Bricks: 2 x 3 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.123.4:/mnt/sdb_mssd/testSF
>> Brick2: 192.168.123.4:/mnt/sdc_mssd/testSF
>> Brick3: 192.168.123.4:/mnt/sdd_mssd/testSF
>> Brick4: 192.168.123.4:/mnt/sde_mssd/testSF
>> Brick5: 192.168.123.4:/mnt/sdf_mssd/testSF
>> Brick6: 192.168.123.4:/mnt/sdg_mssd/testSF
>> Options Reconfigured:
>> features.shard-block-size: 16MB
>> features.shard: on
>> performance.readdir-ahead: on
>>
>> Gluster version: 3.7.17
>>
>> The actual disk usage (about 91% full):
>>
>> /dev/sdb1                235G  202G   22G  91% /mnt/sdb_mssd
>> /dev/sdc1                235G  202G   22G  91% /mnt/sdc_mssd
>> /dev/sdd1                235G  202G   22G  91% /mnt/sdd_mssd
>> /dev/sde1                235G  200G   23G  90% /mnt/sde_mssd
>> /dev/sdf1                235G  200G   23G  90% /mnt/sdf_mssd
>> /dev/sdg1                235G  200G   23G  90% /mnt/sdg_mssd
>>
>> Has anyone encountered this issue before?
>>
>> Cw
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel