<html><head></head><body>Interesting. I just encountered a hanging flush problem, too. It is probably unrelated, but if you want to give this a try, a temporary workaround I found was to drop caches, "echo 3 > /proc/sys/vm/drop_caches", on all the servers prior to the flush operation.<br><br><div class="gmail_quote">On February 4, 2016 10:06:45 PM PST, Raghavendra G <raghavendra@gluster.com> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div dir="ltr">+soumyak, +rtalur.</div><div class="gmail_extra"><br /><div class="gmail_quote">On Fri, Jan 29, 2016 at 2:34 PM, Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br /><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br />
<br />
On 01/28/2016 05:05 PM, Pranith Kumar Karampuri wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
With baul jianguo's help I am able to see that FLUSH fops are hanging for some reason.<br />
<br />
pk1@localhost - ~/Downloads<br />
17:02:13 :) ⚡ grep "unique=" client-dump1.txt<br />
unique=3160758373<br />
unique=2073075682<br />
unique=1455047665<br />
unique=0<br />
<br />
pk1@localhost - ~/Downloads<br />
17:02:21 :) ⚡ grep "unique=" client-dump-0.txt<br />
unique=3160758373<br />
unique=2073075682<br />
unique=1455047665<br />
unique=0<br />
<br />
I will be debugging a bit more and post my findings.<br />
</blockquote></span>
+Raghavendra G<br />
<br />
All the stubs are hung in write-behind. I checked that the statedumps don't show any writes in progress. Maybe, because of some race, the flush fop is not resumed after the write calls complete? It seems this issue happens only when io-threads is enabled on the client.<span class="HOEnZb"><font color="#888888"><br />
<br />
Pranith</font></span><div class="HOEnZb"><div class="h5"><br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br />
Pranith<br />
On 01/28/2016 03:18 PM, baul jianguo wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Here is the client glusterfs gdb info; the main thread ID is 70800.<br />
In the top output, thread 70800 shows CPU time 1263:30 and thread 70810<br />
shows 1321:10; the other threads' times are negligible.<br />
(gdb) thread apply all bt<br />
<br />
<br />
<br />
Thread 9 (Thread 0x7fc21acaf700 (LWP 70801)):<br />
<br />
#0 0x00007fc21cc0c535 in sigwait () from /lib64/libpthread.so.0<br />
<br />
#1 0x000000000040539b in glusterfs_sigwaiter (arg=<value optimized<br />
out>) at glusterfsd.c:1653<br />
<br />
#2 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#3 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 8 (Thread 0x7fc21a2ae700 (LWP 70802)):<br />
<br />
#0 0x00007fc21cc08a0e in pthread_cond_timedwait@@GLIBC_2.3.2 () from<br />
/lib64/libpthread.so.0<br />
<br />
#1 0x00007fc21ded02bf in syncenv_task (proc=0x121ee60) at syncop.c:493<br />
<br />
#2 0x00007fc21ded6300 in syncenv_processor (thdata=0x121ee60) at syncop.c:571<br />
<br />
#3 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#4 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 7 (Thread 0x7fc2198ad700 (LWP 70803)):<br />
<br />
#0 0x00007fc21cc08a0e in pthread_cond_timedwait@@GLIBC_2.3.2 () from<br />
/lib64/libpthread.so.0<br />
<br />
#1 0x00007fc21ded02bf in syncenv_task (proc=0x121f220) at syncop.c:493<br />
<br />
#2 0x00007fc21ded6300 in syncenv_processor (thdata=0x121f220) at syncop.c:571<br />
<br />
#3 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#4 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 6 (Thread 0x7fc21767d700 (LWP 70805)):<br />
<br />
#0 0x00007fc21cc0bfbd in nanosleep () from /lib64/libpthread.so.0<br />
<br />
#1 0x00007fc21deb16bc in gf_timer_proc (ctx=0x11f2010) at timer.c:170<br />
<br />
#2 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#3 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 5 (Thread 0x7fc20fb1e700 (LWP 70810)):<br />
<br />
#0 0x00007fc21c566987 in readv () from /lib64/libc.so.6<br />
<br />
#1 0x00007fc21accbc55 in fuse_thread_proc (data=0x120f450) at<br />
fuse-bridge.c:4752<br />
<br />
#2 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#3 0x00007fc21c56e93d in clone () from /lib64/libc.so.6  (most CPU time)<br />
<br />
<br />
<br />
Thread 4 (Thread 0x7fc20f11d700 (LWP 70811)):  (a bit less CPU time)<br />
<br />
#0 0x00007fc21cc0b7dd in read () from /lib64/libpthread.so.0<br />
<br />
#1 0x00007fc21acc0e73 in read (data=<value optimized out>) at<br />
/usr/include/bits/unistd.h:45<br />
<br />
#2 notify_kernel_loop (data=<value optimized out>) at fuse-bridge.c:3786<br />
<br />
#3 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#4 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 3 (Thread 0x7fc1b16fe700 (LWP 206224)):<br />
<br />
<br />
#0 0x00007fc21cc08a0e in pthread_cond_timedwait@@GLIBC_2.3.2 () from<br />
/lib64/libpthread.so.0<br />
<br />
#1 0x00007fc20e515e60 in iot_worker (data=0x19eeda0) at io-threads.c:157<br />
<br />
#2 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#3 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 2 (Thread 0x7fc1b0bfb700 (LWP 214361)):<br />
<br />
#0 0x00007fc21cc08a0e in pthread_cond_timedwait@@GLIBC_2.3.2 () from<br />
/lib64/libpthread.so.0<br />
<br />
#1 0x00007fc20e515e60 in iot_worker (data=0x19eeda0) at io-threads.c:157<br />
<br />
#2 0x00007fc21cc04a51 in start_thread () from /lib64/libpthread.so.0<br />
<br />
#3 0x00007fc21c56e93d in clone () from /lib64/libc.so.6<br />
<br />
<br />
<br />
Thread 1 (Thread 0x7fc21e31e700 (LWP 70800)):<br />
<br />
#0 0x00007fc21c56ef33 in epoll_wait () from /lib64/libc.so.6<br />
<br />
#1 0x00007fc21deea3e7 in event_dispatch_epoll (event_pool=0x120dec0)<br />
at event-epoll.c:428<br />
<br />
#2 0x00000000004075e4 in main (argc=4, argv=0x7fff3dc93698) at<br />
glusterfsd.c:1983<br />
<br />
On Thu, Jan 28, 2016 at 5:29 PM, baul jianguo <<a href="mailto:roidinev@gmail.com" target="_blank">roidinev@gmail.com</a>> wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<a href="http://pastebin.centos.org/38941/" rel="noreferrer" target="_blank">http://pastebin.centos.org/38941/</a><br />
Client statedump; only PIDs 27419, 168030, and 208655 hang. You can<br />
search for these PIDs in the statedump file.<br />
<br />
On Wed, Jan 27, 2016 at 4:35 PM, Pranith Kumar Karampuri<br />
<<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br />
If the hang appears only when client-side io-threads is enabled, it could<br />
be because of some race in that code path. Two things will help us debug<br />
this issue:<br />
1) thread apply all bt inside gdb (with debuginfo rpms/debs installed )<br />
2) Complete statedump of the mount at two intervals preferably 10 seconds<br />
apart. It becomes difficult to find out which ones are stuck vs the ones<br />
that are on-going when we have just one statedump. If we have two, we can<br />
find which frames are common in both of the statedumps and then take a<br />
closer look there.<br />
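The two-statedump comparison described above can be sketched as follows. The trigger and paths are the usual ones (SIGUSR1 to the glusterfs client process, dumps landing under /var/run/gluster), but the dump contents below are made-up sample data, not real statedumps:

```shell
# Sketch of comparing two statedumps taken ~10 seconds apart.
# On a real client: kill -USR1 <glusterfs-pid>, twice, with sleep 10 in
# between; dumps appear under /var/run/gluster. Files below are samples.
printf 'unique=3160758373\nunique=2073075682\nunique=999\n' > dump1.txt
printf 'unique=3160758373\nunique=2073075682\nunique=777\n' > dump2.txt
sort dump1.txt > dump1.sorted
sort dump2.txt > dump2.sorted
# Frames whose unique= value appears in BOTH dumps are the likely-stuck ones:
comm -12 dump1.sorted dump2.sorted
```

Frames that show up in only one dump were simply in flight; the intersection is where to take a closer look.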
<br />
Feel free to ping me on #gluster-dev, nick: pranithk, if you have the process<br />
hung in that state and you don't mind me doing a live debugging session with<br />
you. This option is the best of the lot!<br />
<br />
Thanks a lot baul, Oleksandr for the debugging so far!<br />
<br />
Pranith<br />
<br />
<br />
On 01/25/2016 01:03 PM, baul jianguo wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
3.5.7 also hangs; only the flush op hung. Yes, with<br />
performance.client-io-threads off, there is no hang.<br />
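For reference, the toggle being discussed is an ordinary volume option. A dry-run sketch (the volume name is taken from the volume info later in the thread; the commands are only echoed here, since actually running them needs a live gluster cluster):

```shell
# Dry-run: print the commands that flip the client-side io-threads option.
# VOL is the volume name from this thread; run the printed commands on a
# node of a live gluster cluster.
VOL=asterisk_records
echo "gluster volume set $VOL performance.client-io-threads off"
echo "gluster volume set $VOL performance.client-io-threads on"
```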
<br />
The hang does not depend on the client kernel version.<br />
<br />
Here is one client statedump of a flush op; anything abnormal?<br />
<br />
[global.callpool.stack.12]<br />
<br />
uid=0<br />
<br />
gid=0<br />
<br />
pid=14432<br />
<br />
unique=16336007098<br />
<br />
lk-owner=77cb199aa36f3641<br />
<br />
op=FLUSH<br />
<br />
type=1<br />
<br />
cnt=6<br />
<br />
<br />
<br />
[global.callpool.stack.12.frame.1]<br />
<br />
ref_count=1<br />
<br />
translator=fuse<br />
<br />
complete=0<br />
<br />
<br />
<br />
[global.callpool.stack.12.frame.2]<br />
<br />
ref_count=0<br />
<br />
translator=datavolume-write-behind<br />
<br />
complete=0<br />
<br />
parent=datavolume-read-ahead<br />
<br />
wind_from=ra_flush<br />
<br />
wind_to=FIRST_CHILD (this)->fops->flush<br />
<br />
unwind_to=ra_flush_cbk<br />
<br />
<br />
<br />
[global.callpool.stack.12.frame.3]<br />
<br />
ref_count=1<br />
<br />
translator=datavolume-read-ahead<br />
<br />
complete=0<br />
<br />
parent=datavolume-open-behind<br />
<br />
wind_from=default_flush_resume<br />
<br />
wind_to=FIRST_CHILD(this)->fops->flush<br />
<br />
unwind_to=default_flush_cbk<br />
<br />
<br />
<br />
[global.callpool.stack.12.frame.4]<br />
<br />
ref_count=1<br />
<br />
translator=datavolume-open-behind<br />
<br />
complete=0<br />
<br />
parent=datavolume-io-threads<br />
<br />
wind_from=iot_flush_wrapper<br />
<br />
wind_to=FIRST_CHILD(this)->fops->flush<br />
<br />
unwind_to=iot_flush_cbk<br />
<br />
<br />
<br />
[global.callpool.stack.12.frame.5]<br />
<br />
ref_count=1<br />
<br />
translator=datavolume-io-threads<br />
<br />
complete=0<br />
<br />
parent=datavolume<br />
<br />
wind_from=io_stats_flush<br />
<br />
wind_to=FIRST_CHILD(this)->fops->flush<br />
<br />
unwind_to=io_stats_flush_cbk<br />
<br />
<br />
<br />
[global.callpool.stack.12.frame.6]<br />
<br />
ref_count=1<br />
<br />
translator=datavolume<br />
<br />
complete=0<br />
<br />
parent=fuse<br />
<br />
wind_from=fuse_flush_resume<br />
<br />
wind_to=xl->fops->flush<br />
<br />
unwind_to=fuse_err_cbk<br />
<br />
<br />
<br />
On Sun, Jan 24, 2016 at 5:35 AM, Oleksandr Natalenko<br />
<<a href="mailto:oleksandr@natalenko.name" target="_blank">oleksandr@natalenko.name</a>> wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
With "performance.client-io-threads" set to "off", no hangs occurred in 3<br />
rsync/rm rounds. Could that be some fuse-bridge lock race? I will bring that<br />
option back to "on" and try to get a full statedump.<br />
<br />
On Thursday, January 21, 2016, 14:54:47 EET Raghavendra G wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Thu, Jan 21, 2016 at 10:49 AM, Pranith Kumar Karampuri <<br />
<br />
<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 01/18/2016 02:28 PM, Oleksandr Natalenko wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
XFS. Server side works OK, I'm able to mount volume again. Brick is<br />
30%<br />
full.<br />
</blockquote>
Oleksandr,<br />
<br />
Will it be possible to get the statedump of the client, bricks<br />
<br />
output next time it happens?<br />
<br />
<br />
<a href="https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#how-to-generate-statedump" rel="noreferrer" target="_blank">https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#how-to-generate-statedump</a><br />
</blockquote>
We also need to dump inode information. To do that, you have to add<br />
"all=yes" to /var/run/gluster/glusterdump.options before you issue the<br />
commands to get the statedump.<br />
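The options file mentioned above is plain key=value text. A minimal sketch (written to the current directory here; on a real client the path is /var/run/gluster/glusterdump.options and writing it needs root):

```shell
# Write the statedump options file so inode tables are included in the dump.
# Using ./ here for illustration; real path: /var/run/gluster/glusterdump.options
OPTS=./glusterdump.options
echo "all=yes" > "$OPTS"
cat "$OPTS"
# Then trigger the dump as usual: kill -USR1 <glusterfs-pid>
```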
<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Pranith<br />
<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Monday, January 18, 2016, 15:07:18 EET baul jianguo wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
What is your brick file system? And what is the status of the glusterfsd<br />
process and all its threads? I met the same issue when a client app such as<br />
rsync stays in D status; the brick process and related threads are also in<br />
D status, and the brick device's disk utilization is 100%.<br />
<br />
On Sun, Jan 17, 2016 at 6:13 AM, Oleksandr Natalenko<br />
<br />
<<a href="mailto:oleksandr@natalenko.name" target="_blank">oleksandr@natalenko.name</a>> wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Wrong assumption, rsync hung again.<br />
<br />
On Saturday, January 16, 2016, 22:53:04 EET Oleksandr Natalenko wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
One possible reason:<br />
<br />
cluster.lookup-optimize: on<br />
cluster.readdir-optimize: on<br />
<br />
I've disabled both optimizations, and at least as of now rsync still does<br />
its job with no issues. I would like to find out which option causes such<br />
behavior and why. Will test more.<br />
<br />
On Friday, January 15, 2016, 16:09:51 EET Oleksandr Natalenko wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Another observation: if rsyncing is resumed after the hang, rsync itself<br />
hangs much faster, because it stats already-copied files. So the cause may<br />
be not the writing itself but massive stat activity on the GlusterFS<br />
volume as well.<br />
<br />
On 15.01.2016 09:40, Oleksandr Natalenko wrote:<br />
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
While rsyncing millions of files from an ordinary partition to a<br />
GlusterFS volume, the hang happens just after approximately the first<br />
2 million files, and the following info appears in dmesg:<br />
<br />
===<br />
[17075038.924481] INFO: task rsync:10310 blocked for more than<br />
120<br />
seconds.<br />
[17075038.931948] "echo 0 ><br />
/proc/sys/kernel/hung_task_timeout_secs"<br />
disables this message.<br />
[17075038.940748] rsync D ffff88207fc13680 0 10310<br />
10309 0x00000080<br />
[17075038.940752] ffff8809c578be18 0000000000000086<br />
ffff8809c578bfd8<br />
0000000000013680<br />
[17075038.940756] ffff8809c578bfd8 0000000000013680<br />
ffff880310cbe660<br />
ffff881159d16a30<br />
[17075038.940759] ffff881e3aa25800 ffff8809c578be48<br />
ffff881159d16b10<br />
ffff88087d553980<br />
[17075038.940762] Call Trace:<br />
[17075038.940770] [<ffffffff8160a1d9>] schedule+0x29/0x70<br />
[17075038.940797] [<ffffffffa023a53d>]<br />
__fuse_request_send+0x13d/0x2c0<br />
[fuse]<br />
[17075038.940801] [<ffffffffa023db30>] ?<br />
fuse_get_req_nofail_nopages+0xc0/0x1e0 [fuse]<br />
[17075038.940805] [<ffffffff81098350>] ? wake_up_bit+0x30/0x30<br />
[17075038.940809] [<ffffffffa023a6d2>]<br />
fuse_request_send+0x12/0x20<br />
[fuse]<br />
[17075038.940813] [<ffffffffa024382f>] fuse_flush+0xff/0x150<br />
[fuse]<br />
[17075038.940817] [<ffffffff811c45c4>] filp_close+0x34/0x80<br />
[17075038.940821] [<ffffffff811e4ed8>] __close_fd+0x78/0xa0<br />
[17075038.940824] [<ffffffff811c6103>] SyS_close+0x23/0x50<br />
[17075038.940828] [<ffffffff81614de9>]<br />
system_call_fastpath+0x16/0x1b<br />
===<br />
<br />
rsync blocks in D state, and to kill it, I have to do umount --lazy on the<br />
GlusterFS mountpoint and then kill the corresponding client glusterfs<br />
process. Then rsync exits.<br />
<br />
Here is GlusterFS volume info:<br />
<br />
===<br />
Volume Name: asterisk_records<br />
Type: Distributed-Replicate<br />
Volume ID: dc1fe561-fa3a-4f2e-8330-ec7e52c75ba4<br />
Status: Started<br />
Number of Bricks: 3 x 2 = 6<br />
Transport-type: tcp<br />
Bricks:<br />
Brick1: server1:/bricks/10_megaraid_0_3_9_x_0_4_3_hdd_r1_nolvm_hdd_storage_01/asterisk/records<br />
Brick2: server2:/bricks/10_megaraid_8_5_14_x_8_6_16_hdd_r1_nolvm_hdd_storage_01/asterisk/records<br />
Brick3: server1:/bricks/11_megaraid_0_5_4_x_0_6_5_hdd_r1_nolvm_hdd_storage_02/asterisk/records<br />
Brick4: server2:/bricks/11_megaraid_8_7_15_x_8_8_20_hdd_r1_nolvm_hdd_storage_02/asterisk/records<br />
Brick5: server1:/bricks/12_megaraid_0_7_6_x_0_13_14_hdd_r1_nolvm_hdd_storage_03/asterisk/records<br />
Brick6: server2:/bricks/12_megaraid_8_9_19_x_8_13_24_hdd_r1_nolvm_hdd_storage_03/asterisk/records<br />
Options Reconfigured:<br />
cluster.lookup-optimize: on<br />
cluster.readdir-optimize: on<br />
client.event-threads: 2<br />
network.inode-lru-limit: 4096<br />
server.event-threads: 4<br />
performance.client-io-threads: on<br />
storage.linux-aio: on<br />
performance.write-behind-window-size: 4194304<br />
performance.stat-prefetch: on<br />
performance.quick-read: on<br />
performance.read-ahead: on<br />
performance.flush-behind: on<br />
performance.write-behind: on<br />
performance.io-thread-count: 2<br />
performance.cache-max-file-size: 1048576<br />
performance.cache-size: 33554432<br />
features.cache-invalidation: on<br />
performance.readdir-ahead: on<br />
===<br />
<br />
The issue reproduces each time I rsync such an amount of files.<br />
<br />
How could I debug this issue better?<br />
_______________________________________________<br />
Gluster-users mailing list<br />
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br />
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br />
</blockquote>
_______________________________________________<br />
Gluster-devel mailing list<br />
<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br />
<a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br />
</blockquote>
</blockquote>
</blockquote></blockquote>
</blockquote>
</blockquote></blockquote>
<br />
</blockquote></blockquote></blockquote></blockquote></blockquote>
<br />
</blockquote>
<br />
</div></div></blockquote></div><br /><br clear="all" /><div><br /></div>-- <br /><div class="gmail_signature">Raghavendra G<br /></div>
</div>
<p style="margin-top: 2.5em; margin-bottom: 1em; border-bottom: 1px solid #000"></p></blockquote></div><br>
-- <br>
Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>