<div dir="ltr"><div><div>From the log snippet:<br><br>[2016-12-07 09:15:35.677645] I [MSGID: 106482] [glusterd-brick-ops.c:442:__glusterd_handle_add_brick] 0-management: Received add brick req <br>[2016-12-07 09:15:35.677708] I [MSGID: 106062] [glusterd-brick-ops.c:494:__glusterd_handle_add_brick] 0-management: replica-count is 2<br>[2016-12-07 09:15:35.677735] E [MSGID: 106291] [glusterd-brick-ops.c:614:__glusterd_handle_add_brick] 0-management:<br><br></div>The last log entry indicates that we hit the code path in gd_addbr_validate_replica_count ()<br><br> if (replica_count == volinfo->replica_count) { <br> if (!(total_bricks % volinfo->dist_leaf_count)) { <br> ret = 1; <br> goto out; <br> } <br> } <br><br></div>@Pranith, Ravi - Milos was trying to convert a dist (1 X 1) volume to a replicate (1 X 2) using add brick and hit this issue where add-brick failed. The cluster is operating with 3.7.6. Could you help on what scenario this code path can be hit? One straight forward issue I see here is missing err_str in this path.<br><br><div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 7, 2016 at 7:56 PM, Miloš Čučulović - MDPI <span dir="ltr"><<a href="mailto:cuculovic@mdpi.com" target="_blank">cuculovic@mdpi.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sure Atin, logs are attached.<span class=""><br>
On Wed, Dec 7, 2016 at 7:56 PM, Miloš Čučulović - MDPI <cuculovic@mdpi.com> wrote:

Sure Atin, logs are attached.

- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@mdpi.com
Skype: milos.cuculovic.mdpi

On 07.12.2016 11:32, Atin Mukherjee wrote:

Milos,

Giving snippets wouldn't help much; could you get me all the log files (/var/log/glusterfs/*) from both nodes?

On Wed, Dec 7, 2016 at 3:54 PM, Miloš Čučulović - MDPI <cuculovic@mdpi.com> wrote:

Thanks, here is the log after volume force:

[2016-12-07 10:23:39.157234] I [MSGID: 115036] [server.c:552:server_rpc_notify] 0-storage-server: disconnecting connection from storage2-23175-2016/12/07-10:14:56:951307-storage-client-0-0-0
[2016-12-07 10:23:39.157301] I [MSGID: 101055] [client_t.c:419:gf_client_unref] 0-storage-server: Shutting down connection storage2-23175-2016/12/07-10:14:56:951307-storage-client-0-0-0
[2016-12-07 10:23:40.187805] I [login.c:81:gf_auth] 0-auth/login: allowed user names: ef4e608d-487b-49a3-85dd-0b36b3554312
[2016-12-07 10:23:40.187848] I [MSGID: 115029] [server-handshake.c:612:server_setvolume] 0-storage-server: accepted client from storage2-23679-2016/12/07-10:23:40:160327-storage-client-0-0-0 (version: 3.7.6)
[2016-12-07 10:23:52.817529] E [MSGID: 113001] [posix-helpers.c:1177:posix_handle_pair] 0-storage-posix: /data/data-cluster/dms/submissions/User - 226485: key:glusterfs.preop.parent.keyflags: 1 length:22 [Operation not supported]
[2016-12-07 10:23:52.817598] E [MSGID: 113001] [posix.c:1384:posix_mkdir] 0-storage-posix: setting xattrs on /data/data-cluster/dms/submissions/User - 226485 failed [Operation not supported]
[2016-12-07 10:23:52.821388] E [MSGID: 113001] [posix-helpers.c:1177:posix_handle_pair] 0-storage-posix: /data/data-cluster/dms/submissions/User - 226485/815a39ccc2cb41dadba45fe7c1e226d4: key:glusterfs.preop.parent.keyflags: 1 length:22 [Operation not supported]
[2016-12-07 10:23:52.821434] E [MSGID: 113001] [posix.c:1384:posix_mkdir] 0-storage-posix: setting xattrs on /data/data-cluster/dms/submissions/User - 226485/815a39ccc2cb41dadba45fe7c1e226d4 failed [Operation not supported]

- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@mdpi.com
Skype: milos.cuculovic.mdpi

On 07.12.2016 11:19, Atin Mukherjee wrote:

You are referring to the wrong log file, which is for the self-heal daemon. You'd need to get back with the brick log file.

On Wed, Dec 7, 2016 at 3:45 PM, Miloš Čučulović - MDPI <cuculovic@mdpi.com> wrote:

This is the log file after force command:

[2016-12-07 10:14:55.945937] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x770a) [0x7fe9d905570a] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x40810d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x4d) [0x407f8d] ) 0-: received signum (15), shutting down
[2016-12-07 10:14:56.960573] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6 (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/2599dc977214c2895ef1b090a26c1518.socket --xlator-option *replicate*.node-uuid=7c988af2-9f76-4843-8e6f-d94866d57bb0)
[2016-12-07 10:14:56.968437] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-12-07 10:14:56.969774] I [graph.c:269:gf_add_cmdline_options] 0-storage-replicate-0: adding option 'node-uuid' for volume 'storage-replicate-0' with value '7c988af2-9f76-4843-8e6f-d94866d57bb0'
[2016-12-07 10:14:56.985257] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-12-07 10:14:56.986105] I [MSGID: 114020] [client.c:2118:notify] 0-storage-client-0: parent translators are ready, attempting connect on transport
[2016-12-07 10:14:56.986668] I [MSGID: 114020] [client.c:2118:notify] 0-storage-client-1: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume storage-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host storage2
  5:     option remote-subvolume /data/data-cluster
  6:     option transport-type socket
  7:     option username ef4e608d-487b-49a3-85dd-0b36b3554312
  8:     option password dda0bdbf-95c1-4206-a57d-686756210170
  9: end-volume
 10:
 11: volume storage-client-1
 12:     type protocol/client
 13:     option ping-timeout 42
 14:     option remote-host storage
 15:     option remote-subvolume /data/data-cluster
 16:     option transport-type socket
 17:     option username ef4e608d-487b-49a3-85dd-0b36b3554312
 18:     option password dda0bdbf-95c1-4206-a57d-686756210170
 19: end-volume
 20:
 21: volume storage-replicate-0
 22:     type cluster/replicate
 23:     option node-uuid 7c988af2-9f76-4843-8e6f-d94866d57bb0
 24:     option background-self-heal-count 0
 25:     option metadata-self-heal on
 26:     option data-self-heal on
 27:     option entry-self-heal on
 28:     option self-heal-daemon enable
 29:     option iam-self-heal-daemon yes
[2016-12-07 10:14:56.987096] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-storage-client-0: changing port to 49152 (from 0)
 30:     subvolumes storage-client-0 storage-client-1
 31: end-volume
 32:
 33: volume glustershd
 34:     type debug/io-stats
 35:     subvolumes storage-replicate-0
 36: end-volume
 37:
+------------------------------------------------------------------------------+
[2016-12-07 10:14:56.987685] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-storage-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-12-07 10:14:56.987766] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-storage-client-1: disconnected from storage-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-07 10:14:56.988065] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-storage-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-07 10:14:56.988387] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-storage-client-0: Connected to storage-client-0, attached to remote volume '/data/data-cluster'.
[2016-12-07 10:14:56.988409] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-storage-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-07 10:14:56.988476] I [MSGID: 108005] [afr-common.c:3841:afr_notify] 0-storage-replicate-0: Subvolume 'storage-client-0' came back up; going online.
[2016-12-07 10:14:56.988581] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-storage-client-0: Server lk version = 1

- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@mdpi.com
Skype: milos.cuculovic.mdpi

On 07.12.2016 11:09, Atin Mukherjee wrote:

On Wed, Dec 7, 2016 at 3:37 PM, Miloš Čučulović - MDPI <cuculovic@mdpi.com> wrote:

Hi Atin,

thanks for your reply.

I was trying to debug it since yesterday, and today I completely purged glusterfs-server from the storage server.

I installed it again, checked the firewall, and the current status is as follows:

On storage2, I am running:
sudo gluster volume add-brick storage replica 2 storage:/data/data-cluster
Answer => volume add-brick: failed: Operation failed
cmd_history says:
[2016-12-07 09:57:28.471009] : volume add-brick storage replica 2 storage:/data/data-cluster : FAILED : Operation failed

glustershd.log => no new entry on running the add-brick command.

etc-glusterfs-glusterd.vol.log =>
[2016-12-07 10:01:56.567564] I [MSGID: 106482] [glusterd-brick-ops.c:442:__glusterd_handle_add_brick] 0-management: Received add brick req
[2016-12-07 10:01:56.567626] I [MSGID: 106062] [glusterd-brick-ops.c:494:__glusterd_handle_add_brick] 0-management: replica-count is 2
[2016-12-07 10:01:56.567655] E [MSGID: 106291] [glusterd-brick-ops.c:614:__glusterd_handle_add_brick] 0-management:

On storage (the new server), there is no relevant log entry when I run the add-brick command on storage2.

Now, after reinstalling glusterfs-server on storage, I can see on storage2:

Status of volume: storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick storage2:/data/data-cluster           49152     0          Y       2160
Self-heal Daemon on localhost               N/A       N/A        Y       7906

Task Status of Volume storage
------------------------------------------------------------------------------
There are no active volume tasks

By running "gluster volume start storage force", do I risk breaking storage2? This is a production server and needs to stay live.

No, it's going to bring up the brick process(es) if it's not already up.

- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@mdpi.com
Skype: milos.cuculovic.mdpi

On 07.12.2016 10:44, Atin Mukherjee wrote:

On Tue, Dec 6, 2016 at 10:08 PM, Miloš Čučulović - MDPI <cuculovic@mdpi.com> wrote:

Dear All,

I have two servers, storage and storage2.
storage2 had a volume called storage.
I then decided to add a replica brick (storage).

I did this in the following way:

1. sudo gluster peer probe storage (on server storage2)
2. sudo gluster volume add-brick storage replica 2 storage:/data/data-cluster

Then I was getting the following error:
volume add-brick: failed: Operation failed

But it seems the brick was somehow added, as when checking on storage2:
sudo gluster volume info storage
I am getting:
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: storage2:/data/data-cluster
Brick2: storage:/data/data-cluster

So it seems OK here; however, when doing:
sudo gluster volume heal storage info
I am getting:
Volume storage is not of type replicate/disperse
Volume heal failed.

Also, when doing:
sudo gluster volume status all

I am getting:
Status of volume: storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick storage2:/data/data-cluster           49152     0          Y       2160
Brick storage:/data/data-cluster            N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       7906
Self-heal Daemon on storage                 N/A       N/A        N       N/A

Task Status of Volume storage
------------------------------------------------------------------------------

Any idea please?

It looks like the brick didn't come up during the add-brick. Could you share the cmd_history, glusterd, and new brick log files from both nodes? As a workaround, could you try 'gluster volume start storage force' and see if the issue persists?

--
- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@mdpi.com
Skype: milos.cuculovic.mdpi
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

--

~ Atin (atinm)