<div dir="ltr">Not historically, but we are using bonding for replication between the servers. It's been stable for at least 6 months, but it's possible that one of the links in the bond is failing or something.<div><br></div><div>Would this type of restart be triggered by a loss of communication between bricks in a replica set? It seems like it would defeat one of the points of having a replicated volume if that were the case.</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><br><div>Thank You,</div><div><br></div><div>Logan Barfield</div><div>Tranquil Hosting</div></div></div></div>
<br><div class="gmail_quote">On Tue, Feb 2, 2016 at 12:02 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Initially I was suspecting about server-quorum be the culprit which is<br>
not the case. By any chance is your network flaky?<br>
<div><div class="h5"><br>
On 02/01/2016 10:33 PM, Logan Barfield wrote:<br>
> Volume Name: data02
> Type: Replicate
> Volume ID: 1c8928b1-f49e-4950-be06-0f8ce5adf870
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster-stor01:/export/data/brick02    <-- 10.1.1.10
> Brick2: gluster-stor02:/export/data/brick02    <-- 10.1.1.11
> Options Reconfigured:
> server.event-threads: 5
> client.event-threads: 11
> geo-replication.indexing: on
> geo-replication.ignore-pid-check: on
> changelog.changelog: on
> server.statedump-path: /tmp
> server.outstanding-rpc-limit: 128
> performance.io-thread-count: 64
> performance.nfs.read-ahead: on
> performance.nfs.io-cache: on
> performance.nfs.quick-read: on
> performance.cache-max-file-size: 1MB
> performance.client-io-threads: on
> cluster.lookup-optimize: on
> performance.cache-size: 1073741824
> performance.write-behind-window-size: 4MB
> performance.nfs.write-behind-window-size: 4MB
> performance.read-ahead: off
> performance.nfs.stat-prefetch: on
>
>
> Status of volume: data02
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster-stor01:/export/data/brick02  49153     0          Y       17411
> Brick gluster-stor02:/export/data/brick02  49155     0          Y       4717
> NFS Server on localhost                    2049      0          Y       17395
> Self-heal Daemon on localhost              N/A       N/A        Y       17405
> NFS Server on gluster-stor02               2049      0          Y       4701
> Self-heal Daemon on gluster-stor02         N/A       N/A        Y       4712
>
> Task Status of Volume data02
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> Note that this problem was occurring with the same frequency before we
> added all of the volume options above. We were running defaults up
> until last week, and changing them had no impact on this particular problem.
>
>
>
>
> Thank You,
>
> Logan Barfield
> Tranquil Hosting
>
> On Fri, Jan 29, 2016 at 9:28 PM, Atin Mukherjee <amukherj@redhat.com> wrote:
>
>    Could you paste output of gluster volume info?
>
>    ~Atin
>
>    On 01/29/2016 11:59 PM, Logan Barfield wrote:
>    > We're running a fairly large 2-replica volume across two servers. The
>    > volume is approximately 20TB of small 1K-4MB files. The volume is
>    > exported via NFS, and mounted remotely by two clients.
>    >
>    > For the past few weeks the Gluster brick processes have been randomly
>    > restarting. Luckily they've been doing so at non-peak times, so we
>    > didn't notice until our monitoring checks happened to pick up on a
>    > zombied 'glusterfs' process.
>    >
>    > From the logs it looks like something is blocking communication to the
>    > brick processes, and Gluster automatically restarts everything to
>    > compensate. I've so far not been able to figure out the underlying cause.
>    >
>    > I've included log snippets from 'glustershd.log' and
>    > 'etc-glusterfs-glusterd.vol.log' here. If anyone can provide some
>    > insight into the issue it would be greatly appreciated. I'll also be
>    > happy to provide any further details as needed.
>    >
>    >
>    > [2016-01-29 05:03:47.039886] I [MSGID: 106144] [glusterd-pmap.c:274:pmap_registry_remove] 0-pmap: removing brick /export/data/brick02 on port 49155
>    > [2016-01-29 05:03:47.075521] W [socket.c:588:__socket_rwv] 0-management: readv on /var/run/gluster/53a233b05f5d4be45dc94391bc3ebfe5.socket failed (No data available)
>    > [2016-01-29 05:03:47.078282] I [MSGID: 106005] [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management: Brick gluster-stor02:/export/data/brick02 has disconnected from glusterd.
>    > [2016-01-29 05:03:47.149161] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0() [0x3e47a079d1] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xcd) [0x405e6d] -->/usr/sbin/glusterd(cleanup_and_exit+0x65) [0x4059d5] ) 0-: received signum (15), shutting down
>    > [2016-01-29 05:03:54.067012] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
>    > [2016-01-29 05:03:54.071901] I [MSGID: 106478] [glusterd.c:1350:init] 0-management: Maximum allowed open file descriptors set to 65536
>    > [2016-01-29 05:03:54.071935] I [MSGID: 106479] [glusterd.c:1399:init] 0-management: Using /var/lib/glusterd as working directory
>    > [2016-01-29 05:03:54.075655] E [rpc-transport.c:292:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.7.6/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
>    > [2016-01-29 05:03:54.075672] W [rpc-transport.c:296:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
>    > [2016-01-29 05:03:54.075680] W [rpcsvc.c:1597:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
>    > [2016-01-29 05:03:54.075687] E [MSGID: 106243] [glusterd.c:1623:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
>    > [2016-01-29 05:03:55.869717] I [MSGID: 106513] [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30702
>    > [2016-01-29 05:03:55.995747] I [MSGID: 106498] [glusterd-handler.c:3579:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
>    > [2016-01-29 05:03:55.995866] I [rpc-clnt.c:984:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
>    > [2016-01-29 05:03:56.000937] I [MSGID: 106544] [glusterd.c:159:glusterd_uuid_init] 0-management: retrieved UUID: 9b103ea8-d248-44fc-8f80-3e87f7c4971c
>    > Final graph:
>    > +------------------------------------------------------------------------------+
>    >   1: volume management
>    >   2:     type mgmt/glusterd
>    >   3:     option rpc-auth.auth-glusterfs on
>    >   4:     option rpc-auth.auth-unix on
>    >   5:     option rpc-auth.auth-null on
>    >   6:     option rpc-auth-allow-insecure on
>    >   7:     option transport.socket.listen-backlog 128
>    >   8:     option ping-timeout 30
>    >   9:     option transport.socket.read-fail-log off
>    >  10:     option transport.socket.keepalive-interval 2
>    >  11:     option transport.socket.keepalive-time 10
>    >  12:     option transport-type rdma
>    >  13:     option working-directory /var/lib/glusterd
>    >  14: end-volume
>    >  15:
>    > +------------------------------------------------------------------------------+
>    > [2016-01-29 05:03:56.002570] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>    > [2016-01-29 05:03:56.003098] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>    > [2016-01-29 05:03:56.003158] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>    > [2016-01-29 05:03:56.855628] I [MSGID: 106493] [glusterd-rpc-ops.c:480:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 388a8bb4-c530-44ff-838b-8f7b9e4c95db, host: 10.1.1.10, port: 0
>    > [2016-01-29 05:03:56.856787] I [rpc-clnt.c:984:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
>    > [2016-01-29 05:03:57.859093] I [MSGID: 106540] [glusterd-utils.c:4191:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV3 successfully
>    > [2016-01-29 05:03:57.860228] I [MSGID: 106540] [glusterd-utils.c:4200:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered MOUNTV1 successfully
>    > [2016-01-29 05:03:57.861329] I [MSGID: 106540] [glusterd-utils.c:4209:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NFSV3 successfully
>    > [2016-01-29 05:03:57.862421] I [MSGID: 106540] [glusterd-utils.c:4218:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v4 successfully
>    > [2016-01-29 05:03:57.863510] I [MSGID: 106540] [glusterd-utils.c:4227:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered NLM v1 successfully
>    > [2016-01-29 05:03:57.864600] I [MSGID: 106540] [glusterd-utils.c:4236:glusterd_nfs_pmap_deregister] 0-glusterd: De-registered ACL v3 successfully
>    > [2016-01-29 05:03:57.870948] W [socket.c:3009:socket_connect] 0-nfs: Ignore failed connection attempt on , (No such file or directory)
>    >
>    >
>    >
>    >
>    >
>    >
>    >
>    > [2016-01-29 05:03:47.075614] W [socket.c:588:__socket_rwv] 0-data02-client-1: readv on 10.1.1.10:49155 failed (No data available)
>    > [2016-01-29 05:03:47.076871] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-data02-client-1: disconnected from data02-client-1. Client process will keep trying to connect to glusterd until brick's port is available
>    > [2016-01-29 05:03:47.170284] W [socket.c:588:__socket_rwv] 0-glusterfs: readv on 127.0.0.1:24007 failed (No data available)
>    > [2016-01-29 05:03:47.639163] W [socket.c:588:__socket_rwv] 0-data02-client-0: readv on 10.1.1.11:49153 failed (No data available)
>    > [2016-01-29 05:03:47.639206] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-data02-client-0: disconnected from data02-client-0. Client process will keep trying to connect to glusterd until brick's port is available
>    > [2016-01-29 05:03:47.640222] E [MSGID: 108006] [afr-common.c:3880:afr_notify] 0-data02-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>    > [2016-01-29 05:03:57.872983] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0() [0x3e47a079d1] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e6d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059d5] ) 0-: received signum (15), shutting down
>    > [2016-01-29 05:03:58.881541] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6 (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8d72de580ccac07d2ecfc2491a9b1648.socket --xlator-option *replicate*.node-uuid=9b103ea8-d248-44fc-8f80-3e87f7c4971c)
>    > [2016-01-29 05:03:58.890833] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
>    > [2016-01-29 05:03:59.340030] I [graph.c:269:gf_add_cmdline_options] 0-data02-replicate-0: adding option 'node-uuid' for volume 'data02-replicate-0' with value '9b103ea8-d248-44fc-8f80-3e87f7c4971c'
>    > [2016-01-29 05:03:59.342682] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
>    > [2016-01-29 05:03:59.342742] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
>    > [2016-01-29 05:03:59.342827] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
>    > [2016-01-29 05:03:59.342892] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 5
>    > [2016-01-29 05:03:59.342917] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 6
>    > [2016-01-29 05:03:59.343563] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 8
>    > [2016-01-29 05:03:59.343569] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 7
>    > [2016-01-29 05:03:59.343657] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 9
>    > [2016-01-29 05:03:59.343705] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 11
>    > [2016-01-29 05:03:59.343710] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 10
>    > [2016-01-29 05:03:59.344278] I [MSGID: 114020] [client.c:2118:notify] 0-data02-client-0: parent translators are ready, attempting connect on transport
>    > [2016-01-29 05:03:59.346553] I [MSGID: 114020] [client.c:2118:notify] 0-data02-client-1: parent translators are ready, attempting connect on transport
>    > Final graph:
>    > +------------------------------------------------------------------------------+
>    >   1: volume data02-client-0
>    >   2:     type protocol/client
>    >   3:     option ping-timeout 42
>    >   4:     option remote-host gluster-stor01
>    >   5:     option remote-subvolume /export/data/brick02
>    >   6:     option transport-type socket
>    >   7:     option username 5cc4f5d1-bcc8-4e06-ac74-520b20e2b452
>    >   8:     option password 66b85782-5833-4f2d-ad0e-8de75247b094F
>    >   9:     option event-threads 11
>    >  10: end-volume
>    >  11:
>    >  12: volume data02-client-1
>    >  13:     type protocol/client
>    >  14:     option ping-timeout 42
>    >  15:     option remote-host gluster-stor02
>    >  16:     option remote-subvolume /export/data/brick02
>    >  17:     option transport-type socket
>    >  18:     option username 5cc4f5d1-bcc8-4e06-ac74-520b20e2b452
>    >  19:     option password 66b85782-5833-4f2d-ad0e-8de75247b094
>    >  20:     option event-threads 11
>    >  21: end-volume
>    >  22:
>    >  23: volume data02-replicate-0
>    >  24:     type cluster/replicate
>    >  25:     option node-uuid 9b103ea8-d248-44fc-8f80-3e87f7c4971c
>    >  26:     option background-self-heal-count 0
>    >  27:     option metadata-self-heal on
>    >  28:     option data-self-heal on
>    >  29:     option entry-self-heal on
>    >  30:     option self-heal-daemon enable
>    >  31:     option iam-self-heal-daemon yes
>    >  32:     subvolumes data02-client-0 data02-client-1
>    >  33: end-volume
>    >  34:
>    >  35: volume glustershd
>    >  36:     type debug/io-stats
>    >  37:     subvolumes data02-replicate-0
>    >  38: end-volume
>    >  39:
>    > +------------------------------------------------------------------------------+
>    > [2016-01-29 05:03:59.348913] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-data02-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
>    > [2016-01-29 05:03:59.348960] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-data02-client-1: disconnected from data02-client-1. Client process will keep trying to connect to glusterd until brick's port is available
>    > [2016-01-29 05:03:59.436909] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-data02-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
>    > [2016-01-29 05:03:59.436974] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-data02-client-0: disconnected from data02-client-0. Client process will keep trying to connect to glusterd until brick's port is available
>    > [2016-01-29 05:03:59.436991] E [MSGID: 108006] [afr-common.c:3880:afr_notify] 0-data02-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>    > [2016-01-29 05:04:02.886317] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-data02-client-0: changing port to 49153 (from 0)
>    > [2016-01-29 05:04:02.888761] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-data02-client-1: changing port to 49155 (from 0)
>    > [2016-01-29 05:04:02.891105] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-data02-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
>    > [2016-01-29 05:04:02.891360] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-data02-client-0: Connected to data02-client-0, attached to remote volume '/export/data/brick02'.
>    > [2016-01-29 05:04:02.891373] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-data02-client-0: Server and Client lk-version numbers are not same, reopening the fds
>    > [2016-01-29 05:04:02.891403] I [MSGID: 108005] [afr-common.c:3841:afr_notify] 0-data02-replicate-0: Subvolume 'data02-client-0' came back up; going online.
>    > [2016-01-29 05:04:02.891518] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-data02-client-0: Server lk version = 1
>    > [2016-01-29 05:04:02.893074] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-data02-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
>    > [2016-01-29 05:04:02.893251] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-data02-client-1: Connected to data02-client-1, attached to remote volume '/export/data/brick02'.
>    > [2016-01-29 05:04:02.893276] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-data02-client-1: Server and Client lk-version numbers are not same, reopening the fds
>    > [2016-01-29 05:04:02.893401] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-data02-client-1: Server lk version = 1
>    >
>    >
>    > _______________________________________________
>    > Gluster-devel mailing list
>    > Gluster-devel@gluster.org
>    > http://www.gluster.org/mailman/listinfo/gluster-devel
>    >
>
>