<div dir="ltr">Hi Atin, Thanks for the reply. Im not sure which logs are relevant so ill just attach them all in a gz file. <div><br></div><div>I ran a sudo gluster volume start gfsvolume force at 2015-03-19 05:49 </div><div>i hope this helps. </div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div>Thank You Kindly,</div><div>Kaamesh</div><div><br></div></div></div></div><div class="gmail_quote">On Sun, Mar 15, 2015 at 11:41 PM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Could you attach the logs for the analysis?<br>

~Atin

On 03/13/2015 03:29 PM, Kaamesh Kamalaaharan wrote:
> Hi guys. I've been using Gluster for a while now and, despite a few hiccups,
> I find it a great system to use. One of my more persistent hiccups is an
> issue with one brick going offline.
>
> My setup is a 2-brick, 2-node replica volume. My main brick is on gfs1,
> which has not given me any problems. The brick on gfs2, however, keeps
> going offline. Following
> http://www.gluster.org/pipermail/gluster-users/2014-June/017583.html
> temporarily fixes the error, but the brick goes offline again within the
> hour.
>
> This is what I get from my volume status command:
>
> sudo gluster volume status
>
>> Status of volume: gfsvolume
>> Gluster process                                 Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gfs1:/export/sda/brick                    49153   Y       9760
>> Brick gfs2:/export/sda/brick                    N/A     N       13461
>> NFS Server on localhost                         2049    Y       13473
>> Self-heal Daemon on localhost                   N/A     Y       13480
>> NFS Server on gfs1                              2049    Y       16166
>> Self-heal Daemon on gfs1                        N/A     Y       16173
>>
>> Task Status of Volume gfsvolume
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
> Doing sudo gluster volume start gfsvolume force gives me this:
>
> sudo gluster volume status
>
>> Status of volume: gfsvolume
>> Gluster process                                 Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gfs1:/export/sda/brick                    49153   Y       9760
>> Brick gfs2:/export/sda/brick                    49153   Y       13461
>> NFS Server on localhost                         2049    Y       13473
>> Self-heal Daemon on localhost                   N/A     Y       13480
>> NFS Server on gfs1                              2049    Y       16166
>> Self-heal Daemon on gfs1                        N/A     Y       16173
>>
>> Task Status of Volume gfsvolume
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>
> Half an hour later, the brick goes down again. Next time it drops I plan to
> leave something like this running so I can pin down the exact moment (a
> rough, untested sketch; the grep just matches the gfs2 brick line):
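>
> while true; do
>     echo "$(date): $(sudo gluster volume status gfsvolume | grep 'Brick gfs2')"
>     sleep 60
> done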
>
> This is my glustershd.log. I snipped it because the rest of the log is a
> repeat of the same error:
>
>> [2015-03-13 02:09:41.951556] I [glusterfsd.c:1959:main]
>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0
>> (/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>> /var/log/glusterfs/glustershd.log -S
>> /var/run/deac2f873d0ac5b6c3e84b23c4790172.socket --xlator-option
>> *replicate*.node-uuid=adbb7505-3342-4c6d-be3d-75938633612c)
>> [2015-03-13 02:09:41.954173] I [socket.c:3561:socket_init]
>> 0-socket.glusterfsd: SSL support is NOT enabled
>> [2015-03-13 02:09:41.954236] I [socket.c:3576:socket_init]
>> 0-socket.glusterfsd: using system polling thread
>> [2015-03-13 02:09:41.954421] I [socket.c:3561:socket_init] 0-glusterfs:
>> SSL support is NOT enabled
>> [2015-03-13 02:09:41.954443] I [socket.c:3576:socket_init] 0-glusterfs:
>> using system polling thread
>> [2015-03-13 02:09:41.956731] I [graph.c:254:gf_add_cmdline_options]
>> 0-gfsvolume-replicate-0: adding option 'node-uuid' for volume
>> 'gfsvolume-replicate-0' with value 'adbb7505-3342-4c6d-be3d-75938633612c'
>> [2015-03-13 02:09:41.960210] I [rpc-clnt.c:972:rpc_clnt_connection_init]
>> 0-gfsvolume-client-1: setting frame-timeout to 90
>> [2015-03-13 02:09:41.960288] I [socket.c:3561:socket_init]
>> 0-gfsvolume-client-1: SSL support is NOT enabled
>> [2015-03-13 02:09:41.960301] I [socket.c:3576:socket_init]
>> 0-gfsvolume-client-1: using system polling thread
>> [2015-03-13 02:09:41.961095] I [rpc-clnt.c:972:rpc_clnt_connection_init]
>> 0-gfsvolume-client-0: setting frame-timeout to 90
>> [2015-03-13 02:09:41.961134] I [socket.c:3561:socket_init]
>> 0-gfsvolume-client-0: SSL support is NOT enabled
>> [2015-03-13 02:09:41.961145] I [socket.c:3576:socket_init]
>> 0-gfsvolume-client-0: using system polling thread
>> [2015-03-13 02:09:41.961173] I [client.c:2273:notify]
>> 0-gfsvolume-client-0: parent translators are ready, attempting connect on
>> transport
>> [2015-03-13 02:09:41.961412] I [client.c:2273:notify]
>> 0-gfsvolume-client-1: parent translators are ready, attempting connect on
>> transport
>> Final graph:
>>
>> +------------------------------------------------------------------------------+
>>   1: volume gfsvolume-client-0
>>   2:     type protocol/client
>>   3:     option remote-host gfs1
>>   4:     option remote-subvolume /export/sda/brick
>>   5:     option transport-type socket
>>   6:     option frame-timeout 90
>>   7:     option ping-timeout 30
>>   8: end-volume
>>   9:
>>  10: volume gfsvolume-client-1
>>  11:     type protocol/client
>>  12:     option remote-host gfs2
>>  13:     option remote-subvolume /export/sda/brick
>>  14:     option transport-type socket
>>  15:     option frame-timeout 90
>>  16:     option ping-timeout 30
>>  17: end-volume
>>  18:
>>  19: volume gfsvolume-replicate-0
>>  20:     type cluster/replicate
>>  21:     option node-uuid adbb7505-3342-4c6d-be3d-75938633612c
>>  22:     option background-self-heal-count 0
>>  23:     option metadata-self-heal on
>>  24:     option data-self-heal on
>>  25:     option entry-self-heal on
>>  26:     option self-heal-daemon on
>>  27:     option data-self-heal-algorithm diff
>>  28:     option quorum-type fixed
>>  29:     option quorum-count 1
>>  30:     option iam-self-heal-daemon yes
>>  31:     subvolumes gfsvolume-client-0 gfsvolume-client-1
>>  32: end-volume
>>  33:
>>  34: volume glustershd
>>  35:     type debug/io-stats
>>  36:     subvolumes gfsvolume-replicate-0
>>  37: end-volume
>>
>> +------------------------------------------------------------------------------+
>> [2015-03-13 02:09:41.961871] I [rpc-clnt.c:1685:rpc_clnt_reconfig]
>> 0-gfsvolume-client-1: changing port to 49153 (from 0)
>> [2015-03-13 02:09:41.962129] I
>> [client-handshake.c:1659:select_server_supported_programs]
>> 0-gfsvolume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
>> (330)
>> [2015-03-13 02:09:41.962344] I
>> [client-handshake.c:1456:client_setvolume_cbk] 0-gfsvolume-client-1:
>> Connected to 172.20.20.22:49153, attached to remote volume
>> '/export/sda/brick'.
>> [2015-03-13 02:09:41.962363] I
>> [client-handshake.c:1468:client_setvolume_cbk] 0-gfsvolume-client-1: Server
>> and Client lk-version numbers are not same, reopening the fds
>> [2015-03-13 02:09:41.962416] I [afr-common.c:3922:afr_notify]
>> 0-gfsvolume-replicate-0: Subvolume 'gfsvolume-client-1' came back up; going
>> online.
>> [2015-03-13 02:09:41.962487] I
>> [client-handshake.c:450:client_set_lk_version_cbk] 0-gfsvolume-client-1:
>> Server lk version = 1
>> [2015-03-13 02:09:41.963109] E
>> [afr-self-heald.c:1479:afr_find_child_position] 0-gfsvolume-replicate-0:
>> getxattr failed on gfsvolume-client-0 - (Transport endpoint is not
>> connected)
>> [2015-03-13 02:09:41.963502] I
>> [afr-self-heald.c:1687:afr_dir_exclusive_crawl] 0-gfsvolume-replicate-0:
>> Another crawl is in progress for gfsvolume-client-1
>> [2015-03-13 02:09:41.967478] E
>> [afr-self-heal-entry.c:2364:afr_sh_post_nonblocking_entry_cbk]
>> 0-gfsvolume-replicate-0: Non Blocking entrylks failed for
>> <gfid:66af7dc1-a2e6-4919-9ea1-ad75fe2d40b9>.
>> [2015-03-13 02:09:41.968550] E
>> [afr-self-heal-entry.c:2364:afr_sh_post_nonblocking_entry_cbk]
>> 0-gfsvolume-replicate-0: Non Blocking entrylks failed for
>> <gfid:8a7cfa39-9a12-43cd-a9f3-9142b7403d0e>.
>> [2015-03-13 02:09:41.969663] E
>> [afr-self-heal-entry.c:2364:afr_sh_post_nonblocking_entry_cbk]
>> 0-gfsvolume-replicate-0: Non Blocking entrylks failed for
>> <gfid:3762920e-9631-4a52-9a9f-4f04d09e8d84>.
>> [2015-03-13 02:09:41.974345] E
>> [afr-self-heal-entry.c:2364:afr_sh_post_nonblocking_entry_cbk]
>> 0-gfsvolume-replicate-0: Non Blocking entrylks failed for
>> <gfid:66af7dc1-a2e6-4919-9ea1-ad75fe2d40b9>.
>> [2015-03-13 02:09:41.975657] E
>> [afr-self-heal-entry.c:2364:afr_sh_post_nonblocking_entry_cbk]
>> 0-gfsvolume-replicate-0: Non Blocking entrylks failed for
>> <gfid:8a7cfa39-9a12-43cd-a9f3-9142b7403d0e>.
>> [2015-03-13 02:09:41.977020] E
>> [afr-self-heal-entry.c:2364:afr_sh_post_nonblocking_entry_cbk]
>> 0-gfsvolume-replicate-0: Non Blocking entrylks failed for
>> <gfid:3762920e-9631-4a52-9a9f-4f04d09e8d84>.
>> [2015-03-13 02:09:44.307219] I [rpc-clnt.c:1685:rpc_clnt_reconfig]
>> 0-gfsvolume-client-0: changing port to 49153 (from 0)
>> [2015-03-13 02:09:44.307748] I
>> [client-handshake.c:1659:select_server_supported_programs]
>> 0-gfsvolume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version
>> (330)
>> [2015-03-13 02:09:44.448377] I
>> [client-handshake.c:1456:client_setvolume_cbk] 0-gfsvolume-client-0:
>> Connected to 172.20.20.21:49153, attached to remote volume
>> '/export/sda/brick'.
>> [2015-03-13 02:09:44.448418] I
>> [client-handshake.c:1468:client_setvolume_cbk] 0-gfsvolume-client-0: Server
>> and Client lk-version numbers are not same, reopening the fds
>> [2015-03-13 02:09:44.448713] I
>> [client-handshake.c:450:client_set_lk_version_cbk] 0-gfsvolume-client-0:
>> Server lk version = 1
>> [2015-03-13 02:09:44.515112] I
>> [afr-self-heal-common.c:2859:afr_log_self_heal_completion_status]
>> 0-gfsvolume-replicate-0: foreground data self heal is successfully
>> completed, data self heal from gfsvolume-client-0 to sinks
>> gfsvolume-client-1, with 892928 bytes on gfsvolume-client-0, 892928 bytes
>> on gfsvolume-client-1, data - Pending matrix: [ [ 0 155762 ] [ 0 0 ] ]
>> on <gfid:123536cc-c34b-43d7-b0c6-cf80eefa8322>
>> [2015-03-13 02:09:44.809988] I
>> [afr-self-heal-common.c:2859:afr_log_self_heal_completion_status]
>> 0-gfsvolume-replicate-0: foreground data self heal is successfully
>> completed, data self heal from gfsvolume-client-0 to sinks
>> gfsvolume-client-1, with 15998976 bytes on gfsvolume-client-0, 15998976
>> bytes on gfsvolume-client-1, data - Pending matrix: [ [ 0 36506 ] [ 0 0 ]
>> ] on <gfid:b6dc0e74-31bf-469a-b629-ee51ab4cf729>
>> [2015-03-13 02:09:44.946050] W
>> [client-rpc-fops.c:574:client3_3_readlink_cbk] 0-gfsvolume-client-0: remote
>> operation failed: Stale NFS file handle
>> [2015-03-13 02:09:44.946097] I
>> [afr-self-heal-entry.c:1538:afr_sh_entry_impunge_readlink_sink_cbk]
>> 0-gfsvolume-replicate-0: readlink of
>> <gfid:66af7dc1-a2e6-4919-9ea1-ad75fe2d40b9>/PB2_corrected.fastq on
>> gfsvolume-client-1 failed (Stale NFS file handle)
>> [2015-03-13 02:09:44.951370] I
>> [afr-self-heal-entry.c:2321:afr_sh_entry_fix] 0-gfsvolume-replicate-0:
>> <gfid:8a7cfa39-9a12-43cd-a9f3-9142b7403d0e>: Performing conservative merge
>> [2015-03-13 02:09:45.149995] W
>> [client-rpc-fops.c:574:client3_3_readlink_cbk] 0-gfsvolume-client-0: remote
>> operation failed: Stale NFS file handle
>> [2015-03-13 02:09:45.150036] I
>> [afr-self-heal-entry.c:1538:afr_sh_entry_impunge_readlink_sink_cbk]
>> 0-gfsvolume-replicate-0: readlink of
>> <gfid:8a7cfa39-9a12-43cd-a9f3-9142b7403d0e>/Rscript on gfsvolume-client-1
>> failed (Stale NFS file handle)
>> [2015-03-13 02:09:45.214253] W
>> [client-rpc-fops.c:574:client3_3_readlink_cbk] 0-gfsvolume-client-0: remote
>> operation failed: Stale NFS file handle
>> [2015-03-13 02:09:45.214295] I
>> [afr-self-heal-entry.c:1538:afr_sh_entry_impunge_readlink_sink_cbk]
>> 0-gfsvolume-replicate-0: readlink of
>> <gfid:3762920e-9631-4a52-9a9f-4f04d09e8d84>/ananas_d_tmp on
>> gfsvolume-client-1 failed (Stale NFS file handle)
>> [2015-03-13 02:13:27.324856] W [socket.c:522:__socket_rwv]
>> 0-gfsvolume-client-1: readv on 172.20.20.22:49153 failed (No data
>> available)
>> [2015-03-13 02:13:27.324961] I [client.c:2208:client_rpc_notify]
>> 0-gfsvolume-client-1: disconnected from 172.20.20.22:49153. Client
>> process will keep trying to connect to glusterd until brick's port is
>> available
>> [2015-03-13 02:13:37.981531] I [rpc-clnt.c:1685:rpc_clnt_reconfig]
>> 0-gfsvolume-client-1: changing port to 49153 (from 0)
>> [2015-03-13 02:13:37.981781] E [socket.c:2161:socket_connect_finish]
>> 0-gfsvolume-client-1: connection to 172.20.20.22:49153 failed (Connection
>> refused)
>> [2015-03-13 02:13:41.982125] I [rpc-clnt.c:1685:rpc_clnt_reconfig]
>> 0-gfsvolume-client-1: changing port to 49153 (from 0)
>> [2015-03-13 02:13:41.982353] E [socket.c:2161:socket_connect_finish]
>> 0-gfsvolume-client-1: connection to 172.20.20.22:49153 failed (Connection
>> refused)
>> [2015-03-13 02:13:45.982693] I [rpc-clnt.c:1685:rpc_clnt_reconfig]
>> 0-gfsvolume-client-1: changing port to 49153 (from 0)
>> [2015-03-13 02:13:45.982926] E [socket.c:2161:socket_connect_finish]
>> 0-gfsvolume-client-1: connection to 172.20.20.22:49153 failed (Connection
>> refused)
>> [2015-03-13 02:13:49.983309] I [rpc-clnt.c:1685:rpc_clnt_reconfig]
>> 0-gfsvolume-client-1: changing port to 49153 (from 0)
>>
>
> Any help would be greatly appreciated.
>
> Thank You Kindly,
> Kaamesh
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
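
P.S. In case it helps, this is roughly how I bundled the logs before attaching
them (assuming the default log directory, /var/log/glusterfs, on each node;
the archive name is just what I picked):

    sudo tar czf gluster-logs-$(hostname).tar.gz /var/log/glusterfs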