<p dir="ltr">I'll dive into it in detail shortly. Could you provide the cli, cmd_history, and glusterd log files for further debugging?</p>
<p dir="ltr">-Atin<br>
Sent from one plus one</p>
<div class="gmail_quote">On Dec 29, 2015 9:53 PM, "Christophe TREFOIS" <<a href="mailto:christophe.trefois@uni.lu">christophe.trefois@uni.lu</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Atin,<br>
<br>
Same issue. I restarted glusterd and glusterfsd everywhere, and it seems the opCode is still STATEDUMP.<br>
<br>
Any other pointers?<br>
<br>
Kind regards,<br>
<br>
—<br>
Christophe<br>
<br>
<br>
> On 29 Dec 2015, at 16:19, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br>
><br>
><br>
><br>
> On 12/29/2015 07:09 PM, Christophe TREFOIS wrote:<br>
>> Hi,<br>
>><br>
>><br>
>>> On 29 Dec 2015, at 14:27, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br>
>>><br>
>>> It seems like your opCode is STATEDUMP instead of STATUS which is weird.<br>
>>> Are you running a heterogeneous cluster?<br>
>><br>
>> What does that mean? In principle no.<br>
> It means that not all the nodes in the cluster are running the same bits<br>
> (i.e. the same GlusterFS version).<br>
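> A quick way to check for mixed bits is to compare the version string each
> node reports. The sketch below assumes per-node output in the same
> "host: glusterfs X.Y.Z" shape that pdsh -g live 'glusterfs --version | head -1'
> would produce; the sample hostnames/versions are illustrative:

```shell
# One "host: glusterfs X.Y.Z" line per node, as pdsh would emit them.
# (Sample data; in practice: pdsh -g live 'glusterfs --version | head -1')
lines="stor104: glusterfs 3.7.6
stor105: glusterfs 3.7.6
stor106: glusterfs 3.7.6"

# Homogeneous cluster <=> exactly one distinct version across all nodes.
distinct=$(printf '%s\n' "$lines" | awk '{print $3}' | sort -u | wc -l)
[ "$distinct" -eq 1 ] && echo homogeneous || echo heterogeneous
# prints: homogeneous
```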
>><br>
>>> What is the last version you<br>
>>> were running with?<br>
>><br>
>> I think it was 3.7.3 or 3.7.something. I’m not sure, but it was in the 3.7 repo.<br>
>><br>
>>> What's the current cluster op-version?<br>
>><br>
>> [root@highlander ~]# gluster volume get live cluster.op-version<br>
>> Option Value<br>
>> ------ -----<br>
>> cluster.op-version 30600<br>
> Can you bump up the op-version with 'gluster volume set all<br>
> cluster.op-version 30706' and see if the problem goes away? Note that you<br>
> can only bump this up once all nodes are upgraded to 3.7.6.<br>
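> For reference, the op-versions quoted in this thread follow the usual
> packing of a release X.Y.Z into X*10000 + Y*100 + Z, so 30600 above is
> 3.6.0 and 3.7.6 maps to 30706. A small sketch (the gluster invocations
> are shown as comments since they need a live cluster):

```shell
# Pack a GlusterFS release string into its op-version integer.
# Scheme: X.Y.Z -> X*10000 + Y*100 + Z, e.g. 3.7.6 -> 30706, 3.6.0 -> 30600.
ver="3.7.6"
IFS=. read -r major minor patch <<< "$ver"
opver=$(( major * 10000 + minor * 100 + patch ))
echo "$opver"    # prints: 30706

# With every node already on 3.7.6, the bump itself would be:
#   gluster volume set all cluster.op-version 30706
# and can be verified afterwards with:
#   gluster volume get all cluster.op-version
```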
>><br>
>> Thank you,<br>
>><br>
>>><br>
>>> Thanks,<br>
>>> Atin<br>
>>><br>
>>> On 12/29/2015 06:31 PM, Christophe TREFOIS wrote:<br>
>>>> Dear all,<br>
>>>><br>
>>>> I have a 3-node distribute setup with a controller, and today I<br>
>>>> upgraded GlusterFS to 3.7.6 and the nodes to CentOS 7.2.<br>
>>>><br>
>>>> After the upgrade (reboot), I can start the volume fine and also see<br>
>>>> the mounted volume on the controller.<br>
>>>><br>
>>>> However, a gluster volume status <volname> results in an<br>
>>>><br>
>>>> [root@stor104 glusterfs]# gluster volume status live<br>
>>>> Commit failed on localhost. Please check the log file for more details.<br>
>>>><br>
>>>> error.<br>
>>>><br>
>>>> Below some information and log extracts.<br>
>>>><br>
>>>> Thank you for any hints on where to start fixing this,<br>
>>>><br>
>>>> Kind regards,<br>
>>>><br>
>>>> —<br>
>>>> Christophe<br>
>>>><br>
>>>> ——————<br>
>>>><br>
>>>> Here is the gluster info command for the volume.<br>
>>>><br>
>>>> [root@stor104 glusterfs]# gluster volume info live<br>
>>>><br>
>>>> Volume Name: live<br>
>>>> Type: Distribute<br>
>>>> Volume ID: 1328637d-7730-4627-8945-bbe43626d527<br>
>>>> Status: Started<br>
>>>> Number of Bricks: 9<br>
>>>> Transport-type: tcp<br>
>>>> Bricks:<br>
>>>> Brick1: stor104:/zfs/brick0/brick<br>
>>>> Brick2: stor104:/zfs/brick1/brick<br>
>>>> Brick3: stor104:/zfs/brick2/brick<br>
>>>> Brick4: stor106:/zfs/brick0/brick<br>
>>>> Brick5: stor106:/zfs/brick1/brick<br>
>>>> Brick6: stor106:/zfs/brick2/brick<br>
>>>> Brick7: stor105:/zfs/brick0/brick<br>
>>>> Brick8: stor105:/zfs/brick1/brick<br>
>>>> Brick9: stor105:/zfs/brick2/brick<br>
>>>> Options Reconfigured:<br>
>>>> performance.io-thread-count: 8<br>
>>>> nfs.disable: on<br>
>>>> performance.write-behind-window-size: 4MB<br>
>>>> performance.client-io-threads: on<br>
>>>> performance.cache-size: 1GB<br>
>>>> performance.cache-refresh-timeout: 60<br>
>>>> performance.cache-max-file-size: 4MB<br>
>>>> cluster.data-self-heal-algorithm: full<br>
>>>> diagnostics.client-log-level: ERROR<br>
>>>> diagnostics.brick-log-level: ERROR<br>
>>>> cluster.min-free-disk: 1%<br>
>>>> server.allow-insecure: on<br>
>>>><br>
>>>> Relevant log parts when carrying out the command:<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cli.log <==<br>
>>>> [2015-12-29 12:51:03.216998] I [cli.c:721:main] 0-cli: Started running<br>
>>>> gluster with version 3.7.6<br>
>>>> [2015-12-29 12:51:03.226123] I<br>
>>>> [cli-cmd-volume.c:1926:cli_check_gsync_present] 0-: geo-replication not<br>
>>>> installed<br>
>>>> [2015-12-29 12:51:03.226623] I [MSGID: 101190]<br>
>>>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
>>>> with index 1<br>
>>>> [2015-12-29 12:51:03.226723] I [socket.c:2355:socket_event_handler]<br>
>>>> 0-transport: disconnecting now<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cmd_history.log <==<br>
>>>> [2015-12-29 12:51:03.236182] : volume status : SUCCESS<br>
>>>><br>
>>>> ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==<br>
>>>> [2015-12-29 12:51:03.238562] I [MSGID: 106499]<br>
>>>> [glusterd-handler.c:4267:__glusterd_handle_status_volume] 0-management:<br>
>>>> Received status volume req for volume live<br>
>>>> [2015-12-29 12:51:03.246043] E [MSGID: 106396]<br>
>>>> [glusterd-op-sm.c:2985:_add_task_to_dict] 0-management: Statedump<br>
>>>> operation doesn't have a task_id<br>
>>>> [2015-12-29 12:51:03.246081] E [MSGID: 106060]<br>
>>>> [glusterd-op-sm.c:3055:glusterd_aggregate_task_status] 0-management:<br>
>>>> Failed to add task details to dict<br>
>>>> [2015-12-29 12:51:03.246098] E [MSGID: 106123]<br>
>>>> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of<br>
>>>> operation 'Volume Status' failed on localhost<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cmd_history.log <==<br>
>>>> [2015-12-29 12:51:03.249364] : volume status : FAILED : Commit failed<br>
>>>> on localhost. Please check the log file for more details.<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cli.log <==<br>
>>>> [2015-12-29 12:51:03.249647] I [input.c:36:cli_batch] 0-: Exiting with: 0<br>
>>>><br>
>>>> All bricks are up:<br>
>>>><br>
>>>> [root@highlander glusterfs]# pdsh -g live 'df -h | grep brick*'<br>
>>>> stor106: brick0 50T 33T 18T 66% /zfs/brick0<br>
>>>> stor106: brick1 50T 33T 18T 66% /zfs/brick1<br>
>>>> stor106: brick2 50T 33T 18T 66% /zfs/brick2<br>
>>>> stor105: brick0 40T 23T 18T 57% /zfs/brick0<br>
>>>> stor105: brick1 40T 23T 18T 57% /zfs/brick1<br>
>>>> stor105: brick2 40T 23T 18T 57% /zfs/brick2<br>
>>>> stor104: brick0 40T 23T 18T 57% /zfs/brick0<br>
>>>> stor104: brick1 40T 23T 18T 57% /zfs/brick1<br>
>>>> stor104: brick2 40T 23T 18T 57% /zfs/brick2<br>
>>>><br>
>>>> Package details<br>
>>>><br>
>>>> [root@highlander glusterfs]# rpm -qa | grep gluster<br>
>>>> glusterfs-client-xlators-3.7.6-1.el7.x86_64<br>
>>>> glusterfs-server-3.7.6-1.el7.x86_64<br>
>>>> samba-vfs-glusterfs-4.2.3-10.el7.x86_64<br>
>>>> glusterfs-3.7.6-1.el7.x86_64<br>
>>>> glusterfs-cli-3.7.6-1.el7.x86_64<br>
>>>> glusterfs-libs-3.7.6-1.el7.x86_64<br>
>>>> glusterfs-fuse-3.7.6-1.el7.x86_64<br>
>>>> glusterfs-api-3.7.6-1.el7.x86_64<br>
>>>><br>
>>>> [root@highlander glusterfs]# pdsh -g live 'rpm -qa | grep gluster'<br>
>>>> stor105: glusterfs-libs-3.7.6-1.el7.x86_64<br>
>>>> stor105: glusterfs-api-3.7.6-1.el7.x86_64<br>
>>>> stor105: glusterfs-3.7.6-1.el7.x86_64<br>
>>>> stor105: glusterfs-fuse-3.7.6-1.el7.x86_64<br>
>>>> stor105: glusterfs-cli-3.7.6-1.el7.x86_64<br>
>>>> stor105: glusterfs-client-xlators-3.7.6-1.el7.x86_64<br>
>>>> stor105: glusterfs-server-3.7.6-1.el7.x86_64<br>
>>>><br>
>>>> stor104: glusterfs-server-3.7.6-1.el7.x86_64<br>
>>>> stor104: glusterfs-libs-3.7.6-1.el7.x86_64<br>
>>>> stor104: glusterfs-api-3.7.6-1.el7.x86_64<br>
>>>> stor104: glusterfs-3.7.6-1.el7.x86_64<br>
>>>> stor104: glusterfs-fuse-3.7.6-1.el7.x86_64<br>
>>>> stor104: glusterfs-cli-3.7.6-1.el7.x86_64<br>
>>>> stor104: glusterfs-client-xlators-3.7.6-1.el7.x86_64<br>
>>>><br>
>>>> stor106: glusterfs-3.7.6-1.el7.x86_64<br>
>>>> stor106: glusterfs-cli-3.7.6-1.el7.x86_64<br>
>>>> stor106: glusterfs-server-3.7.6-1.el7.x86_64<br>
>>>> stor106: glusterfs-libs-3.7.6-1.el7.x86_64<br>
>>>> stor106: glusterfs-client-xlators-3.7.6-1.el7.x86_64<br>
>>>> stor106: glusterfs-api-3.7.6-1.el7.x86_64<br>
>>>> stor106: glusterfs-fuse-3.7.6-1.el7.x86_64<br>
>>>><br>
>>>> More detailed logs:<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cli.log <==<br>
>>>> [2015-12-29 12:57:23.520821] I [cli.c:721:main] 0-cli: Started running<br>
>>>> gluster with version 3.7.6<br>
>>>> [2015-12-29 12:57:23.530898] I<br>
>>>> [cli-cmd-volume.c:1926:cli_check_gsync_present] 0-: geo-replication not<br>
>>>> installed<br>
>>>> [2015-12-29 12:57:23.531844] I [MSGID: 101190]<br>
>>>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
>>>> with index 1<br>
>>>> [2015-12-29 12:57:23.532004] I [socket.c:2355:socket_event_handler]<br>
>>>> 0-transport: disconnecting now<br>
>>>><br>
>>>> ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==<br>
>>>> [2015-12-29 12:57:23.534830] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:214:glusterd_generate_txn_id] 0-management:<br>
>>>> Transaction_id = c2886398-03c1-4d32-924a-9f92367be85c<br>
>>>> [2015-12-29 12:57:23.534946] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:311:glusterd_set_txn_opinfo] 0-management:<br>
>>>> Successfully set opinfo for transaction ID :<br>
>>>> c2886398-03c1-4d32-924a-9f92367be85c<br>
>>>> [2015-12-29 12:57:23.534975] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:318:glusterd_set_txn_opinfo] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.535001] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1767:gd_sync_task_begin] 0-management: Transaction ID<br>
>>>> : c2886398-03c1-4d32-924a-9f92367be85c<br>
>>>> [2015-12-29 12:57:23.535031] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1807:gd_sync_task_begin] 0-glusterd: Failed to get<br>
>>>> volume name<br>
>>>> [2015-12-29 12:57:23.535078] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:1695:glusterd_op_stage_status_volume] 0-management:<br>
>>>> Returning: 0<br>
>>>> [2015-12-29 12:57:23.535103] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:5515:glusterd_op_stage_validate] 0-management: OP =<br>
>>>> 18. Returning 0<br>
>>>> [2015-12-29 12:57:23.535555] D<br>
>>>> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (--><br>
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f410a9d5a82] (--><br>
>>>> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f410a7a587b]<br>
>>>> (--> /lib64/libgfrpc.so.0(+0x13e74)[0x7f410a7a5e74] (--><br>
>>>> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x34f)[0x7f410a7a1c4f] (--><br>
>>>> /usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(gd_syncop_submit_request+0x1a5)[0x7f40ff5d24b5]<br>
>>>> ))))) 0-: <a href="http://192.168.123.105:24007" rel="noreferrer" target="_blank">192.168.123.105:24007</a>: ping timer event already removed<br>
>>>> [2015-12-29 12:57:23.535935] D<br>
>>>> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (--><br>
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f410a9d5a82] (--><br>
>>>> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f410a7a587b]<br>
>>>> (--> /lib64/libgfrpc.so.0(+0x13e74)[0x7f410a7a5e74] (--><br>
>>>> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x34f)[0x7f410a7a1c4f] (--><br>
>>>> /usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(gd_syncop_submit_request+0x1a5)[0x7f40ff5d24b5]<br>
>>>> ))))) 0-: <a href="http://192.168.123.1:24007" rel="noreferrer" target="_blank">192.168.123.1:24007</a>: ping timer event already removed<br>
>>>> [2015-12-29 12:57:23.536279] D<br>
>>>> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (--><br>
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f410a9d5a82] (--><br>
>>>> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f410a7a587b]<br>
>>>> (--> /lib64/libgfrpc.so.0(+0x13e74)[0x7f410a7a5e74] (--><br>
>>>> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x34f)[0x7f410a7a1c4f] (--><br>
>>>> /usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(gd_syncop_submit_request+0x1a5)[0x7f40ff5d24b5]<br>
>>>> ))))) 0-: <a href="http://192.168.123.106:24007" rel="noreferrer" target="_blank">192.168.123.106:24007</a>: ping timer event already removed<br>
>>>> [2015-12-29 12:57:23.536358] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1312:gd_stage_op_phase] 0-management: Sent stage op<br>
>>>> req for 'Volume Status' to 3 peers<br>
>>>> [2015-12-29 12:57:23.538064] D [MSGID: 0]<br>
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:<br>
>>>> Friend found... state: Peer in Cluster<br>
>>>> [2015-12-29 12:57:23.539059] D [logging.c:1952:_gf_msg_internal]<br>
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
>>>> About to flush least recently used log message to disk<br>
>>>> The message "D [MSGID: 0]<br>
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:<br>
>>>> Friend found... state: Peer in Cluster" repeated 2 times between<br>
>>>> [2015-12-29 12:57:23.538064] and [2015-12-29 12:57:23.538833]<br>
>>>> [2015-12-29 12:57:23.539057] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:7010:glusterd_op_bricks_select] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.539163] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1709:gd_brick_op_phase] 0-management: Sent op req to<br>
>>>> 0 bricks<br>
>>>> [2015-12-29 12:57:23.539213] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:3360:glusterd_op_status_volume] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.539237] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:5642:glusterd_op_commit_perform] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.541960] D [MSGID: 0]<br>
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:<br>
>>>> Friend found... state: Peer in Cluster<br>
>>>> [2015-12-29 12:57:23.542525] D [logging.c:1952:_gf_msg_internal]<br>
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
>>>> About to flush least recently used log message to disk<br>
>>>> The message "D [MSGID: 0]<br>
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:<br>
>>>> Friend found... state: Peer in Cluster" repeated 2 times between<br>
>>>> [2015-12-29 12:57:23.541960] and [2015-12-29 12:57:23.542186]<br>
>>>> [2015-12-29 12:57:23.542523] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1449:gd_commit_op_phase] 0-management: Sent commit op<br>
>>>> req for 'Volume Status' to 3 peers<br>
>>>> [2015-12-29 12:57:23.542643] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:4487:glusterd_op_modify_op_ctx] 0-management: op_ctx<br>
>>>> modification not required for status operation being performed<br>
>>>> [2015-12-29 12:57:23.542680] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:255:glusterd_get_txn_opinfo] 0-management:<br>
>>>> Successfully got opinfo for transaction ID :<br>
>>>> c2886398-03c1-4d32-924a-9f92367be85c<br>
>>>> [2015-12-29 12:57:23.542704] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:259:glusterd_get_txn_opinfo] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.542732] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:360:glusterd_clear_txn_opinfo] 0-management:<br>
>>>> Successfully cleared opinfo for transaction ID :<br>
>>>> c2886398-03c1-4d32-924a-9f92367be85c<br>
>>>> [2015-12-29 12:57:23.542756] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:364:glusterd_clear_txn_opinfo] 0-management: Returning 0<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cmd_history.log <==<br>
>>>> [2015-12-29 12:57:23.542781] : volume status : SUCCESS<br>
>>>><br>
>>>> ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==<br>
>>>> [2015-12-29 12:57:23.542847] D [MSGID: 0]<br>
>>>> [glusterd-rpc-ops.c:205:glusterd_op_send_cli_response] 0-management:<br>
>>>> Returning 0<br>
>>>> [2015-12-29 12:57:23.545697] I [MSGID: 106499]<br>
>>>> [glusterd-handler.c:4267:__glusterd_handle_status_volume] 0-management:<br>
>>>> Received status volume req for volume live<br>
>>>> [2015-12-29 12:57:23.545818] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:214:glusterd_generate_txn_id] 0-management:<br>
>>>> Transaction_id = fd3671e4-fa7a-4913-a6dd-b37884a3a715<br>
>>>> [2015-12-29 12:57:23.545872] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:311:glusterd_set_txn_opinfo] 0-management:<br>
>>>> Successfully set opinfo for transaction ID :<br>
>>>> fd3671e4-fa7a-4913-a6dd-b37884a3a715<br>
>>>> [2015-12-29 12:57:23.545905] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:318:glusterd_set_txn_opinfo] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.545926] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1767:gd_sync_task_begin] 0-management: Transaction ID<br>
>>>> : fd3671e4-fa7a-4913-a6dd-b37884a3a715<br>
>>>> [2015-12-29 12:57:23.545978] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:562:glusterd_mgmt_v3_lock] 0-management: Trying to<br>
>>>> acquire lock of vol live for 305c0f00-0f11-4da3-a470-50b6e6c14976 as<br>
>>>> live_vol<br>
>>>> [2015-12-29 12:57:23.546388] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:618:glusterd_mgmt_v3_lock] 0-management: Lock for vol<br>
>>>> live successfully held by 305c0f00-0f11-4da3-a470-50b6e6c14976<br>
>>>> [2015-12-29 12:57:23.546478] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:411:gd_syncop_mgmt_v3_lock] 0-glusterd: Returning 0<br>
>>>> [2015-12-29 12:57:23.549943] D [logging.c:1952:_gf_msg_internal]<br>
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
>>>> About to flush least recently used log message to disk<br>
>>>> The message "D [MSGID: 0] [glusterd-syncop.c:411:gd_syncop_mgmt_v3_lock]<br>
>>>> 0-glusterd: Returning 0" repeated 2 times between [2015-12-29<br>
>>>> 12:57:23.546478] and [2015-12-29 12:57:23.546565]<br>
>>>> [2015-12-29 12:57:23.549941] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1205:gd_lock_op_phase] 0-management: Sent lock op req<br>
>>>> for 'Volume Status' to 3 peers. Returning 0<br>
>>>> [2015-12-29 12:57:23.550063] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1424:glusterd_volinfo_find] 0-management: Volume live<br>
>>>> found<br>
>>>> [2015-12-29 12:57:23.550087] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1431:glusterd_volinfo_find] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.550121] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1424:glusterd_volinfo_find] 0-management: Volume live<br>
>>>> found<br>
>>>> [2015-12-29 12:57:23.550140] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1431:glusterd_volinfo_find] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.550167] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:1695:glusterd_op_stage_status_volume] 0-management:<br>
>>>> Returning: 0<br>
>>>> [2015-12-29 12:57:23.550192] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:5515:glusterd_op_stage_validate] 0-management: OP =<br>
>>>> 18. Returning 0<br>
>>>> [2015-12-29 12:57:23.550309] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1312:gd_stage_op_phase] 0-management: Sent stage op<br>
>>>> req for 'Volume Status' to 3 peers<br>
>>>> [2015-12-29 12:57:23.552791] D [MSGID: 0]<br>
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:<br>
>>>> Friend found... state: Peer in Cluster<br>
>>>> [2015-12-29 12:57:23.553395] D [logging.c:1952:_gf_msg_internal]<br>
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
>>>> About to flush least recently used log message to disk<br>
>>>> The message "D [MSGID: 0]<br>
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:<br>
>>>> Friend found... state: Peer in Cluster" repeated 2 times between<br>
>>>> [2015-12-29 12:57:23.552791] and [2015-12-29 12:57:23.553087]<br>
>>>> [2015-12-29 12:57:23.553394] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:7010:glusterd_op_bricks_select] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.553499] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1709:gd_brick_op_phase] 0-management: Sent op req to<br>
>>>> 0 bricks<br>
>>>> [2015-12-29 12:57:23.553535] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1424:glusterd_volinfo_find] 0-management: Volume live<br>
>>>> found<br>
>>>> [2015-12-29 12:57:23.553556] D [MSGID: 0]<br>
>>>> [glusterd-utils.c:1431:glusterd_volinfo_find] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.553786] D [MSGID: 0]<br>
>>>> [glusterd-snapshot-utils.c:3595:glusterd_is_snapd_enabled] 0-management:<br>
>>>> Key features.uss not present in the dict for volume live<br>
>>>> [2015-12-29 12:57:23.554065] E [MSGID: 106396]<br>
>>>> [glusterd-op-sm.c:2985:_add_task_to_dict] 0-management: Statedump<br>
>>>> operation doesn't have a task_id<br>
>>>> [2015-12-29 12:57:23.554097] E [MSGID: 106060]<br>
>>>> [glusterd-op-sm.c:3055:glusterd_aggregate_task_status] 0-management:<br>
>>>> Failed to add task details to dict<br>
>>>> [2015-12-29 12:57:23.554117] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:3360:glusterd_op_status_volume] 0-management: Returning -1<br>
>>>> [2015-12-29 12:57:23.554135] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:5642:glusterd_op_commit_perform] 0-management:<br>
>>>> Returning -1<br>
>>>> [2015-12-29 12:57:23.554155] E [MSGID: 106123]<br>
>>>> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of<br>
>>>> operation 'Volume Status' failed on localhost<br>
>>>> [2015-12-29 12:57:23.554240] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:510:gd_syncop_mgmt_v3_unlock] 0-glusterd: Returning 0<br>
>>>> [2015-12-29 12:57:23.557147] D [logging.c:1952:_gf_msg_internal]<br>
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.<br>
>>>> About to flush least recently used log message to disk<br>
>>>> The message "D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:510:gd_syncop_mgmt_v3_unlock] 0-glusterd: Returning<br>
>>>> 0" repeated 2 times between [2015-12-29 12:57:23.554240] and [2015-12-29<br>
>>>> 12:57:23.554319]<br>
>>>> [2015-12-29 12:57:23.557146] D [MSGID: 0]<br>
>>>> [glusterd-syncop.c:1558:gd_unlock_op_phase] 0-management: Sent unlock op<br>
>>>> req for 'Volume Status' to 3 peers. Returning 0<br>
>>>> [2015-12-29 12:57:23.557255] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:669:glusterd_mgmt_v3_unlock] 0-management: Trying to<br>
>>>> release lock of vol live for 305c0f00-0f11-4da3-a470-50b6e6c14976 as<br>
>>>> live_vol<br>
>>>> [2015-12-29 12:57:23.557286] D [MSGID: 0]<br>
>>>> [glusterd-locks.c:714:glusterd_mgmt_v3_unlock] 0-management: Lock for<br>
>>>> vol live successfully released<br>
>>>> [2015-12-29 12:57:23.557312] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:255:glusterd_get_txn_opinfo] 0-management:<br>
>>>> Successfully got opinfo for transaction ID :<br>
>>>> fd3671e4-fa7a-4913-a6dd-b37884a3a715<br>
>>>> [2015-12-29 12:57:23.557332] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:259:glusterd_get_txn_opinfo] 0-management: Returning 0<br>
>>>> [2015-12-29 12:57:23.557356] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:360:glusterd_clear_txn_opinfo] 0-management:<br>
>>>> Successfully cleared opinfo for transaction ID :<br>
>>>> fd3671e4-fa7a-4913-a6dd-b37884a3a715<br>
>>>> [2015-12-29 12:57:23.557420] D [MSGID: 0]<br>
>>>> [glusterd-op-sm.c:364:glusterd_clear_txn_opinfo] 0-management: Returning 0<br>
>>>><br>
>>>> ==> /var/log/glusterfs/cmd_history.log <==<br>
>>>> [2015-12-29 12:57:23.557447] : volume status : FAILED : Commit failed<br>
>>>> on localhost. Please check the log file for more details.<br>
>>>><br>
>>>> Dr Christophe Trefois, Dipl.-Ing.<br>
>>>> Technical Specialist / Post-Doc<br>
>>>><br>
>>>> UNIVERSITÉ DU LUXEMBOURG<br>
>>>><br>
>>>> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE<br>
>>>> Campus Belval | House of Biomedicine<br>
>>>> 6, avenue du Swing<br>
>>>> L-4367 Belvaux<br>
>>>> T: +352 46 66 44 6124<br>
>>>> F: +352 46 66 44 6949<br>
>>>> <a href="http://www.uni.lu/lcsb" rel="noreferrer" target="_blank">http://www.uni.lu/lcsb</a><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> _______________________________________________<br>
>>>> Gluster-users mailing list<br>
>>>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>>>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>>>><br>
>><br>
>><br>
<br>
</blockquote></div>