<p dir="ltr">I do not see any mount-related failures in the glusterd log you have pasted. If a mount request fails, it is usually because either GlusterD or the brick processes are down, and there would be an error log entry from mgmt_getspec().</p>
<p dir="ltr">The log entries do indicate that the network is unstable. If you are still stuck, could you please provide the mount log and the glusterd log, along with the output of gluster volume info and the exact mount command you are using?</p>
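<p dir="ltr">A minimal sketch of the commands to collect that information (the log paths assume the default /var/log/glusterfs location; the hostname, volume name, and mount point are taken from the paste below and should be adjusted to your setup):</p>

```shell
# On the client: retry the mount, then capture the client mount log
# (the log file is named after the mount point, e.g. /mnt -> mnt.log).
mount -t glusterfs 10.10.3.7:/RaidVolC /mnt/
tail -n 200 /var/log/glusterfs/mnt.log

# On each server node: glusterd log, volume definition, and peer state.
tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
gluster volume info RaidVolC
gluster peer status
```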
<p dir="ltr">-Atin<br>
Sent from one plus one</p>
<div class="gmail_quote">On 20-Feb-2016 4:21 pm, &quot;Ml Ml&quot; &lt;<a href="mailto:mliebherr99@googlemail.com">mliebherr99@googlemail.com</a>&gt; wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello List,<br>
<br>
I am running oVirt (CentOS) on top of GlusterFS. I have a 3-node<br>
replica; versions are listed below.<br>
<br>
It looks like I cannot get my node1 (v3.7.8) to work together with the other<br>
two (v3.7.0). The error I get when I try to &quot;mount -t glusterfs<br>
10.10.3.7:/RaidVolC /mnt/&quot;:<br>
<br>
[2016-02-20 10:27:30.890701] W [socket.c:869:__socket_keepalive]<br>
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 14, Invalid<br>
argument<br>
[2016-02-20 10:27:30.890728] E [socket.c:2965:socket_connect]<br>
0-management: Failed to set keep-alive: Invalid argument<br>
[2016-02-20 10:27:30.891296] W [socket.c:588:__socket_rwv]<br>
0-management: readv on <a href="http://10.10.3.7:24007" rel="noreferrer" target="_blank">10.10.3.7:24007</a> failed (No data available)<br>
[2016-02-20 10:27:30.891671] E [rpc-clnt.c:362:saved_frames_unwind]<br>
(--&gt; /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7ff82c50bab2]<br>
(--&gt; /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7ff82c2d68de]<br>
(--&gt; /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff82c2d69ee]<br>
(--&gt; /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7ff82c2d837a]<br>
(--&gt; /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7ff82c2d8ba8] )))))<br>
0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))<br>
called at 2016-02-20 10:27:30.891063 (xid=0x35)<br>
The message &quot;W [MSGID: 106118]<br>
[glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:<br>
Lock not released for RaidVolC&quot; repeated 3 times between [2016-02-20<br>
10:27:24.873207] and [2016-02-20 10:27:27.886916]<br>
[2016-02-20 10:27:30.891704] E [MSGID: 106167]<br>
[glusterd-handshake.c:2074:__glusterd_peer_dump_version_cbk]<br>
0-management: Error through RPC layer, retry again later<br>
[2016-02-20 10:27:30.891871] W<br>
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]<br>
(--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)<br>
[0x7ff821062b9c]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)<br>
[0x7ff82106ce72]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)<br>
[0x7ff82110c73a] ) 0-management: Lock for vol RaidVolB not held<br>
[2016-02-20 10:27:30.892001] W<br>
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]<br>
(--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)<br>
[0x7ff821062b9c]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)<br>
[0x7ff82106ce72]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)<br>
[0x7ff82110c73a] ) 0-management: Lock for vol RaidVolC not held<br>
The message &quot;W [MSGID: 106118]<br>
[glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:<br>
Lock not released for RaidVolB&quot; repeated 3 times between [2016-02-20<br>
10:27:24.877923] and [2016-02-20 10:27:30.891888]<br>
[2016-02-20 10:27:30.892023] W [MSGID: 106118]<br>
[glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:<br>
Lock not released for RaidVolC<br>
[2016-02-20 10:27:30.895617] W [socket.c:869:__socket_keepalive]<br>
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 14, Invalid<br>
argument<br>
[2016-02-20 10:27:30.895641] E [socket.c:2965:socket_connect]<br>
0-management: Failed to set keep-alive: Invalid argument<br>
[2016-02-20 10:27:30.896300] W [socket.c:588:__socket_rwv]<br>
0-management: readv on <a href="http://10.10.1.6:24007" rel="noreferrer" target="_blank">10.10.1.6:24007</a> failed (No data available)<br>
[2016-02-20 10:27:30.896541] E [rpc-clnt.c:362:saved_frames_unwind]<br>
(--&gt; /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7ff82c50bab2]<br>
(--&gt; /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7ff82c2d68de]<br>
(--&gt; /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff82c2d69ee]<br>
(--&gt; /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7ff82c2d837a]<br>
(--&gt; /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7ff82c2d8ba8] )))))<br>
0-management: forced unwinding frame type(GLUSTERD-DUMP) op(DUMP(1))<br>
called at 2016-02-20 10:27:30.895995 (xid=0x35)<br>
[2016-02-20 10:27:30.896703] W<br>
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]<br>
(--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)<br>
[0x7ff821062b9c]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)<br>
[0x7ff82106ce72]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)<br>
[0x7ff82110c73a] ) 0-management: Lock for vol RaidVolB not held<br>
[2016-02-20 10:27:30.896584] I [MSGID: 106004]<br>
[glusterd-handler.c:5127:__glusterd_peer_rpc_notify] 0-management:<br>
Peer &lt;<a href="http://ovirt-node06-stgt.stuttgart.imos.net" rel="noreferrer" target="_blank">ovirt-node06-stgt.stuttgart.imos.net</a>&gt;<br>
(&lt;08884518-2db7-4429-ab2f-019d03a02b76&gt;), in state &lt;Peer in Cluster&gt;,<br>
has disconnected from glusterd.<br>
[2016-02-20 10:27:30.896720] W [MSGID: 106118]<br>
[glusterd-handler.c:5149:__glusterd_peer_rpc_notify] 0-management:<br>
Lock not released for RaidVolB<br>
[2016-02-20 10:27:30.896854] W<br>
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock]<br>
(--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)<br>
[0x7ff821062b9c]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)<br>
[0x7ff82106ce72]<br>
--&gt;/usr/lib64/glusterfs/3.7.8/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)<br>
[0x7ff82110c73a] ) 0-management: Lock for vol RaidVolC not held<br>
<br>
<br>
Any idea what the problem is? I had a network problem, which is solved<br>
now, but I am still stuck with these errors.<br>
<br>
<br>
Node1:<br>
============<br>
rpm -qa |grep gluster<br>
glusterfs-fuse-3.7.8-1.el7.x86_64<br>
glusterfs-3.7.8-1.el7.x86_64<br>
glusterfs-cli-3.7.8-1.el7.x86_64<br>
glusterfs-client-xlators-3.7.8-1.el7.x86_64<br>
glusterfs-rdma-3.7.8-1.el7.x86_64<br>
vdsm-gluster-4.16.30-0.el7.centos.noarch<br>
glusterfs-api-3.7.8-1.el7.x86_64<br>
glusterfs-libs-3.7.8-1.el7.x86_64<br>
glusterfs-server-3.7.8-1.el7.x86_64<br>
<br>
<br>
Node2:<br>
==============<br>
 rpm -qa |grep gluster<br>
glusterfs-fuse-3.7.0-1.el7.x86_64<br>
glusterfs-libs-3.7.0-1.el7.x86_64<br>
glusterfs-api-3.7.0-1.el7.x86_64<br>
glusterfs-cli-3.7.0-1.el7.x86_64<br>
glusterfs-server-3.7.0-1.el7.x86_64<br>
glusterfs-3.7.0-1.el7.x86_64<br>
glusterfs-rdma-3.7.0-1.el7.x86_64<br>
vdsm-gluster-4.16.14-0.el7.noarch<br>
glusterfs-client-xlators-3.7.0-1.el7.x86_64<br>
<br>
<br>
Node3:<br>
=================<br>
rpm -qa|grep glus<br>
glusterfs-3.7.0-1.el7.x86_64<br>
glusterfs-rdma-3.7.0-1.el7.x86_64<br>
glusterfs-client-xlators-3.7.0-1.el7.x86_64<br>
glusterfs-libs-3.7.0-1.el7.x86_64<br>
glusterfs-api-3.7.0-1.el7.x86_64<br>
glusterfs-cli-3.7.0-1.el7.x86_64<br>
glusterfs-server-3.7.0-1.el7.x86_64<br>
vdsm-gluster-4.16.14-0.el7.noarch<br>
glusterfs-fuse-3.7.0-1.el7.x86_64<br>
<br>
<br>
Thanks,<br>
Mario<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div>