<p dir="ltr">-Atin<br>
Sent from one plus one<br>
On Aug 3, 2015 8:31 PM, "Osborne, Paul (<a href="mailto:paul.osborne@canterbury.ac.uk">paul.osborne@canterbury.ac.uk</a>)" <<a href="mailto:paul.osborne@canterbury.ac.uk">paul.osborne@canterbury.ac.uk</a>> wrote:<br>
><br>
> Hi,<br>
><br>
><br>
> OK I have tracked through the logs which of the hosts apparently has a lock open:<br>
><br>
><br>
> [2015-08-03 14:55:37.602717] I [glusterd-handler.c:3836:__glusterd_handle_status_volume] 0-management: Received status volume req for volume blogs<br>
><br>
> [2015-08-03 14:51:57.791081] E [glusterd-utils.c:148:glusterd_lock] 0-management: Unable to get lock for uuid: 76e4398c-e00a-4f3b-9206-4f885c4e5206, lock held by: 76e4398c-e00a-4f3b-9206-4f885c4e5206<br>
><br>
This indicates that the cluster is still operating at an older op-version. You would need to bump the op-version to 30604 using: gluster volume set all cluster.op-version 30604<br>
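In case it helps, a sketch of checking each node's current op-version before bumping it. The glusterd.info path below is the usual default on Debian/RHEL packages; adjust it if your distribution puts state elsewhere:<br>
<br>
# On every peer: show the op-version this node is currently running at<br>
grep operating-version /var/lib/glusterd/glusterd.info<br>
<br>
# Then, once, from any node: raise the cluster-wide op-version<br>
gluster volume set all cluster.op-version 30604<br>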
><br>
> I have identified the UUID for each peer via gluster peer status and working backwards. <br>
><br>
> I see that gluster volume clear-locks may clear the locks on the volume - but what is not clear from the logs is which path holds the lock or what kind of lock it is.<br>
><br>
> Incidentally my clients (using NFS) through manual testing appear to still be able to read/write to the volume - it is the volume status and heal checks that are failing. All of my clients and servers have been sequentially rebooted in the hope that this would clear any issue - however that does not appear to be the case.<br>
><br>
><br>
><br>
> Thanks<br>
><br>
> Paul<br>
><br>
><br>
><br>
><br>
> Paul Osborne<br>
> Senior Systems Engineer<br>
> Canterbury Christ Church University<br>
> Tel: 01227 782751<br>
><br>
><br>
> ________________________________<br>
> From: Atin Mukherjee <<a href="mailto:atin.mukherjee83@gmail.com">atin.mukherjee83@gmail.com</a>><br>
> Sent: 03 August 2015 15:22<br>
> To: Osborne, Paul (<a href="mailto:paul.osborne@canterbury.ac.uk">paul.osborne@canterbury.ac.uk</a>)<br>
> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> Subject: Re: [Gluster-users] Locking failed - since upgrade to 3.6.4<br>
> <br>
><br>
> Could you check the glusterd log at the other nodes, that would give you a hint of the exact issue. Also looking at .cmd_log_history will give you the time interval at which volume status commands are executed. If the gap is in milliseconds then you are bound to hit it and it's expected.<br>
><br>
> -Atin<br>
> Sent from one plus one<br>
><br>
> On Aug 3, 2015 7:32 PM, "Osborne, Paul (<a href="mailto:paul.osborne@canterbury.ac.uk">paul.osborne@canterbury.ac.uk</a>)" <<a href="mailto:paul.osborne@canterbury.ac.uk">paul.osborne@canterbury.ac.uk</a>> wrote:<br>
>><br>
>><br>
>> Hi,<br>
>><br>
>> Last week I upgraded one of my gluster clusters (3 hosts with bricks as replica 3) to 3.6.4 from 3.5.4 and all seemed well.<br>
>><br>
>> Today I am getting reports that locking has failed:<br>
>><br>
>><br>
>> gfse-cant-01:/var/log/glusterfs# gluster volume status<br>
>> Locking failed on <a href="http://gfse-rh-01.core.canterbury.ac.uk">gfse-rh-01.core.canterbury.ac.uk</a>. Please check log file for details.<br>
>> Locking failed on <a href="http://gfse-isr-01.core.canterbury.ac.uk">gfse-isr-01.core.canterbury.ac.uk</a>. Please check log file for details.<br>
>><br>
>> Logs:<br>
>> [2015-08-03 13:45:29.974560] E [glusterd-syncop.c:1640:gd_sync_task_begin] 0-management: Locking Peers Failed.<br>
>> [2015-08-03 13:49:48.273159] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking failed on <a href="http://gfse-rh-01.core.canterbury.ac.uk">gfse-rh-01.core.canterbury.ac.uk</a>. Please check log file for details.<br>
>> [2015-08-03 13:49:48.273778] E [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking failed on <a href="http://gfse-isr-01.core.canterbury.ac.uk">gfse-isr-01.core.canterbury.ac.uk</a>. Please check log file for details.<br>
>><br>
>><br>
>> I am wondering if this is a new feature due to 3.6.4 or something that has gone wrong.<br>
>><br>
>> Restarting gluster entirely (btw the restart script does not actually appear to kill the processes...) resolves the issue, but then it repeats a few minutes later, which is rather suboptimal for a running service.<br>
>><br>
>> Googling suggests that there may be simultaneous actions going on that can cause a locking issue.<br>
>><br>
>> I know that I have nagios running volume status &lt;volname&gt; for each of my volumes on each host every few minutes, however this is not new and has been in place for the last 8-9 months against 3.5 without issue, so I would hope that this is not causing the problem.<br>
>><br>
>> I am not sure where to look now tbh.<br>
>><br>
>><br>
>><br>
>><br>
>> Paul Osborne<br>
>> Senior Systems Engineer<br>
>> Canterbury Christ Church University<br>
>> Tel: 01227 782751<br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</p>