<div dir="ltr"><div>Thanks, I'll do that. Is it possible/likely that this lower op-version is also causing the issue I posted on gluster-users earlier: 0-rpcsvc: rpc actor failed to complete successfully<br></div><div><a href="https://www.mail-archive.com/gluster-users@gluster.org/msg20569.html">https://www.mail-archive.com/gluster-users@gluster.org/msg20569.html</a></div><div><br></div><div>Any pointers on that would be greatly appreciated, since we've had multiple occurrences of this since Sunday, three today alone.<br></div><div class="gmail_extra"><div class="gmail_quote"><br></div><div class="gmail_quote">Thanks,</div><div class="gmail_quote"><br></div><div class="gmail_quote">On 10 June 2015 at 14:42, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 06/10/2015 05:32 PM, Tiemen Ruiten wrote:<br>
> Hello Atin,<br>
><br>
> We are running 3.7.0 on our storage nodes and suffer from the same issue.<br>
> Is it safe to run the same command, or should we first upgrade to 3.7.1?<br>
</span>You should upgrade to 3.7.1<br>
<div class="HOEnZb"><div class="h5">><br>
> On 10 June 2015 at 13:45, Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br>
><br>
>><br>
>><br>
>> On 06/10/2015 02:58 PM, Sergio Traldi wrote:<br>
>>> On 06/10/2015 10:27 AM, Krishnan Parthasarathi wrote:<br>
>>>>> Hi all,<br>
>>>>> I have two servers running 3.7.1 with the same problem as in this issue:<br>
>>>>> <a href="http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693" target="_blank">http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20693</a><br>
>>>>><br>
>>>>> My servers packages:<br>
>>>>> # rpm -qa | grep gluster | sort<br>
>>>>> glusterfs-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-api-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-cli-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-client-xlators-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-fuse-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-geo-replication-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-libs-3.7.1-1.el6.x86_64<br>
>>>>> glusterfs-server-3.7.1-1.el6.x86_64<br>
>>>>><br>
>>>>> Command:<br>
>>>>> # gluster volume status<br>
>>>>> Another transaction is in progress. Please try again after sometime.<br>
>> The problem is that although you are running 3.7.1 binaries, the cluster<br>
>> op-version is set to 30501, so glusterd still acquires a cluster-wide<br>
>> lock instead of a per-volume lock for every request. The command log<br>
>> history indicates glusterd is receiving status requests for multiple<br>
>> volumes and therefore fails to acquire the cluster lock. Could you bump<br>
>> up your cluster's op-version with the following command and recheck?<br>
>><br>
>> gluster volume set all cluster.op-version 30701<br>
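A quick way to confirm the bump took effect is to read the op-version glusterd persists on each node. This is a hedged sketch: the parsing is demonstrated against a sample file written to /tmp, since on a live node you would read /var/lib/glusterd/glusterd.info directly.

```shell
# Sketch: verify the cluster op-version stored by glusterd.
# A sample file is created here so the parsing step can be shown end to end;
# on a live node, point awk at /var/lib/glusterd/glusterd.info instead.
cat > /tmp/glusterd.info <<'EOF'
UUID=99a41a2a-2ce5-461c-aec0-510bd5b37bf2
operating-version=30501
EOF

# Extract the operating-version value (here: the pre-bump 30501).
awk -F= '/^operating-version/ {print $2}' /tmp/glusterd.info   # prints 30501

# Raise the op-version cluster-wide (run once, on any one node):
# gluster volume set all cluster.op-version 30701
```

After running the bump, the same awk check on every node should report 30701.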
>><br>
>> ~Atin<br>
>>>>><br>
>>>>><br>
>>>>> In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I found:<br>
>>>>><br>
>>>>> [2015-06-09 16:12:38.949842] E [glusterd-utils.c:164:glusterd_lock]<br>
>>>>> 0-management: Unable to get lock for uuid:<br>
>>>>> 99a41a2a-2ce5-461c-aec0-510bd5b37bf2, lock held by:<br>
>>>>> 04a7d2bb-bdd9-4e0d-b460-87ad4adbe12c<br>
>>>>> [2015-06-09 16:12:38.949864] E<br>
>>>>> [glusterd-syncop.c:1766:gd_sync_task_begin]<br>
>>>>> 0-management: Unable to acquire lock<br>
>>>>><br>
>>>>> I checked the files:<br>
>>>>> From server 1:<br>
>>>>> # cat /var/lib/glusterd/peers/04a7d2bb-bdd9-4e0d-b460-87ad4adbe12c<br>
>>>>> uuid=04a7d2bb-bdd9-4e0d-b460-87ad4adbe12c<br>
>>>>> state=3<br>
>>>>> hostname1=192.168.61.101<br>
>>>>><br>
>>>>> From server 2:<br>
>>>>> # cat /var/lib/glusterd/peers/99a41a2a-2ce5-461c-aec0-510bd5b37bf2<br>
>>>>> uuid=99a41a2a-2ce5-461c-aec0-510bd5b37bf2<br>
>>>>> state=3<br>
>>>>> hostname1=192.168.61.100<br>
>>>> Could you attach the complete glusterd log file and cmd-history.log<br>
>>>> file under /var/log/glusterfs directory? Could you provide a more<br>
>>>> detailed listing of things you did before hitting this issue?<br>
>>> Hi Krishnan,<br>
>>> thanks for the quick answer.<br>
>>> Attached you can find the two logs you requested:<br>
>>> cmd_history.log<br>
>>> etc-glusterfs-glusterd.vol.log<br>
>>><br>
>>> We use the gluster volume as the OpenStack Nova, Glance and Cinder backend.<br>
>>><br>
>>> The volume is configured with 2 bricks, each mounted from an iSCSI device:<br>
>>> [root@cld-stg-01 glusterfs]# gluster volume info volume-nova-prod<br>
>>> Volume Name: volume-nova-prod<br>
>>> Type: Distribute<br>
>>> Volume ID: 4bbef4c8-0441-4e81-a2c5-559401adadc0<br>
>>> Status: Started<br>
>>> Number of Bricks: 2<br>
>>> Transport-type: tcp<br>
>>> Bricks:<br>
>>> Brick1: 192.168.61.100:/brickOpenstack/nova-prod/mpathb<br>
>>> Brick2: 192.168.61.101:/brickOpenstack/nova-prod/mpathb<br>
>>> Options Reconfigured:<br>
>>> storage.owner-gid: 162<br>
>>> storage.owner-uid: 162<br>
>>><br>
>>> Last week we updated OpenStack from Havana to Icehouse and renamed the<br>
>>> storage hosts, but we didn't change the IPs.<br>
>>> All volumes were created using IP addresses.<br>
>>><br>
>>> So last week we stopped all services (OpenStack, Gluster and also iSCSI),<br>
>>> changed the DNS names of the private IPs on the 2 NICs, rebooted the<br>
>>> storage servers, and started the iscsi, multipath and glusterd processes again.<br>
>>> We had to stop and start the volumes, but after that everything worked<br>
>>> fine.<br>
>>> Now we don't observe any other problems except this one.<br>
>>><br>
>>> We have a Nagios probe which checks the volume status every 5 minutes to<br>
>>> ensure all Gluster processes are working fine, and that is how we found<br>
>>> the problem I posted.<br>
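Since a periodic probe and an admin command can contend for the same glusterd lock, one option is to treat the lock-contention message as a soft failure instead of alerting immediately. The sketch below is a hypothetical wrapper (not the actual Nagios probe in use); the gluster call itself is commented out so the classification logic can be shown standalone.

```shell
# Hypothetical sketch: classify `gluster volume status` output in the
# Nagios convention (OK/WARNING/CRITICAL). The transient lock-contention
# message is downgraded to WARNING rather than CRITICAL.
check_volume_status() {
    out="$1"
    case "$out" in
        *"Another transaction is in progress"*) echo WARNING;  return 1 ;;
        *"Status of volume"*)                   echo OK;       return 0 ;;
        *)                                      echo CRITICAL; return 2 ;;
    esac
}

# On a live node, capture the real command output instead:
# out="$(gluster volume status 2>&1)"
check_volume_status "Another transaction is in progress. Please try again after sometime."
```

A probe built this way could retry once after a short delay before raising WARNING, which avoids paging on a lock that clears within seconds.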
>>><br>
>>> Cheers,<br>
>>> Sergio<br>
>>><br>
>>><br>
>>> _______________________________________________<br>
>>> Gluster-users mailing list<br>
>>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>>><br>
>><br>
>> --<br>
>> ~Atin<br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
><br>
><br>
><br>
<br>
--<br>
</div></div><span class="HOEnZb"><font color="#888888">~Atin<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr">Tiemen Ruiten<br>Systems Engineer<br>R&D Media<br></div></div>
</div></div>