<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Jul 16, 2016 at 9:53 PM, Jesper Led Lauridsen TS Infra server <span dir="ltr"><<a href="mailto:jly@dr.dk" target="_blank">jly@dr.dk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span class="">
On 07/16/2016 04:10 AM, Pranith Kumar Karampuri wrote:<br>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Fri, Jul 15, 2016 at 5:20 PM,
Jesper Led Lauridsen TS Infra server <span dir="ltr"><<a href="mailto:JLY@dr.dk" target="_blank"></a><a href="mailto:JLY@dr.dk" target="_blank">JLY@dr.dk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
How do I determine, from the logs or otherwise, whether a healing is
in progress or has started, and how do I force it if it has not started?<br>
<br>
Additional info: I have a problem with a volume.
If I execute 'gluster volume heal <volume> info' the
command just hangs, but if I execute 'gluster volume heal
<volume> info split-brain' it returns that no files
are in split-brain. Yet there are, and I have already
successfully recovered another one.<br>
</blockquote>
<div><br>
</div>
<div>If the command hangs there is a chance that operations
on the file may have led to stale locks. Could you give
the output of a statedump? <br>
</div>
<div>You can follow <a href="https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/" target="_blank">https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/</a>
to generate the files.<br>
</div>
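For reference, a statedump can also be triggered directly from the CLI, as in the sketch below (the volume name is a placeholder, and the dump directory assumes the default of /var/run/gluster; adjust both for your setup):

```shell
# Trigger a statedump of all brick processes of a volume
# (VOLNAME is a placeholder; substitute your own volume name):
VOLNAME=myvol
gluster volume statedump $VOLNAME

# The dump files are written on each brick host, by default under
# /var/run/gluster. Lock sections reveal granted/blocked inode locks,
# which is where stale locks from dead clients show up:
grep -B2 -A4 'inodelk' /var/run/gluster/*.dump.* 2>/dev/null
```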
</div>
</div>
</div>
</blockquote>
<br></span>
Thanks for your response. You are right, there was a stale lock. But I
am sorry, I rebooted all my cluster nodes, so I guess (without knowing)
that there is no longer any point in giving you the output of a statedump?<br>
<br>
What I can confirm and give of information is:<br>
* All the servers failed to reboot so I had to push the button.
They all failed with the message<br>
"Unmounting pipe file system: Cannot create link /etc/mtab~<br>
Perhaps there is a stale lock file?"<br>
* After 2 nodes had rebooted, the command executed without any
problem and reported a couple of split-brain entries (both directories
and files)<br>
* strace of the command showed that it was just looping, so the
command wasn't actually hanging; it just couldn't finish.<br>
* I am using "glusterfs-3.6.2-1.el6.x86_64". But hoping to upgrade
to 3.6.9 this weekend.<br>
* The file I referred to earlier now has the same output on both
replicas when getting getfattr information. The
trusted.afr.glu_rhevtst_dr2_data_01-client-[0,1] and
trusted.afr.dirty attributes are now all zero</div></blockquote><div><br></div><div>If you are going to upgrade anyway, why not upgrade to 3.7.13, which is the latest stable release?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF"><div><div class="h5"><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I just have a problem with this one: I cannot determine whether
a healing process is running or not<br>
<br>
I have changed
'trusted.afr.glu_rhevtst_dr2_data_01-client-1' to
0x000000000000000000000000 on the file located on
glustertst03 and executed 'ls -lrt' on the file through the
gluster mount.<br>
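The reset-and-retrigger sequence described above looks roughly like the sketch below. The attribute name and zero value are the ones used in this thread; the file paths are shortened placeholders for the full brick and mount paths:

```shell
# On the brick holding the good copy, clear the pending-heal counter
# accusing the other replica (attribute name as used in this thread;
# BRICK_PATH is a placeholder for the full brick-side file path):
BRICK_PATH=/bricks/brick1/glu_rhevtst_dr2_data_01/path/to/file
setfattr -n trusted.afr.glu_rhevtst_dr2_data_01-client-1 \
         -v 0x000000000000000000000000 "$BRICK_PATH"

# Then stat the file through the glusterfs mount so AFR re-inspects
# it and can schedule the heal (MOUNT_PATH is a placeholder):
MOUNT_PATH=/mnt/glu_rhevtst_dr2_data_01/path/to/file
ls -lrt "$MOUNT_PATH"
```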
<br>
[root@glustertst04 ]# getfattr -d -m . -e hex
/bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab<br>
getfattr: Removing leading '/' from absolute path names<br>
# file: bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.glu_rhevtst_dr2_data_01-client-0=0x00004c700000000000000000<br>
trusted.afr.glu_rhevtst_dr2_data_01-client-1=0x000000000000000000000000<br>
trusted.gfid=0x7575f870875b4c899fd81ef16be3b1a1<br>
trusted.glusterfs.quota.70145d52-bb80-42ce-b437-64be6ee4a7d4.contri=0x00000001606dc000<br>
trusted.pgfid.70145d52-bb80-42ce-b437-64be6ee4a7d4=0x00000001<br>
<br>
[root@glustertst03 ]# getfattr -d -m . -e hex
/bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab<br>
getfattr: Removing leading '/' from absolute path names<br>
# file: bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000270000000000000000<br>
trusted.afr.glu_rhevtst_dr2_data_01-client-0=0x000000000000000000000000<br>
trusted.afr.glu_rhevtst_dr2_data_01-client-1=0x000000000000000000000000<br>
trusted.gfid=0x7575f870875b4c899fd81ef16be3b1a1<br>
trusted.glusterfs.quota.70145d52-bb80-42ce-b437-64be6ee4a7d4.contri=0x0000000160662000<br>
trusted.pgfid.70145d52-bb80-42ce-b437-64be6ee4a7d4=0x00000001<br>
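For readers following along: an AFR trusted.afr value is 12 bytes, i.e. three big-endian 32-bit counters for pending data, metadata, and entry operations against the named replica. A quick way to decode one (using the client-0 value from glustertst04 above, where 0x4c70 pending data operations indicate client-0's copy still needs healing):

```shell
# Decode a trusted.afr pending-heal value into its three counters.
val=00004c700000000000000000  # client-0 value from glustertst04 above
data=$((16#${val:0:8}))       # pending data operations
meta=$((16#${val:8:8}))       # pending metadata operations
entry=$((16#${val:16:8}))     # pending entry (directory) operations
echo "data=$data metadata=$meta entry=$entry"
# -> data=19568 metadata=0 entry=0
```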
<br>
[root@glustertst04 ]# stat
/var/run/gluster/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab<br>
File:
`/var/run/gluster/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab'<br>
Size: 21474836480 Blocks: 11548384 IO Block: 131072 regular file<br>
Device: 31h/49d Inode: 11517990069246079393 Links: 1<br>
Access: (0660/-rw-rw----) Uid: ( 36/ vdsm) Gid: ( 36/ kvm)<br>
Access: 2016-07-15 13:33:47.860224289 +0200<br>
Modify: 2016-07-15 13:34:44.396125458 +0200<br>
Change: 2016-07-15 13:34:44.397125492 +0200<br>
<br>
[root@glustertst03 ]# stat
/bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab<br>
File:
`/bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/images/7669ca25-028e-40a5-9dc8-06c716101709/a1ae3612-bb89-45d8-8041-134c34592eab'<br>
Size: 21474836480 Blocks: 11547408 IO Block: 4096 regular file<br>
Device: fd02h/64770d Inode: 159515 Links: 2<br>
Access: (0660/-rw-rw----) Uid: ( 36/ vdsm) Gid: ( 36/ kvm)<br>
Access: 2016-07-13 08:33:00.000561984 +0200<br>
Modify: 2016-07-13 08:32:59.969561154 +0200<br>
Change: 2016-07-15 12:52:28.414192052 +0200<br>
<br>
Thanks<br>
Jesper<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div data-smartmail="gmail_signature">
<div dir="ltr">Pranith<br>
</div>
</div>
</div>
</div>
</blockquote>
<br>
</div></div></div>
<br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div></div>