<p dir="ltr"></p>
<p dir="ltr">-Atin<br>
Sent from one plus one<br>
On Aug 10, 2015 11:58 PM, "Kingsley" <<a href="mailto:gluster@gluster.dogwind.com">gluster@gluster.dogwind.com</a>> wrote:<br>
><br>
><br>
> On Mon, 2015-08-10 at 22:53 +0530, Atin Mukherjee wrote:<br>
> [snip]<br>
>><br>
>> > stat("/sys/fs/selinux", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0<br>
>><br>
>> > brk(0) = 0x8db000<br>
>> > brk(0x8fc000) = 0x8fc000<br>
>> > mkdir("test", 0777<br>
>> Can you also collect the statedump of all the brick processes when the command is hung?<br>
>> <br>
>> + Ravi, could you check this?<br>
><br>
><br>
> I ran the command but I could not find where it put the output:<br>
><br>
><br>
> [root@gluster1a-1 ~]# gluster volume statedump callrec all<br>
> volume statedump: success<br>
> [root@gluster1a-1 ~]# gluster volume info callrec<br>
><br>
> Volume Name: callrec<br>
> Type: Replicate<br>
> Volume ID: a39830b7-eddb-4061-b381-39411274131a<br>
> Status: Started<br>
> Number of Bricks: 1 x 4 = 4<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: gluster1a-1:/data/brick/callrec<br>
> Brick2: gluster1b-1:/data/brick/callrec<br>
> Brick3: gluster2a-1:/data/brick/callrec<br>
> Brick4: gluster2b-1:/data/brick/callrec<br>
> Options Reconfigured:<br>
> performance.flush-behind: off<br>
> [root@gluster1a-1 ~]# gluster volume status callrec<br>
> Status of volume: callrec<br>
> Gluster process Port Online Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick gluster1a-1:/data/brick/callrec 49153 Y 29041<br>
> Brick gluster1b-1:/data/brick/callrec 49153 Y 31260<br>
> Brick gluster2a-1:/data/brick/callrec 49153 Y 31585<br>
> Brick gluster2b-1:/data/brick/callrec 49153 Y 12153<br>
> NFS Server on localhost 2049 Y 29733<br>
> Self-heal Daemon on localhost N/A Y 29741<br>
> NFS Server on gluster1b-1 2049 Y 31872<br>
> Self-heal Daemon on gluster1b-1 N/A Y 31882<br>
> NFS Server on gluster2a-1 2049 Y 32216<br>
> Self-heal Daemon on gluster2a-1 N/A Y 32226<br>
> NFS Server on gluster2b-1 2049 Y 12752<br>
> Self-heal Daemon on gluster2b-1 N/A Y 12762<br>
><br>
> Task Status of Volume callrec<br>
> ------------------------------------------------------------------------------<br>
> There are no active volume tasks<br>
><br>
> [root@gluster1a-1 ~]# ls -l /tmp<br>
> total 144<br>
> drwx------. 3 root root 16 Aug 8 22:20 systemd-private-Dp10Pz<br>
> -rw-------. 1 root root 5818 Jul 31 06:39 yum_save_tx.2015-07-31.06-39.JCvHd5.yumtx<br>
> -rw-------. 1 root root 5818 Aug 1 06:58 yum_save_tx.2015-08-01.06-58.wBytr2.yumtx<br>
> -rw-------. 1 root root 5818 Aug 2 05:18 yum_save_tx.2015-08-02.05-18.AXIFSe.yumtx<br>
> -rw-------. 1 root root 5818 Aug 3 07:15 yum_save_tx.2015-08-03.07-15.EDd8rg.yumtx<br>
> -rw-------. 1 root root 5818 Aug 4 03:48 yum_save_tx.2015-08-04.03-48.XE513B.yumtx<br>
> -rw-------. 1 root root 5818 Aug 5 09:03 yum_save_tx.2015-08-05.09-03.mX8xXF.yumtx<br>
> -rw-------. 1 root root 28869 Aug 6 06:39 yum_save_tx.2015-08-06.06-39.166wJX.yumtx<br>
> -rw-------. 1 root root 28869 Aug 7 07:20 yum_save_tx.2015-08-07.07-20.rLqJnT.yumtx<br>
> -rw-------. 1 root root 28869 Aug 8 08:29 yum_save_tx.2015-08-08.08-29.KKaite.yumtx<br>
> [root@gluster1a-1 ~]#<br>
><br>
><br>
> Where should I find the output of the statedump command?<br>
It should be in the /var/run/gluster directory.<br>
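A quick sketch of how you might collect them from each brick node (this assumes the default statedump path; if the server.statedump-path volume option has been set, the dumps land there instead, and the exact filenames can vary):<br>
<br>
# dump files are normally named after the brick path with "/" turned into "-", plus the brick PID and a timestamp<br>
ls -l /var/run/gluster/*.dump.*<br>
# bundle them up so they can be attached to this thread<br>
tar czf /tmp/callrec-statedumps-$(hostname).tar.gz /var/run/gluster/*.dump.*<br>
<br>
Please also take the statedump again while the mkdir is hung, so we capture the stuck call.<br>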
><br>
> Cheers,<br>
> Kingsley.<br>
><br>
><br>
>> ><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> >> ><br>
>> >> >> > Then ... do I need to run something on one of the bricks while strace is<br>
>> >> >> > running?<br>
>> >> >> ><br>
>> >> >> > Cheers,<br>
>> >> >> > Kingsley.<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > > ><br>
>> >> >> > > > [root@gluster1b-1 ~]# gluster volume heal callrec info<br>
>> >> >> > > > Brick gluster1a-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > <gfid:164f888f-2049-49e6-ad26-c758ee091863><br>
>> >> >> > > > /recordings/834723/14391 - Possibly undergoing heal<br>
>> >> >> > > ><br>
>> >> >> > > > <gfid:e280b40c-d8b7-43c5-9da7-4737054d7a7f><br>
>> >> >> > > > <gfid:b1fbda4a-732f-4f5d-b5a1-8355d786073e><br>
>> >> >> > > > <gfid:edb74524-b4b7-4190-85e7-4aad002f6e7c><br>
>> >> >> > > > <gfid:9b8b8446-1e27-4113-93c2-6727b1f457eb><br>
>> >> >> > > > <gfid:650efeca-b45c-413b-acc3-f0a5853ccebd><br>
>> >> >> > > > Number of entries: 7<br>
>> >> >> > > ><br>
>> >> >> > > > Brick gluster1b-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > Number of entries: 0<br>
>> >> >> > > ><br>
>> >> >> > > > Brick gluster2a-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > <gfid:e280b40c-d8b7-43c5-9da7-4737054d7a7f><br>
>> >> >> > > > <gfid:164f888f-2049-49e6-ad26-c758ee091863><br>
>> >> >> > > > <gfid:650efeca-b45c-413b-acc3-f0a5853ccebd><br>
>> >> >> > > > <gfid:b1fbda4a-732f-4f5d-b5a1-8355d786073e><br>
>> >> >> > > > /recordings/834723/14391 - Possibly undergoing heal<br>
>> >> >> > > ><br>
>> >> >> > > > <gfid:edb74524-b4b7-4190-85e7-4aad002f6e7c><br>
>> >> >> > > > <gfid:9b8b8446-1e27-4113-93c2-6727b1f457eb><br>
>> >> >> > > > Number of entries: 7<br>
>> >> >> > > ><br>
>> >> >> > > > Brick gluster2b-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > Number of entries: 0<br>
>> >> >> > > ><br>
>> >> >> > > ><br>
>> >> >> > > > If I query each brick directly for the number of files/directories<br>
>> >> >> > > > within that, I get 1731 on gluster1a-1 and gluster2a-1, but 1737 on the<br>
>> >> >> > > > other two, using this command:<br>
>> >> >> > > ><br>
>> >> >> > > > # find /data/brick/callrec/recordings/834723/14391 -print | wc -l<br>
>> >> >> > > ><br>
>> >> >> > > > Cheers,<br>
>> >> >> > > > Kingsley.<br>
>> >> >> > > ><br>
>> >> >> > > > On Mon, 2015-08-10 at 11:05 +0100, Kingsley wrote:<br>
>> >> >> > > > > Sorry for the blind panic - restarting the volume seems to have fixed<br>
>> >> >> > > > > it.<br>
>> >> >> > > > ><br>
>> >> >> > > > > But then my next question - why is this necessary? Surely it undermines<br>
>> >> >> > > > > the whole point of a high availability system?<br>
>> >> >> > > > ><br>
>> >> >> > > > > Cheers,<br>
>> >> >> > > > > Kingsley.<br>
>> >> >> > > > ><br>
>> >> >> > > > > On Mon, 2015-08-10 at 10:53 +0100, Kingsley wrote:<br>
>> >> >> > > > > > Hi,<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > We have a 4 way replicated volume using gluster 3.6.3 on CentOS 7.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Over the weekend I did a yum update on each of the bricks in turn, but<br>
>> >> >> > > > > > now when clients (using fuse mounts) try to access the volume, it hangs.<br>
>> >> >> > > > > > Gluster itself wasn't updated (we've disabled that repo so that we keep<br>
>> >> >> > > > > > to 3.6.3 for now).<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > This was what I did:<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > * on first brick, "yum update"<br>
>> >> >> > > > > > * reboot brick<br>
>> >> >> > > > > > * watch "gluster volume status" on another brick and wait<br>
>> >> >> > > for it<br>
>> >> >> > > > > > to say all 4 bricks are online before proceeding to<br>
>> >> >> > > update the<br>
>> >> >> > > > > > next brick<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > I was expecting the clients might pause 30 seconds while they notice a<br>
>> >> >> > > > > > brick is offline, but then recover.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > I've tried re-mounting clients, but that hasn't helped.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > I can't see much data in any of the log files.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > I've tried "gluster volume heal callrec" but it doesn't seem to<br>
>> >> >> > > have<br>
>> >> >> > > > > > helped.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > What shall I do next?<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > I've pasted some stuff below in case any of it helps.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Cheers,<br>
>> >> >> > > > > > Kingsley.<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > [root@gluster1b-1 ~]# gluster volume info callrec<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Volume Name: callrec<br>
>> >> >> > > > > > Type: Replicate<br>
>> >> >> > > > > > Volume ID: a39830b7-eddb-4061-b381-39411274131a<br>
>> >> >> > > > > > Status: Started<br>
>> >> >> > > > > > Number of Bricks: 1 x 4 = 4<br>
>> >> >> > > > > > Transport-type: tcp<br>
>> >> >> > > > > > Bricks:<br>
>> >> >> > > > > > Brick1: gluster1a-1:/data/brick/callrec<br>
>> >> >> > > > > > Brick2: gluster1b-1:/data/brick/callrec<br>
>> >> >> > > > > > Brick3: gluster2a-1:/data/brick/callrec<br>
>> >> >> > > > > > Brick4: gluster2b-1:/data/brick/callrec<br>
>> >> >> > > > > > Options Reconfigured:<br>
>> >> >> > > > > > performance.flush-behind: off<br>
>> >> >> > > > > > [root@gluster1b-1 ~]#<br>
>> >> >> > > > > ><br>
>> >> >> > > > > ><br>
>> >> >> > > > > > [root@gluster1b-1 ~]# gluster volume status callrec<br>
>> >> >> > > > > > Status of volume: callrec<br>
>> >> >> > > > > > Gluster process Port Online Pid<br>
>> >> >> > > > > > ------------------------------------------------------------------------------<br>
>> >> >> > > > > > Brick gluster1a-1:/data/brick/callrec 49153 Y 6803<br>
>> >> >> > > > > > Brick gluster1b-1:/data/brick/callrec 49153 Y 2614<br>
>> >> >> > > > > > Brick gluster2a-1:/data/brick/callrec 49153 Y 2645<br>
>> >> >> > > > > > Brick gluster2b-1:/data/brick/callrec 49153 Y 4325<br>
>> >> >> > > > > > NFS Server on localhost 2049 Y 2769<br>
>> >> >> > > > > > Self-heal Daemon on localhost N/A Y 2789<br>
>> >> >> > > > > > NFS Server on gluster2a-1 2049 Y 2857<br>
>> >> >> > > > > > Self-heal Daemon on gluster2a-1 N/A Y 2814<br>
>> >> >> > > > > > NFS Server on 88.151.41.100 2049 Y 6833<br>
>> >> >> > > > > > Self-heal Daemon on 88.151.41.100 N/A Y 6824<br>
>> >> >> > > > > > NFS Server on gluster2b-1 2049 Y 4428<br>
>> >> >> > > > > > Self-heal Daemon on gluster2b-1 N/A Y 4387<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Task Status of Volume callrec<br>
>> >> >> > > > > > ------------------------------------------------------------------------------<br>
>> >> >> > > > > > There are no active volume tasks<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > [root@gluster1b-1 ~]#<br>
>> >> >> > > > > ><br>
>> >> >> > > > > ><br>
>> >> >> > > > > > [root@gluster1b-1 ~]# gluster volume heal callrec info<br>
>> >> >> > > > > > Brick gluster1a-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > > > /to_process - Possibly undergoing heal<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Number of entries: 1<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Brick gluster1b-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > > > Number of entries: 0<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Brick gluster2a-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > > > /to_process - Possibly undergoing heal<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Number of entries: 1<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > Brick gluster2b-1.dns99.co.uk:/data/brick/callrec/<br>
>> >> >> > > > > > Number of entries: 0<br>
>> >> >> > > > > ><br>
>> >> >> > > > > > [root@gluster1b-1 ~]#<br>
>> >> >> > > > > ><br>
>> >> >> > > > > ><br>
>> >> >> > > > > > _______________________________________________<br>
>> >> >> > > > > > Gluster-users mailing list<br>
>> >> >> > > > > > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> >> > > > > > <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >> >> > > > > ><br>
>> >> >> > > > ><br>
>> >> >> > > > > _______________________________________________<br>
>> >> >> > > > > Gluster-users mailing list<br>
>> >> >> > > > > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> >> > > > > <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >> >> > > > ><br>
>> >> >> > > ><br>
>> >> >> > > > _______________________________________________<br>
>> >> >> > > > Gluster-users mailing list<br>
>> >> >> > > > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> >> > > > <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >> >> > ><br>
>> >> >> > ><br>
>> >> >> ><br>
>> >> >><br>
>> >> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >> _______________________________________________<br>
>> >> Gluster-users mailing list<br>
>> >> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
>><br>
>><br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</p>