<div dir="ltr"><div>My gut still says it could be related to the multipath.<br>I never got the answer to whether the bricks are using the multipath'ed devices using mpathXX device or you are direclty using the dm-X device ?<br><br>If dm-X then are you ensuring that you are NOT using 2 dm-X device that map to the same LUN on the backend SAN ?<br>My hunch is that in case you are doing that and xfs'ing the 2 dm-X and using then as separate bricks anything can happen<br><br></div>So trying to remove multipath or even before that stop glusterfs volumes (which should stop glusterfsd process, hence no IO on the xfs bricks) and see if this re-creates<br>Since we are seeing glusterfsd everytime the kernel bug shows up, it may not be a co-incidence but a possibility due to invalud multipath setup<br><br>thanx,<br>deepak<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jan 22, 2015 at 12:57 AM, Niels de Vos <span dir="ltr"><<a href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, Jan 21, 2015 at 10:11:20PM +0530, chamara samarakoon wrote:<br>
> Hi All,<br>
><br>
><br>
> The same error was encountered again before I could try anything else, so I<br>
> took a screenshot with more details of the incident.<br>
<br>
</span>This shows an XFS error. So it can be a problem with XFS itself, or with<br>
something in the XFS I/O path that contributes to it. My guess is that it is<br>
caused by an issue on the disk(s), because the message mentions corruption.<br>
However, it could also be bad RAM, or another hardware component that is<br>
used to access data on the disks. I suggest you take two approaches:<br>
<br>
1. run hardware tests - if the error is detected, contact your HW vendor<br>
2. open a support case with the vendor of the OS and check for updates<br>
<br>
Gluster can stress filesystems in ways that are not very common, and<br>
issues have been found in XFS because of this. Your OS support vendor<br>
should be able to tell you whether the latest related XFS fixes are<br>
included in your kernel.<br>
<br>
HTH,<br>
Niels<br>
<div class="HOEnZb"><div class="h5"><br>
><br>
><br>
> <br>
><br>
> Thank You,<br>
> Chamara<br>
><br>
><br>
><br>
> On Tue, Jan 20, 2015 at 5:33 PM, chamara samarakoon <<a href="mailto:chthsa123@gmail.com">chthsa123@gmail.com</a>><br>
> wrote:<br>
><br>
> > Hi All,<br>
> ><br>
> > Thank you for the valuable feedback. I will test the suggested solutions and<br>
> > update the thread.<br>
> ><br>
> > Regards,<br>
> > Chamara<br>
> ><br>
> > On Tue, Jan 20, 2015 at 4:17 PM, Deepak Shetty <<a href="mailto:dpkshetty@gmail.com">dpkshetty@gmail.com</a>><br>
> > wrote:<br>
> ><br>
> >> In addition, I suspect (just my hunch) that this could be related to multipath.<br>
> >> If you can try without multipath and the problem doesn't reproduce, I think<br>
> >> that would be a good data point for the kernel/OS vendor to debug further.<br>
> >><br>
> >> my 2 cents again :)<br>
> >><br>
> >> thanx,<br>
> >> deepak<br>
> >><br>
> >><br>
> >> On Tue, Jan 20, 2015 at 2:32 PM, Niels de Vos <<a href="mailto:ndevos@redhat.com">ndevos@redhat.com</a>> wrote:<br>
> >><br>
> >>> On Tue, Jan 20, 2015 at 11:55:40AM +0530, Deepak Shetty wrote:<br>
> >>> > What does "Controller" mean, the openstack controller node or somethign<br>
> >>> > else (like HBA ) ?<br>
> >>> > You picture says its SAN but the text says multi-path mount.. SAN would<br>
> >>> > mean block devices, so I am assuming you have redundant block devices<br>
> >>> on<br>
> >>> > the compute host, mkfs'ing it and then creating bricks for gluster ?<br>
> >>> ><br>
> >>> ><br>
> >>> > The stack trace looks like you hit a kernel bug and glusterfsd happens to<br>
> >>> > be running on the CPU at the time... my 2 cents<br>
> >>><br>
> >>> That definitely is a kernel issue. You should contact your OS support<br>
> >>> vendor about this.<br>
> >>><br>
> >>> The bits you copy/pasted are not sufficient to see what caused it. The<br>
> >>> glusterfsd process is just a casualty of the kernel issue, and it is not<br>
> >>> likely this can be fixed in Gluster. I suspect you need a kernel<br>
> >>> patch/update.<br>
> >>><br>
> >>> Niels<br>
> >>><br>
> >>> ><br>
> >>> > thanx,<br>
> >>> > deepak<br>
> >>> ><br>
> >>> > On Tue, Jan 20, 2015 at 11:29 AM, chamara samarakoon <<br>
> >>> <a href="mailto:chthsa123@gmail.com">chthsa123@gmail.com</a>><br>
> >>> > wrote:<br>
> >>> ><br>
> >>> > > Hi All,<br>
> >>> > ><br>
> >>> > ><br>
> >>> > > We have set up an OpenStack cloud as below, and<br>
> >>> > > "/var/lib/nova/instances" is a Gluster volume.<br>
> >>> > ><br>
> >>> > > CentOS - 6.5<br>
> >>> > > Kernel - 2.6.32-431.29.2.el6.x86_64<br>
> >>> > > GlusterFS - glusterfs 3.5.2 built on Jul 31 2014 18:47:54<br>
> >>> > > OpenStack - RDO using Packstack<br>
> >>> > ><br>
> >>> > ><br>
> >>> > ><br>
> >>> > ><br>
> >>> > > <br>
> >>> > ><br>
> >>> > ><br>
> >>> > > Recently the controller node froze with the following error (which<br>
> >>> > > required a hard reboot). As a result, the Gluster volumes on the compute<br>
> >>> > > nodes could not reach the controller, and all the instances on the compute<br>
> >>> > > nodes went into read-only status, which forced us to restart all instances.<br>
> >>> > ><br>
> >>> > ><br>
> >>> > ><br>
> >>> > ><br>
> >>> > > *BUG: scheduling while atomic : glusterfsd/42725/0xffffffff*<br>
> >>> > > *BUG: unable to handle kernel paging request at 0000000038a60d0a8*<br>
> >>> > > *IP: [<fffffffff81058e5d>] task_rq_lock+0x4d/0xa0*<br>
> >>> > > *PGD 1065525067 PUD 0*<br>
> >>> > > *Oops: 0000 [#1] SMP*<br>
> >>> > > *last sysfs file :<br>
> >>> > ><br>
> >>> /sys/device/pci0000:80/0000:80:02.0/0000:86:00.0/host2/port-2:0/end_device-2:0/target2:0:0/2:0:0:1/state*<br>
> >>> > > *CPU 0*<br>
> >>> > > *Modules linked in : xtconntrack iptable_filter ip_tables<br>
> >>> ipt_REDIRECT<br>
> >>> > > fuse ipv openvswitch vxlan iptable_mangle *<br>
> >>> > ><br>
> >>> > > Please advise on the above incident. Feedback on the OpenStack +<br>
> >>> > > GlusterFS setup is also appreciated.<br>
> >>> > ><br>
> >>> > > Thank You,<br>
> >>> > > Chamara<br>
> >>> > ><br>
> >>> > ><br>
> >>> > > _______________________________________________<br>
> >>> > > Gluster-users mailing list<br>
> >>> > > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> >>> > > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> >>> > ><br>
> >>><br>
> >>><br>
> >>><br>
> >>> > _______________________________________________<br>
> >>> > Gluster-users mailing list<br>
> >>> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> >>> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> >>><br>
> >>><br>
> >><br>
> ><br>
> ><br>
> > --<br>
> > chthsa<br>
> ><br>
><br>
><br>
><br>
> --<br>
> chthsa<br>
<br>
<br>
</div></div></blockquote></div><br></div>
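<div dir="ltr"><div>In case it helps, below is a rough, untested sketch of the LUN check I mean. It is Python and it only assumes the standard device-mapper sysfs attributes under /sys/block/dm-*/dm/ (multipath maps carry a "mpath-&lt;wwid&gt;" uuid there); adjust the paths if your distro lays things out differently. It groups the dm-X devices by their multipath WWID and warns if the same LUN shows up behind more than one dm device. Running "multipath -ll" gives you the same picture interactively; a script like this is only handy if you want to repeat the check on every node.<br></div>
<pre>
#!/usr/bin/env python
# Rough sketch (untested): warn if two dm-X devices resolve to the same
# multipath WWID, i.e. to the same LUN on the backend SAN.
# Assumes the usual device-mapper sysfs layout: /sys/block/dm-N/dm/uuid
# holds "mpath-WWID" for multipath maps and /sys/block/dm-N/dm/name holds
# the friendly mpathXX name.
import glob
import os
from collections import defaultdict

wwid_to_devs = defaultdict(list)

for uuid_path in glob.glob('/sys/block/dm-*/dm/uuid'):
    dm_dev = uuid_path.split('/')[3]            # e.g. "dm-3"
    uuid = open(uuid_path).read().strip()
    if not uuid.startswith('mpath-'):
        continue                                # skip LVM and other dm targets
    wwid = uuid[len('mpath-'):]
    name = open(os.path.join(os.path.dirname(uuid_path), 'name')).read().strip()
    wwid_to_devs[wwid].append('%s (%s)' % (dm_dev, name))

for wwid, devs in sorted(wwid_to_devs.items()):
    if len(devs) > 1:
        print('WARNING: LUN %s is behind multiple dm devices: %s'
              % (wwid, ', '.join(devs)))
    else:
        print('OK: LUN %s -> %s' % (wwid, devs[0]))
</pre>
<div>If the bricks sit on such duplicate dm devices, XFS on top of them is effectively two filesystems writing to the same LUN, and corruption like the one in your screenshot would not be surprising.</div></div>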