<p dir="ltr">-Atin<br>
Sent from one plus one<br>
On 29-Apr-2016 9:36 PM, "Ashish Pandey" <<a href="mailto:aspandey@redhat.com">aspandey@redhat.com</a>> wrote:<br>
><br>
><br>
> Hi Jeff,<br>
><br>
> Where can we find the core dump?<br>
><br>
> ---<br>
> Ashish<br>
><br>
> ________________________________<br>
> From: "Pranith Kumar Karampuri" <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>><br>
> To: "Jeff Darcy" <<a href="mailto:jdarcy@redhat.com">jdarcy@redhat.com</a>><br>
> Cc: "Gluster Devel" <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>>, "Ashish Pandey" <<a href="mailto:aspandey@redhat.com">aspandey@redhat.com</a>><br>
> Sent: Thursday, April 28, 2016 11:58:54 AM<br>
> Subject: Re: [Gluster-devel] Regression-test-burn-in crash in EC test<br>
><br>
><br>
> Ashish,<br>
> Could you take a look at this?<br>
><br>
> Pranith<br>
><br>
> ----- Original Message -----<br>
> > From: "Jeff Darcy" <<a href="mailto:jdarcy@redhat.com">jdarcy@redhat.com</a>><br>
> > To: "Gluster Devel" <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > Sent: Wednesday, April 27, 2016 11:31:25 PM<br>
> > Subject: [Gluster-devel] Regression-test-burn-in crash in EC test<br>
> > <br>
> > One of the "rewards" of reviewing and merging people's patches is getting<br>
> > email when the next regression-test-burn-in fails - even if it fails for a<br>
> > completely unrelated reason. Today I got one that's not among the usual<br>
> > suspects. The failure was a core dump in tests/bugs/disperse/bug-1304988.t,<br>
> > weighing in at a respectable 42 frames.<br>
> > <br>
> > #0 0x00007fef25976cb9 in dht_rename_lock_cbk<br>
> > #1 0x00007fef25955f62 in dht_inodelk_done<br>
> > #2 0x00007fef25957352 in dht_blocking_inodelk_cbk<br>
> > #3 0x00007fef32e02f8f in default_inodelk_cbk<br>
> > #4 0x00007fef25c029a3 in ec_manager_inodelk<br>
> > #5 0x00007fef25bf9802 in __ec_manager<br>
> > #6 0x00007fef25bf990c in ec_manager<br>
> > #7 0x00007fef25c03038 in ec_inodelk<br>
> > #8 0x00007fef25bee7ad in ec_gf_inodelk<br>
> > #9 0x00007fef25957758 in dht_blocking_inodelk_rec<br>
> > #10 0x00007fef25957b2d in dht_blocking_inodelk<br>
> > #11 0x00007fef2597713f in dht_rename_lock<br>
> > #12 0x00007fef25977835 in dht_rename<br>
> > #13 0x00007fef32e0f032 in default_rename<br>
> > #14 0x00007fef32e0f032 in default_rename<br>
> > #15 0x00007fef32e0f032 in default_rename<br>
> > #16 0x00007fef32e0f032 in default_rename<br>
> > #17 0x00007fef32e0f032 in default_rename<br>
> > #18 0x00007fef32e07c29 in default_rename_resume<br>
> > #19 0x00007fef32d8ed40 in call_resume_wind<br>
> > #20 0x00007fef32d98b2f in call_resume<br>
> > #21 0x00007fef24cfc568 in open_and_resume<br>
> > #22 0x00007fef24cffb99 in ob_rename<br>
> > #23 0x00007fef24aee482 in mdc_rename<br>
> > #24 0x00007fef248d68e5 in io_stats_rename<br>
> > #25 0x00007fef32e0f032 in default_rename<br>
> > #26 0x00007fef2ab1b2b9 in fuse_rename_resume<br>
> > #27 0x00007fef2ab12c47 in fuse_fop_resume<br>
> > #28 0x00007fef2ab107cc in fuse_resolve_done<br>
> > #29 0x00007fef2ab108a2 in fuse_resolve_all<br>
> > #30 0x00007fef2ab10900 in fuse_resolve_continue<br>
> > #31 0x00007fef2ab0fb7c in fuse_resolve_parent<br>
> > #32 0x00007fef2ab1077d in fuse_resolve<br>
> > #33 0x00007fef2ab10879 in fuse_resolve_all<br>
> > #34 0x00007fef2ab10900 in fuse_resolve_continue<br>
> > #35 0x00007fef2ab0fb7c in fuse_resolve_parent<br>
> > #36 0x00007fef2ab1077d in fuse_resolve<br>
> > #37 0x00007fef2ab10824 in fuse_resolve_all<br>
> > #38 0x00007fef2ab1093e in fuse_resolve_and_resume<br>
> > #39 0x00007fef2ab1b40e in fuse_rename<br>
> > #40 0x00007fef2ab2a96a in fuse_thread_proc<br>
> > #41 0x00007fef3204daa1 in start_thread<br>
> > <br>
> > In other words, we started at FUSE, went through a bunch of performance<br>
> > translators, through DHT to EC, and then crashed on the way back. It seems<br>
> > a little odd that we turn the fop around immediately in EC, and that we have<br>
> > default_inodelk_cbk at frame 3. Could one of the DHT or EC people please<br>
> > take a look at it? Thanks!<br>
> > <br>
> > <br>
> > <a href="https://build.gluster.org/job/regression-test-burn-in/868/console">https://build.gluster.org/job/regression-test-burn-in/868/console</a><br>
This is the one.<br>
> > _______________________________________________<br>
> > Gluster-devel mailing list<br>
> > <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> > <a href="http://www.gluster.org/mailman/listinfo/gluster-devel">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
> > <br>
><br>
><br>
</p>