Pranith,

This issue continues to happen. If you could provide instructions for getting you the statedump, I would be happy to send that information. I am not sure how to get a statedump just before the crash, as the crash is intermittent.
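One way to catch state for an intermittent crash is to dump it on a timer: gluster brick processes write a statedump when sent SIGUSR1 (the same mechanism behind 'gluster volume statedump <volname>'), so a small watcher can keep recent dumps on disk until a brick dies. Below is a rough, untested sketch; the five-minute interval is arbitrary, and it assumes pgrep is available and that dumps land in the default /var/run/gluster/ directory.

#!/usr/bin/env python
# Rough sketch, not a tested tool: periodically ask every running glusterfsd
# (brick) process on this host for a statedump by sending SIGUSR1, so the
# most recent dump before an intermittent crash is already on disk.
# Assumptions: dumps go to the default /var/run/gluster/ directory, and
# INTERVAL is an arbitrary value to tune.

import os
import signal
import subprocess
import time

INTERVAL = 300  # seconds between statedumps

def brick_pids():
    """Return PIDs of all glusterfsd brick processes, or [] if none."""
    try:
        out = subprocess.check_output(["pgrep", "-x", "glusterfsd"])
    except subprocess.CalledProcessError:
        return []
    return [int(p) for p in out.decode().split()]

def main():
    while True:
        for pid in brick_pids():
            try:
                # SIGUSR1 makes a gluster process write a statedump.
                os.kill(pid, signal.SIGUSR1)
            except OSError:
                pass  # brick may have exited between pgrep and kill
        time.sleep(INTERVAL)

if __name__ == "__main__":
    main()

After a crash, the newest dump files under /var/run/gluster/ (named after the brick path and PID) would presumably be the ones to send.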

David
------ Original Message ------
From: "Pranith Kumar Karampuri" <pkarampu@redhat.com>
To: "Glomski, Patrick" <patrick.glomski@corvidtec.com>; gluster-devel@gluster.org; gluster-users@gluster.org
Cc: "David Robinson" <david.robinson@corvidtec.com>
Sent: 12/21/2015 11:59:33 PM
Subject: Re: [Gluster-devel] glusterfsd crash due to page allocation failure
hi Glomski,
     This is the second time I am hearing about memory allocation problems in 3.7.6, but this time on the brick side. Are you able to recreate this issue? Will it be possible to get statedumps of the brick processes just before they crash?

Pranith
On 12/22/2015 02:25 AM, Glomski, Patrick wrote:
Hello,

We've recently upgraded from gluster 3.6.6 to 3.7.6 and have started encountering dmesg page allocation errors (stack trace is appended).

It appears that glusterfsd now sometimes fills up the cache completely and crashes with a page allocation failure. I *believe* it mainly happens when copying lots of new data to the system, running a 'find', or similar. Hosts are all Scientific Linux 6.6 and these errors occur consistently on two separate gluster pools.
Has anyone else seen this issue, and are there any known fixes for it via sysctl kernel parameters or other means?

Please let me know of any other diagnostic information that would help.

Thanks,
Patrick
[1458118.134697] glusterfsd: page allocation failure. order:5, mode:0x20
[1458118.134701] Pid: 6010, comm: glusterfsd Not tainted 2.6.32-573.3.1.el6.x86_64 #1
[1458118.134702] Call Trace:
[1458118.134714]  [<ffffffff8113770c>] ? __alloc_pages_nodemask+0x7dc/0x950
[1458118.134728]  [<ffffffffa0321800>] ? mlx4_ib_post_send+0x680/0x1f90 [mlx4_ib]
[1458118.134733]  [<ffffffff81176e92>] ? kmem_getpages+0x62/0x170
[1458118.134735]  [<ffffffff81177aaa>] ? fallback_alloc+0x1ba/0x270
[1458118.134736]  [<ffffffff811774ff>] ? cache_grow+0x2cf/0x320
[1458118.134738]  [<ffffffff81177829>] ? ____cache_alloc_node+0x99/0x160
[1458118.134743]  [<ffffffff8145f732>] ? pskb_expand_head+0x62/0x280
[1458118.134744]  [<ffffffff81178479>] ? __kmalloc+0x199/0x230
[1458118.134746]  [<ffffffff8145f732>] ? pskb_expand_head+0x62/0x280
[1458118.134748]  [<ffffffff8146001a>] ? __pskb_pull_tail+0x2aa/0x360
[1458118.134751]  [<ffffffff8146f389>] ? harmonize_features+0x29/0x70
[1458118.134753]  [<ffffffff8146f9f4>] ? dev_hard_start_xmit+0x1c4/0x490
[1458118.134758]  [<ffffffff8148cf8a>] ? sch_direct_xmit+0x15a/0x1c0
[1458118.134759]  [<ffffffff8146ff68>] ? dev_queue_xmit+0x228/0x320
[1458118.134762]  [<ffffffff8147665d>] ? neigh_connected_output+0xbd/0x100
[1458118.134766]  [<ffffffff814abc67>] ? ip_finish_output+0x287/0x360
[1458118.134767]  [<ffffffff814abdf8>] ? ip_output+0xb8/0xc0
[1458118.134769]  [<ffffffff814ab04f>] ? __ip_local_out+0x9f/0xb0
[1458118.134770]  [<ffffffff814ab085>] ? ip_local_out+0x25/0x30
[1458118.134772]  [<ffffffff814ab580>] ? ip_queue_xmit+0x190/0x420
[1458118.134773]  [<ffffffff81137059>] ? __alloc_pages_nodemask+0x129/0x950
[1458118.134776]  [<ffffffff814c0c54>] ? tcp_transmit_skb+0x4b4/0x8b0
[1458118.134778]  [<ffffffff814c319a>] ? tcp_write_xmit+0x1da/0xa90
[1458118.134779]  [<ffffffff81178cbd>] ? __kmalloc_node+0x4d/0x60
[1458118.134780]  [<ffffffff814c3a80>] ? tcp_push_one+0x30/0x40
[1458118.134782]  [<ffffffff814b410c>] ? tcp_sendmsg+0x9cc/0xa20
[1458118.134786]  [<ffffffff8145836b>] ? sock_aio_write+0x19b/0x1c0
[1458118.134788]  [<ffffffff814581d0>] ? sock_aio_write+0x0/0x1c0
[1458118.134791]  [<ffffffff8119169b>] ? do_sync_readv_writev+0xfb/0x140
[1458118.134797]  [<ffffffff810a14b0>] ? autoremove_wake_function+0x0/0x40
[1458118.134801]  [<ffffffff8123e92f>] ? selinux_file_permission+0xbf/0x150
[1458118.134804]  [<ffffffff812316d6>] ? security_file_permission+0x16/0x20
[1458118.134806]  [<ffffffff81192746>] ? do_readv_writev+0xd6/0x1f0
[1458118.134807]  [<ffffffff811928a6>] ? vfs_writev+0x46/0x60
[1458118.134809]  [<ffffffff811929d1>] ? sys_writev+0x51/0xd0
[1458118.134812]  [<ffffffff810e88ae>] ? __audit_syscall_exit+0x25e/0x290
[1458118.134816]  [<ffffffff8100b0d2>] ? system_call_fastpath+0x16/0x1b