<div dir="ltr"><div>I should mention that the problem is not currently occurring and there are no heals (output appended). By restarting the gluster services, we can stop the crawl, which lowers the load for a while. Subsequent crawls seem to finish properly. For what it's worth, files/folders that show up in the 'volume info' output during a hung crawl don't seem to be anything out of the ordinary. <br><br>Over the past four days, the typical time before the problem recurs after suppressing it in this manner is an hour. Last night when we reached out to you was the last time it happened and the load has been low since (a relief). David believes that recursively listing the files (ls -alR or similar) from a client mount can force the issue to happen, but obviously I'd rather not unless we have some precise thing we're looking for. Let me know if you'd like me to attempt to drive the system unstable like that and what I should look for. As it's a production system, I'd rather not leave it in this state for long.<br><br></div><div>[root@gfs01a xattrop]# gluster volume heal homegfs info<br>Brick gfs01a.corvidtec.com:/data/brick01a/homegfs/<br>Number of entries: 0<br><br>Brick gfs01b.corvidtec.com:/data/brick01b/homegfs/<br>Number of entries: 0<br><br>Brick gfs01a.corvidtec.com:/data/brick02a/homegfs/<br>Number of entries: 0<br><br>Brick gfs01b.corvidtec.com:/data/brick02b/homegfs/<br>Number of entries: 0<br><br>Brick gfs02a.corvidtec.com:/data/brick01a/homegfs/<br>Number of entries: 0<br><br>Brick gfs02b.corvidtec.com:/data/brick01b/homegfs/<br>Number of entries: 0<br><br>Brick gfs02a.corvidtec.com:/data/brick02a/homegfs/<br>Number of entries: 0<br><br>Brick gfs02b.corvidtec.com:/data/brick02b/homegfs/<br>Number of entries: 0<br><br><br><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jan 21, 2016 at 10:40 AM, Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class="">
<br>
<br>
<div>On 01/21/2016 08:25 PM, Glomski,
Patrick wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>Hello, Pranith. The typical behavior is that the %cpu on a
glusterfsd process jumps to the number of processor cores
available (800% or 1200%, depending on the pair of nodes
involved) and the load average on the machine goes very high
(~20). The volume's heal statistics output shows that it is
crawling one of the bricks and trying to heal, but this crawl
hangs and never seems to finish.<br>
</div>
</div>
</blockquote>
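<div>To pin down which brick the busy glusterfsd belongs to, something along these lines should be enough (a rough sketch, using the volume name from this thread):<br></div>
<div><pre>
# CPU usage of the brick processes
top -b -n 1 | grep glusterfsd
# the PID column here maps each brick to its glusterfsd process
gluster volume status homegfs
# the brick path also appears in the process arguments
ps -ww -o pid,args -C glusterfsd
</pre></div>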
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
The number of files in the xattrop directory varies over time, so I ran the wc -l you requested periodically for some time, and then started including a datestamped list of the files in the xattrop directory on each brick to see which entries were persistent. All bricks had files in the xattrop folder, so results for all of them are attached.<br>
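<div>For reference, a loop of roughly this shape per node does the job (a sketch: brick paths per our gfs01a layout, the indices directory sits under .glusterfs on the brick, and the interval and log path are arbitrary):<br></div>
<div><pre>
# timestamped snapshot of the xattrop contents on each local brick
for b in /data/brick01a/homegfs /data/brick02a/homegfs; do
    echo "$(date +%Y%m%d-%H%M%S) $b"
    ls "$b/.glusterfs/indices/xattrop" | wc -l
    ls "$b/.glusterfs/indices/xattrop"
done | tee -a /root/xattrop-snapshots.log
</pre></div>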
</div>
</blockquote></span>
Thanks, this info is helpful. I don't see a lot of files. Could you give the output of "gluster volume heal <volname> info"? Is there
any directory in there which is LARGE?<span class="HOEnZb"><font color="#888888"><br>
<br>
Pranith</font></span><div><div class="h5"><br>
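<div>(To check whether any directory being healed is unusually large, a rough entry count taken directly on the brick is enough; the path below is only an example.)<br></div>
<div><pre>
# rough entry count of a suspect directory, taken directly on the brick
# (-f skips sorting so it stays fast even on very large directories)
ls -f /data/brick01a/homegfs/path/to/suspect_dir | wc -l
</pre></div>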
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Please let me know if there is anything else I can provide.<br>
</div>
<div><br>
</div>
<div>Patrick<br>
</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Jan 21, 2016 at 12:01 AM,
Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> hey,<br>
      Which process is consuming so much cpu? I went
through the logs you gave me. I see that the following
files are in gfid mismatch state:<br>
<br>
<066e4525-8f8b-43aa-b7a1-86bbcecc68b9/safebrowsing-backup>,<br>
<1d48754b-b38c-403d-94e2-0f5c41d5f885/recovery.bak>,<br>
<ddc92637-303a-4059-9c56-ab23b1bb6ae9/patch0008.cnvrg>,<br>
<br>
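<div>(To confirm a mismatch by hand, the trusted.gfid xattr of the same file can be compared on the two bricks of its replica pair; the path below is only an example.)<br></div>
<div><pre>
# run on both bricks of the replica pair that hold the file
# (example path; substitute the real parent directory of recovery.bak)
getfattr -n trusted.gfid -e hex /data/brick01a/homegfs/path/to/recovery.bak
</pre></div>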
Could you give me the output of "ls
<brick-path>/indices/xattrop | wc -l" on all
the bricks which are acting this way? This will tell us
the number of pending self-heals on the system.<br>
<br>
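<div>Something like this on each server gives the count for all of its bricks (a sketch; the indices directory sits under .glusterfs on the brick):<br></div>
<div><pre>
# pending self-heal entry count per brick on this server
for b in /data/brick*/homegfs; do
    echo -n "$b: "
    ls "$b/.glusterfs/indices/xattrop" | wc -l
done
</pre></div>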
Pranith
<div>
<div><br>
<br>
<div>On 01/20/2016 09:26 PM, David Robinson wrote:<br>
</div>
</div>
</div>
<blockquote type="cite">
<div>
<div>
<div>resending with parsed logs... </div>
<div>&nbsp;</div>
<div>
<blockquote cite="http://em5ee26b0e-002a-4230-bdec-3020b98cff3c@dfrobins-vaio" type="cite">
<div>&nbsp;</div>
<div>&nbsp;</div>
<div>
<blockquote cite="http://eme3b2cb80-8be2-4fa5-9d08-4710955e237c@dfrobins-vaio" type="cite">
<div>I am having issues with 3.6.6 where the
load will spike up to 800% for one of the
glusterfsd processes and the users can no
longer access the system. If I reboot the
node, the heal will finish normally after
a few minutes and the system will be
responsive, but a few hours later the
issue will start again. It looks like it
is hanging in a heal and spinning up the
load on one of the bricks. The heal gets
stuck and says it is crawling and never
returns. After a few minutes of the heal
saying it is crawling, the load spikes up
and the mounts become unresponsive.</div>
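<div>(While it is stuck like this, the self-heal daemon log on the affected node is the first place to look; default log location assumed.)<br></div>
<div><pre>
# self-heal daemon log on the affected node (default location)
tail -n 200 /var/log/glusterfs/glustershd.log
</pre></div>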
<div>&nbsp;</div>
<div>Any suggestions on how to fix this? It
has us stopped cold as the users can no
longer access the systems when the load
spikes... Logs attached.</div>
<div>&nbsp;</div>
<div>System setup info is: </div>
<div>&nbsp;</div>
<div>[root@gfs01a ~]# gluster volume info
homegfs<br>
 <br>
Volume Name: homegfs<br>
Type: Distributed-Replicate<br>
Volume ID:
1e32672a-f1b7-4b58-ba94-58c085e59071<br>
Status: Started<br>
Number of Bricks: 4 x 2 = 8<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1:
gfsib01a.corvidtec.com:/data/brick01a/homegfs<br>
Brick2:
gfsib01b.corvidtec.com:/data/brick01b/homegfs<br>
Brick3:
gfsib01a.corvidtec.com:/data/brick02a/homegfs<br>
Brick4:
gfsib01b.corvidtec.com:/data/brick02b/homegfs<br>
Brick5:
gfsib02a.corvidtec.com:/data/brick01a/homegfs<br>
Brick6:
gfsib02b.corvidtec.com:/data/brick01b/homegfs<br>
Brick7:
gfsib02a.corvidtec.com:/data/brick02a/homegfs<br>
Brick8:
gfsib02b.corvidtec.com:/data/brick02b/homegfs<br>
Options Reconfigured:<br>
performance.io-thread-count: 32<br>
performance.cache-size: 128MB<br>
performance.write-behind-window-size:
128MB<br>
server.allow-insecure: on<br>
network.ping-timeout: 42<br>
storage.owner-gid: 100<br>
geo-replication.indexing: off<br>
geo-replication.ignore-pid-check: on<br>
changelog.changelog: off<br>
changelog.fsync-interval: 3<br>
changelog.rollover-time: 15<br>
server.manage-gids: on<br>
diagnostics.client-log-level: WARNING</div>
<div>&nbsp;</div>
<div>[root@gfs01a ~]# rpm -qa | grep gluster<br>
gluster-nagios-common-0.1.1-0.el6.noarch<br>
glusterfs-fuse-3.6.6-1.el6.x86_64<br>
glusterfs-debuginfo-3.6.6-1.el6.x86_64<br>
glusterfs-libs-3.6.6-1.el6.x86_64<br>
glusterfs-geo-replication-3.6.6-1.el6.x86_64<br>
glusterfs-api-3.6.6-1.el6.x86_64<br>
glusterfs-devel-3.6.6-1.el6.x86_64<br>
glusterfs-api-devel-3.6.6-1.el6.x86_64<br>
glusterfs-3.6.6-1.el6.x86_64<br>
glusterfs-cli-3.6.6-1.el6.x86_64<br>
glusterfs-rdma-3.6.6-1.el6.x86_64<br>
samba-vfs-glusterfs-4.1.11-2.el6.x86_64<br>
glusterfs-server-3.6.6-1.el6.x86_64<br>
glusterfs-extra-xlators-3.6.6-1.el6.x86_64<br>
</div>
<div>&nbsp;</div>
</blockquote>
</div>
</blockquote>
</div>
<br>
<fieldset></fieldset>
<br>
</div>
</div>
<pre>_______________________________________________
Gluster-devel mailing list
<a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a>
<a href="http://www.gluster.org/mailman/listinfo/gluster-devel" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a></pre>
</blockquote>
<br>
</div>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div></div></div>
</blockquote></div><br></div>