<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
<div class="moz-cite-prefix">On 01/21/2016 08:25 PM, Glomski,
Patrick wrote:<br>
</div>
<blockquote
cite="mid:CALkMjdD5CF5E1vV1nzEqYj_pkuDKEH0pB80paTTGaVnTGs0bEw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>Hello, Pranith. The typical behavior is that the %cpu on a
glusterfsd process jumps to the number of processor cores
available (800% or 1200%, depending on the pair of nodes
involved) and the load average on the machine goes very high
(~20). The volume's heal statistics output shows that it is
crawling one of the bricks and trying to heal, but this crawl
hangs and never seems to finish.<br>
</div>
</div>
</blockquote>
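(For reference, the crawl status described above can be read from the
heal statistics commands; a minimal sketch, assuming the homegfs volume
shown later in this thread and run from any of the server nodes.)<br>
<pre>gluster volume heal homegfs statistics            # per-brick crawl start/end times and heal activity
gluster volume heal homegfs statistics heal-count # number of entries still pending per brick
</pre>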
<blockquote
cite="mid:CALkMjdD5CF5E1vV1nzEqYj_pkuDKEH0pB80paTTGaVnTGs0bEw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div><br>
</div>
The number of files in the xattrop directory varies over time,
so I periodically ran the wc -l you requested for a while and
then started including a datestamped list of the files in the
xattrop directory on each brick to see which ones were
persistent. All bricks had files in the xattrop folder, so the
results for all of them are attached.<br>
</div>
</blockquote>
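(For reference, a rough sketch of the kind of datestamped listing described
above; the brick path and the .glusterfs prefix on the indices directory are
assumptions to adjust for each brick on each node.)<br>
<pre>BRICK=/data/brick01a/homegfs                  # repeat for every brick on every node
XATTROP=$BRICK/.glusterfs/indices/xattrop     # pending-heal index for that brick
while true; do
    echo "== $(date '+%F %T') $(hostname) $BRICK =="
    ls $XATTROP | wc -l                       # count of pending entries
    ls $XATTROP                               # the entries themselves, to spot persistent ones
    sleep 300
done >> /var/tmp/xattrop-watch.log
</pre>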
Thanks, this info is helpful. I don't see a lot of files. Could you
give the output of "gluster volume heal <volname> info"? Is there
any directory in there which is LARGE?<br>
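For example, from any of the server nodes (homegfs being the volume from
David's setup below):<br>
<pre>gluster volume heal homegfs info              # files/gfids pending heal, listed per brick
gluster volume heal homegfs info split-brain  # entries currently flagged as split-brain
</pre>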
<br>
Pranith<br>
<blockquote
cite="mid:CALkMjdD5CF5E1vV1nzEqYj_pkuDKEH0pB80paTTGaVnTGs0bEw@mail.gmail.com"
type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Please let me know if there is anything else I can provide.<br>
</div>
<div><br>
</div>
<div>Patrick<br>
</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Jan 21, 2016 at 12:01 AM,
Pranith Kumar Karampuri <span dir="ltr"><<a
moz-do-not-send="true" href="mailto:pkarampu@redhat.com"
target="_blank">pkarampu@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> hey,<br>
Which process is consuming so much cpu? I went
through the logs you gave me. I see that the following
files are in gfid mismatch state:<br>
<br>
<066e4525-8f8b-43aa-b7a1-86bbcecc68b9/safebrowsing-backup>,<br>
<1d48754b-b38c-403d-94e2-0f5c41d5f885/recovery.bak>,<br>
<ddc92637-303a-4059-9c56-ab23b1bb6ae9/patch0008.cnvrg>,<br>
<br>
Could you give me the output of "ls
<brick-path>/indices/xattrop | wc -l" on all
the bricks which are acting this way? This will tell us
the number of pending self-heals on the system.<br>
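(A minimal sketch of that per-brick count; the brick paths here are the
gfs01a ones from the volume info further down, and the .glusterfs prefix
on the indices directory is an assumption about this layout.)<br>
<pre>for b in /data/brick01a/homegfs /data/brick02a/homegfs; do
    echo -n "$b: "
    ls $b/.glusterfs/indices/xattrop | wc -l
done
</pre>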
<br>
Pranith
<div>
<div class="h5"><br>
<br>
<div>On 01/20/2016 09:26 PM, David Robinson wrote:<br>
</div>
</div>
</div>
<blockquote type="cite">
<div>
<div class="h5">
<div>resending with parsed logs... </div>
<div> </div>
<div>
<blockquote
cite="http://em5ee26b0e-002a-4230-bdec-3020b98cff3c@dfrobins-vaio"
type="cite">
<div> </div>
<div> </div>
<div>
<blockquote
cite="http://eme3b2cb80-8be2-4fa5-9d08-4710955e237c@dfrobins-vaio"
type="cite">
<div>I am having issues with 3.6.6 where the
load will spike up to 800% for one of the
glusterfsd processes and the users can no
longer access the system. If I reboot the
node, the heal will finish normally after
a few minutes and the system will be
responsive, but a few hours later the
issue will start again. It looks like it
is hanging in a heal and spinning up the
load on one of the bricks. The heal gets
stuck and says it is crawling and never
returns. After a few minutes of the heal
saying it is crawling, the load spikes up
and the mounts become unresponsive.</div>
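<div>(A hedged sketch of one way to tie the hot glusterfsd back to a
specific brick; the PID 12345 below is purely illustrative.)</div>
<pre>top -b -n 1 | grep glusterfsd    # note the PID burning ~800% CPU
gluster volume status homegfs    # each brick is listed with its process PID and port
ps -p 12345 -o args=             # 12345 = busy PID; the arguments include the brick path
</pre>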
<div> </div>
<div>Any suggestions on how to fix this? It
has us stopped cold, as users can no
longer access the systems when the load
spikes... Logs attached.</div>
<div> </div>
<div>System setup info is: </div>
<div> </div>
<div>[root@gfs01a ~]# gluster volume info
homegfs<br>
<br>
Volume Name: homegfs<br>
Type: Distributed-Replicate<br>
Volume ID:
1e32672a-f1b7-4b58-ba94-58c085e59071<br>
Status: Started<br>
Number of Bricks: 4 x 2 = 8<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1:
gfsib01a.corvidtec.com:/data/brick01a/homegfs<br>
Brick2:
gfsib01b.corvidtec.com:/data/brick01b/homegfs<br>
Brick3:
gfsib01a.corvidtec.com:/data/brick02a/homegfs<br>
Brick4:
gfsib01b.corvidtec.com:/data/brick02b/homegfs<br>
Brick5:
gfsib02a.corvidtec.com:/data/brick01a/homegfs<br>
Brick6:
gfsib02b.corvidtec.com:/data/brick01b/homegfs<br>
Brick7:
gfsib02a.corvidtec.com:/data/brick02a/homegfs<br>
Brick8:
gfsib02b.corvidtec.com:/data/brick02b/homegfs<br>
Options Reconfigured:<br>
performance.io-thread-count: 32<br>
performance.cache-size: 128MB<br>
performance.write-behind-window-size:
128MB<br>
server.allow-insecure: on<br>
network.ping-timeout: 42<br>
storage.owner-gid: 100<br>
geo-replication.indexing: off<br>
geo-replication.ignore-pid-check: on<br>
changelog.changelog: off<br>
changelog.fsync-interval: 3<br>
changelog.rollover-time: 15<br>
server.manage-gids: on<br>
diagnostics.client-log-level: WARNING</div>
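<div>(For reference, the "Options Reconfigured" values above are the kind
applied with "gluster volume set", e.g.:)</div>
<pre>gluster volume set homegfs performance.io-thread-count 32
gluster volume set homegfs diagnostics.client-log-level WARNING
</pre>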
<div> </div>
<div>[root@gfs01a ~]# rpm -qa | grep gluster<br>
gluster-nagios-common-0.1.1-0.el6.noarch<br>
glusterfs-fuse-3.6.6-1.el6.x86_64<br>
glusterfs-debuginfo-3.6.6-1.el6.x86_64<br>
glusterfs-libs-3.6.6-1.el6.x86_64<br>
glusterfs-geo-replication-3.6.6-1.el6.x86_64<br>
glusterfs-api-3.6.6-1.el6.x86_64<br>
glusterfs-devel-3.6.6-1.el6.x86_64<br>
glusterfs-api-devel-3.6.6-1.el6.x86_64<br>
glusterfs-3.6.6-1.el6.x86_64<br>
glusterfs-cli-3.6.6-1.el6.x86_64<br>
glusterfs-rdma-3.6.6-1.el6.x86_64<br>
samba-vfs-glusterfs-4.1.11-2.el6.x86_64<br>
glusterfs-server-3.6.6-1.el6.x86_64<br>
glusterfs-extra-xlators-3.6.6-1.el6.x86_64<br>
</div>
<div> </div>
</blockquote>
</div>
</blockquote>
</div>
<br>
<br>
</div>
</div>
<pre>_______________________________________________
Gluster-devel mailing list
<a moz-do-not-send="true" href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a>
<a moz-do-not-send="true" href="http://www.gluster.org/mailman/listinfo/gluster-devel" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a></pre>
</blockquote>
<br>
</div>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a moz-do-not-send="true"
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a moz-do-not-send="true"
href="http://www.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>