<html><body><div style="font-family: garamond,new york,times,serif; font-size: 12pt; color: #000000"><div>Hi,</div><div><br></div><div>Could you share the output of<br></div><div># getfattr -d -m . -e hex &lt;abs-path-to-media/ga/live/a&gt;<br></div><div><br></div><div>from both bricks?<br></div><div><br></div><div>-Krutika<br></div><hr id="zwchr"><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Andreas Tsaridas" &lt;andreas.tsaridas@gmail.com&gt;<br><b>To: </b>gluster-users@gluster.org<br><b>Sent: </b>Monday, January 4, 2016 5:10:58 PM<br><b>Subject: </b>[Gluster-users] folder not being healed<br><div><br></div><div dir="ltr">Hello,<div><br></div><div>I have a cluster of two replicated nodes running GlusterFS 3.6.3 on RedHat 6.6. The problem is that a specific folder is repeatedly picked up for healing but never actually gets healed. This has been going on for two weeks now.</div><div><br></div><div>-----</div><div><br></div><div><div># gluster volume status</div><div>Status of volume: share</div><div>Gluster process<span class="" style="white-space:pre">                                                </span>Port<span class="" style="white-space:pre">        </span>Online<span class="" style="white-space:pre">        </span>Pid</div><div>------------------------------------------------------------------------------</div><div>Brick 172.16.4.1:/srv/share/glusterfs<span class="" style="white-space:pre">                        </span>49152<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>10416</div><div>Brick 172.16.4.2:/srv/share/glusterfs<span class="" style="white-space:pre">                        </span>49152<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>19907</div><div>NFS Server on 
localhost<span class="" style="white-space:pre">                                        </span>2049<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>22664</div><div>Self-heal Daemon on localhost<span class="" style="white-space:pre">                                </span>N/A<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>22676</div><div>NFS Server on 172.16.4.2<span class="" style="white-space:pre">                                </span>2049<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>19923</div><div>Self-heal Daemon on 172.16.4.2<span class="" style="white-space:pre">                                </span>N/A<span class="" style="white-space:pre">        </span>Y<span class="" style="white-space:pre">        </span>19937</div><div><br></div><div>Task Status of Volume share</div><div>------------------------------------------------------------------------------</div><div>There are no active volume tasks</div></div><div><br></div><div>------</div><div><br></div><div><div># gluster volume info</div><div><br></div><div>Volume Name: share</div><div>Type: Replicate</div><div>Volume ID: 17224664-645c-48b7-bc3a-b8fc84c6ab30</div><div>Status: Started</div><div>Number of Bricks: 1 x 2 = 2</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: 172.16.4.1:/srv/share/glusterfs</div><div>Brick2: 172.16.4.2:/srv/share/glusterfs</div><div>Options Reconfigured:</div><div>cluster.background-self-heal-count: 20</div><div>cluster.heal-timeout: 2</div><div>performance.normal-prio-threads: 64</div><div>performance.high-prio-threads: 64</div><div>performance.least-prio-threads: 64</div><div>performance.low-prio-threads: 64</div><div>performance.flush-behind: off</div><div>performance.io-thread-count: 64</div></div><div><br></div><div>------</div><div><br></div><div><div># gluster volume heal 
share info</div><div>Brick web01.rsdc:/srv/share/glusterfs/</div><div>/media/ga/live/a - Possibly undergoing heal</div><div><br></div><div>Number of entries: 1</div><div><br></div><div>Brick web02.rsdc:/srv/share/glusterfs/</div><div>Number of entries: 0</div></div><div><br></div><div>-------</div><div><br></div><div><div># gluster volume heal share info split-brain</div><div>Gathering list of split brain entries on volume share has been successful</div><div><br></div><div>Brick 172.16.4.1:/srv/share/glusterfs</div><div>Number of entries: 0</div><div><br></div><div>Brick 172.16.4.2:/srv/share/glusterfs</div><div>Number of entries: 0</div></div><div><br></div><div>-------</div><div><br></div><div><div>==&gt; /var/log/glusterfs/glustershd.log &lt;==</div><div>[2016-01-04 11:35:33.004831] I [afr-self-heal-entry.c:554:afr_selfheal_entry_do] 0-share-replicate-0: performing entry selfheal on b13199a1-464c-4491-8464-444b3f7eeee3</div><div>[2016-01-04 11:36:07.449192] W [client-rpc-fops.c:2772:client3_3_lookup_cbk] 0-share-client-1: remote operation failed: No data available. Path: (null) (00000000-0000-0000-0000-000000000000)</div><div>[2016-01-04 11:36:07.449706] W [client-rpc-fops.c:240:client3_3_mknod_cbk] 0-share-client-1: remote operation failed: File exists. Path: (null)</div></div><div><br></div><div>Could you please advise?</div><div><br></div><div>Kind regards,</div><div><br></div><div>AT</div></div>
<br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>http://www.gluster.org/mailman/listinfo/gluster-users</blockquote><div><br></div></div></body></html>