<div dir="ltr">Hello,<div><br></div><div>Please see below :</div><div>-----</div><div><br></div><div>web01 # getfattr -d -m . -e hex media/ga/live/a</div><div># file: media/ga/live/a</div><div>trusted.afr.dirty=0x000000000000000000000000</div><div>trusted.afr.remote1=0x000000000000000000000000</div><div>trusted.afr.remote2=0x000000000000000000000005</div><div>trusted.afr.share-client-0=0x000000000000000000000000</div><div>trusted.afr.share-client-1=0x0000000000000000000000ee</div><div>trusted.gfid=0xb13199a1464c44918464444b3f7eeee3</div><div>trusted.glusterfs.dht=0x000000010000000000000000ffffffff </div><div><br></div><div><br></div><div>------</div><div><br></div><div><div>web02 # getfattr -d -m . -e hex media/ga/live/a</div><div># file: media/ga/live/a</div><div>trusted.afr.dirty=0x000000000000000000000000</div><div>trusted.afr.remote1=0x000000000000000000000008</div><div>trusted.afr.remote2=0x000000000000000000000000</div><div>trusted.afr.share-client-0=0x000000000000000000000000</div><div>trusted.afr.share-client-1=0x000000000000000000000000</div><div>trusted.gfid=0xb13199a1464c44918464444b3f7eeee3</div><div>trusted.glusterfs.dht=0x000000010000000000000000ffffffff</div></div><div><br></div><div>------</div><div><br></div><div>Regards,</div><div>AT</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jan 4, 2016 at 12:44 PM, Krutika Dhananjay <span dir="ltr">&lt;<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:garamond,new york,times,serif;font-size:12pt;color:#000000"><div>Hi,</div><div><br></div><div>Could you share the output of<br></div><div># getfattr -d -m . -e hex &lt;abs-path-to-media/ga/live/a&gt;<br></div><div><br></div><div>from both the bricks?<br></div><div><br></div><div>-Krutika<br></div><hr><blockquote style="border-left:2px solid #1010ff;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>From: </b>&quot;Andreas Tsaridas&quot; &lt;<a href="mailto:andreas.tsaridas@gmail.com" target="_blank">andreas.tsaridas@gmail.com</a>&gt;<br><b>To: </b><a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br><b>Sent: </b>Monday, January 4, 2016 5:10:58 PM<br><b>Subject: </b>[Gluster-users] folder not being healed<div><div class="h5"><br><div><br></div><div dir="ltr">Hello,<div><br></div><div>I have a cluster of two replicated nodes in glusterfs 3.6.3 in RedHat 6.6. Problem is that a specific folder is always trying to be healed but never gets healed. 
-----

Regards,
AT


On Mon, Jan 4, 2016 at 12:44 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:

Hi,

Could you share the output of

# getfattr -d -m . -e hex <abs-path-to-media/ga/live/a>

from both the bricks?

-Krutika

----- Original Message -----
From: "Andreas Tsaridas" <andreas.tsaridas@gmail.com>
To: gluster-users@gluster.org
Sent: Monday, January 4, 2016 5:10:58 PM
Subject: [Gluster-users] folder not being healed

Hello,

I have a cluster of two replicated nodes running GlusterFS 3.6.3 on Red Hat 6.6. The problem is that one specific directory is repeatedly picked up for self-heal but never actually gets healed. This has been going on for two weeks now.

-----

# gluster volume status
Status of volume: share
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 172.16.4.1:/srv/share/glusterfs           49152   Y       10416
Brick 172.16.4.2:/srv/share/glusterfs           49152   Y       19907
NFS Server on localhost                         2049    Y       22664
Self-heal Daemon on localhost                   N/A     Y       22676
NFS Server on 172.16.4.2                        2049    Y       19923
Self-heal Daemon on 172.16.4.2                  N/A     Y       19937

Task Status of Volume share
------------------------------------------------------------------------------
There are no active volume tasks

-----

# gluster volume info

Volume Name: share
Type: Replicate
Volume ID: 17224664-645c-48b7-bc3a-b8fc84c6ab30
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.4.1:/srv/share/glusterfs
Brick2: 172.16.4.2:/srv/share/glusterfs
Options Reconfigured:
cluster.background-self-heal-count: 20
cluster.heal-timeout: 2
performance.normal-prio-threads: 64
performance.high-prio-threads: 64
performance.least-prio-threads: 64
performance.low-prio-threads: 64
performance.flush-behind: off
performance.io-thread-count: 64
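One of the options above may be worth flagging: cluster.heal-timeout is 2, while the documented default is 600 seconds (as far as I know), so the self-heal daemon is re-crawling almost continuously. Whether that relates to the stuck entry I cannot say, but if the default is wanted back:

# untested suggestion, assumes the 600-second default is desired
gluster volume set share cluster.heal-timeout 600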
-----

# gluster volume heal share info
Brick web01.rsdc:/srv/share/glusterfs/
/media/ga/live/a - Possibly undergoing heal

Number of entries: 1

Brick web02.rsdc:/srv/share/glusterfs/
Number of entries: 0

-----

# gluster volume heal share info split-brain
Gathering list of split brain entries on volume share has been successful

Brick 172.16.4.1:/srv/share/glusterfs
Number of entries: 0

Brick 172.16.4.2:/srv/share/glusterfs
Number of entries: 0

-----

==> /var/log/glusterfs/glustershd.log <==
[2016-01-04 11:35:33.004831] I [afr-self-heal-entry.c:554:afr_selfheal_entry_do] 0-share-replicate-0: performing entry selfheal on b13199a1-464c-4491-8464-444b3f7eeee3
[2016-01-04 11:36:07.449192] W [client-rpc-fops.c:2772:client3_3_lookup_cbk] 0-share-client-1: remote operation failed: No data available. Path: (null) (00000000-0000-0000-0000-000000000000)
[2016-01-04 11:36:07.449706] W [client-rpc-fops.c:240:client3_3_mknod_cbk] 0-share-client-1: remote operation failed: File exists. Path: (null)
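In case it helps: the gfid in the selfheal line can be inspected directly on a brick through the .glusterfs index (a sketch, assuming the standard layout where the first two bytes of the gfid form two directory levels; for a directory this entry is a symlink):

gfid=b13199a1-464c-4491-8464-444b3f7eeee3
# resolves to .glusterfs/b1/31/<gfid> on this brick
ls -l /srv/share/glusterfs/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}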
Could you please advise?

Kind regards,

AT

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users