http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
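In case that page is unreachable, the usual manual procedure is roughly the
sketch below. The brick path comes from your getfattr output further down;
which replica holds the good copy is something only you can decide, and the
.glusterfs path contains placeholders you have to fill in from the
hex-decoded trusted.gfid:

  # list the entries gluster itself flags as split-brain (run on any server)
  gluster volume heal md1 info split-brain

  # on the brick whose copy you want to DISCARD, remove the file...
  rm /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
  # ...and its hard link under .glusterfs, named after the gfid
  rm /data/glusterfs/md1/brick1/.glusterfs/<aa>/<bb>/<full-gfid>

  # then let self-heal copy the surviving replica back
  gluster volume heal md1
  gluster volume heal md1 info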
On March 11, 2015 4:24:09 AM PDT, Alessandro Ipe <Alessandro.Ipe@meteo.be> wrote:

> Well, it is even worse. Now doing an "ls -R" on the volume results in a lot of
>
> [2015-03-11 11:18:31.957505] E [afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-md1-replicate-2: Unable to self-heal contents of '/library' (possible split-brain). Please delete the file from all but the preferred subvolume.- Pending matrix:  [ [ 0 2 ] [ 1 0 ] ]
> [2015-03-11 11:18:31.957692] E [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-md1-replicate-2:  metadata self heal  failed,   on /library
>
> I am desperate...
>
> A.
>
> On Wednesday 11 March 2015 12:05:33 you wrote:
>> Hi,
>>
>> When trying to access a file on a gluster client (through FUSE), I get an
>> "Input/output error" message.
>>
>> Getting the attributes for the file gives me, for the first brick:
>> # file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
>> trusted.afr.md1-client-2=0sAAAAAAAAAAAAAAAA
>> trusted.afr.md1-client-3=0sAAABdAAAAAAAAAAA
>> trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
>>
>> and for the second (replica) brick:
>> # file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
>> trusted.afr.md1-client-2=0sAAABJAAAAAAAAAAA
>> trusted.afr.md1-client-3=0sAAAAAAAAAAAAAAAA
>> trusted.gfid=0sOCFPGCdrQ9uyq2yTTPCKqQ==
>>
>> It seems that I have a split-brain. How can I solve this issue by resetting
>> the attributes, please?
>>
>> Thanks,
>>
>> Alessandro.
>>
>> ==================
>> gluster volume info md1
>>
>> Volume Name: md1
>> Type: Distributed-Replicate
>> Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
>> Status: Started
>> Number of Bricks: 3 x 2 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: tsunami1:/data/glusterfs/md1/brick1
>> Brick2: tsunami2:/data/glusterfs/md1/brick1
>> Brick3: tsunami3:/data/glusterfs/md1/brick1
>> Brick4: tsunami4:/data/glusterfs/md1/brick1
>> Brick5: tsunami5:/data/glusterfs/md1/brick1
>> Brick6: tsunami6:/data/glusterfs/md1/brick1
>> Options Reconfigured:
>> server.allow-insecure: on
>> cluster.read-hash-mode: 2
>> features.quota: off
>> performance.write-behind: on
>> performance.write-behind-window-size: 4MB
>> performance.flush-behind: off
>> performance.io-thread-count: 64
>> performance.cache-size: 512MB
>> nfs.disable: on
>> cluster.lookup-unhashed: off
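As for resetting the attributes: the non-zero trusted.afr.md1-client-* values
are AFR's pending-changelog counters, and in your output each copy accuses the
other, which is exactly what gets reported as split-brain. If you would rather
keep the file in place than delete one copy, a rough sketch is below. It is
only an illustration; which brick holds the stale copy is an assumption you
must verify from the data itself, and you should back up both brick copies
before touching anything:

  # on the brick whose copy you decide to DISCARD, zero both changelogs
  # (12 zero bytes each) so only the good copy's accusation remains
  setfattr -n trusted.afr.md1-client-2 -v 0x000000000000000000000000 \
      /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
  setfattr -n trusted.afr.md1-client-3 -v 0x000000000000000000000000 \
      /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2

  # then trigger a heal and check that the client can read the file again
  gluster volume heal md1
  gluster volume heal md1 info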
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.