<html><body><div style="font-family: garamond,new york,times,serif; font-size: 12pt; color: #000000"><div>Could you share the logs?<br></div><div>I'd like to look at the glustershd logs and etc-glusterfs-glusterd.vol.log files.<br></div><div><br></div><div>-Krutika<br></div><div><br></div><hr id="zwchr"><blockquote style="border-left:2px solid #1010FF;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;" data-mce-style="border-left: 2px solid #1010FF; margin-left: 5px; padding-left: 5px; color: #000; font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Lindsay Mathieson" &lt;lindsay.mathieson@gmail.com&gt;<br><b>To: </b>"gluster-users" &lt;Gluster-users@gluster.org&gt;<br><b>Sent: </b>Saturday, January 23, 2016 9:20:52 AM<br><b>Subject: </b>[Gluster-users] More Peculiar heal behaviour after removing brick<br><div><br></div>Maybe I'm doing something wrong here, but I'm not sure what; or maybe this is normal behaviour?<br> <br> <br> All of the following is performed from my vna node, which has the highest-numbered UUID. Indenting applied by me for readability.<br> <br> Spoiler, because it happens at the end: removing the vng brick followed by a full heal gives this error:<br> &nbsp; "<tt><b>Commit failed on vng.proxmox.softlog. 
Please check log file for details."</b></tt><br> <br> <br> Steps to recreate:<br> 1.&nbsp;&nbsp;&nbsp; Create a test volume:<br><blockquote><tt>vna$ gluster volume create test3 rep 3 transport tcp vnb.proxmox.softlog:/vmdata/test3 vng.proxmox.softlog:/vmdata/test3 vna.proxmox.softlog:/vmdata/test3</tt><br> <tt>vna$ gluster volume set test3 group softlog</tt><br> <tt>vna$ gluster volume info test3</tt><br><blockquote><tt>Volume Name: test3</tt><br> <tt>Type: Replicate</tt><br> <tt>Volume ID: 0be89d63-775c-4eb5-9d98-0a4a87f30fbf</tt><br> <tt>Status: Created</tt><br> <tt>Number of Bricks: 1 x 3 = 3</tt><br> <tt>Transport-type: tcp</tt><br> <tt>Bricks:</tt><br> <tt>Brick1: vnb.proxmox.softlog:/vmdata/test3</tt><br> <tt>Brick2: vng.proxmox.softlog:/vmdata/test3</tt><br> <tt>Brick3: vna.proxmox.softlog:/vmdata/test3</tt><br> <tt>Options Reconfigured:</tt><br> <tt>cluster.data-self-heal-algorithm: full</tt><br> <tt>network.remote-dio: enable</tt><br> <tt>cluster.eager-lock: enable</tt><br> <tt>performance.io-cache: off</tt><br> <tt>performance.read-ahead: off</tt><br> <tt>performance.quick-read: off</tt><br> <tt>performance.stat-prefetch: off</tt><br> <tt>performance.strict-write-ordering: on</tt><br> <tt>performance.write-behind: off</tt><br> <tt>nfs.enable-ino32: off</tt><br> <tt>nfs.addr-namelookup: off</tt><br> <tt>nfs.disable: on</tt><br> <tt>performance.cache-refresh-timeout: 4</tt><br> <tt>performance.io-thread-count: 32</tt><br> <tt>performance.low-prio-threads: 32</tt><br> <tt>cluster.server-quorum-type: server</tt><br> <tt>cluster.quorum-type: auto</tt><br> <tt>client.event-threads: 4</tt><br> <tt>server.event-threads: 4</tt><br> <tt>cluster.self-heal-window-size: 256</tt><br> <tt>features.shard-block-size: 512MB</tt><br> <tt>features.shard: on</tt><br> <tt>performance.readdir-ahead: off</tt><br></blockquote><tt>vna$ gluster volume start test3</tt><br> <br></blockquote>2.&nbsp;&nbsp;&nbsp; Immediately remove the vng brick:<br><blockquote><tt>vna$ gluster volume 
remove-brick test3 replica 2 vng.proxmox.softlog:/vmdata/test3 force</tt><br> <tt>vna$ gluster volume info test3</tt><br><blockquote><tt>Volume Name: test3</tt><br> <tt>Type: Replicate</tt><br> <tt>Volume ID: 36421a23-68c4-455d-8d4c-e21d9428e1da</tt><br> <tt>Status: Started</tt><br> <tt>Number of Bricks: 1 x 2 = 2</tt><br> <tt>Transport-type: tcp</tt><br> <tt>Bricks:</tt><br> <tt>Brick1: vnb.proxmox.softlog:/vmdata/test3</tt><br> <tt>Brick2: vna.proxmox.softlog:/vmdata/test3</tt><br> <tt>Options Reconfigured:</tt><br> <tt>cluster.data-self-heal-algorithm: full</tt><br> <tt>network.remote-dio: enable</tt><br> <tt>cluster.eager-lock: enable</tt><br> <tt>performance.io-cache: off</tt><br> <tt>performance.read-ahead: off</tt><br> <tt>performance.quick-read: off</tt><br> <tt>performance.stat-prefetch: off</tt><br> <tt>performance.strict-write-ordering: on</tt><br> <tt>performance.write-behind: off</tt><br> <tt>nfs.enable-ino32: off</tt><br> <tt>nfs.addr-namelookup: off</tt><br> <tt>nfs.disable: on</tt><br> <tt>performance.cache-refresh-timeout: 4</tt><br> <tt>performance.io-thread-count: 32</tt><br> <tt>performance.low-prio-threads: 32</tt><br> <tt>cluster.server-quorum-type: server</tt><br> <tt>cluster.quorum-type: auto</tt><br> <tt>client.event-threads: 4</tt><br> <tt>server.event-threads: 4</tt><br> <tt>cluster.self-heal-window-size: 256</tt><br> <tt>features.shard-block-size: 512MB</tt><br> <tt>features.shard: on</tt><br> <tt>performance.readdir-ahead: off</tt><br></blockquote></blockquote><br> 3.&nbsp;&nbsp;&nbsp; Then run a full heal:<br> <br><blockquote><tt>vna$ gluster volume heal test3 full</tt><br> <tt>&nbsp; <b>Commit failed on vng.proxmox.softlog. Please check log file for details.</b></tt><br></blockquote><br> <br> Weird, because of course the vng brick has been removed. This happens every time.<br> <br> I have preserved the glustershd logs from vna &amp; vng if needed. 
There were no heal logs.<br> <br><pre class="moz-signature">-- 
Lindsay Mathieson</pre><br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>http://www.gluster.org/mailman/listinfo/gluster-users</blockquote><div><br></div></div></body></html>