<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Gluster 3.7.10<br>
Proxmox (Debian Jessie)<br>
<br>
I'm finding the behaviour below more than a little concerning. I've
created a datastore with the following settings:<br>
<blockquote><tt>Volume Name: datastore4</tt><br>
<tt>Type: Replicate</tt><br>
<tt>Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce</tt><br>
<tt>Status: Started</tt><br>
<tt>Number of Bricks: 1 x 3 = 3</tt><br>
<tt>Transport-type: tcp</tt><br>
<tt>Bricks:</tt><br>
<tt>Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4</tt><br>
<tt>Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4</tt><br>
<tt>Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4</tt><br>
<tt>Options Reconfigured:</tt><br>
<tt>features.shard-block-size: 64MB</tt><br>
<tt>network.remote-dio: enable</tt><br>
<tt>cluster.eager-lock: enable</tt><br>
<tt>performance.io-cache: off</tt><br>
<tt>performance.read-ahead: off</tt><br>
<tt>performance.quick-read: off</tt><br>
<tt>performance.stat-prefetch: on</tt><br>
<tt>performance.strict-write-ordering: off</tt><br>
<tt>nfs.enable-ino32: off</tt><br>
<tt>nfs.addr-namelookup: off</tt><br>
<tt>nfs.disable: on</tt><br>
<tt>cluster.server-quorum-type: server</tt><br>
<tt>cluster.quorum-type: auto</tt><br>
<tt>features.shard: on</tt><br>
<tt>cluster.data-self-heal: on</tt><br>
<tt>cluster.self-heal-window-size: 1024</tt><br>
<tt>transport.address-family: inet</tt><br>
<tt>performance.readdir-ahead: on</tt></blockquote>
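<br>
For reference, the volume was created and tuned along these lines
(reconstructed from the settings above, so a sketch rather than the exact
command history):<br>
<blockquote><pre># Sketch only: rebuilt from the volume info above, not the exact commands run
gluster volume create datastore4 replica 3 \
    vnb.proxmox.softlog:/tank/vmdata/datastore4 \
    vng.proxmox.softlog:/tank/vmdata/datastore4 \
    vna.proxmox.softlog:/tank/vmdata/datastore4

# VM-image tuning, matching the "Options Reconfigured" list above
gluster volume set datastore4 features.shard on
gluster volume set datastore4 features.shard-block-size 64MB
gluster volume set datastore4 network.remote-dio enable
gluster volume set datastore4 cluster.quorum-type auto
gluster volume set datastore4 cluster.server-quorum-type server

gluster volume start datastore4</pre></blockquote>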
<br>
<br>
I've transferred 12 Windows VMs to it (via gfapi) and am running them
all, spread across three nodes.<br>
<br>
"gluster volume heal datastore3 statistics heal-count" shows zero
heals on all nodes.<br>
<br>
but "gluster volume heal datastore4 info" shows heals occurring on
mutliple shards on all nodes, different shards each time its called.
<br>
<br>
<blockquote><tt>gluster volume heal datastore4 info</tt><br>
<tt>Brick vnb.proxmox.softlog:/tank/vmdata/datastore4</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.221</tt><br>
<tt>/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1362</tt><br>
<tt>/.shard/bbdff876-290a-4e5e-93ef-a95276d57220.942</tt><br>
<tt>/.shard/eaeb41ec-9c0d-4fed-984f-cf832d8d33e0.1032</tt><br>
<tt>/.shard/f8ce4b49-14d0-46ef-9a95-456884f34fd4.623</tt><br>
<tt>/.shard/e9a39d2e-a1b7-4ea0-9d8c-b55370048d03.483</tt><br>
<tt>/.shard/f8ce4b49-14d0-46ef-9a95-456884f34fd4.47</tt><br>
<tt>/.shard/eaeb41ec-9c0d-4fed-984f-cf832d8d33e0.160</tt><br>
<tt>Status: Connected</tt><br>
<tt>Number of entries: 8</tt><br>
<br>
<tt>Brick vng.proxmox.softlog:/tank/vmdata/datastore4</tt><br>
<tt>/.shard/bd493985-2ee6-43f1-b8d5-5f0d5d3aa6f5.33</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.48</tt><br>
<tt>/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1304</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.47</tt><br>
<tt>/.shard/719041d0-d755-4bc6-a5fc-6b59071fac17.142</tt><br>
<tt>Status: Connected</tt><br>
<tt>Number of entries: 5</tt><br>
<br>
<tt>Brick vna.proxmox.softlog:/tank/vmdata/datastore4</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.357</tt><br>
<tt>/.shard/bbdff876-290a-4e5e-93ef-a95276d57220.996</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.679</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.496</tt><br>
<tt>/.shard/eaeb41ec-9c0d-4fed-984f-cf832d8d33e0.160</tt><br>
<tt>/.shard/719041d0-d755-4bc6-a5fc-6b59071fac17.954</tt><br>
<tt>/.shard/d297f8d6-e263-4af3-9384-6492614dc115.678</tt><br>
<tt>/.shard/719041d0-d755-4bc6-a5fc-6b59071fac17.852</tt><br>
<tt>/.shard/bbdff876-290a-4e5e-93ef-a95276d57220.1544</tt><br>
<tt>Status: Connected</tt><br>
<tt>Number of entries: 9</tt><br>
</blockquote>
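<br>
To show how much the list churns, a quick loop like this (just a sketch)
reports a different set of shards on almost every pass, while heal-count
stays at zero:<br>
<blockquote><pre># Hypothetical check loop: count the shard entries "heal info" lists,
# then dump heal-count, every ten seconds.
VOL=datastore4
for i in $(seq 1 5); do
    date
    gluster volume heal "$VOL" info | grep -c '/\.shard/'
    gluster volume heal "$VOL" statistics heal-count
    sleep 10
done</pre></blockquote>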
<br>
<br>
<br>
<pre class="moz-signature" cols="72">--
Lindsay Mathieson</pre>
</body>
</html>