<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=utf-8">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    gluster 3.7.6<br>
    <br>
    I seem to be able to reliably reproduce this. I have a replica 2
    volume with 1 test VM image. While the VM is running with heavy
    disk read/writes (disk benchmark) I add a 3rd brick for replica 3.
    For reference, the existing volume was created roughly along these
    lines (from memory, so the options may not be exact):<br>
    <br>
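    <tt>gluster volume create datastore1 replica 2
      vna.proxmox.softlog:/vmdata/datastore1
      vnb.proxmox.softlog:/vmdata/datastore1<br>
      # sharding is on - the heal output below shows /.shard entries<br>
      gluster volume set datastore1 features.shard on<br>
      gluster volume start datastore1<br>
    </tt>
    <br>
    Then the add-brick:<br>
    <br>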
    <tt>gluster volume add-brick datastore1 replica 3 
      vng.proxmox.softlog:/vmdata/datastore1 <br>
      <br>
      I pretty much immediately get this:<br>
      <br>
    </tt>
    <blockquote><tt>gluster volume heal datastore1 info</tt><br>
      <tt>Brick vna.proxmox.softlog:/vmdata/datastore1</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.20</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.22</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.55 - Possibly
        undergoing heal</tt><br>
      <br>
      <tt>/images/301/vm-301-disk-1.qcow2 - Possibly undergoing heal</tt><br>
      <br>
      <tt>Number of entries: 4</tt><br>
      <br>
      <tt>Brick vnb.proxmox.softlog:/vmdata/datastore1</tt><br>
      <tt>/images/301/vm-301-disk-1.qcow2 - Possibly undergoing heal</tt><br>
      <br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.55 - Possibly
        undergoing heal</tt><br>
      <br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.20</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.22</tt><br>
      <tt>Number of entries: 4</tt><br>
      <br>
      <tt>Brick vng.proxmox.softlog:/vmdata/datastore1</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.16</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.28</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.1</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.22</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.77</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.9</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.5</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.2</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.26</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.15</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.13</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.3</tt><br>
      <tt>/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.18</tt><br>
      <tt>Number of entries: 13</tt><br>
    </blockquote>
    <tt><br>
    </tt>The brick on vng is the new empty brick, but it has 13 shards
    being healed back to vna &amp; vnb. That can't be right, and if I
    leave it, the VM becomes hopelessly corrupted. Also, there are 81
    shards in the file; they should all be queued for healing.<br>
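    <br>
    (I'm getting the shard count by looking at an existing brick
    directly, roughly like this - the gfid is the one from the heal
    output, and the base file at the image path holds the first block,
    so it isn't under /.shard:)<br>
    <br>
    <tt># run on vna or vnb<br>
      ls /vmdata/datastore1/.shard/d6aad699-d71d-4b35-b021-d35e5ff297c4.* | wc -l<br>
    </tt>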
    <br>
    Additionally, I get read errors when I run a <tt>qemu-img check</tt>
    on the VM image. If I remove the vng brick, the problems are resolved.<br>
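    <br>
    Roughly what I'm running (the mount path is just where the volume
    happens to be mounted on my node, so treat that part as an example):<br>
    <br>
    <tt># assumes the volume is fuse-mounted at /mnt/pve/datastore1<br>
      qemu-img check /mnt/pve/datastore1/images/301/vm-301-disk-1.qcow2<br>
      <br>
      # drop back to replica 2 by removing the new brick<br>
      gluster volume remove-brick datastore1 replica 2 vng.proxmox.softlog:/vmdata/datastore1 force<br>
    </tt>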
    <br>
    <br>
    If I do the same process while the VM is not running - i.e. no
    files are being accessed - everything proceeds as expected. All
    shards on vna &amp; vnb are healed to vng.<br>
    <tt><br>
    </tt>
    <pre class="moz-signature" cols="72">-- 
Lindsay Mathieson</pre>
  </body>
</html>