<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 24/05/2016 8:24 PM, Kevin Lemonnier
wrote:<br>
</div>
<blockquote cite="mid:20160524102444.GD3364@luwin.ulrar.net"
type="cite">
<pre wrap="">So the VM were configured with cache set to none, I just tried with
cache=directsync and it seems to be fixing the issue. Still need to run
more test, but did a couple already with that option and no I/O errors.
Never had to do this before, is it known ? Found the clue in some old mail
from this mailing list, did I miss some doc saying you should be using
directsync with glusterfs ?</pre>
</blockquote>
<br>
<p>Interesting, I remember seeing some issues with cache=none on the
  proxmox mailing list. I use writeback or the default, which might be
  why I haven't encountered these issues. I suspect you would find
  writethrough works as well.</p>
<p><br>
</p>
<p>From the proxmox wiki:</p>
<p><br>
</p>
"<i>This mode causes qemu-kvm to interact with the disk image file
or block device with O_DIRECT semantics, so the host page cache is
bypassed </i><i><br>
</i><i> and I/O happens directly between the qemu-kvm userspace
buffers and the storage device. Because the actual
storage device may report </i><i><br>
</i><i> a write as completed when placed in its write queue
only, the guest's virtual storage adapter is informed that there
is a writeback cache, </i><i><br>
</i><i> so the guest would be expected to send down flush
commands as needed to manage data integrity.</i><i><br>
</i><i> Equivalent to direct access to your hosts' disk,
performance wise.</i>"<br>
<br>
<br>
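<p>For what it's worth, here's a minimal C sketch of what O_DIRECT implies at
  the syscall level. It's only an analogy: the file name and sizes are
  placeholders, and qemu's real cache modes also differ in whether O_SYNC is
  set and how flushes are issued (cache=none is roughly O_DIRECT plus explicit
  flushes from the guest, directsync roughly O_DIRECT with O_SYNC semantics).</p>
<pre>
#define _GNU_SOURCE          /* needed for O_DIRECT on Linux */
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;

#define ALIGN 4096           /* O_DIRECT wants block-aligned buffers and offsets */

int main(void)
{
    /* "test.img" is just a placeholder path */
    int fd = open("test.img", O_RDWR | O_CREAT | O_DIRECT, 0644);
    if (fd &lt; 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&amp;buf, ALIGN, ALIGN) != 0) { perror("posix_memalign"); return 1; }
    memset(buf, 0xAB, ALIGN);

    /* the write bypasses the host page cache... */
    if (pwrite(fd, buf, ALIGN, 0) != ALIGN) { perror("pwrite"); return 1; }

    /* ...but the device may still hold it in its write queue, so an explicit
       flush is what actually makes it durable. That is the "flush commands"
       part of the wiki text. */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

    free(buf);
    close(fd);
    return 0;
}
</pre>
<br>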
I'll restore a test vm and try cache=none myself.<br>
<pre class="moz-signature" cols="72">--
Lindsay Mathieson</pre>
</body>
</html>