<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sun, Nov 6, 2016 at 3:24 AM, Gandalf Corvotempesta <span dir="ltr"><<a href="mailto:gandalf.corvotempesta@gmail.com" target="_blank">gandalf.corvotempesta@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<p>Il 06/11/2016 03:37, David Gossage ha scritto:<br>
</p>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">The only thing you gain with raidz1, I
think, is maybe more usable space. Performance in general
will not be as good, and whether the vdev is mirrored or z1,
neither can survive 2 drives failing. In most cases the z10
will rebuild faster, with less impact during the rebuild. If you
are already using Gluster 3-node replicate, as VM practices
suggest, then you are already pretty well protected even if you
lose the wrong 2 drives.<br>
</div>
</div>
</div>
</blockquote>
<br>
Ok, I'll try again. I'm <b>not</b> talking about a single RAIDZ1
for the whole server.<br>
<br>
Let's assume a 12-disk server, 4TB each. Raw space = 4TB*12 = 48TB<br>
<br>
You can do one of the following:<br>
1) <b>a single RAIDZ10</b>, using all disks, made up of 6 RAIDZ1
mirrors. Usable space = 4TB*6 = 24TB<br>
2) <b>6 RAIDZ1 mirrors</b>. Usable space = 4TB*6 = 24TB<br></div></blockquote><div><br></div><div>I see, maybe you don't really mean raidz1 here. Raidz1 usually refers to "raid5"-type vdevs with at least 3 disks; otherwise, why pay a penalty for tracking parity when you can have a mirrored pair? So in your case, are you changing from one zpool as laid out to multiple zpools, each one being a single mirrored vdev pair of disks?</div><div><br></div><div>tank1</div><div> mirror</div><div> pair-a</div><div> pair-a</div><div><br></div><div>tank2</div><div> mirror</div><div> pair-b</div><div> pair-b</div><div><br></div><div>etc.....</div><div><br></div><div>as opposed to</div><div><br></div><div>tank1</div><div> mirror</div><div> pair-a</div><div> pair-a</div><div> mirror</div><div> pair-b</div><div> pair-b</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF">
<br>
You'll get the same usable space with both solutions.<br>
<br>
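<div>As a rough sanity check of the capacity arithmetic above (just the numbers from this example, not any real pool):</div>

```shell
# Capacity math for the example above: 12 disks of 4TB,
# arranged as 6 two-way mirrors in either layout.
disks=12
disk_tb=4

raw_tb=$((disks * disk_tb))        # 4TB * 12 = 48TB raw
mirrors=$((disks / 2))             # 6 mirrored pairs either way
usable_tb=$((mirrors * disk_tb))   # each mirror exposes one disk's capacity

echo "raw=${raw_tb}TB usable=${usable_tb}TB"   # raw=48TB usable=24TB
```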
Now you have gluster, so you have at least 2 more servers in an
"identical" configuration.<br>
<br>
With solution 1, you can lose only 1 disk from each pair. If you
lose 2 disks from the same pair, you lose the whole RAIDZ10 and
you have to heal 24TB from the network.<br>
<br>
With solution 2, you can lose the same number of disks, but if you
lose 1 mirror at once, you only have to heal that mirror from the
network: only 4TB.<br>
<br>
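<div>Making that heal-cost difference concrete with the same numbers (a rough worst case that assumes the bricks are full):</div>

```shell
# Data healed over the network after losing both disks of one mirror,
# for the two layouts above (6 mirrors of 4TB disks).
disk_tb=4
mirrors=6

# Solution 1: one pool striped over all mirrors -> the whole pool
# (the whole brick) is lost and must be healed from the other replicas.
heal_one_pool_tb=$((mirrors * disk_tb))

# Solution 2: one pool per mirror -> only that one brick is healed.
heal_per_mirror_tb=$disk_tb

echo "single pool: ${heal_one_pool_tb}TB, per-mirror pools: ${heal_per_mirror_tb}TB"
```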
* IOPS should be the same, as Gluster will 'aggregate' each pair
into a single volume, like a RAID10 does, but you get much more
speed during a heal.<br>
* Resilvering time is the same, as ZFS has to resilver only the
failed disk in both solutions.<br>
<br>
What I'm saying is to skip the "RAID0" part and use Gluster as the
aggregator. It's much more secure and faster to recover in case of
multiple failures.<br></div></blockquote><div><br></div><div>So, moving from a replicated to a distributed-replicated model? Or a striped-distributed-replicated one? What command or layout would you use to get to the model you want?</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#FFFFFF">
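<div>If I follow, the setup would look something like this (a sketch only: device names, brick paths, and the volume name gv0 are made up for illustration, and the commands are untested):</div>

```
# On each of the 3 servers: one zpool per mirrored pair (solution 2)
zpool create tank1 mirror sda sdb
zpool create tank2 mirror sdc sdd
# ... and so on, up to tank6

# Then one distributed-replicated Gluster volume, replica 3 across
# the servers, with each mirror-backed brick as its own subvolume
gluster volume create gv0 replica 3 \
  srv1:/tank1/brick srv2:/tank1/brick srv3:/tank1/brick \
  srv1:/tank2/brick srv2:/tank2/brick srv3:/tank2/brick
```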
</div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>