<div class="moz-cite-prefix">On 01/05/17 11:32, Gandalf
Corvotempesta wrote:<br>
</div>
> On 05 Jan 2017 at 2:00 PM, "Jeff Darcy" <jdarcy@redhat.com> wrote:
>> There used to be an idea called "data classification" to cover this
>> kind of case. You're right that setting arbitrary goals for arbitrary
>> objects would be too difficult. However, we could have multiple pools
>> with different replication/EC strategies, then use a translator like
>> the one for tiering to control which objects go into which pools based
>> on some kind of policy. To support that with a relatively small
>> number of nodes/bricks we'd also need to be able to split bricks into
>> smaller units, but that's not really all that hard.
<div dir="auto"><br>
</div>
<div dir="auto">IMHO one of the biggest drawback in gluster is
the way it manage bricks</div>
<div dir="auto">Adding the ability to add one server per time
without having to manually rebalance or similiar would be
usefully</div>
<div dir="auto"><br>
</div>
<div dir="auto">Both ceph and lizard manage this automatically.</div>
<div dir="auto">If you want, you can add a single disk to a
working cluster and automatically the whole cluster is
rebalanced transparently with no user intervention</div>
<div dir="auto"><br>
</div>
<div dir="auto">This is really usefully an much less error prone
that having to manually rebalance all the things </div>
</div>
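
Jeff's pools-plus-policy idea could look roughly like the sketch below.
To be clear, this is only an illustration under assumed names: the pool
names, the "replica-3"/"ec-4+2" strategy strings, the rule tuples, and
select_pool() are all hypothetical, and the real data-classification
work would live in a translator (in C), not in Python:

import fnmatch
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    strategy: str                            # e.g. "replica-3" or "ec-4+2"

POOLS = {
    "fast": Pool("fast", "replica-3"),       # hot data, full replicas
    "archive": Pool("archive", "ec-4+2"),    # cold data, erasure coded
}

# Policy rules, checked in order: (name glob, max size in bytes, pool).
RULES = [
    ("*.log", None, "archive"),    # logs always go to the EC pool
    ("*", 1 << 20, "fast"),        # anything under 1 MiB gets replicated
    ("*", None, "archive"),        # everything else is erasure coded
]

def select_pool(path: str, size: int) -> Pool:
    """Return the pool named by the first rule matching this object."""
    for pattern, max_size, pool in RULES:
        if fnmatch.fnmatch(path, pattern) and (max_size is None
                                               or size <= max_size):
            return POOLS[pool]
    raise LookupError(f"no placement rule matched {path!r}")

# select_pool("/var/log/app.log", 4096) -> the "archive" (EC) pool

The point is that placement is a pure function of per-object attributes,
so a tiering-style translator could apply it file by file without any
global coordination.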

That kind of automatic rebalancing is still not without its drawbacks,
though I'm sure my case is pretty rare. Ceph's automatic migration of
data caused a cascading failure and a complete loss of 580 TB of data
due to a hardware bug. If it had been on Gluster, none of it would have
been lost.
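
For what it's worth, the "add a single disk and the cluster rebalances
itself" behaviour usually falls out of hash-based placement (CRUSH in
Ceph's case): adding one node moves only about 1/N of the data, all of
it onto the newcomer. A toy consistent-hashing sketch, with hypothetical
names and nothing like the real CRUSH or DHT code:

import hashlib
from bisect import bisect

def h(key: str) -> int:
    # Stable hash; md5 is used here for spread, not security.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(nodes, vnodes=100):
    # Each node gets `vnodes` points on the ring to even out the split.
    return sorted((h(f"{n}-{v}"), n) for n in nodes for v in range(vnodes))

def locate(ring, key):
    # A key belongs to the first ring point at or after its hash.
    points = [p for p, _ in ring]
    return ring[bisect(points, h(key)) % len(ring)][1]

keys = [f"file-{i}" for i in range(10_000)]
ring3 = build_ring(["node-a", "node-b", "node-c"])
ring4 = build_ring(["node-a", "node-b", "node-c", "node-d"])
moved = sum(locate(ring3, k) != locate(ring4, k) for k in keys)
print(f"{moved / len(keys):.1%} of keys moved")   # roughly 25% for 3 -> 4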