<div dir="auto"><div><div class="gmail_extra"><div class="gmail_quote">On Jan 5, 2017, 6:33 PM, "Joe Julian" <<a href="mailto:joe@julianfamily.org">joe@julianfamily.org</a>> wrote:<blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
That's still not without its drawbacks, though I'm sure my instance
is pretty rare. Ceph's automatic migration of data caused a
cascading failure and a complete loss of 580 TB of data due to a
hardware bug. If it had been on gluster, none of it would have been
lost.</div></blockquote></div></div></div><div dir="auto"><br></div><div dir="auto">I'm not talking only about automatic rebalance, but mostly about the ability to add a single brick/server to a replica 3 volume.</div><div dir="auto"><br></div><div dir="auto">Anyway, could you please share more details about the experience you had with Ceph, and about what you mean by a hardware bug?</div><div dir="auto"></div></div>