<div dir="ltr"><div class="gmail_extra"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>On Mon, Dec 5, 2016 at 4:53 AM, Momonth <span dir="ltr"><<a href="mailto:momonth@gmail.com" target="_blank">momonth@gmail.com</a>></span> wrote:<br></div></div></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi All,<br>
<br>
I've just joined this list as I'm working on a project and looking for<br>
a persistent and shared storage for docker based infra. I'm entirely<br>
new to the GlusterFS project, however have been involved into "storage<br>
business" for quite a while, including proprietary and opensource<br>
solutions.<br>
<br>
I've already deployed my first 2 nodes GlusterFS cluster, based on<br>
CentOS 6.8, I must admit it was really easy and everything just works<br>
=) So thumbsup!<br>
<br>
I'm now looking for $subj, just not to repeat common mistake a newbie<br>
like me would do. Things like "to use or not to use any RAID configs<br>
on nodes", "bricks capacities vs brick performance", "best network<br>
topologies" etc. Does anyone know a good source of that kind of info?<br></blockquote><div><br></div><div>Red Hat, I believe, still recommends putting the bricks on RAID. I haven't checked again lately, but when I was looking at RHS (Gluster) I want to say they recommended 8-12 disks in a RAID 6. I myself use ZFS with the disks passed through as JBOD and arranged in a RAID 10 equivalent; I've also seen setups that make each brick a single disk or a disk pair and let Gluster handle the redundancy. It all comes down to what level of protection vs. performance you want, and the workload, I think.</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I'm also curious to know any baseline performances, eg I have a 2<br>
nodes cluster in "replica" mode, each brick is SSD x2 in RAID1 mode.<br>
For the following workload:<br></blockquote><div><br></div><div>The first thing I notice is that without three nodes for quorum you run the risk of split-brain. A third node for three-way replication, or an arbiter node, would help with that. I like three-way replication, but it also affects network throughput, since you are now copying the data to one more node simultaneously.</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
100% random, 30% reads / 70% writes, 4KB block size, single thread<br>
<br>
I observe ~ 220 read IOPS + ~515 write IOPS, 95th percentile read<br>
latency 1.9 ms, write - 1.9 ms.<br>
<br>
Is it OK or not OK? Should I look into optimizing it?<br>
<br>
Thanks,<br>
Vladimir<br>
______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br>
</blockquote></div><br></div></div>
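<div dir="ltr"><div>As a concrete sketch of the arbiter option mentioned above (the hostnames node1/node2/node3 and the brick paths are placeholders, not from this thread), a replica-3 volume with an arbiter brick can be created like this:</div><pre>
# Sketch only: adjust hostnames and brick paths to your setup.
# The arbiter brick (the last one listed) stores only file metadata,
# so it needs far less capacity than the two data bricks.
gluster volume create gv0 replica 3 arbiter 1 \
    node1:/data/brick1/gv0 \
    node2:/data/brick1/gv0 \
    node3:/data/brick1/gv0
gluster volume start gv0
</pre><div>The arbiter gives you quorum against split-brain without the full extra write copy of three-way data replication.</div></div>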
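<div dir="ltr"><div>For comparing IOPS/latency numbers, it helps to share the exact benchmark invocation. Assuming a tool like fio (the original mail doesn't say which tool was used; the file name, size, and runtime are placeholders), the workload described would look something like:</div><pre>
# 100% random, 30% reads / 70% writes, 4 KB blocks, single thread,
# run against a file on the FUSE mount of the Gluster volume.
fio --name=randrw --filename=/mnt/gv0/testfile --size=1g \
    --rw=randrw --rwmixread=30 --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based
</pre></div>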