<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
+gluster-users<br>
<br>
<div class="moz-cite-prefix">On 01/13/2016 09:44 AM, Pawan Devaiah
wrote:<br>
</div>
<blockquote
cite="mid:CANF5f10sFnrK12raMPy24UKBSm__p9P0Kr9S6KCmJv2-DxwokA@mail.gmail.com"
type="cite">
<div dir="ltr">We would be looking for redundancy, so replicated
volumes, I guess.</div>
</blockquote>
If replication is going to be there, why add RAID 10 underneath? You
can use just RAID 6 for the bricks; it saves space, and replication in
GlusterFS will provide redundancy anyway.<br>
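<br>
For example, a two-node replicated volume on top of RAID 6 bricks
could be created roughly like this (the hostnames and brick paths
below are only placeholders; adjust them to your setup):<br>
<pre># on each node, mount the RAID 6 array as a brick, e.g. at /data/brick1
gluster peer probe server2
gluster volume create repvol replica 2 \
    server1:/data/brick1/repvol server2:/data/brick1/repvol
gluster volume start repvol</pre>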
<br>
Pranith<br>
<blockquote
cite="mid:CANF5f10sFnrK12raMPy24UKBSm__p9P0Kr9S6KCmJv2-DxwokA@mail.gmail.com"
type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Thanks,</div>
<div>Dev</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Jan 13, 2016 at 5:07 PM,
Pranith Kumar Karampuri <span dir="ltr"><<a
moz-do-not-send="true" href="mailto:pkarampu@redhat.com"
target="_blank">pkarampu@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class=""> <br>
<br>
<div>On 01/13/2016 02:21 AM, Pawan Devaiah wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Thanks for the response Pranith
<div><br>
</div>
<div>If we take EC out of the equation and say I go with
RAID on the physical disks, do you think GlusterFS is a
good fit for the two workloads I mentioned before?</div>
<div><br>
</div>
<div>Basically it is going to be NFS storage for VMs and
data, but with different RAID levels: RAID 10 for the VMs
and RAID 6 for the data.</div>
</div>
</blockquote>
</span> What kind of volume will you be using with these
disks?<span class="HOEnZb"><font
color="#888888"><br>
<br>
Pranith</font></span>
<div>
<div class="h5"><br>
<blockquote type="cite">
<div dir="ltr">
<div>Thanks </div>
<div>Dev</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan 12, 2016 at
9:46 PM, Pranith Kumar Karampuri <span
dir="ltr"><<a moz-do-not-send="true"
href="mailto:pkarampu@redhat.com"
target="_blank">pkarampu@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0
0 0 .8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span> <br>
<br>
<div>On 01/12/2016 01:26 PM, Pawan Devaiah
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Thanks for your response
Pranith and Mathieu,
<div><br>
</div>
<div>Pranith: To answer your question,
I am planning to use this storage
for two main workloads.</div>
<div><br>
</div>
<div>1. As a shared storage for VMs.</div>
</div>
</blockquote>
</span> EC as it is today is not good for
this.<span><br>
<blockquote type="cite">
<div dir="ltr">
<div>2. As a NFS Storage for files.</div>
</div>
</blockquote>
</span> If the above is for storing archive
data, EC is a good fit here.<span><font
color="#888888"><br>
<br>
Pranith</font></span>
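<br>
For an archive workload, a dispersed (EC) volume could be
created along these lines, here with 6 bricks and redundancy
2 (i.e. 4+2); the hostnames and paths are placeholders, and
the brace expansion is done by bash:<br>
<pre>gluster volume create ecvol disperse 6 redundancy 2 \
    server{1..6}:/data/brick1/ecvol
gluster volume start ecvol</pre>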
<div>
<div><br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>We are an online backup company,
so we store a few hundred
terabytes of data.</div>
<div><br>
</div>
<div><br>
</div>
<div>Mathieu: I appreciate your
concern; however, as system
admins we sometimes get paranoid
and try to control everything
under the sun.</div>
<div>I know I can only control what
I can.</div>
<div><br>
</div>
<div>Having said that: no, I have a
pair of servers to start with, so
at the moment I am just evaluating
and preparing a proof of concept,
after which I will propose it to
my management; if they are happy,
we will proceed further.</div>
<div><br>
</div>
<div>Regards,</div>
<div>Dev</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan
12, 2016 at 7:30 PM, Mathieu
Chateau <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:mathieu.chateau@lotp.fr"
target="_blank">mathieu.chateau@lotp.fr</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div dir="ltr">Hello,
<div><br>
</div>
<div>For any system, 36 disks
raises the probability of a
disk failure. Do you plan to
run GlusterFS with only one
server?</div>
<div><br>
</div>
<div>You should think about
failure at each level and be
prepared for it:</div>
<div>
<ul>
<li>Motherboard failure
(full server down)</li>
<li>Disks failure</li>
<li>Network cable failure <br>
</li>
<li>File system corruption
(time needed for fsck)</li>
<li>File/folder removed by
mistake (backup)</li>
</ul>
<div>Whether or not to use RAID
depends on your answers to
these questions and on the
performance you need.</div>
<div>It also depends on how
"good" the RAID controller
in your server is, e.g.
whether it has a battery and
1 GB of cache.</div>
</div>
<div><br>
</div>
<div>When many disks are
bought at the same time (one
order, serial numbers close
to each other), they may
fail close together in time
(if something went wrong at
the factory).</div>
<div>I have already seen three
disks fail within a few days.</div>
<div><br>
</div>
<div>just my 2 cents,</div>
<div><br>
</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br
clear="all">
<div>
<div>Cordialement,<br>
Mathieu CHATEAU<br>
<a moz-do-not-send="true"
href="http://www.lotp.fr" target="_blank">http://www.lotp.fr</a></div>
</div>
<div>
<div> <br>
<div class="gmail_quote">2016-01-12
4:36 GMT+01:00 Pranith
Kumar Karampuri <span
dir="ltr"><<a
moz-do-not-send="true"
href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>:<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px
#ccc
solid;padding-left:1ex">
<div bgcolor="#FFFFFF"
text="#000000"><span>
<br>
<br>
<div>On 01/12/2016
04:34 AM, Pawan
Devaiah wrote:<br>
</div>
<blockquote
type="cite">
<div dir="ltr">Hi All,
<div><br>
</div>
<div>We have a fairly powerful server sitting at the office with 128 GB of RAM and 36 x 4 TB drives. I am planning to utilize this server as backend storage with GlusterFS on it.</div>
<div>I have been doing a lot of reading on GlusterFS, but I do not see any definite recommendation on having RAID on Gluster nodes.</div>
<div>Is it recommended to have RAID on Gluster nodes, especially for the bricks?</div>
<div>If yes, is that not contrary to the latest erasure coding implemented in Gluster, or is that still not ready for production environments?</div>
<div>I am happy to implement RAID, but my two main concerns are:</div>
<div>1. I want to make the most of the disk space available.</div>
<div>2. I am also concerned about the rebuild time after a disk failure on the RAID.</div>
</div>
</blockquote>
</span> What is the
workload you have?<br>
<br>
We found in our testing that random read/write workloads on erasure-coded volumes are not as good as with replication. There are enhancements in progress to address this, which we have yet to merge and re-test.<br>
<br>
Pranith<br>
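<br>
If you want to measure this yourself, a mixed random read/write run with fio against a mounted volume would look something like the following (the mount point and all parameters here are only illustrative):<br>
<pre>fio --name=randrw --directory=/mnt/glustervol \
    --rw=randrw --rwmixread=70 --bs=4k --size=1G \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --numjobs=4 --runtime=60 --time_based --group_reporting</pre>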
<blockquote
type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Thanks</div>
<div>Dev</div>
<div><br>
</div>
<div><br>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
<pre>_______________________________________________
Gluster-users mailing list
<a moz-do-not-send="true" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a moz-do-not-send="true" href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>