<div dir="ltr">I don't think these will help. We need to trigger parallel heals; I gave the command as a reply to one of your earlier threads. Sorry again for the delay :-(.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Aug 9, 2016 at 3:53 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Does increasing any of the below values help ec heal speed?<br>
<br>
performance.io-thread-count 16<br>
performance.high-prio-threads 16<br>
performance.normal-prio-threads 16<br>
performance.low-prio-threads 16<br>
performance.least-prio-threads 1<br>
client.event-threads 8<br>
server.event-threads 8<br>
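For reference, these tunables are applied per volume with the standard gluster CLI; a minimal sketch, assuming a volume named "testvol" (substitute your own volume name):

```shell
# Assumed volume name "testvol"; replace with your volume.
# Each option above is set with `gluster volume set`:
gluster volume set testvol performance.io-thread-count 16
gluster volume set testvol performance.high-prio-threads 16
gluster volume set testvol performance.normal-prio-threads 16
gluster volume set testvol performance.low-prio-threads 16
gluster volume set testvol performance.least-prio-threads 1
gluster volume set testvol client.event-threads 8
gluster volume set testvol server.event-threads 8
```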
<div class="HOEnZb"><div class="h5"><br>
<br>
On Mon, Aug 8, 2016 at 2:48 PM, Ashish Pandey <<a href="mailto:aspandey@redhat.com">aspandey@redhat.com</a>> wrote:<br>
> Serkan,<br>
><br>
> Heals of 2 different files can run in parallel, but heal of different<br>
> chunks of a single file cannot.<br>
> I think you are referring to your previous mail, in which you had to remove<br>
> one complete disk.<br>
><br>
> In this case heal starts automatically, but it scans through each and every<br>
> file/dir<br>
> to decide whether it needs heal or not. No doubt this is a more<br>
> time-consuming process compared to index heal.<br>
> If the data is 900GB then it might take a lot of time.<br>
><br>
> Which configuration to choose depends a lot on your storage requirements,<br>
> hardware capability and<br>
> the probability of disk and network failure.<br>
><br>
> For example: a small configuration like 4+2 could help you in this<br>
> scenario. You can have a distributed disperse volume of 4+2 config.<br>
> In this case each sub vol has comparatively less data. If a brick fails<br>
> in that sub vol, it will have to heal only that much data, and that too by<br>
> reading from 4 bricks only.<br>
><br>
> dist-disp-vol<br>
><br>
> subvol-1 subvol-2 subvol-3<br>
> 4+2 4+2 4+2<br>
> 4GB 4GB 4GB<br>
> ^^^<br>
> If a brick in subvol-1 fails, the heal will be local to this subvol only and<br>
> will require only 4GB of data to be healed, which means reading from 4<br>
> disks only.<br>
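The read amplification described above can be sketched with a bit of shell arithmetic; the 4GB figure and the 4+2 config are taken from the example, the rule of thumb is my own summary:

```shell
# In a k+m disperse subvolume, reconstructing one brick reads k good
# fragments for every fragment rebuilt, so bytes read ≈ k × bytes healed.
k=4            # data bricks in a 4+2 config
healed_gb=4    # data on the failed brick, from the example above
echo "data read: $((k * healed_gb)) GB"
```

For a 16+4 volume the same 4GB heal would read 64GB, which is why the smaller config heals from less data.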
><br>
> I am keeping Pranith in CC to take his input too.<br>
><br>
> Ashish<br>
><br>
><br>
> ______________________________<wbr>__<br>
> From: "Serkan Çoban" <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
> To: "Ashish Pandey" <<a href="mailto:aspandey@redhat.com">aspandey@redhat.com</a>><br>
> Cc: "Gluster Users" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
> Sent: Monday, August 8, 2016 4:47:02 PM<br>
> Subject: Re: [Gluster-users] ec heal questions<br>
><br>
><br>
> Is reading the good copies to construct the bad chunk a parallel or a<br>
> sequential operation?<br>
> Should I revert my 16+4 ec cluster to 8+2 because it takes nearly 7<br>
> days to heal just one broken 8TB disk which has only 800GB of data?<br>
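For context, the figures in this question (800GB over roughly 7 days) work out to a very low effective heal rate; a quick check:

```shell
# Effective heal throughput: 800GB healed in 7 days (numbers from the mail).
awk 'BEGIN { printf "%.2f MB/s\n", 800*1024 / (7*24*3600) }'
```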
><br>
> On Mon, Aug 8, 2016 at 1:56 PM, Ashish Pandey <<a href="mailto:aspandey@redhat.com">aspandey@redhat.com</a>> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> Considering all other factors the same for both configurations, yes, the<br>
>> small<br>
>> configuration<br>
>> would take less time, since it has fewer good copies to read.<br>
>><br>
>> I think multi-threaded shd is the only enhancement in the near future.<br>
>><br>
>> Ashish<br>
>><br>
>> ______________________________<wbr>__<br>
>> From: "Serkan Çoban" <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> To: "Gluster Users" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
>> Sent: Monday, August 8, 2016 4:02:22 PM<br>
>> Subject: [Gluster-users] ec heal questions<br>
>><br>
>><br>
>> Hi,<br>
>><br>
>> Assume we have 8+2 and 16+4 ec configurations and we just replaced a<br>
>> broken disk holding 100GB of data in each configuration. In which<br>
>> case does heal complete faster? Does heal speed have anything to do<br>
>> with the ec configuration?<br>
>><br>
>> Assume we are in a 16+4 ec configuration. When heal starts, it reads 16<br>
>> chunks from the other bricks, recomputes our chunks, and writes them to<br>
>> the just-replaced disk. Am I correct?<br>
>><br>
>> If the above assumption is true, then small ec configurations heal faster,<br>
>> right?<br>
>><br>
>> Are there any improvements in 3.7.14+ that make ec heal faster (other<br>
>> than multi-threaded shd for ec)?<br>
>><br>
>> Thanks,<br>
>> Serkan<br>
>> ______________________________<wbr>_________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
> ______________________________<wbr>_________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div>