<html><body><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div>Serkan,<br></div><div><br></div><div>Heal for two different files can run in parallel, but not for different chunks of a single file.<br></div><div>I think you are referring to your previous mail, in which you had to remove one complete disk.<br></div><div><br></div><div>In that case heal starts automatically, but it scans through each and every file/dir<br></div><div>to decide whether it needs heal or not. This is undoubtedly a more time-consuming process than index heal.<br></div><div>If the data is 900GB, it might take a lot of time.<br></div><div><br></div><div>Which configuration to choose depends a lot on your storage requirements, hardware capability, and</div><div>the probability of disk and network failure.<br></div><div><br>For example, a small configuration like 4+2 could help you in this scenario. You can have a distributed dispersed volume of 4+2 config.</div><div>In this case each subvolume holds comparatively less data. If a brick fails in a subvolume, only that subvolume's data has to be healed, and that by reading from just 4 bricks.<br></div><div><br></div><div>dist-disp-vol<br></div><div><br></div><div>subvol-1 subvol-2 subvol-3<br></div><div>4+2 4+2 4+2<br></div><div>4GB 4GB 4GB<br></div><div>^^^<br></div><div>If a brick in subvol-1 fails, the heal will be local to that subvol only and will require only 4GB of data to be healed, reading from only 4 disks. 
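<br></div><div>To make the comparison concrete, here is a rough back-of-envelope sketch (my assumption, not measured numbers: heal rebuilds each fragment on the failed brick by reading the matching fragment from every data brick, so the read volume scales with the data-fragment count):</div>

```python
def heal_read_gb(brick_data_gb, data_fragments):
    """Approximate GB read from healthy bricks to rebuild one failed
    brick: every fragment on it is reconstructed from the matching
    fragments on `data_fragments` good bricks of the same size.
    (Assumption for illustration; ignores seeks, network, and CPU.)"""
    return brick_data_gb * data_fragments

# 16+4: rebuilding a brick holding 800GB reads about 800 * 16 GB
print(heal_read_gb(800, 16))  # 12800
# 4+2: the same 800GB on a failed brick needs about 800 * 4 GB read
print(heal_read_gb(800, 4))   # 3200
```

<div>Under that assumption, the smaller configuration reads roughly a quarter of the data to heal the same amount on a failed brick.</div><div>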
<br></div><div><br></div><div>I am keeping Pranith in CC to get his input too.</div><div><br></div><div>Ashish<br></div><div><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Serkan Çoban" <cobanserkan@gmail.com><br><b>To: </b>"Ashish Pandey" <aspandey@redhat.com><br><b>Cc: </b>"Gluster Users" <gluster-users@gluster.org><br><b>Sent: </b>Monday, August 8, 2016 4:47:02 PM<br><b>Subject: </b>Re: [Gluster-users] ec heal questions<br><div><br></div>Is reading the good copies to construct the bad chunk a parallel or<br>sequential operation?<br>Should I revert my 16+4 ec cluster to 8+2, given that it takes nearly 7<br>days to heal just one broken 8TB disk which has only 800GB of data?<br><div><br></div>On Mon, Aug 8, 2016 at 1:56 PM, Ashish Pandey <aspandey@redhat.com> wrote:<br>><br>> Hi,<br>><br>> Considering all other factors the same for both configurations, yes, the small<br>> configuration<br>> would take less time. Reading the good copies will take less time.<br>><br>> I think multi-threaded shd is the only enhancement in the near future.<br>><br>> Ashish<br>><br>> ________________________________<br>> From: "Serkan Çoban" <cobanserkan@gmail.com><br>> To: "Gluster Users" <gluster-users@gluster.org><br>> Sent: Monday, August 8, 2016 4:02:22 PM<br>> Subject: [Gluster-users] ec heal questions<br>><br>><br>> Hi,<br>><br>> Assume we have 8+2 and 16+4 ec configurations, and in each configuration we just<br>> replaced a broken disk which has 100GB of data. In which<br>> case does heal complete faster? Does heal speed have anything to do with<br>> the ec configuration?<br>><br>> Assume we are in a 16+4 ec configuration. When heal starts, it reads 16<br>> chunks from the other bricks, recomputes our chunks, and writes them to the just-<br>> replaced disk. 
Am I correct?<br>><br>> If the above assumption is true, then small ec configurations heal faster, right?<br>><br>> Are there any improvements in 3.7.14+ that make ec heal faster? (Other<br>> than multi-threaded shd for ec)<br>><br>> Thanks,<br>> Serkan<br>> _______________________________________________<br>> Gluster-users mailing list<br>> Gluster-users@gluster.org<br>> http://www.gluster.org/mailman/listinfo/gluster-users<br>><br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>http://www.gluster.org/mailman/listinfo/gluster-users</div><div><br></div></div></body></html>