<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 11, 2016 at 4:29 PM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I can wait for the patch to complete; please inform me when you're ready.<br>
If it takes too much time to solve the crawl issue, I can test<br>
without it too...<br></blockquote><div><br></div><div>I don't know the root cause of the problem yet, so I am not sure when it will be ready. Let me build the rpms; I have a meeting now for around an hour, and I will start building them after that.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Serkan<br>
<br>
On Thu, Aug 11, 2016 at 5:52 AM, Pranith Kumar Karampuri<br>
<div class="HOEnZb"><div class="h5"><<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
><br>
><br>
> On Wed, Aug 10, 2016 at 1:58 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> Any progress about the patch?<br>
><br>
><br>
> hi Serkan,<br>
> While testing the patch myself, I am seeing that it takes more<br>
> than one crawl to complete heals even when there are no directory<br>
> hierarchies. It is faster than before, but it shouldn't take more than one<br>
> crawl to complete the heal because all the files already exist. I am<br>
> investigating why that is the case now. If you want to test things out<br>
> without this patch, I will give you rpms today. Otherwise we need to wait<br>
> until we find the RCA for this crawl problem. Let me know your decision. If<br>
> you are okay with testing progressive versions of this feature, that would<br>
> be great. We can compare how each patch improved the performance.<br>
><br>
> Pranith<br>
><br>
>><br>
>><br>
>> On Thu, Aug 4, 2016 at 10:16 AM, Pranith Kumar Karampuri<br>
>> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> ><br>
>> ><br>
>> > On Thu, Aug 4, 2016 at 11:30 AM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Thanks Pranith,<br>
>> >> I am waiting for the RPMs to show up; I will do the tests as soon as<br>
>> >> possible and inform you.<br>
>> ><br>
>> ><br>
>> > I guess on 3.7.x the RPMs are not built automatically. Let me find out<br>
>> > how it can be done; I will inform you once I do. Give me a day.<br>
>> ><br>
>> >><br>
>> >><br>
>> >> On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri<br>
>> >> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> >> ><br>
>> >> ><br>
>> >> > On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri<br>
>> >> > <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> >> >><br>
>> >> >><br>
>> >> >><br>
>> >> >> On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban<br>
>> >> >> <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> >> >> wrote:<br>
>> >> >>><br>
>> >> >>> I use rpms for installation. Redhat/Centos 6.8.<br>
>> >> >><br>
>> >> >><br>
>> >> >> <a href="http://review.gluster.org/#/c/15084" rel="noreferrer" target="_blank">http://review.gluster.org/#/c/<wbr>15084</a> is the patch. The rpms<br>
>> >> >> will actually be built there in some time.<br>
>> >> ><br>
>> >> ><br>
>> >> > At the same URL above, the rpms for fedora/el6/el7 will be posted at<br>
>> >> > the end of the page.<br>
>> >> ><br>
>> >> >><br>
>> >> >><br>
>> >> >> Use gluster volume set &lt;volname&gt; disperse.shd-max-threads<br>
>> >> >> &lt;num-threads&gt; (range: 1-64)<br>
>> >> >><br>
>> >> >> While testing this I thought of ways to decrease the number of<br>
>> >> >> crawls as well, but they are a bit involved. Try to create the same<br>
>> >> >> set of data and see how long heals take to complete as you increase<br>
>> >> >> the number of parallel heal threads from 1 to 64.<br>
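The sweep suggested above could be scripted roughly like this. This is only a sketch: the volume name "testvol", the polling interval, and the use of a full heal to repopulate the queue are assumptions, not something from this thread.

```shell
#!/bin/sh
# Sketch: sweep disperse.shd-max-threads and time a full heal at each setting.
# Assumes a disperse volume named "testvol" with the same data set recreated
# before each run (hypothetical setup, for illustration only).
for threads in 1 2 4 8 16 32 64; do
    gluster volume set testvol disperse.shd-max-threads "$threads"
    gluster volume heal testvol full
    start=$(date +%s)
    # Poll heal info until no brick reports pending entries.
    while gluster volume heal testvol info | grep -q '^Number of entries: [1-9]'; do
        sleep 10
    done
    echo "threads=$threads heal_seconds=$(( $(date +%s) - start ))"
done
```

Comparing the printed timings across runs would show how each setting (and later, each patch revision) changes heal throughput.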
>> >> >><br>
>> >> >>><br>
>> >> >>> On Wed, Aug 3, 2016 at 10:16 PM, Pranith Kumar Karampuri<br>
>> >> >>> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >>> > On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban<br>
>> >> >>> > <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> >> >>> > wrote:<br>
>> >> >>> >><br>
>> >> >>> >> I prefer 3.7 if it is ok for you. Can you also provide build<br>
>> >> >>> >> instructions?<br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >>> > 3.7 should be fine. Do you use rpms/debs/anything-else?<br>
>> >> >>> ><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri<br>
>> >> >>> >> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> >> >>> >> ><br>
>> >> >>> >> ><br>
>> >> >>> >> > On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban<br>
>> >> >>> >> > <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> >> >>> >> > wrote:<br>
>> >> >>> >> >><br>
>> >> >>> >> >> Yes, but I can create a 2+1 (or 8+2) ec volume using two<br>
>> >> >>> >> >> servers, right? I have 26 disks on each server.<br>
>> >> >>> >> ><br>
>> >> >>> >> ><br>
>> >> >>> >> > On which release-branch do you want the patch? I am testing it<br>
>> >> >>> >> > on<br>
>> >> >>> >> > master-branch now.<br>
>> >> >>> >> ><br>
>> >> >>> >> >><br>
>> >> >>> >> >><br>
>> >> >>> >> >> On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri<br>
>> >> >>> >> >> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban<br>
>> >> >>> >> >> > <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> >> >>> >> >> > wrote:<br>
>> >> >>> >> >> >><br>
>> >> >>> >> >> >> I have two of my storage servers free; I think I can use<br>
>> >> >>> >> >> >> them for testing. Is a two-server test environment ok for you?<br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> > I think it would be better if you have at least 3. You can<br>
>> >> >>> >> >> > test it with a 2+1 ec configuration.<br>
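For reference, a 2+1 dispersed volume across three servers could be created like this. The volume name, hostnames, and brick paths are placeholders for illustration, not values from this thread.

```shell
# Sketch: create a 2+1 disperse volume (disperse 3 = 2 data + 1 redundancy),
# one brick per server so the loss of any single server is tolerated.
gluster volume create testvol disperse 3 redundancy 1 \
    server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1
gluster volume start testvol
```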
>> >> >>> >> >> ><br>
>> >> >>> >> >> >><br>
>> >> >>> >> >> >><br>
>> >> >>> >> >> >> On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri<br>
>> >> >>> >> >> >> <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban<br>
>> >> >>> >> >> >> > <<a href="mailto:cobanserkan@gmail.com">cobanserkan@gmail.com</a>><br>
>> >> >>> >> >> >> > wrote:<br>
>> >> >>> >> >> >> >><br>
>> >> >>> >> >> >> >> Hi,<br>
>> >> >>> >> >> >> >><br>
>> >> >>> >> >> >> >> May I ask if multi-threaded self-heal for distributed<br>
>> >> >>> >> >> >> >> disperse volumes is implemented in this release?<br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> > Serkan,<br>
>> >> >>> >> >> >> > At the moment I am a bit busy with other work. Is it<br>
>> >> >>> >> >> >> > possible for you to help test the feature if I provide a<br>
>> >> >>> >> >> >> > patch? The patch itself should be small; testing is where<br>
>> >> >>> >> >> >> > most of the time will be spent.<br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> >><br>
>> >> >>> >> >> >> >><br>
>> >> >>> >> >> >> >> Thanks,<br>
>> >> >>> >> >> >> >> Serkan<br>
>> >> >>> >> >> >> >><br>
>> >> >>> >> >> >> >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage<br>
>> >> >>> >> >> >> >> <<a href="mailto:dgossage@carouselchecks.com">dgossage@carouselchecks.com</a>> wrote:<br>
>> >> >>> >> >> >> >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson<br>
>> >> >>> >> >> >> >> > <<a href="mailto:lindsay.mathieson@gmail.com">lindsay.mathieson@gmail.com</a>> wrote:<br>
>> >> >>> >> >> >> >> >><br>
>> >> >>> >> >> >> >> >> On 2/08/2016 5:07 PM, Kaushal M wrote:<br>
>> >> >>> >> >> >> >> >>><br>
>> >> >>> >> >> >> >> >>> GlusterFS-3.7.14 has been released. This is a<br>
>> >> >>> >> >> >> >> >>> regular<br>
>> >> >>> >> >> >> >> >>> minor<br>
>> >> >>> >> >> >> >> >>> release.<br>
>> >> >>> >> >> >> >> >>> The release-notes are available at<br>
>> >> >>> >> >> >> >> >>><br>
>> >> >>> >> >> >> >> >>> <a href="https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md" rel="noreferrer" target="_blank">https://github.com/gluster/<wbr>glusterfs/blob/release-3.7/<wbr>doc/release-notes/3.7.14.md</a><br>
>> >> >>> >> >> >> >> >><br>
>> >> >>> >> >> >> >> >><br>
>> >> >>> >> >> >> >> >> Thanks Kaushal, I'll check it out<br>
>> >> >>> >> >> >> >> >><br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> > So far on my test box it's working as expected. At least<br>
>> >> >>> >> >> >> >> > the issues that prevented it from running as before have<br>
>> >> >>> >> >> >> >> > disappeared. Will need to see how my test VM behaves<br>
>> >> >>> >> >> >> >> > after a few days.<br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> >> --<br>
>> >> >>> >> >> >> >> >> Lindsay Mathieson<br>
>> >> >>> >> >> >> >> >><br>
>> >> >>> >> >> >> >> >> ______________________________<wbr>_________________<br>
>> >> >>> >> >> >> >> >> Gluster-users mailing list<br>
>> >> >>> >> >> >> >> >> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> >>> >> >> >> >> >><br>
>> >> >>> >> >> >> >> >> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> ><br>
>> >> >>> >> >> >> >> > ______________________________<wbr>_________________<br>
>> >> >>> >> >> >> >> > Gluster-users mailing list<br>
>> >> >>> >> >> >> >> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> >>> >> >> >> >> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br>
>> >> >>> >> >> >> >> ______________________________<wbr>_________________<br>
>> >> >>> >> >> >> >> Gluster-users mailing list<br>
>> >> >>> >> >> >> >> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> >>> >> >> >> >> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> ><br>
>> >> >>> >> >> >> > --<br>
>> >> >>> >> >> >> > Pranith<br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> ><br>
>> >> >>> >> >> > --<br>
>> >> >>> >> >> > Pranith<br>
>> >> >>> >> ><br>
>> >> >>> >> ><br>
>> >> >>> >> ><br>
>> >> >>> >> ><br>
>> >> >>> >> > --<br>
>> >> >>> >> > Pranith<br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >>> > --<br>
>> >> >>> > Pranith<br>
>> >> >><br>
>> >> >><br>
>> >> >><br>
>> >> >><br>
>> >> >> --<br>
>> >> >> Pranith<br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> > --<br>
>> >> > Pranith<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > --<br>
>> > Pranith<br>
><br>
><br>
><br>
><br>
> --<br>
> Pranith<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>
</div></div>