<div dir="ltr">D'oh, my mistake; I thought it was merged. I was just running with the upstream 3.7 daily. Can I use this run as my baseline, and then run on the patch next time to show the % improvement? I'll wipe everything and try on the patch. Any idea when it will be merged?<br><div><br></div><div>-b</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Apr 29, 2015 at 5:34 AM, Susant Palai <span dir="ltr"><<a href="mailto:spalai@redhat.com" target="_blank">spalai@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Ben,<br>
I inspected the glusterfs process by attaching gdb and could not find the newer code. Can you confirm whether you applied the new patch? Patch: <a href="http://review.gluster.org/#/c/9657/" target="_blank">http://review.gluster.org/#/c/9657/</a><br>
<span class="im HOEnZb"><br>
Thanks,<br>
Susant<br>
<br>
<br>
----- Original Message -----<br>
> From: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>><br>
</span><div class="HOEnZb"><div class="h5">> To: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>>, "Nithya Balachandran" <<a href="mailto:nbalacha@redhat.com">nbalacha@redhat.com</a>><br>
> Cc: "Shyamsundar Ranganathan" <<a href="mailto:srangana@redhat.com">srangana@redhat.com</a>><br>
> Sent: Wednesday, April 29, 2015 1:22:02 PM<br>
> Subject: Re: [Gluster-devel] Rebalance improvement design<br>
><br>
> This is how it looks for 2000 files, each 1MB. Rebalance was done on a 2x2 + 2 volume.<br>
><br>
> OLDER:<br>
> [root@gprfs030 ~]# gluster v rebalance test1 status<br>
> Node           Rebalanced-files   size     scanned   failures   skipped   status      run time in secs<br>
> localhost      2000               1.9GB    3325      0          0         completed   63.00<br>
> gprfs032-10ge  0                  0Bytes   2158      0          0         completed   6.00<br>
> volume rebalance: test1: success:<br>
> [root@gprfs030 ~]#<br>
><br>
><br>
> NEW:<br>
> [root@gprfs030 upstream_rebalance]# gluster v rebalance test1 status<br>
> Node           Rebalanced-files   size     scanned   failures   skipped   status      run time in secs<br>
> localhost      2000               1.9GB    2011      0          0         completed   12.00<br>
> gprfs032-10ge  0                  0Bytes   0         0          0         failed      0.00   [Failed because of a crash which I will address in the next patch]<br>
> volume rebalance: test1: success:<br>
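For reference, a quick shell sketch (not part of the original mail) of the speedup implied by the two runs above, using the localhost run times from the status outputs (63s old vs 12s new):

```shell
# Run times (seconds) taken from the localhost rows of the two status outputs
old=63   # older rebalance
new=12   # patched rebalance
# Speedup factor: old run time divided by new run time
awk -v o="$old" -v n="$new" 'BEGIN { printf "speedup: %.2fx\n", o / n }'
```

With these numbers the patched run completes roughly 5x faster, though a single small run like this is noisy.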
><br>
><br>
> Just trying out replica behaviour for rebalance.<br>
><br>
> Here is the volume info.<br>
> [root@gprfs030 ~]# gluster v i<br>
><br>
> Volume Name: test1<br>
> Type: Distributed-Replicate<br>
> Volume ID: e12ef289-86f2-454a-beaa-72ea763dbada<br>
> Status: Started<br>
> Number of Bricks: 3 x 2 = 6<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: gprfs030-10ge:/bricks/gprfs030/brick1<br>
> Brick2: gprfs032-10ge:/bricks/gprfs032/brick1<br>
> Brick3: gprfs030-10ge:/bricks/gprfs030/brick2<br>
> Brick4: gprfs032-10ge:/bricks/gprfs032/brick2<br>
> Brick5: gprfs030-10ge:/bricks/gprfs030/brick3<br>
> Brick6: gprfs032-10ge:/bricks/gprfs032/brick3<br>
><br>
><br>
><br>
> ----- Original Message -----<br>
> > From: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>><br>
> > To: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > Cc: "Gluster Devel" <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > Sent: Wednesday, April 29, 2015 1:13:04 PM<br>
> > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> ><br>
> > Ben, will you be able to share rebalance stats for the same configuration and data set with the older rebalance infra?<br>
> ><br>
> > Thanks,<br>
> > Susant<br>
> ><br>
> > ----- Original Message -----<br>
> > > From: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>><br>
> > > To: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > > Cc: "Gluster Devel" <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > > Sent: Wednesday, April 29, 2015 12:08:38 PM<br>
> > > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> > ><br>
> > > Hi Ben,<br>
> > > Yes, we were using a pure distribute volume. I will check your systems for more info.<br>
> > ><br>
> > > Can you please tell us which patch set you used? In the meantime I will do one set of tests with the same configuration on a small data set.<br>
> > ><br>
> > > Thanks,<br>
> > > Susant<br>
> > ><br>
> > ><br>
> > > ----- Original Message -----<br>
> > > > From: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > > > To: "Nithya Balachandran" <<a href="mailto:nbalacha@redhat.com">nbalacha@redhat.com</a>><br>
> > > > Cc: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>>, "Gluster Devel"<br>
> > > > <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > > > Sent: Wednesday, April 29, 2015 2:13:05 AM<br>
> > > > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> > > ><br>
> > > > I am not seeing the performance you did. I am running on 500GB of data:<br>
> > > ><br>
> > > > [root@gqas001 ~]# gluster v rebalance testvol status<br>
> > > > Node  Rebalanced-files  size  scanned  failures  skipped  status  run time in secs<br>
> > > > localhost 129021 7.9GB 912104 0 0 in progress 10100.00<br>
> > > > <a href="http://gqas012.sbu.lab.eng.bos.redhat.com" target="_blank">gqas012.sbu.lab.eng.bos.redhat.com</a> 0 0Bytes 1930312 0 0 in progress 10100.00<br>
> > > > <a href="http://gqas003.sbu.lab.eng.bos.redhat.com" target="_blank">gqas003.sbu.lab.eng.bos.redhat.com</a> 0 0Bytes 1930312 0 0 in progress 10100.00<br>
> > > > <a href="http://gqas004.sbu.lab.eng.bos.redhat.com" target="_blank">gqas004.sbu.lab.eng.bos.redhat.com</a> 128903 7.9GB 946730 0 0 in progress 10100.00<br>
> > > > <a href="http://gqas013.sbu.lab.eng.bos.redhat.com" target="_blank">gqas013.sbu.lab.eng.bos.redhat.com</a> 0 0Bytes 1930312 0 0 in progress 10100.00<br>
> > > > <a href="http://gqas014.sbu.lab.eng.bos.redhat.com" target="_blank">gqas014.sbu.lab.eng.bos.redhat.com</a> 0 0Bytes 1930312 0 0 in progress 10100.00<br>
> > > ><br>
> > > > Based on what I am seeing I expect this to take 2 days. Was your rebalance run on a pure distribute volume? I am trying on 2x2 + 2 new bricks. Any idea why mine is taking so long?<br>
> > > ><br>
> > > > -b<br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > On Wed, Apr 22, 2015 at 1:10 AM, Nithya Balachandran<br>
> > > > <<a href="mailto:nbalacha@redhat.com">nbalacha@redhat.com</a>><br>
> > > > wrote:<br>
> > > ><br>
> > > > > That sounds great. Thanks.<br>
> > > > ><br>
> > > > > Regards,<br>
> > > > > Nithya<br>
> > > > ><br>
> > > > > ----- Original Message -----<br>
> > > > > From: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > > > > To: "Nithya Balachandran" <<a href="mailto:nbalacha@redhat.com">nbalacha@redhat.com</a>><br>
> > > > > Cc: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>>, "Gluster Devel" <<br>
> > > > > <a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > > > > Sent: Wednesday, 22 April, 2015 12:14:14 AM<br>
> > > > > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> > > > ><br>
> > > > > I am setting up a test env now, I'll have some feedback for you this<br>
> > > > > week.<br>
> > > > ><br>
> > > > > -b<br>
> > > > ><br>
> > > > > On Tue, Apr 21, 2015 at 11:36 AM, Nithya Balachandran<br>
> > > > > <<a href="mailto:nbalacha@redhat.com">nbalacha@redhat.com</a><br>
> > > > > ><br>
> > > > > wrote:<br>
> > > > ><br>
> > > > > > Hi Ben,<br>
> > > > > ><br>
> > > > > > Did you get a chance to try this out?<br>
> > > > > ><br>
> > > > > > Regards,<br>
> > > > > > Nithya<br>
> > > > > ><br>
> > > > > > ----- Original Message -----<br>
> > > > > > From: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>><br>
> > > > > > To: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > > > > > Cc: "Gluster Devel" <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > > > > > Sent: Monday, April 13, 2015 9:55:07 AM<br>
> > > > > > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> > > > > ><br>
> > > > > > Hi Ben,<br>
> > > > > > Uploaded a new patch here: <a href="http://review.gluster.org/#/c/9657/" target="_blank">http://review.gluster.org/#/c/9657/</a>.<br>
> > > > > > We<br>
> > > > > > can<br>
> > > > > > start perf test on it. :)<br>
> > > > > ><br>
> > > > > > Susant<br>
> > > > > ><br>
> > > > > > ----- Original Message -----<br>
> > > > > > From: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>><br>
> > > > > > To: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > > > > > Cc: "Gluster Devel" <<a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > > > > > Sent: Thursday, 9 April, 2015 3:40:09 PM<br>
> > > > > > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> > > > > ><br>
> > > > > > Thanks, Ben. An RPM is not available, and I am planning to refresh the patch in two days with some more regression fixes. I think we can run the tests after that. Any larger data set would be good (say 3 to 5 TB).<br>
> > > > > ><br>
> > > > > > Thanks,<br>
> > > > > > Susant<br>
> > > > > ><br>
> > > > > > ----- Original Message -----<br>
> > > > > > From: "Benjamin Turner" <<a href="mailto:bennyturns@gmail.com">bennyturns@gmail.com</a>><br>
> > > > > > To: "Vijay Bellur" <<a href="mailto:vbellur@redhat.com">vbellur@redhat.com</a>><br>
> > > > > > Cc: "Susant Palai" <<a href="mailto:spalai@redhat.com">spalai@redhat.com</a>>, "Gluster Devel" <<br>
> > > > > > <a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a>><br>
> > > > > > Sent: Thursday, 9 April, 2015 2:10:30 AM<br>
> > > > > > Subject: Re: [Gluster-devel] Rebalance improvement design<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > I have some rebalance perf regression tests I have been working on. Is there an RPM with these patches anywhere so that I can try it on my systems? If not, I'll just build from:<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > git fetch git://<a href="http://review.gluster.org/glusterfs" target="_blank">review.gluster.org/glusterfs</a> refs/changes/57/9657/8 && git cherry-pick FETCH_HEAD<br>
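As a side note, a sketch of how that change ref is constructed, following Gerrit's convention (the change number 9657 and patch set 8 are from the mail; everything else is generic):

```shell
# Gerrit publishes each patch set under refs/changes/<NN>/<change>/<patchset>,
# where NN is the last two digits of the change number.
change=9657
patchset=8
nn=$(printf '%s' "$change" | tail -c 2)
ref="refs/changes/${nn}/${change}/${patchset}"
echo "$ref"    # prints refs/changes/57/9657/8
# In a glusterfs checkout one would then run (network access required):
#   git fetch git://review.gluster.org/glusterfs "$ref" && git cherry-pick FETCH_HEAD
```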
> > > > > ><br>
> > > > > ><br>
> > > > > ><br>
> > > > > > I will have _at_least_ 10TB of storage; how many TB of data should I run with?<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > -b<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > On Tue, Apr 7, 2015 at 9:07 AM, Vijay Bellur < <a href="mailto:vbellur@redhat.com">vbellur@redhat.com</a> ><br>
> > > > > wrote:<br>
> > > > > ><br>
> > > > > ><br>
> > > > > ><br>
> > > > > ><br>
> > > > > > On 04/07/2015 03:08 PM, Susant Palai wrote:<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > Here is one test performed on a 300GB data set; an improvement of around 100% (half the run time) was seen.<br>
> > > > > ><br>
> > > > > > [root@gprfs031 ~]# gluster v i<br>
> > > > > ><br>
> > > > > > Volume Name: rbperf<br>
> > > > > > Type: Distribute<br>
> > > > > > Volume ID: 35562662-337e-4923-b862-d0bbb0748003<br>
> > > > > > Status: Started<br>
> > > > > > Number of Bricks: 4<br>
> > > > > > Transport-type: tcp<br>
> > > > > > Bricks:<br>
> > > > > > Brick1: gprfs029-10ge:/bricks/gprfs029/brick1<br>
> > > > > > Brick2: gprfs030-10ge:/bricks/gprfs030/brick1<br>
> > > > > > Brick3: gprfs031-10ge:/bricks/gprfs031/brick1<br>
> > > > > > Brick4: gprfs032-10ge:/bricks/gprfs032/brick1<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > Added server 32 and started rebalance force.<br>
> > > > > ><br>
> > > > > > Rebalance stat for new changes:<br>
> > > > > > [root@gprfs031 ~]# gluster v rebalance rbperf status<br>
> > > > > > Node Rebalanced-files size scanned failures skipped status run time in secs<br>
> > > > > > localhost 74639 36.1GB 297319 0 0 completed 1743.00<br>
> > > > > > 172.17.40.30 67512 33.5GB 269187 0 0 completed 1395.00<br>
> > > > > > gprfs029-10ge 79095 38.8GB 284105 0 0 completed 1559.00<br>
> > > > > > gprfs032-10ge 0 0Bytes 0 0 0 completed 402.00<br>
> > > > > > volume rebalance: rbperf: success:<br>
> > > > > ><br>
> > > > > > Rebalance stat for old model:<br>
> > > > > > [root@gprfs031 ~]# gluster v rebalance rbperf status<br>
> > > > > > Node Rebalanced-files size scanned failures skipped status run time in secs<br>
> > > > > > localhost 86493 42.0GB 634302 0 0 completed 3329.00<br>
> > > > > > gprfs029-10ge 94115 46.2GB 687852 0 0 completed 3328.00<br>
> > > > > > gprfs030-10ge 74314 35.9GB 651943 0 0 completed 3072.00<br>
> > > > > > gprfs032-10ge 0 0Bytes 594166 0 0 completed 1943.00<br>
> > > > > > volume rebalance: rbperf: success:<br>
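As a rough sanity check (not part of the original mail) of the "around 100%" figure, the longest per-node run times from the two tables above can be compared with a shell one-liner (1743s for the new changes vs 3329s for the old model):

```shell
# Longest per-node run times (seconds) from the two status outputs above
old=3329   # old model (localhost)
new=1743   # new changes (localhost)
# Percentage improvement in run time, relative to the new run
awk -v o="$old" -v n="$new" 'BEGIN { printf "improvement: %.0f%%\n", (o - n) / n * 100 }'
```

That works out to roughly 90%, consistent with the "half the time" observation.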
> > > > > ><br>
> > > > > ><br>
> > > > > > This is interesting. Thanks for sharing & well done! Maybe we<br>
> > > > > > should<br>
> > > > > > attempt a much larger data set and see how we fare there :).<br>
> > > > > ><br>
> > > > > > Regards,<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > Vijay<br>
> > > > > ><br>
> > > > > ><br>
> > > > > > _______________________________________________<br>
> > > > > > Gluster-devel mailing list<br>
> > > > > > <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> > > > > > <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
> > > > > ><br>
> > > > > ><br>
> > > > ><br>
> > > ><br>
> > ><br>
> ><br>
><br>
</div></div></blockquote></div><br></div>