<div dir="ltr">I have some rebalance perf regression stuff I have been working on, is there an RPM with these patches anywhere so that I can try it on my systems?  If not I&#39;ll just build from:<div><br></div><div>git fetch git://<a href="http://review.gluster.org/glusterfs">review.gluster.org/glusterfs</a> refs/changes/57/9657/8 &amp;&amp; git cherry-pick FETCH_HEAD<br></div><div><br></div><div>I will have _at_least_ 10TB of storage, how many TBs of data should I run with?<div><br></div><div>-b</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 7, 2015 at 9:07 AM, Vijay Bellur <span dir="ltr">&lt;<a href="mailto:vbellur@redhat.com" target="_blank">vbellur@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 04/07/2015 03:08 PM, Susant Palai wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Here is one test performed on a 300GB data set and around 100%(1/2 the time) improvement was seen.<br>
<br>
[root@gprfs031 ~]# gluster v i<br>
<br>
>> Volume Name: rbperf
>> Type: Distribute
>> Volume ID: 35562662-337e-4923-b862-d0bbb0748003
>> Status: Started
>> Number of Bricks: 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: gprfs029-10ge:/bricks/gprfs029/brick1
>> Brick2: gprfs030-10ge:/bricks/gprfs030/brick1
>> Brick3: gprfs031-10ge:/bricks/gprfs031/brick1
>> Brick4: gprfs032-10ge:/bricks/gprfs032/brick1
>>
>> Added server gprfs032 and started a forced rebalance.
>>
>> Rebalance status with the new changes:
>> [root@gprfs031 ~]# gluster v rebalance rbperf status
>> Node            Rebalanced-files     size   scanned   failures   skipped      status   run time in secs
>> -------------   ----------------   ------   -------   --------   -------   ---------   ----------------
>> localhost                  74639   36.1GB    297319          0         0   completed            1743.00
>> 172.17.40.30               67512   33.5GB    269187          0         0   completed            1395.00
>> gprfs029-10ge              79095   38.8GB    284105          0         0   completed            1559.00
>> gprfs032-10ge                  0   0Bytes         0          0         0   completed             402.00
>> volume rebalance: rbperf: success:
>>
>> Rebalance status with the old model:
>> [root@gprfs031 ~]# gluster v rebalance rbperf status
>> Node            Rebalanced-files     size   scanned   failures   skipped      status   run time in secs
>> -------------   ----------------   ------   -------   --------   -------   ---------   ----------------
>> localhost                  86493   42.0GB    634302          0         0   completed            3329.00
>> gprfs029-10ge              94115   46.2GB    687852          0         0   completed            3328.00
>> gprfs030-10ge              74314   35.9GB    651943          0         0   completed            3072.00
>> gprfs032-10ge                  0   0Bytes    594166          0         0   completed            1943.00
>> volume rebalance: rbperf: success:
>
> This is interesting. Thanks for sharing and well done! Maybe we should attempt a much larger data set and see how we fare there :).
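>
> For anyone repeating Susant's test, his steps should map to roughly the following (brick path taken from the volume info above; a sketch rather than the exact commands used):
>
>     # Add the new server's brick to the distribute volume
>     gluster volume add-brick rbperf gprfs032-10ge:/bricks/gprfs032/brick1
>
>     # Start a forced rebalance and monitor its progress
>     gluster volume rebalance rbperf start force
>     gluster volume rebalance rbperf status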
>
> Regards,
> Vijay
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel