<p dir="ltr"><br>
On Jan 12, 2016 3:44 AM, "Michael Adam" <<a href="mailto:obnox@samba.org">obnox@samba.org</a>> wrote:<br>
><br>
> On 2016-01-08 at 12:03 +0530, Raghavendra Talur wrote:<br>
> > Top posting, this is a very old thread.<br>
> ><br>
> > Keeping in view the recent NetBSD problems and the number of bugs creeping<br>
> > in, I suggest we do these things right now:<br>
> ><br>
> > a. Change the gerrit merge type to fast forward only.<br>
> > As explained below in the thread, with our current setup, even if<br>
> > PatchA and PatchB each pass regression separately, a functional bug<br>
> > can creep in when both are merged.<br>
> > Fast-forward-only merging is the only way to prevent that.<br>
> > I will work with Kaushal to get this done.<br>
> ><br>
> > b. In Jenkins, remove gerrit trigger and make it a manual operation<br>
> ><br>
> > Too many developers use the upstream infra as a test cluster and it is<br>
> > *not*.<br>
> > It is a verification mechanism for maintainers to ensure that the patch<br>
> > does not cause regression.<br>
> ><br>
> > It is required that all developers run full regression on their machines<br>
> > before asking for reviews.<br>
><br>
> Hmm, I am not 100% sure I would subscribe to that.<br>
> I am coming from the Samba process, where we have exactly<br>
> that: A developer should have run full selftest before<br>
> submitting the change for review. Then after two samba<br>
> team developers have given their review+ (counting the<br>
> author), it can be pushed to our automatism that keeps<br>
> rebasing on current upstream and running selftest until<br>
> either selftest succeeds and is pushed as a fast forward<br>
> or selftest fails.<br>
><br>
> The reality is that people are lazy and think they know<br>
> when they can skip selftest. But people deceive themselves and<br>
> overlook problems. Hence either reviewers run into failures<br>
> or the automatic pre-push selftest fails. The problem<br>
> I see with this is that it wastes the precious time of<br>
> the reviewers.<br>
><br>
> When I started contributing to Gluster, I found it to<br>
> be a big, big plus to have automatic regression runs<br>
> as a first step after submission, so that a reviewer<br>
> has the option to only start looking at the patch once<br>
> automatic tests have passed.<br>
><br>
> I completely agree that the fast-forward-only and<br>
> post-review-pre-merge-regression-run approach<br>
> is the way to go; only this way can the original problem<br>
> described by Talur be avoided.<br>
><br>
> But would it be possible to keep and even require some<br>
> amount of automatic pre-review test run (build and at<br>
> least some amount of runtime testing)?<br>
> It really avoids wasting reviewers' and maintainers' time.<br>
><br>
> The problem with this is of course that it can increase<br>
> the (real) time needed to complete a review from submission<br>
> until upstream merge.<br>
><br>
> Just a few thoughts...<br>
><br>
> Cheers - Michael<br>
></p>
<p dir="ltr">We had same concern from many other maintainers. I guess it would be better if test runs both before and after review. With these changes we would have removed test runs of work in progress patches. <br></p>