What are these "more direct and optimal measures" that are cheaper than code reviews?

In my experience, code reviews are much cheaper than other means of raising software quality. Unit tests, for example, only start to add real value once you've written enough of them to form a regression safety net.

I haven't seen continuous integration or refactoring being thrown out of the window because of code reviews. Quite the opposite: code reviews usually ask the author to refactor their code further. Likewise, I see no reason why code reviews would stop continuous integration.



Continuous integration means you can work on multiple items at once and break large feature development into smaller chunks that get committed to master. For example, I might commit the data layer for a big new feature well before the UX, etc.

Code reviews mean that every time I want to integrate with master I've got to 'grab a lock' and schedule and wait for a code review.

What typically happens in these code review cultures is you tend to make monolithic commits (to minimize the # of code reviews you have to do) and you also tend to minimize diffs with the code base (to make the CR easier for the reviewer).


What? These statements and conclusions make no sense to me.

> Code reviews mean that every time I want to integrate with master I've got to 'grab a lock' and schedule and wait for a code review.

You're using a terrible code review system. You should be able to have multiple potential commits out for review at any given time. You should also have multiple pending commits from your coworkers in your queue at any given time. The code review process is asynchronous: process them at your leisure (though sooner is better than later, for obvious reasons). Once a review has been approved, it can be rebased onto or merged into master. If the patches you're writing are so far-flung that they don't apply cleanly, break them into smaller pieces.
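As a rough sketch of what that looks like with git (branch names here are made up for illustration):

    git checkout -b data-layer master    # first change, branched off master
    git push -u origin data-layer        # send it out for review
    git checkout -b ui-layer master      # start the next change immediately
    git push -u origin ui-layer          # two reviews now in flight at once

    # once the data-layer review is approved:
    git checkout master
    git merge --ff-only data-layer       # or rebase and push, per your workflow

Nothing blocks on a "lock" here; you keep working while reviews sit in the queue.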

> What typically happens in these code review cultures is you tend to make monolithic commits (to minimize the # of code reviews you have to do) and you also tend to minimize diffs with the code base (to make the CR easier for the reviewer).

These two statements are contradictory. Monolithic commits by definition increase the diff with the codebase, not decrease it. What you tend to see are meaningful commits: commits which do one specific thing, and fully test it, without breaking any other systems.


The problems you describe aren't really problems with code review per se, but with the process that enforces the reviews.

The idea of code reviews is that somebody else reads your code and provides feedback. Whether the reviewer is sent a pull request, a list of commits in master or a printout is a matter of implementation.

I see a cyclic problem here: reviews take a lot of time -> developers want to get more done, not wait for a review -> so they send more changes to review -> reviews take even more time -> ...

I think the solution is to find ways to make reviews smaller instead: break tasks into smaller pieces that take less time to implement and review. (But not by merely reducing the # of lines in the diff; squeezing several changes into one commit is a sure way to make it harder to review.) Smaller reviews will also be more effective: with large reviews there's a tendency for the reviewer to wear out and just skim through the changes.
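One mechanical way to split an oversized change, assuming it's already sitting in a single local commit (a sketch, not the only way; the commit messages are placeholders):

    git reset HEAD~1    # undo the commit but keep the working tree
    git add -p          # interactively stage the first logical change
    git commit -m "extract parsing helper"
    git add -p          # stage the next logical change
    git commit -m "use the helper in the importer"
    # send each commit out as its own small review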

Of course that's easier said than done... it's something I'm trying to do myself, and not fully succeeding.


I do occasionally ask newer employees who are not experienced with code reviews to split a review into multiple smaller ones. Actually, there is a natural equilibrium here: review latency grows roughly exponentially with the complexity and size of a single change, and there's a hard limit on the latency anyone will accept, so most changes settle on a uniform medium size, simply because that's most efficient for reviewers. The sheer number of changes isn't as big a factor, because most reviewers look at reviews in batches anyway.

Separately, minimizing diffs with the codebase is not just easier for the reviewer, it also makes 'blame' and resolving problems with existing code easier. You shouldn't break your back to minimize diffs, but as a form of hygiene it is not without merit.

Your definition of continuous integration is interesting in that I haven't seen it used in that way, or rather with that emphasis. Smaller syncs-to-master than "this is the entire feature" are great, but their effectiveness depends on automated testing, building, and deploying. When code under review is a coherent, high-quality, and tested thing, this can be a boon for CI because you get a small chunk with clear before and after states, which you might not get from just winging it. You can also work on code while other code is under review, especially with a Scrum/Kanban-type task system.
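As one cheap way to keep those small syncs-to-master tested, a pre-push hook works (a sketch; 'make test' stands in for whatever your test runner actually is):

    #!/bin/sh
    # .git/hooks/pre-push -- remember to chmod +x it
    # refuse to push unless the local test suite passes
    make test || { echo "tests failed; push aborted" >&2; exit 1; }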


I'm curious what you mean by "grab a lock" and schedule and wait for a code review. I've used all sorts of different SCMs, code review software, in-person code review processes, etc., but I've never heard of anything like that. Could you describe your process to us in more detail?


I haven't done it myself, but I know of teams that use a lock for integrating changes to the main branch.

Without it, the rate of change in the branch was high enough that your submission would be preempted by someone else's and you'd have to rerun the presubmit tests, etc.
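A minimal sketch of such a lock in shell (the lock path and test script are hypothetical; mkdir is atomic, so it doubles as a mutex):

    #!/bin/sh
    # wait until we hold the integration lock
    until mkdir /tmp/integration.lock 2>/dev/null; do sleep 10; done
    trap 'rmdir /tmp/integration.lock' EXIT   # release on any exit

    git fetch origin
    git rebase origin/master      # re-sync so tests run against the tip
    ./run_presubmit_tests.sh && git push origin HEAD:master

While one developer holds the lock, nobody else can land a change underneath them, so the presubmit run stays valid.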



