Code reviews aren’t just for catching bugs (fullstory.com)
210 points by src on May 12, 2016 | 147 comments


I'd still rather have earlier code reviews... design reviews about 1/3 of the way into writing the code. That's enough time to have discovered the dragons in the whiteboard design, but not enough to have written so much code that it could only undergo minor fixes in a code review.

I prefer the idea of a code review happening at a time when it could still steer the ship... too many code reviews catch bugs but don't correct poor system design. That's also where the greatest benefit/education for the author exists... not minor changes, but "what about approaching it like this".


I work on a very small team (4 engineers), so we have all commits for all branches posted to our engineering chatroom.

Everyone knows what everyone else is working on (and we all work in the same room), so when someone is on a task that we know may have some snags or difficult-to-design solutions, we'll all periodically take a break and look at the decisions other people are making on their particular feature, providing feedback when appropriate.

Of course, since we all work in the same room, we can also use the WTF/minute metric to see (well, hear) when one of us is deep in the weeds and could use a second pair of eyes to pull us out.

[EDIT: and I know this won't scale too far beyond a team of our size, but for us it works quite well right now]


I work on a team of the same size as yours. Although we do not all sit in the same room, we still do code reviews. Since we are small, we are not too critical about style.

I am a big believer in code reviews, especially starting early with a small team. It sets the culture from the beginning, because it is harder to introduce later.


imo, when it comes to style, if your language ecosystem has a standard formatter, you should use it to enforce one - ideally as a git pre-commit hook.
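For instance, a minimal pre-commit hook, sketched here in Python (the Go/gofmt choice is just an assumption for illustration - substitute your ecosystem's formatter):

  #!/usr/bin/env python3
  # .git/hooks/pre-commit (must be executable): abort the commit
  # if any staged .go file isn't gofmt-clean.
  import subprocess, sys

  # list the staged (added/copied/modified) files
  staged = subprocess.run(
      ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
      capture_output=True, text=True, check=True,
  ).stdout.split()
  go_files = [f for f in staged if f.endswith(".go")]
  if go_files:
      # gofmt -l prints the names of files that need reformatting
      bad = subprocess.run(["gofmt", "-l", *go_files],
                           capture_output=True, text=True).stdout.split()
      if bad:
          print("gofmt needed on:", ", ".join(bad))
          sys.exit(1)  # non-zero exit blocks the commit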


One way to enforce this is to use source control for design docs, so that you can use the same pull request workflow as you do with code. We've done this by having user stories written in gherkin, so that new user stories can pass review, and then later can be implemented as tests. This can probably be extended to other types of design documentation.
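As a hedged sketch of what that can look like (tool and names are assumptions - we happen to describe it with behave-style Python steps): the .feature file goes through pull-request review first, and the step definitions arrive later as the test implementation.

  # features/login.feature -- reviewed as a normal pull request:
  #   Feature: Login
  #     Scenario: Valid credentials
  #       Given a registered user "alice"
  #       When "alice" logs in with the correct password
  #       Then the login succeeds

  # features/steps/login_steps.py -- implemented later as a test:
  from behave import given, when, then

  USERS = {}  # stand-in for the real user store

  @given('a registered user "{name}"')
  def add_user(context, name):
      USERS[name] = "s3cret"

  @when('"{name}" logs in with the correct password')
  def log_in(context, name):
      context.ok = USERS.get(name) == "s3cret"

  @then('the login succeeds')
  def check_login(context):
      assert context.ok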


This is the role of design documents which get heavily reviewed by the team before even the first line of code is written.


In my experience, no design document, no matter how carefully drafted, survives contact with the enemy^W^W an IDE. At best you can define interface boundaries between independently developed modules.

Edit: broken leftover line


Things change a lot during the project, I agree. But assuming everything will change and using that as an excuse not to plan at all sounds like a poor choice to me.

"In preparing for battle I have always found that plans are useless, but planning is indispensable." - Dwight D. Eisenhower


I do actually agree that some amount of planning, and especially putting a sketch down on paper, does help, in particular as a way to nail down requirements and figure out obvious roadblocks. But it is important to be under no illusion that the final project will resemble the plan in any way.


> But assuming everything will change and using that as an excuse not to plan at all sounds like a poor choice to me.

I know that's how it sounds. But in my experience it turns out to be a lot more effective than planning.


We write those on my team. They usually end up only faintly resembling what actually gets shipped. No matter how much agreement there is beforehand, as soon as people see working software they change their minds. The best thing is to plan for and expect change, not to try to prevent it from manifesting.


Yep, and design docs are the most important part of that planning for change. If you haven't planned the desired design, you can't see how circumstances are forcing you to change it, and what tradeoffs you'll be making to accommodate those circumstances.


True, if your software is well designed and modular, then it's easy to contain necessary changes to a single subsystem. But a good design document allows you exactly that - designating boundaries and interfaces between subsystems. I personally found that writing an IDL (Protocol Buffers, Thrift or what have you) alongside your design document is very useful, since it nicely bridges abstract concepts with actual code.
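For example, a design doc's "user service" boundary might be pinned down with a few lines of Protocol Buffers (the names here are purely illustrative, not from any real system):

  // user_service.proto -- the design doc's concepts, made concrete
  syntax = "proto3";

  service UserService {
    rpc GetUser (GetUserRequest) returns (User);
  }

  message GetUserRequest { string user_id = 1; }

  message User {
    string user_id = 1;
    string display_name = 2;
  }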


Well, except for the elephant-in-the-room of the dread lord Requirements Traceability, which will frequently eat the entire design cycle.


This is generally why smaller changes are better. They are more easily reviewed, and more easily "steered", as you put it, by a good review.

And of course having a design doc ahead of time that the team can review and comment on for a longer, more complicated change is a great thing to do. It complements the subsequent code review[s].


I think it will always depend on the team. Process X might be a savior to team Y, but completely screw team A even if team A == team Y (javascript truthiness here).

On the flip side, I think you can also say the potential downsides of excessive code review are significantly less than the potential downsides of zero code review.

I dunno.


> the potential downsides of excessive code review

What would those be?


Large commits getting stuck in review forever, or until they have too many conflicts to be merged cleanly; small commits being over-reviewed for trivial issues because the reviewer wants to prove that they have actually looked at the code.


On the other hand, if you skip code review for major or complicated changes, you end up with large chunks of code that only one person understands.

Sometimes that works out fine, but other times you don't find out until too late that there are major flaws.


You can also refuse changes for being too big to review. Creating incentives for small changes seems like a good thing to me, so I don't see what your complaint is.


As I said, review culture can also create incentives against small commits, because they will often be over-analysed. I've seen this sentiment on the internet, so it can't just be my own anecdata.

https://twitter.com/girayozil/status/306836785739210752

Not to mention trivial commits, like clarifying comments, something which doesn't happen anymore in my current project - nobody can be bothered to go through the review process for that.

I'm not against code reviews, but I prefer casual post-commit reviews for uncontroversial changes.


Bikeshedding and nitpicking. People can try to put you down a peg through overly critical review.


I'm not sure what ben_jones had in mind, but we've been doing a lot of pairing and mob programming where I work. I really enjoy it and it feels like the code review is basically just being done at all times (the reviewer is sitting right next to you as you write). But others feel like it eliminates some of the advantages of parallelism (i.e. 2 people working on 2 different things). It also slows down the amount of time to complete stories if one developer is less skilled than the other, but that issue should fade with time.


Pair programming is an extreme version of code reviewing though :)


Imagine your team prototyping game design ideas at an average speed of 1-2 days per concept.


Prototyping ideas is not a core use case for code review though...

ETA: If you end up using a prototype, then you clean up the code at the end of the 1-2 days, and request a review at that point. Code reviews are not meant to get in the way of development. They're a post-development step that happens before QA/testing.


How much code review you need is going to vary based on the kind of change. When my team has a large design decision, we anticipate that we're going to have to create some exploratory code that won't ever reach production. We review "spike" pull requests--maybe several generations, depending on the scope of the change. And if needed we have on- and offline discussions about our alternatives, the tradeoffs, etc.

I think this works a lot better than trying to impose design reviews on every change. Most changes shouldn't require this kind of scrutiny; you don't want to slow down 95% of code reviews for the sake of the difficult 5%.


Y not both?


What often goes unmentioned in praise for code review processes is their insanely exorbitant costs -- measured in engineer hours but perhaps more costly is all of the blocking and impedance [1]. Most of the "problems" that code reviews claim to address can be solved by much more direct and optimal measures. Code reviews are damn expensive.

This post concedes that code reviews are better for the more fluffy ends -- teamwork, openness, social recognition, but given their high costs, I'd rather achieve even these soft goals in other ways than to impede my team's delivery potential.

While mission critical systems deserve the whole kitchen sink thrown at them, expensive verifications, code reviews, etc etc., most business applications would do much better optimizing for better software architectures and domain conceptualization than spend so much time dwelling on the minutiae of lines of code.

[1] Continuous integration and refactoring, pillars of agility, go out the window in typical code review environments where commits are blocked until peer review.


Amen.

Another huge cost of code reviews is distraction. We've all seen the Paul Graham essay on maker's schedules vs. manager's schedules. We've all read the statistics on how much time is lost to interruptions. Code reviews are a massive interruption, done on a manager's schedule. Each code review is a distraction, and can take a significant time commitment, if it is to be a meaningful review.

A few years ago, I worked at a company with a strong emphasis on code reviews, and it turned into a waste of time when people couldn't afford to waste time. You'd have to do the review, but you really needed to get back to your development or bug-fixing because of the impending hard deadline, so there would be non-committal non-review reviews such as "looks good to me", or "I see no problems". While a good code review can be valuable, these perfunctory ones are a huge waste of time.

Finally, code reviews sometimes substitute for design review, which catches the most serious problems much earlier. At this same company, it drove me crazy that we always had time for code reviews, but never for design reviews.


> Another huge cost of code reviews is distraction. We've all seen the Paul Graham essay on maker's schedules vs. manager's schedules. We've all read the statistics on how much time is lost to interruptions. Code reviews are a massive interruption, done on a manager's schedule. Each code review is a distraction, and can take a significant time commitment, if it is to be a meaningful review.

Wait, why are we doing code reviews on the manager's schedule? Why not just wait until you've finished a task to do them?


What? And keep the other developer waiting?


Code reviews should be asynchronous. Everyone on your team should be able to be working on multiple small changes/commits/whatever in parallel. While one is out for review, they're working on the others. Different people keep different schedules: I do all my reviews first thing in the morning, to settle in, and then often do another round after lunch. Other people do them at the end of the day, or don't mind the interruptions (e.g. follow a pomodoro schedule anyway) and do them as they come in.


Context switching also creates some overhead. YMMV, as the time it takes to switch tasks IMHO depends on the particular person, the scale of the context, and the difficulty of the problem.


This has nothing to do with code review. If you're finished with a task, you're going to have to switch contexts; you can't just keep working on something that's done.


It was related to "Everyone on your team should be able to be working on multiple small changes/commits/whatever in parallel."


Waiting to do what? Why can't they just go off and do something else?


> Finally, code reviews sometimes substitute for design review, which catches the most serious problems much earlier. At this same company, it drove me crazy that we always had time for code reviews, but never for design reviews.

You reminded me of one cargo-scrum team working like that.

Important design decisions were made thoughtlessly in "sprint planning" meetings, since they were required for dividing the "story" into "tasks". Then the classic "garbage in, garbage out" law guided the sprint.


> What often goes unmentioned in praise for code review processes is their insanely exorbitant costs -- measured in engineer hours but perhaps more costly is all of the blocking and impedance [1].

I don't get why you think code reviews are so expensive. An engineer should rarely be blocked by a code review. If you're waiting for a review - don't just wait. Pick up another task! A great thing about code reviews is they are done async.

If the cost is time spent doing the review - what's the alternative? No code review at all? That's like not testing - yes, it's faster now, but you'll pay for it dearly later on. (Code review is well studied in academia / industry, and has consistently been shown to be very effective. Having at least one extra pair of eyeballs on any code goes a long way.)


As the comments all over this thread demonstrate, code review is done for many reasons. Where I've worked, the reasons have not been clearly stated, and as a result there is a lot of time wasted. I wouldn't say that there should not be code reviews, but they could be cut way down. A code review should not be a substitute for a design review. It often is, in my experience. A code review should not be done for anything that can be done by static code analysis: code style, finding dead code, etc.

Also, a lot depends on how code reviews are done. I've seen them used as a veto, with lots of back and forth to get the veto removed over subjective and trivial issues. Don't do that.

Finally, let's be honest. Some people's code doesn't need review. Time spent reviewing such code is wasted. But it is often socially difficult to say this, so everyone's code gets reviewed.

Other people can't be trusted to write acceptable code, and reviews are essential. That's a different problem. (Beyond the scope of this note. Left as an exercise to the reader. Other cliches may apply.)


What are these "more direct and optimal measures" that are cheaper than code reviews?

In my experience code reviews are much cheaper than other means of raising software quality. For example unit tests only start to add real value after you have written a good bunch of them, so they form this regression-safety-net.

I haven't seen continuous integration and refactoring being thrown out the window because of code reviews. Quite the opposite: code reviews usually ask the author to refactor his code further. Similarly, I have no idea why one would stop continuous integration because of code reviews.


Continuous integration means you can work on multiple items at once and that you break up large feature dev into smaller chunks that get committed to master. For example, I might commit the data layer for a new big feature well before the UX, etc.

Code reviews mean that every time I want to integrate with master I've got to 'grab a lock' and schedule and wait for a code review.

What typically happens in these code review cultures is you tend to make monolithic commits (to minimize the # of code reviews you have to do) and you also tend to minimize diffs with the code base (to make the CR easier for the reviewer).


What? These statements and conclusions make no sense to me.

> Code reviews mean that every time I want to integrate with master I've got to 'grab a lock' and schedule and wait for a code review.

You're using a terrible code review system. You should be able to have multiple potential commits out for review at any given time. You should also have multiple pending commits from your coworkers in your queue at any given time. The code review process is asynchronous: process them at your leisure (though sooner is better than later, for obvious reasons). Once the review has been approved, it can be rebased onto or merged into master. If the patches you're writing are so far-flung that they don't apply cleanly, break them into smaller pieces.

> What typically happens in these code review cultures is you tend to make monolithic commits (to minimize the # of code reviews you have to do) and you also tend to minimize diffs with the code base (to make the CR easier for the reviewer).

These two statements are contradictory. Monolithic commits by definition increase the diff with the codebase, not decrease it. What you tend to see are meaningful commits: commits which do one specific thing, and fully test it, without breaking any other systems.


The problems you describe aren't really problems with Code Review per se, rather with the process that's enforcing the reviews.

The idea of code reviews is that somebody else reads your code and provides feedback. Whether the reviewer is sent a pull request, a list of commits in master or a printout is a matter of implementation.

I see a cyclic problem here: reviews take a lot of time -> developers want to get more done, not wait for a review -> so they send more changes to review -> reviews take even more time -> ...

I think the solution is to find ways to make reviews smaller instead: breaking tasks into smaller pieces that take less time to implement and review. (But not by reducing the # of lines in the diff - squeezing several changes into one commit is a sure way to make it harder to review.) Smaller reviews will also be more effective - with large reviews there's a tendency for the reviewer to wear out and just skim through the changes.

Of course that's easier to say than do... it's something I'm trying to do by myself and not fully succeeding.


I do occasionally ask newer employees who are not experienced with code reviews to split a review up into multiple smaller reviews. Actually there is a natural equilibrium that happens here: review latency grows exponentially with the complexity and size of a single change, so (assuming some hard limit on acceptable latency) most changes settle into a uniform medium size, just because that's what is most efficient for reviewers. The sheer number of changes isn't as big a factor here, because most reviewers look at reviews in batches anyway.

Separately, minimizing diffs with the codebase is not just easier for the reviewer, it also makes 'blame' and resolving problems with existing code easier. You shouldn't break your back to minimize diffs, but as a form of hygiene it is not without merit.

Your definition of continuous integration is interesting in that I haven't seen it used in that way, or rather with that emphasis. Smaller syncs-to-master than "this is the entire feature" are great, but their effectiveness depends on automated testing, building, and deploying. When code under review is a coherent, high-quality, and tested thing, this can be a boon for CI because you get a small chunk with clear before and after states, which you might not get from just winging it. You can also work on code while other code is under review, especially with a Scrum/Kanban-type task system.


I'm curious what you mean by grab a lock and schedule and wait for a code review. I've used all sorts of different SCMs, code review software, in-person code review processes, etc., but I've never heard of anything like that. Could you describe your process to us in more detail?


I haven't done it myself, but I know of teams that use a lock for integrating changes to the main branch.

Without it the rate of change in the branch was fast enough that submissions would be preempted by someone else's and you'd have to rerun presubmission tests etc.


If anyone is ever blocked, you are doing it wrong. When performed on git branches, you can easily branch off your current feature branch, keep going, and rebase onto master when the original branch has been merged. Continuous integration is with you all the time and helps the reviewer by annotating the diff with code coverage, cyclomatic complexity, and other metrics. A good review culture emphasizes design feedback over low-level issues. In addition, plenty of positive feedback is as important as finding issues.
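Concretely, the branch-and-rebase dance looks something like this (branch names are hypothetical):

  # feature-1 is out for review; stack more work on top of it
  git checkout -b feature-2 feature-1
  # ...keep committing on feature-2...
  # once feature-1 is merged, replay only feature-2's own commits onto master
  git fetch origin
  git rebase --onto origin/master feature-1 feature-2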


> What often goes unmentioned in praise for code review processes is their insanely exorbitant costs -- measured in engineer hours but perhaps more costly is all of the blocking and impedance [1]. Most of the "problems" that code reviews claim to address can be solved by much more direct and optimal measures. Code reviews are damn expensive.

I'm on a team where for about a year, part of our process has been that code doesn't get merged until three people say it's ready. If one person wrote it, it needs two reviews, or if a pair wrote it, it needs one review.

Your mileage may vary, but my experience with this is that the code reviews increase latency, but don't really have much effect on bandwidth. It might mean that there's a few hours between finishing your code and merging it into master, but during those hours you can be doing other things. Most code reviews take <10 minutes, and I usually just check for other people's PRs right after I create a PR of my own, so it doesn't interrupt my flow.

Sometimes PRs do get blocked, but in those cases it's usually for good reasons; the design has some serious flaw or is very overcomplicated, or there's a bug. I see that as similar to a failing build. Sure, a CI server sometimes blocks you from merging, but do you really want something that fails tests or doesn't compile going into your master branch anyway? Code review is just like having a CI server that builds and runs tests, but it checks things that can't be automated.

We do sometimes run into cases where code review takes a long time, but the root cause there is usually that we didn't break the feature down into an adequately small minimum viable product. 1000 lines of changes takes more than 10 times as long to review as 100 lines of changes; the increase isn't linear. If code review is taking a long time, it may be an indication that your continuous integration isn't continuous enough.

That said, "code review is 100% necessary" and "code review is 100% too costly" are just different kinds of Kool-Aid you shouldn't drink. "People and interactions over processes and tools"[1] applies here. What works for my team might not work for yours. I'm not arguing for code review; there's no universal right way to create software. I'm arguing that code review is an effective process step for some teams and an ineffective one for others.

[1] http://www.agilemanifesto.org/


I have a hard time imagining that most code reviews take < 10 minutes.

* Understanding the problem at hand, reading through the story, understanding the context of the code change.

* Pulling down the code, reading through the commit log

* Reading the specs, possibly running coverage tools if not part of the testing suite.

* Understanding the logic of the code, seeing if the tests cover the edge cases

* If dependencies change, they may need to be investigated too

* Thinking about whether there are better ways to make the code a bit more extensible or understandable

* Thinking of risk of the code (security, performance, deployment concerns)

* Reading through associated documentation and ensuring its accuracy

Doing all of this always takes me longer than 10 minutes.


That sounds like a more thorough review than we're talking about here. It's more like following a link to a website, reading the diffs, and adding inline comments. And it's typically done by someone already somewhat familiar with the codebase.

Also, some code reviews are tiny changes, so it brings the average down.


* We mostly understand the problem at hand already because we were in the same planning meeting.

* We do our code reviews on GitHub by looking at the diffs. No need to pull code.

* We review the code of the tests, but we don't run them. Our CI server runs tests and coverage tools.

* The code is usually a short MVP, <100 lines including tests, so understanding logic and edge cases doesn't take long.

* If dependencies change, it was a group discussion and we already investigated.

* Again, small commits make this quick.

* Small commits make this quick.

* Documentation is handled by our product manager.

I guess what I'm learning from reading your comment is that part of why our code reviews are fast compared to yours is that we mean drastically different things when we say "code review".


I feel like this can be argued so many ways. For instance, while code review might block some changes from being introduced into the system, it also has more benefits than the "fluffy" ones you mentioned. Maybe an engineer with more experience in the codebase can point out a way of doing something that is more in line with the team's style (an earlier move toward standardized code vs. letting people do what they want). Maybe you missed a utility function that already achieves what some of your code does. Maybe there's not a bug, but again a more experienced dev sees something that can be optimized and will save time that would otherwise be spent tracing system performance issues.

The idea that we need to move ultra fast as devs... I think that is necessary in some cases, but I'm not convinced that taking some time to review tasks is as expensive as you suggested. I mean, even reviewing 1000 lines of code should take no more than 30 minutes. And if the code looks entirely confusing and hard to follow, then maybe it's something to revisit, because I guarantee you're going to spend longer if you have to go back and refactor/extend messy and unclean code that technically works. I think the key here is to catch things that stand out and not review the minutiae.

This is just my opinion of the matter. On the topic of team building, I don't really care for all that stuff. Unit tests with CI and periodic refactoring are great for revisiting design decisions. Code reviews are great for catching bad design decisions early on, though, so use them as a tool.


So the problem you're talking about is how to properly manage (and coach) less experienced devs or devs who write messy code. While code reviews can tangentially deal with this, I think there are much more direct approaches to these problems than an umbrella code review policy.


What direct approaches do you have in mind?


Yes, our peer review mostly consists of looking through the diff to make sure nothing seems crazy or out of place, stray print statements, etc. Every single pull request can't go through a gauntlet of close examination, we wouldn't get anything done.


> insanely exorbitant costs

wat?

Either the code is easily readable and correct, and takes barely any time at all to review, or it's buggy and detecting this early rather than late saves a yuuuuge amount of time.

Also, the reviewer now knows the code. So in reviewing the code they're already halfway to being able to improve it in the future.

And the reviewer can also pick up "oh, that's a neat pattern". Code reviews are education for both parties. Are you saying companies should not spend time or money on education/courses?

And as an author I really appreciate not only that someone looks for me having made a mistake, but because I know someone will read the code I won't go with the easy way. I can fool myself, but I can't fool someone else. E.g. the right thing to do is to name this constant or make an enum, but I'll just put a literal "4" here because I know it's 4. (which of course I won't remember in a week).


1) Commits don't have to be blocked. The action items from a review can be a separate commit.

2) Reviews can be done privately, with comments entered electronically. There doesn't have to be a four-day meeting.

2a) Organizations that use four-day meetings are likely full of people who need something to charge that time to. It may be worth delegating a representative from the engineering team to attend those.

3) Action items must be triaged. Ideally, each action item would be assigned a number and numbers grouped together as a statement of work for a commit. This is much less painful than it sounds. Test provenance would also be nice per commit.


>This post concedes that code reviews are better for the more fluffy ends -- teamwork, openness, social recognition, but given their high costs, I'd rather achieve even these soft goals in other ways than to impede my team's delivery potential.

What techniques have you found effective for improving the soft goals in the context of software (genuinely curious)? Are we talking more conventional management/business concepts or something a bit more loosely defined?


Not sure how to formalize it, but the teams I've worked on have generally negotiated mutual respect, openness, and teamwork by collaborating on the things of greater import -- the architecture, domain conceptualization, etc. Peer code reviews in some cases tend to work against these goals, because you tend to be down in the weeds of LOC, bike-shedding, arguing over the equivalents of tabs vs. spaces or whether a line could be expressed more functionally, e.g....


Well perhaps another part of Google's code review culture is important, then: proposed changes must be style compliant, must compile, must have tests, and all the tests must pass before anyone will bother looking at them. If I got a code review (at Google) that was incorrectly indented, there would be a little red chip in the review UI indicating the style violation, and I would just reply "Please fix" and not look at it again until the next revision.

In the absence of a strong style guide and testing culture then I agree code reviews can get bogged down in bikeshedding. At Google that doesn't seem to happen because the style guide for each language is quite prescriptive. For Go, no change will be reviewed that hasn't passed through gofmt, and so forth. And on the flip side you can have confidence that if your change has gone through gofmt, there's not going to be any discussion about the formatting.


What UI does Google use for code review?


While I value many of the same things as the author, I've found code reviews to be far inferior in every respect to pairing (especially promiscuous pairing (google it)), and to have negative effects in several important ways:

* They delay integration.

* They tend to encourage focus on abstract code polishing without proportionality or relation to the value of the code.

* They favor superficial improvements, while increasing the costs sunk into paths that may be deeply flawed. (They facilitate late code-structural feedback, but not early directional/problem-analytical feedback.)

* They often discourage more collaborative work and therefore quicker and richer feedback, by their presence as a substitute for pairing.

* They create impediments to work moving quickly to completion.

Of these, the most egregious is the distraction from value. Teams that are spending a lot of time and energy talking about code quality (which is absolutely important, but not primary...the best teams I've seen maintain a high level of quality and talk about tradeoffs involved in delivering value at high quality) are often neglecting communication about where value lies and how to deliver it most efficiently.

[edited for formatting]


I agree that those things could be issues with code reviews, but I don't see how "promiscuous pairing" helps with most of them.

* They delay integration.

Pairing by definition slows down all work on code (you have one person working instead of two). Also, code reviews need not delay integration, even if you're following the article's suggestions. You can still merge all work in an integration branch, and pull it if it eventually doesn't pass code review.

* They tend to encourage focus on abstract code polishing without proportionality or relation to the value of the code.

This is an issue with the priorities of the people doing the code review, and thus applies just as much to pairing.

* They favor superficial improvements, while increasing the costs sunk into paths that may be deeply flawed. (They facilitate late code-structural feedback, but not early directional/problem-analytical feedback.)

I have no idea how pairing is possibly different from code review in this case.

* They often discourage more collaborative work and therefore quicker and richer feedback, by their presence as a substitute for pairing.

This is possibly true, so I'll just take the assertion at face value.

* They create impediments to work moving quickly to completion.

This is pretty much the same as #1.

"Of these, the most egregious is the distraction from value." - so the most egregious problem with code reviews is the people that do them might have wrong priorities. I don't see how pairing would change this.

A benefit of pairing that I can see over code reviews is when you get feedback - instant, real-time in pairing vs. late in code reviews. The useful scenario I envision is when you start implementing a feature and your pair-buddy steers you away from dumping time into a poor solution. That and #4 seem to be significant reasons why you might want to introduce pair programming. However, pair programming is not without drawbacks when compared to code reviews - it takes up more time in most cases (this is where it derives its main benefit), some people work better solo, and a mismatch of skill or expertise means one person has to slow down to work with the other. I wouldn't really say code reviews are "far inferior in every respect" to pairing; I think both have their place.


Thanks for your comments!

> I don't see how "promiscuous pairing" helps with most of them.

Maybe you should try it! The "promiscuous" part transfers context and knowledge around more quickly, and gives more opportunities, earlier in the process, to respond to feedback from a wider variety of perspectives. Two people might come to the same conclusion that another, joining a couple of hours later and without following the same mental path, would find crazy. It's generally pretty easy to change direction after a couple of hours.

> Pairing by definition slows down all work on code(you have 1 person working instead of two).

"Work on code" is not all that's involved in shipping product. In my experience, pairing produces less/simpler/better code that more directly addresses intended business value and cuts off unproductive paths more quickly than other methods. I much much much prefer cranking out less of the right code than producing tons of "perfect" code that solves the wrong problem (slightly hyperbolic, but you get the point, and it totally happens all the time). It's worth mentioning that what I'm talking about is pairing done well, by people who have learned how to do it well. It's absolutely possible to do a shit job pairing, as with anything else.

> * They tend to encourage focus on abstract code polishing without proportionality or relation to the value of the code.

> This is an issue with the priorities of the people doing the code review, and thus applies just as much to pairing.

This is true to a certain degree, but good pairing typically involves thousands of little tradeoff decisions and discussions about how to solve problems. A code review can't reproduce the richness of all of that communication, so it necessarily emphasizes code in the abstract over deep consideration of the tradeoffs involved in the solution to a problem.

> I have no idea how pairing is possibly different from code review in [the case of superficial improvements.]

When I'm sitting down with someone to solve a problem and we explore a path for a few minutes, then I can step back and say, "wait, based on what we've just done, I think we're thinking about this whole thing in the wrong way, what about this other thing," it's easy to turn around and explore a totally different approach. When somebody has sunk an entire day into solving a problem, polishing the code to impress their coworkers, and responding to code review comments, it's extremely difficult to say, "hey, I think we're thinking about this the wrong way, what about this other thing?" I've experienced this problem nearly every time I've participated in code reviews and rarely during pairing. (To that point, pairing is such a better dynamic for this type of exchange. A code review has an oppositional nature, while with pairing, you're literally on the same side of the problem, sitting together to find a solution. The dynamic of questioning the path/approach starts off in a much more natural/friendly/collaborative place.)

> so the most egregious problem with code reviews is the people that do them might have wrong priorities

I think it goes beyond that. As mentioned above, the structure of code reviews (vs. pairing) favors certain types of feedback over others, and makes the feedback that I believe to be most valuable very difficult to give, socially, and too late to be really valuable.

> A benefit of pairing that I can see over code reviews is when you get feedback - instant, real-time in pairing vs late in code reviews.

Yes! Exactly. And from what I've seen, this makes an enormous amount of difference.

I don't agree with you about the time differences, especially when considering the full cycle of delivery. It's a notoriously difficult thing to measure effectively and/or prove, so...it's one of those things you have to experience first hand (in an environment where it's done well...there are plenty where it's not) to really buy into perhaps.

It's also true that some people just want to work solo. I'm motivated by producing the best possible product and generating the best possible outcomes, and pairing is the best way I've encountered to do that, so...in general, I'd prefer not to work with those people when they're steadfastly opposed to trying collaborative work. At the same time, I've found that most people who give it a shot with a good pair who has some patience, empathy, and skill, end up loving it. Even folks who generally work alone.


Interesting, this is the first time I'm hearing about promiscuous pairing. How big is the team at your company, and what percentage of time is spent pair programming?


I do consulting, so I work with a lot of different teams of various sizes and configurations. The best teams tend to pair close to 100% of the time and generally have 4-10 developers. Fewer than that makes it difficult to swap around, mix perspective, and keep things fresh, and more than that starts to introduce more complications in planning and coordinating work streams and sharing context.


Code reviews can easily become a tool for people with huge egos to prove their smartness. I get code review comments on the grammar of my code comments, or on very small code style preferences that Google's anal style guide can't enforce (yet).

I like code reviews, don't get me wrong. But there should be a way to respond with "you just shut up, you're only trying to make yourself look smart".

This is all happening because the higher-ups mostly look at code review conversations. The other day I asked my peer how I should write some code: A or B? He said he didn't care, A seemed fine. Then in code review he commented that it should be done the B way.

It's all politics.


The easiest thing is to just fix it, even though it's annoying.

Slightly clearer comments aren't a bad thing. Why not fix them in the two seconds it takes?

No one's going to be reprimanded or promoted based on code reviews.

I'm a senior dev(ops) and don't mind if I get a code review full of really minor issues from junior devs trying to prove themselves. Every little issue fixed makes it a little bit better.


And those same ones commit stuff like the LoC I'm looking at right now that returns the result of a ternary, and there's a fucking bang symbol in front of every boolean, topped off with a bang preceding the parens around that ternary (IOW, some of those bangs should cancel out, and aren't needed). I ask myself, "how did this escape code review?" and the answer is probably that the senior dev who wrote it thumped the junior dev reviewing it with something along the lines of "that's how real Java devs do it", or something. I dunno, just guessing based on personalities. Same dev that admits he hasn't written unit tests before, yet argues with me (who has written thousands) about how to write unit tests.
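Since the actual line isn't quoted, here's a hypothetical reconstruction of that shape, sketched in Python (with `not` standing in for Java's `!`), followed by what the cancelled-out version would look like:

  # the shape being complained about: a negated ternary full of negations
  def is_valid(a, b):
      return not ((not a) if (not b) else (not a and not b))

  # the bangs cancel out; the whole thing is equivalent to:
  def is_valid_readable(a, b):
      return a or b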

Though I wonder if what you describe isn't just plain ol' bike shedding. I've seen it plenty in code reviews. Super sharp dev who generally writes great code puts a commit up for review, and someone feels like they ought to have some input, and because the code is otherwise solid they pick on grammar.


There's a line to walk for sure. When I code review a line that looks like this:

  $foo=fn1($bar) &&$baz = fn2 ($qux)
I always send it back. Do I look pedantic? Probably, but to me consistency is important. It's about caring about what you're doing. If I see a line like this, I assume the person who wrote it doesn't pay a lot of attention to detail and I pay closer attention to the rest of their commit. I mean, if you can't be troubled to set your IDE preferences to the team/project's rules, what's that say about you?


Isn't that in part a linting issue?

I figure code reviews are for reviewing stuff that can't be corrected by a computer.


I always complain about code comments that aren't complete sentences. Why should comments be misspelled or grammatically incorrect?


I also do the same thing. A misleading comment - which can be caused by misspellings or grammatical errors - could send the reader down the rabbit hole. This is especially important if the team does not share the same native language (i.e. English, German, French, or Mandarin, etc.).


Taking the time and effort to comment on grammar or code style preferences is anything but an attempt to prove smartness; it's stupid work that shouldn't be necessary if the one submitting the code for review had paid more attention to those basics.

If you can't write a coherent sentence or consider a style guide 'anal', you have no place developing for a project that is serious enough to warrant code reviews.

Another thing, if someone submits a PR with basic errors like that, they distract from the things that should be looked for in a code review - bugs, implementation details, etc.


I don't really see what your peer did wrong. Sometimes you need to see the actual implementation to understand how different approaches work out. There are definitely plenty of times I've started implementing A, then realised it doesn't work out that well.


That sounds like a culture problem. You don't really want people who are out to prove how smart they are, versus people that are trying to help the other engineers they work with get better.


Those two things can be presented as the same thing


I would go further and say that you should never trust code reviews to catch bugs. They're great for all the reasons in the article and what "zhemao" said but they're horrible for catching any bugs beyond the most trivial ones. Humans simply aren't good at running code in their head like that. Code review is no substitute for good code coverage and automated testing, preferably pre-commit.


Honest question: what strategy would you recommend for dealing with a (senior) developer who specializes in opening gigantic pull requests with a significant number of bugs? (I invest a lot of time reading through the code and I catch a lot of stuff, but it's draining huge amounts of my energy.) Declining anything with coverage below 100% is not a viable option unfortunately, I think, and preaching about best practices has little effect.


"This commit does too much different things. Please break it up into smaller units that do one thing only."

And then continue to reject it until you get nicely chunked up stuff.


I'd probably make your work visible by counting the hours spent on reviews each day or week. If you can roughly calculate the costs associated with those reviews, you may be able to push your organization toward adopting better coding or testing standards. Alternatively, instead of reviewing the code and pointing out the bugs, suggest or write a few tests which will uncover them. Showing good test coverage in action may be better than just preaching it.
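For example (a made-up illustration), instead of a review comment saying "this crashes on empty input", a small failing test pins the bug down and keeps it from coming back:

  # hypothetical: a bug spotted in review, captured as a test
  def average(xs):
      return sum(xs) / len(xs)   # reviewer noticed: blows up on []

  def test_average_handles_empty_list():
      assert average([]) == 0    # fails until the author handles []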


One of the really neat side effects of unit testing with high coverage is that it means your components must be suitable for more than one client: your actual product and the test cases.

This means it decouples all the components, which leads to check-ins or pull requests that are also self-contained and modular.

If you push for good test coverage (less than 100% but more than 80%, perhaps?) in your unit tests, you'll get fewer bugs, more modular code, and smaller check-ins.
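With pytest-cov, for instance, that kind of threshold can be enforced mechanically (the package name is a placeholder):

  # fail the test run if line coverage drops below 80%
  pytest --cov=mypackage --cov-fail-under=80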


If you see a problem, then describe the problem and reject the request.


There could be a bunch of reasons there are bugs.

I view bugs as a potential issue in an individual's development process. It sounds like you already hit on the biggest reason -- gigantic pull requests. Gigantic requests usually mean PRs that aren't focused on one discrete piece of work, but represent work that kinda meanders to completion. Make sure they're focused, and ask for nonessential pieces to be delegated to another PR (style-only PRs being a good example).

Ultimately the only way to suppress large pull requests (or problematic practices in general) is to evangelize simple development workflows that others can help you with. Keep in mind that change for anyone, no matter how intelligent or stubborn, takes time - culture especially. You need to make it easier for them to adopt your methods instead of theirs when the time comes and they get fed up and are ready to change. Make sure you don't have an antagonistic relationship, either, or they'll do anything but what you want, just to spite you :(

I've broken up some stuff to look for in code reviews, in code bases, and in developers. You probably only want to look at developers.

For the code itself:

- Do we have overdeveloped patterns that get in the way of expressive code?

- Do these bugs fall into a general classification that can pinpoint a source (such as timing issues, authentication, database calls, or info validation)?

- Is your code-base DRY? Repetitious functionality sprinkled around will mean bugs that never get fixed. Eliminate repetition.

- Age of code-base? Newer code-bases should be more accepting of large PRs/more bugs, but should compensate with refactoring-only PRs, test blitzes, and accessible code coverage. Old code may have traditions that have outlived their usefulness, and owners that have grown too accustomed to the state of the code to understand the need for improvement.

- No experts. Is this an inherited code-base with poorly documented patterns whose original authors have moved on?

For code reviews (speeding up review):

- First read-through of the code is only to familiarize myself with the code added.

- Does this PR solve one problem or a bunch of problems? One feature/bug/style-change == one PR (per repository). Ask to break it up if it's big and doing more than one thing (unless those things are highly coupled).

- Is there core functionality? Start with the most reused piece, and critique that first.

- Is this code shared? Shared code should be held to a higher standard, especially if other people depend on it. Get their eyes on it too; it takes some work off you, builds ownership, and if this person is a problem, you'll want allies to back you up.

- Are methods long? Longer methods tend to do more than one thing, and invite glomming onto existing methods instead of creating your own. Higher standards for shared code modification tend to lead to shorter methods, because reusing code needs to be a deliberate decision. Plus long methods can hide code reuse.

- Is manual testing difficult? Long or difficult verification loops tend to allow for more bugs.

For the developer:

- What are their tools? What tools are you using that make you more effective? Learning new tools is hard, but most bugs come from ineffective development practices.

- Are they senior in knowledge or senior in age/time? Not all senior devs deserve their title.

- Do they stay on-task? Do they have to thrash between tasks? Unfocused development is a big source of bugs.

- How familiar are they with the code base? Senior or not, code from new devs should require more scrutiny (but not to the point that it discourages collaboration).


That was going to be my comment on the title. If your code reviews are "...just for catching bugs", you're doing it wrong. I put another comment in this thread describing a crappy line of code. It works (I wrote the test for it), but my...$DEITY could it have been easier to read, and made more maintainable. But it apparently sailed right through code review.

Like you, I'd say that code review isn't for catching bugs at all. Is it readable? Can some Shmoe off the street who is hired to maintain it make sense of it? How's the complexity? Do I need to maintain eight different states in my head to grok it? And so on. Now, a team can help prevent potential bugs (either outright as the code stands, or bugs introduced later when someone tries to maintain it and it's a pasta derivative) with code reviews. I mean, didn't we run the unit tests we wrote before we submitted it for code review? Why are we finding bugs just by looking at the code?


I've personally (YMMV) found that going all the way to full TDD has resulted in the fewest delivery bugs (by a lot). Beats out unit testing for sure, but either of those are miles ahead of NO testing.


I think a benefit not mentioned in the article is that it makes sure that at least one person besides the original author understands what the code does.


It's very important, and it also makes sure everyone is aligned on how to evolve the codebase, that the project stays in a coherent state, and that you can reason about the code as a whole (the same patterns across the whole codebase, etc.).

Also with code reviews you make sure other devs are not (re)introducing bugs that have been fixed before (maybe in other places in the app), and so on. There can be objective technical bugs, but code review will also catch misconceptions about how stuff is supposed to work in the app, which can have profound consequences - no matter how great your code is technically, if you're building a square when a circle was requested, it's not good.

I'm absolutely stunned by the comments in this thread. I've worked for a short while in a team with no code reviews; everyone was pushing whatever the hell they wanted and it was a nightmare (and the git history was total spaghetti). Soon after I joined, the pull request workflow was introduced, but it was too late - the code was so bad it was unmanageable.

Unless you have a really magnificent team, chances are too high that someone will be writing non-maintainable/non-debuggable code, using super confusing variable/method/class names, or reinventing the wheel, or not handling errors properly. "We don't do code reviews" is a no-go for me at this point, second only to "We use ClearCase for version control".


Unfortunately code reviews don't ensure this. Reviewers, when faced with code in a domain they don't understand, don't go out of their way to learn it; instead they just look for surface-level / nitpicky stuff.


That's not a problem with code reviews; it's a problem with your company's implementation. "Please explain this" should be considered a very reasonable response to a code change the reviewer doesn't understand. In my experience, this ends in the code being cleaned up with readability in mind and the reviewer learning something new in more or less equal proportions.


It all depends on the reviewers; some will spend expensive hours finding minor code style issues and never understand what actually happened in the commit.


Absolutely agree -- situational awareness of changes in the codebase is the number one priority. Even if a reviewer does not understand much of a change, when shit happens after the merge, they know why. A new issue might be related "to that thing I read in so-and-so's review the other day".


I'm a dev with one year's experience, and recently moved to a team that reviews every PR. The benefits to me personally are super clear. #1, more experienced engineers are giving me frequent feedback, and that's obviously worth a ton. #2, it's part of my job to read other developers' code. I get exposed to patterns and design choices that I may not have in my repertoire.

Maybe the feedback I give the other direction isn't yet as valuable as what I receive on my PRs, but I trust that eventually it will be.

I totally get the posts pointing out that during crunch time, folks do pretend reviews, and the process becomes busy work. I think that's a symptom of other problems though (staffing model, etc), and not necessarily a shortcoming of peer review in general.


This is a great attitude to have!


As several other commenters have pointed out, code reviews are not a silver bullet. It is widely known that there is no silver bullet.

That said, if your goal is to make a quality product, you shouldn't have to choose between code reviews and continuous integration. You shouldn't have to choose between code reviews and code coverage, or manual QA processes. These are all widely regarded as best practices and if implemented "correctly" and appropriately to the team their combination forms a virtuous cycle for code health and team culture.


Code reviews : good programming habits :: sexual reproduction : good evolutionary traits

They make them spread much much faster than they otherwise would.


Working at a Gov't IT contractor, I really wish we had the organizational skills/incentive to do code reviews. Very rarely does the code I write get glanced at, much less examined.

My instinct, of course, is to solicit code reviews from my peers, but the organizational structure and support tooling are all woefully inadequate. Plenty of projects, even greenfield ones, aren't checked into source control. And of course if I spent time contacting those in my org that have similar skills and could thus review, not only would I have to justify it to my 3-4 managers, but so would the person I solicited for review to their own.

Nobody cares about the code quality, and thus it has been a nightmare for me trying to improve my skills right out of school. Here's to hoping I get out soon.


> Plenty of projects, even greenfield ones, aren't checked into source control.

I really and truly did not know this still happens. Hell, even on throwaway/PoC stuff for which I am the sole developer, and code that stands a good chance of never seeing the light of day, I start with git init. 'cuz the probability that I'm going to wish later that it was in source control outweighs the very minor cost of putting it in there. For an organization that produces software that others will later use, I'm at a loss to explain it other than inertia.


I agree. Source control is an integral part of my workflow and it hurts to know that if someone here were to maintain my code later, they'd just copy and paste it as-is and start from there. The reasons for this are two-fold:

1) Like you said: Inertia. Most projects/developers here have been around for years, many starting before git was a thing. SVN is around and used quite a bit, but like I said, I've talked to developers that use neither. Since the code lives on the server, and the server is backed up, people feel no need for source control. Which is part of...

2) At least in the web development I'm doing, there's little to no "collaboration". I have been the sole person actually writing code on every project so far. There are teams working together, but usually every project has a single developer. This place is so vulnerable to their developers getting hit by a bus. But management doesn't care because 12 months down the line, once the developer is no longer working the project for whatever reason, the project is scrapped and either re-done because no one else was involved in the technical details, or they spend another year in federal procurement hell to get a shitty off-the-shelf product.


Let me start by saying that I don't disagree with you whatsoever. I do, however, take issue with those other devs you've worked with. To wit:

Most projects/developers here have been around for years

I just made a comment this morning to another dev that it occurred to me that the first program I ever wrote was compiled 40 years ago (digression: the reason I brought it up was to ask, "so why the hell do I still make off-by-one errors?"). The period over which I've used source control can be measured in decades. Until recently, the only reason I didn't use source control on a project was because back in the day SCM cost money, money that wasn't always available.

many starting before git was a thing

But, and I'm sure you're well aware, so bear with me: source control was around long before git showed up. The difference now is that git (and CVS and SVN before it) is free - as in beer, speech, whatever. That leaves us with little excuse these days (other than git being a general pain in the ass to use).

Again, I'm not disagreeing with you personally. I guess it really just turned into a rant against lazy devs and/or the glacial organizations for which they work.


Hardly anyone gives a shit about code quality, except the developers who have to maintain it afterwards.

Saying that, poor code quality has been a real incentive for me to improve my own code. When I eventually work out what something is doing and see how much simpler it could be, I try to redo it in a way that will be a lot more understandable to the next person. If I visit my own code a month or two later and don't immediately understand it, it's time to refactor it. At the end of the day I prefer writing code to debugging it, so the less time I have to spend trying to understand code, the less time I spend debugging.


True, code quality isn't as important as some people think. But I think there are different levels of indifference that can cause serious problems.

Code quality that simply reduces bugs and speeds maintenance is a lot less important than the kind of code quality that creates architecturally sound solutions and good products. Where I work, the lack of quality control/assurance affects so many things and visibly detracts from the products.


Maybe you could be your own reviewer. Just pick something you did a month or two ago and check whether it still makes sense.


At a software shop about ten years ago, I loved code reviews. They were a major way of learning and teaching things. I stopped doing some silly (albeit harmless) habits when other people pointed them out.


How do people handle reviews of highly specialized stuff? We have people who do work nobody else on the team understands, or at least it would take them a long time of learning to do a real review. I look at a lot of it and check whether it makes halfway sense. I can look at the coding style, but I can't judge the overall design without spending many hours on it (hours I don't have; nobody else on the team has them either).

I am starting to think that pair programming may use up less engineering time than doing thorough reviews.


Where I work the same situation often comes up -- a developer may spend weeks doing research for a specialized function, building prototypes, etc.

Ensure the specialized developer does due diligence in defining and verifying that the module works as intended, and have others analyze its interactions in the larger system to prevent cascading failures, check that it conforms to application norms, that it __is documented__, etc. Even if most others wouldn't understand the internals during review, you will be taking steps to keep the risk localized. If the bus arrives and you lose the specialized developer, then it's reasonable to assume another developer would need to spend nearly as much time to catch up on the research.

Otherwise you're responsible for making the decision to invest more developer time now in understanding the problem/implementation vs. later - a gamble on how much time you have until that developer leaves.

Taking over specialized/legacy code is part of the job. It can be painful but there are steps you can take to minimize that pain later.


> __is documented__

Documentation is indeed the name of the game. Few things are more frustrating than seeing code do something and having no idea why.

I want to see comments, names of any clever algorithms used, links to applicable datasheets, research papers, etc. A short overview of the whole design is nice, too.
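
To make the bar concrete, here's a hypothetical sketch of my own (the class, numbers, and domain are invented for illustration) of the level of header I'd want to see:

    public final class Filters {
        /**
         * Smooths noisy readings with an exponential moving average
         * (a single-pole IIR low-pass filter; see
         * https://en.wikipedia.org/wiki/Exponential_smoothing).
         *
         * Why: the imaginary ADC in this example jitters by a few LSBs
         * near full scale -- in real code, link the actual datasheet here.
         *
         * @param previous the last smoothed value
         * @param sample   the new raw reading
         * @param alpha    smoothing factor in (0, 1]; higher reacts faster
         */
        public static double smooth(double previous, double sample, double alpha) {
            return previous + alpha * (sample - previous);
        }
    }

One paragraph naming the algorithm and linking the source turns "what is this magic number doing" into a two-minute read.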


The thing is, they can document as much as they want. Often the implementation is faulty and doesn't match what the documentation says. This happens a lot in multithreaded code.
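
A classic shape of that mismatch, as a hypothetical Java sketch of my own (not anyone's real code): the comment promises thread safety while the implementation quietly isn't.

    /** Thread-safe hit counter. (So says the doc; the code disagrees.) */
    final class HitCounter {
        private long hits = 0;

        /** Safe to call from any thread. (It isn't: hits++ is a
            load-add-store sequence with no synchronization, so
            concurrent callers can lose updates.) */
        void record() {
            hits++;
        }

        long total() {
            return hits;
        }
    }

A reviewer who knows the idiom (or reaches for java.util.concurrent.atomic.AtomicLong) catches this; the documentation alone never will.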


That's a management failure. The organization can't depend on a single developer being the only one to understand a module. What happens when that developer leaves, even temporarily? A competent manager will dedicate time to cross training so that at least one other team member understands everything, even if this causes a short-term productivity loss.


That sounds good, but in our case that's really not feasible. If a certain developer leaves, almost anybody on the team can take over his work in a few weeks. But nobody has the time to stay up to date all the time. We would have to staff up quite a bit. That would be nice, but it ain't gonna happen.


This is a topic close to my heart. I’m a software engineer for Canonical (the company behind Ubuntu). Last year I did a study of the total review wait time over a two-month period, which came out to 8,683 hours ( http://lingo.reviews/d3/juju_review_wait_times.html).

I wrote up my thoughts around scalable engineering here: bit.ly/1P0YgNo and released an MVP solution here: www.lingo.reviews

Since then I’ve been refining privately with a handful of engineers from different companies. Two days ago I put in my notice and took on solving the problem of scalable engineering as a full-time mission: http://codelingo.io

I'd love to connect with anyone that is passionate about this problem (it's been a 2 year obsession for me): jesse@codelingo.io


My employer develops safety-critical software with decades of legacy code, long-term support for multiple versions, and big customers. In our attempts to transition (in a mock way, for the time being) to rolling releases, we've "simplified" to a process with 3-5 people reviewing designs, 2-3 developers doing code review/light testing, 2-x (depending on affected modules) quality assurance people doing heavy testing, and occasionally we pair program. Documentation/logistics is almost always more time consuming than design and development. I've had 4-character code changes take weeks to get through the process, as several people have to find the time to look at them even for just an hour. It's cumbersome, but we do catch a lot of bugs, and we're slowly getting better at design.


Code reviews:

* signal distrust by default - the work I do is not to be trusted to be merged in, and by extension I'm not to be trusted

* can create a culture of passive aggressiveness - you do something that I don't like, I'll get back at you when I review your code

* don't guarantee code quality - having a junior review a junior's code will not yield expert-level code

Just because Google does code reviews doesn't mean you should. For Google a bug could cost millions; for your project a bug might cost $10, while the overhead of code reviews might cost more than $100. It's important to run the numbers and think rationally.


(Tedious disclaimer: my opinion only, not speaking for anybody else. I'm an SRE at Google.)

I distrust me by default, and you should too. Humans cannot write correct code, and the way to keep high code quality is to maximise our ability to fix the errors that result from having humans involved. I don't want me submitting any code that hasn't been looked at by another person. While I do (almost every day) manage to write some CLs that get approved without comments, I definitely have many CLs every day where somebody will say "This is confusing" or "There's no test for this part" or "Here's a better idea that I had".

If your team members are engaging in passive-aggressive abuse then you should find new ones, not try to do your job without interacting with them.

A person of the same experience level as me will routinely find things that I missed, just because they didn't spend two hours writing the code and are taking a fresh look at it. The same thing is true of a person more junior than me, if we can make them not be shy and write comments like "I don't understand what this does, therefore it is too confusing". No reviewer guarantees code quality, because nothing guarantees code quality, but my experience is consistently that 1 reviewer is a massive improvement over 0 reviewers, with marginal improvement based on reviewer experience.

The true cost of not doing code review is that your code will be harder to maintain in future, giving continual costs for its entire lifespan. The only code I consider to have a cost/benefit ratio that makes it worth skipping the review phase is code that I don't intend to keep for very long.


> Humans cannot write correct code, and the way to keep high code quality is to maximise our ability to fix the errors that result from having humans involved.

This kind of breeds a dilemma: If humans don't know how to write correct code, why are we trusting them to verify code correctness? :)

No matter how many times I run jslint, I always get the same result, however if I would show the same code to 20 programmers, I'm pretty sure I would get a lot of different results.

> If your team members are engaging in passive-aggressive abuse then you should find new ones, not try to do your job without interacting with them.

If your friend looks at your code, (s)he'll find a lot fewer issues than your rival will. We software engineers are not emotionless, objective beings.

My issue is that I have seen few issues that could have been caught in time by code reviews where the cost of the review was less than the cost of just fixing the problem. Reviewing every change is a continuous effort that might cost more than living with a few quirks and fixing them.

I do understand that some projects require every kind of verification process you can throw at them, like software that controls nuclear power plants; however, not everyone is building that kind of software.


Regardless of what SRE actually means, I guess it must start with 'Senior'. I have yet to hear an official definition of that word, but self-reflection, awareness of one's own limited abilities, and a means-to-an-end mentality are what I personally would consider strong candidates. Awesome mindset, sir!


Site Reliability Engineer?


Assuming that people will make mistakes writing code isn't distrust, it's just common sense.

If someone is a good developer, I trust that their first cut of code will be well thought out. I don't assume that they considered all corner cases, found the best design, or wrote things in a way that makes sense to other people on the first try.

Things go downhill if people start taking code reviews personally, but then I don't think the real problem is code reviews.


> I don't assume that they considered all corner cases, found the best design, or wrote things in a way that makes sense to other people on the first try.

Are you then assuming that the person who will review the code will consider everything?


signal distrust by default - the work I do is not to be trusted to be merged in, and by extension I'm not to be trusted

That doesn't have to be a bad thing. You (and everyone else) can't be trusted to be totally infallible, so if you want to produce good work by relying on many eyes to catch mistakes, suggest improvements, or learn from one another, then you need to look at everyone's code. A code review should be conducted in a safe, blame-free environment where everyone involved wants to make better software. That should be the goal, not finger-pointing or point-scoring.


It's not just programming. Journalists and authors have editors for this exact reason.


My boss performs code reviews before my work is delivered. In my case it is mostly positive:

- he wants to know how stuff is implemented, because he may need to maintain it if I am on leave

- it is a motivation for me to produce good code, because I know it will be read

- he may give good advice (for example, using ArrayList instead of Vector in Java)
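
For anyone who hasn't run into that last piece of advice before, a minimal sketch (my illustration, not the reviewer's actual feedback): Vector synchronizes every method call, which is overhead you rarely need and still isn't enough to make compound operations safe, so ArrayList is the usual default.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class ListAdvice {
        public static void main(String[] args) {
            // Preferred: plain ArrayList instead of the legacy Vector.
            List<String> names = new ArrayList<>();
            names.add("alice");

            // If a synchronized view is genuinely needed, say so explicitly:
            List<String> shared = Collections.synchronizedList(new ArrayList<>());
            shared.add("bob");

            System.out.println(names + " " + shared);
        }
    }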


This is not a code review problem but a huge ego problem. When you are discussing code, you should always use technical arguments, not personal attacks. Only bad programmers value their ego more than good advice from others. You should always use code review as an opportunity to learn/teach new things, not to fight or offend others.


I have yet to work in a team where everyone is 100% on the same page, agrees on absolutely everything, and is totally objective and emotion-free.

The reason I don't trust these methods is that they seem to ignore human nature.


What about the stress of continuous evaluation? How does the company view that? Standards are great to maintain, but people aren't robots. I question the value of the word 'review' in this context and its blocking nature.

Pair programming is a far more effective tool, but it's not always practical. How about using the terminology of 'collaboration' instead of the test-based culture, which I feel turns people into machines? It's just the wrong control structure for people to be happy.


"not JUST for catching bugs"? I always thought catching bugs is the least of their purposes


Maybe I'm an arrogant XP-ist, but to me this sounds like a good step on the way to pair programming.


Depending on the org structure, pair programming can sometimes be a clearly superior version of a code review policy. Particularly when the review latency is high and, as a consequence, developers work on several branches in parallel.


It is, but with some differences: it works better for remote teams (see GitHub), it's async (no need to interrupt someone to get cracking), it's more suitable for introverts (or anyone, really; who can pair program for 8 hours a day or more?), and you get to sit down and review someone's code with a fresh pair of eyes, instead of not catching mistakes because you wrote them or were there while they were being written.


In small/medium teams, especially remote ones, code review helps you keep your understanding of the system - you already know what your code does, and it's a great way to learn the rest of the system.

In some sense, it can be looked at as asynchronous pair programming.


Requiring code reviews before the commit is a bureaucratic waste of time and resources. If they're non-mandatory and happen after the commit, then they can be a good idea.


wow. how so? You write perfect code and there is no need for a second pair of eyes? This type of arrogance is always puzzling to me.


It's not arrogance. You don't have to write perfect code to realize that the insane cost of code review is not worth the problems it is supposed to solve - especially when code reviews are notoriously not very good at excising bugs anyway.

The goal isn't perfect code. It's optimal delivery of business value. Code reviews are expensive, and not very optimal.


Why do you find them expensive?

I've done thousands in my career. They don't take much time compared to actually writing the code, and adding an extra 5% of engineering time pays major dividends later without drastically reducing throughput.


Code review makes you vulnerable to priority inversion. I need this change in now, to meet some deadline, but the reviewer has other priorities. Reviews pile up.

When this happens, a common response is to go work on something else, but this only exacerbates the problem: now you have a bunch of commits awaiting review which are dependent on each other in various ways. If I merge this commit, I have to go and fix up that one, and that other one. My work becomes quadratic, nursing a bunch of commits along while they wait their turn through a tiny review pipeline. 5% can become 100%.


This can happen if code reviews are not done well, I agree.

Things can be just as bad if code is not written well.

Doing either well requires good engineering practices and skill. It is part of the profession that you acquire over time, like anything else that can improve development.

Now, sometimes you are at the mercy of a system not filled with really experienced engineers. In such a system, though, I think skipping code reviews for expediency will probably be even worse in the long run.


I was speaking more about the team organization. Code reviews increase in cost as the team becomes more heterogeneous and more distributed. If your team consists of a kernel engineer in Paris and an Angular developer in San Francisco, code reviews will be high cost and low value no matter how experienced the devs are.

I agree with you, even in the ideal case of a co-located homogeneous team, it's possible to screw up the process.


Completely agree.


For me continuous integration and refactoring have been the most important practices in keeping a code base clean, robust and agile.

Code review cultures tend to encourage the opposite of these agile practices: monolithic commits, infrequent integration, and minimal diffs - in other words, practices that tend to result in lower productivity.


I really like continuous integration/automated tests/code coverage dashboards/automated lint checkers, etc. The more decisions you can cleanly take away, the easier it is to work in a codebase. I agree with you there.

Once you have all that, the purpose of a code review changes a bit, but it is still really useful. Maybe you don't need locking, or maybe you should rewrite the base class, or maybe there is some other part of the codebase that should be folded in, or maybe there's a better interface/algorithm. Stuff like that is hard for a computer to detect for you, at least for the next decade or two :).

The really big thing I push in code reviews is defensive programming. I want to make sure someone in 2 years, who isn't familiar with the code, won't screw up the codebase after the original authors have moved on to other things.
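
To show the flavor of defensiveness I mean, a hypothetical sketch (class and invariants invented for illustration): validate inputs at the boundary and make internal state impossible to corrupt from outside.

    import java.util.List;
    import java.util.Objects;

    public final class Order {
        private final List<String> items;

        public Order(List<String> items) {
            // Fail fast with a clear message instead of a mystery NPE later.
            Objects.requireNonNull(items, "items must not be null");
            if (items.isEmpty()) {
                throw new IllegalArgumentException("an order needs at least one item");
            }
            // Defensive copy: callers can't mutate our state after construction.
            this.items = List.copyOf(items);
        }

        public List<String> items() {
            return items; // List.copyOf produces an immutable list, safe to share
        }
    }

The point isn't this particular class; it's that the unfamiliar developer two years from now gets an exception at the call site instead of silently corrupted state.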


How the hell does code review encourage those behaviours?

The best commits for code review are small ones. Nobody can be bothered with the big ones.


I believe the poster was specifically referring to pre-commit code reviews, not all code reviews in general. I think it's a stretch and not particularly fair to jump on him/her as being arrogant.


After the commit there is little incentive to do the review (the code is already there, QA can test it, the story is done, etc.).


You can often have 100% test coverage plus code reviews for critical changes, for about the same cost as doing code reviews for each commit.


It's hard to decide which changes are the critical ones. Even a single-line change can have major consequences - where do you draw the line...

I guess both 100% code review coverage and 100% test coverage are extremes that one shouldn't worry too much about. But my suspicion is that 100% test coverage can do more harm than reviewing each commit.


Obviously it depends on the project and team size. In a reasonably sized team, a team lead would be the one to draw the line.



