Hacker News

I'm a bit skeptical here. We should ask the question: why are we reviewing code in the first place? This sparks heated debates on HN every now and then, because reviews are not just automated checks but part of the engineering culture, which is a defining part of any company or eng department.

PR reviews are a way of learning from each other, keeping up with how the codebase evolves, sharing progress and ideas, giving feedback, and asking questions. For example, at $job we approve ~90% of PRs, with various levels of pleas, suggestions, nitpicks, and questions attached. We approve because of trust (each PR contains a demo video of a working feature or fix) and to avoid blocking each other, but there might be important feedback or suggestions among the comments. A "rubber stamp bot" would be hard to train in such a review system and simply misses the point of what reviews are about.

What happens if there is a mistake (hidden y2k bomb, deployment issue, incident, regression, security bug, bad database migration, wrong config) in a PR that passes a human review? At a toxic company you get finger pointing, but on a healthy team, people can learn a lot when something bad slips through a review. You can't discuss anything with a nondeterministic review bot, though. There's no responsibility there.

Another question is the review culture itself. If this app is trained on some repo's history (whether PRs were approved or not), past reviews reflect the review culture of the company. What happens when a black-box AI takes that over? Is it going to train itself on its own reviews? People and review culture can be changed, but a black-box AI is hard to change in a predictable way.

I'd rather set up code conventions, automated linters (i.e. deterministic checks), etc. than have a review bot allow code into production. Or just let go of PR reviews altogether; there were some articles shared on HN about that recently. :)
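To illustrate the "deterministic checks" point: a minimal sketch of a CI-style lint pass over a diff. The rule names and patterns here are hypothetical, not from any real linter; the point is only that the same diff always produces the same verdict, which is exactly what a trained review bot cannot guarantee.

```python
import re

# Hypothetical rules: flag leftover debug prints and merge-conflict markers
# on added lines of a unified diff. Purely illustrative.
FORBIDDEN = {
    "debug print": re.compile(r"^\+.*\bprint\("),
    "merge marker": re.compile(r"^\+.*(<{7}|>{7})"),
}

def check_diff(diff_text: str) -> list[str]:
    """Return violations found in a diff's added lines. Deterministic:
    the same input always yields the same list."""
    violations = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        for rule, pattern in FORBIDDEN.items():
            if pattern.search(line):
                violations.append(f"line {lineno}: {rule}")
    return violations

print(check_diff("+ print('debugging')\n+ ok = 1\n"))
```

Unlike a model, a check like this can be read, argued about, and changed in a code review of its own, which keeps the feedback loop inside the team.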




I agree with a lot of what you are saying here.

> why are we reviewing code in the first place?

It being part of engineering culture is spot on. I think of it as two things: (1) quality gate and (2) knowledge sharing. Because of (1), by default reviews can feel a bit like submitting homework - not all contributions carry the same risk, but they follow the same process.

The idea behind Codeball is unassuming - identify and approve the bulk of easy contributions so that devs can focus their energy on reviewing the trickier ones. This can be especially nice in a trustful environment, keeping up the momentum for devs to ship small & often.

Another thing is - models can incorporate a surprising number of indicators; for example, not just the outcome of the PR but also what happens to the contribution after merging (was the code retained as-is, or was it hotfixed a day later, etc.).


I def think 90% of the value in code review is knowledge sharing and general discussion of coding practice in good eng cultures. Tests should really catch glaring mistakes, and it's fairly rare that someone will say "this absolutely won't work, for these reasons you didn't catch."

If anything, I think code review as a "nothing bad will happen" check gives a really false sense of security - unless you have a super strict, crazy-smart, bus-factor-of-one, kind-of-asshole engineer on the team, who will probably piss everyone off with strict reviews that are mostly about personal preference but sometimes actually do catch the edge cases.


This could be really useful for large-scale changes across a company's codebase, which are usually reviewed by one high-level engineer who doesn't know much about the code being changed; the change is pushed through anyway because getting approval from all the owners would prevent it from happening at all. In that case, automated code review makes more sense than it does for the more common localized changes.

But those large-scale changes are also usually systematic, so they wouldn't have much to do with coding conventions or styles.




