Hacker News

My beef with code reviews is that they often lead to tremendous amounts of wasted time; sometimes many thousands are spent in a single week on simple pull requests.

Having worked for six years, not long, and in fewer places than others, I have formed the opinion that code reviews are like tests: they should be used as a tool when they are necessary, not as the default for every change.

In the best-case scenario, it is the person creating the pull request who requests a review or decides that the code needs tests.

My opinion obviously applies to product software; for libraries, especially publicly exposed ones, you want as much discussion and testing as possible.



The discipline of putting up small, coherent, explained, tested, and reviewable units of change, that you have looked over and feel comfortable showing off to other people as your work product, is 80% of the value for me. Whether anyone else actually thinks about it deeply or has something useful to say about it is secondary.


Indeed. It's kind of like rubberducking.


This is certainly true for blocking code reviews. I'm interested in exploring the alternative, which is review-after-commit. There's an article describing it here: https://copyconstruct.medium.com/post-commit-reviews-b4cc216...

Code still gets reviewed, but you don't end up with PRs languishing for hours, days or even weeks waiting to get a review from someone.


I review a lot of code with a "yes, and…" mindset.

Basically, when I start, the PR is approved in my head until I find something blocking, i.e. a major problem that causes data loss, a big performance issue, or something that breaks other code. Anything else is a minor comment at most. The PR is approved by default. This gives the dev responsibility and ownership, and it increases release cadence.

Doing a follow-up PR to improve or fix something is just as fast as blocking the MR, but blocking the MR can be bad for morale.

This strategy might work better in young startups, where having the feature _exist_ is better than not shipping. In my experience, this builds up responsibility and ownership and removes the whole policing-of-other-people's-work vibe around code review.

Also, decisions and discussions around formatting, when to test, design, features, functionality, and architecture should not happen during code review; they should have happened well before coding, or collaboratively while working. Code review is the worst time for that stuff; imho it should be a sanity check of the implementation.


I used to do a lot of this in a small team where we didn't block on reviews (we did reviews but didn't block). I was a senior developer on the team and I'd take time to read through new sections of code that came in. That worked pretty well.

Interestingly enough, this bit of code, which didn't have super strict code review requirements but had a lot of tests, is the code I worked on that I would consider the most robust and high quality. It was run by more than 10 million users in a fairly important application. It wasn't huge, and it had good tests (and was generally quite testable).

That said, it's really hard to control review-after-commit. Maybe we need better tooling for that. In my case, for the areas of code I was involved in, it was small enough to track in my head.


I'd love to explore that alternative, but I'm not sure how to make sure that errors found or changes suggested after the commit is pushed and deployed actually get addressed.


I really like this idea! I don't need to check how any individual developer approaches their work (although that could make for a useful mentoring session in some cases); what matters is what the code looks like before it goes into production.

The main difficulty I see with the described approach is that different changes will be interleaved in the trunk and it might be hard to extract just one of them to deploy. But that's what feature flags are for!
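As a minimal sketch of that idea (the flag name and the dict-backed store are hypothetical; real systems use a flag service or config file), an unfinished change can live on trunk but stay dark until its flag is flipped for deployment:

```python
# Sketch: gate a freshly merged, not-yet-reviewed code path behind a flag
# so interleaved trunk changes can deploy without exposing it.
FLAGS = {"new_checkout_flow": False}  # hypothetical flag store

def is_enabled(name: str) -> bool:
    """Return whether a feature flag is on; unknown flags default to off."""
    return FLAGS.get(name, False)

def checkout(cart: list) -> str:
    if is_enabled("new_checkout_flow"):
        return f"new flow for {len(cart)} items"  # new path, dark by default
    return f"old flow for {len(cart)} items"      # proven path stays live

print(checkout(["book", "pen"]))   # old path while the flag is off
FLAGS["new_checkout_flow"] = True  # flip the flag when the change is ready
print(checkout(["book", "pen"]))   # new path now serves traffic
```

The point is that deployment and exposure are decoupled: review feedback on the new path can be addressed in follow-up commits while the flag stays off.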


There’s probably more to this story. It’s somewhat of a skill, and maybe an art, to create MRs so that they are easy to review and get approved.

The same is true for feedback on MRs. Providing compressed, succinct feedback makes it faster to address.

It's almost like "if the change is difficult, refactor and make the change easy". There are many ways to do one thing; some are better, some are not.

Companies and teams that have good review culture are successful in using the reviews as a tool.


This is so far from my experience with code reviews that I'd like to ask some questions to follow up on your experience. Do you mind shooting an email to hn@xkqr.org?



