riyanapatel's comments | Hacker News

This post hits on something crucial: the difference between performative code review and substantive collaboration. The "theatre" metaphor is spot-on. The core issue Saša identifies is that "unreviewable" PRs get rubber-stamped with "LGTM"; teams go through the motions without the actual substance.

The storytelling approach through commits is brilliant, but it only works if you solve the human factors too. Even perfectly crafted PRs with great commit narratives get surface-level reviews. Review friction kills engagement.

A few complementary approaches I've seen work: pair reviewing for complex changes that are hard to break down, AI pre-screening for basic issues so humans can focus on architecture/business logic, and synchronous review sessions when async back-and-forth is just burning time.

The key insight: good PR structure needs to be paired with removing tooling friction. When review is painful, people default to "LGTM" regardless of how well the story is told.



I'm assuming you're thinking about whether to allow interviewees to use AI on code-related questions or problems. Honestly, this is a huge debate now, given the shift toward utilizing AI. I believe, however, that interviews should be a direct reflection of what the employee will be doing at the company. If that company uses AI in its workflows, why shouldn't it be allowed in interviews? You need to see how the candidate uses AI to solve problems, and to understand their thought process.


We tried this, and found it made it very hard, within the limited hour we had for a particular coding exercise, to get a signal on what candidates are capable of themselves. A roll of paper towels could produce a functioning UI in React, or some other "exercise-sized" thing, in an hour with Windsurf or Cursor, or even Copilot.

We decided that if you're going to have candidates use AI, the interview prompt would probably need to be crafted with that in mind: both to give them enough to do, and to make it likely they'll need to do some additional manual work to reach a final solution.


Really cool!


Sometimes music is "quieter" than the loud office and chatter around me.


Windsurf, Cursor, and Lovable are some good ones, I've heard.


Do blue-light glasses really work? Or are they just a ploy to make it socially acceptable for people with 20/20 vision to wear glasses?


I like that title more now that I'm thinking about it. The credit will always go to the one doing the work, and agents, programs, and systems still need the human touch.


Good callout. Do you feel like, when something fails, blaming it on AI is a lot easier than admitting to mistakes, even if the issue was inherently the human's fault?


You make a good point: saying you "Googled" something doesn't give Google much credit, since we know the answer came not from Google itself but from all the other websites, publications, and sources that provided the info. You'd probably credit the website itself. I agree with the claim that the credit should essentially go to the one doing the "heavy lifting", i.e., training the LLM to even create the output.

