Comment by ares623

1 day ago

Can’t help but draw parallels to how working with AI feels. Your coworker opens a giant, impressive-looking PR and marks it ready for review, and it’s up to someone else on the team to do the actual work of checking it. Meanwhile the PR author gets patted on the back by management for being forward-thinking and proactive, while everyone else is “nitpicky” and holding progress back.

I’m dealing with similar issues.

It’s reasonable to come up with team rules like:

- “if the reviewer finds more than 5 issues, the PR shall be rejected immediately and sent back to the submitter for rework”

- “if the reviewer needs more than 8 hours to thoroughly review the PR, it must be rejected and sent back to be split into manageable change sets”

Etc. etc. Let’s not make externalizing work onto others acceptable behavior.
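Rules like these are easy to enforce mechanically before a human ever looks at the diff. A minimal sketch of a CI gate that rejects oversized PRs, assuming `git` is available and a hypothetical team limit of 400 changed lines (the threshold is illustrative, not from the thread):

```python
# Hypothetical CI gate: fail the build when a PR is too large to review
# in one sitting. MAX_CHANGED_LINES is an assumed team policy, not a
# value from this discussion.
import subprocess

MAX_CHANGED_LINES = 400  # assumed team limit


def count_changed(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        added, deleted, _path = line.split("\t", 2)
        if added != "-":       # binary files report "-" for counts
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total


def changed_lines(base: str = "origin/main") -> int:
    """Count lines changed on the current branch relative to base."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_changed(out)


if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_CHANGED_LINES:
        raise SystemExit(
            f"PR touches {n} lines (> {MAX_CHANGED_LINES}); "
            "split it into smaller change sets."
        )
```

The point isn’t the exact number; it’s that the cost of an unreviewable PR lands on the submitter at push time, not on the reviewer afterwards.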

  • Eight hours to review! Girlie, how big are these PRs?

    I can’t imagine saying, “ah, only six hours of heads-down time to review this. That’s reasonable.”

    A combination of peer-reviewed architecture documentation and incremental PRs should prevent any review from taking anywhere near 8 hours.

    • Agreed. If it takes 8 hours to review a PR, the process is broken and you need to start talking before anyone starts writing code. I'd cap a PR at maybe 30 minutes of review; beyond that we're doing something else, not a "last pass before merge into production".

Not to mention that juniors can now put the entire problem statement into an AI chatbot, which spits out _some_ code. Said juniors then don't understand half of it, run it anyway, and raise the PR. They may not get a pat on the back, but this produces countless bugs later on. Worse, they don't develop skills of their own; they blindly copy from the AI.