Comment by csto12

8 days ago

I have read comments about this on X, here, and other places, yet I have never seen any proof that this is an actual productivity boost.

I use Claude Opus (4.5, 4.6) all the time and catch it making subtle mistakes, all the time.

Are you really being more productive (let’s say 3x), or do you just feel that way because you are constantly prompting Claude?

Maybe I’m wrong, but I don’t buy it.

I agree. Despite a detailed spec, the code reveals bugs and edge cases upon inspection.

I'm talking Claude Opus 4.6 here.

> I use Claude Opus (4.5, 4.6) all the time and catch it making subtle mistakes, all the time.

Didn't we make subtle mistakes without AI?

Why did we spend so much time debugging and doing code reviews?

> Are you really being more productive (let’s say 3x times more)

At least 2x more productive, and that's huge.

  • I think you’ve forgotten the context of OP’s post. He said he uninstalled vscode and uses a dashboard for managing his agents. How are you going to do code review well when you don’t even know what’s going on in your own project? I catch the subtle bugs Claude emits because I’m actively working with it, not letting it do everything.

i really don't understand why people keep thinking this. i'm easily 10x more productive since Claude Code came out. it's insane how much stuff you can build quickly, especially on personal projects.

  • Of course personal projects are much quicker because usually personal projects don't have high code standards... I'm talking about production code.

typical experience when only using one foundation model, TBH. results are much better if you let different models review each other.

the bottleneck now is testing. that isn't going away anytime soon; it'll get much worse for a while as models get good at churning out code that's slightly wrong, or technically correct but solving a different problem than intended. it's going to be a relatively short-lived situation, I'm afraid, until the industry switches to most code being written to serve agents instead of humans.

  • The way LLMs work, different tokens can activate different parts of the network. I generally have 2-3 different agents review it from different perspectives. I give them identities, like Martin Fowler, or Uncle Bob, or whatever I think is relevant.

    • true - but the way LLMs are trained differs: google's RLVR is different from anthropic's, which is different from openai's. you'll get very good results sending the same 'review this change' prompt (literally) to different models.
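    A minimal sketch of the cross-model review idea above: fan the identical "review this change" prompt out to several models and collect each answer. `call_model` and the model names here are hypothetical stand-ins, stubbed so the sketch runs; in practice that function would dispatch to the relevant vendor SDK.

    ```python
    # Hypothetical sketch: same review prompt, multiple models.
    REVIEW_PROMPT = "Review this change for bugs and edge cases:\n{diff}"

    def call_model(model: str, prompt: str) -> str:
        # Stub for illustration; a real version would call the vendor API
        # for `model` (Anthropic, OpenAI, Google, etc.).
        return f"[{model}] review of: {prompt.splitlines()[0]}"

    def cross_model_review(diff: str, models: list[str]) -> dict[str, str]:
        """Send the literally identical prompt to each model."""
        prompt = REVIEW_PROMPT.format(diff=diff)
        return {m: call_model(m, prompt) for m in models}

    reviews = cross_model_review(
        "fix: off-by-one in pagination",
        ["claude-opus", "gpt", "gemini"],  # hypothetical model identifiers
    )
    for model, review in reviews.items():
        print(model, "->", review)
    ```

    The point of keeping the prompt identical is that any divergence between the reviews comes from the models' differing training, not from prompt variation.
    
    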