Comment by kif
20 hours ago
But that's the problem. Something that can be so reliable at times can also fail miserably at others. I've seen this in myself and in my colleagues: LLM use leads to faster burnout and higher cognitive load. You're not just coding anymore; you're deciding what needs to be done, and then reviewing the result as if someone else wrote the code.
LLMs are great for rapid prototyping, boilerplate, that kind of thing. I myself use them daily. But the amount of mistakes Claude makes is not negligible in my experience.
This is a fair observation, and I think it actually reinforces the argument. The burnout you're describing comes from treating AI output as "your code that happens to need review." It's not. It's a hypothesis. Once you reframe it that way, the workflow shifts: you invest more in tests, validation scenarios, acceptance criteria, clear specs. Less time writing code, more time defining what correct looks like. That's not extra work on top of engineering. That is the engineering now. The teams I've seen adapt best are the ones that made this shift explicit: the deliverable isn't the code, it's the proof that the code is right.
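To make the "define what correct looks like" shift concrete, here's a minimal sketch in Python. The `slugify` function and its acceptance checks are hypothetical, invented for illustration: the generated implementation is treated as an unverified hypothesis, and the assertions written around it are the actual deliverable.

```python
import re

def slugify(title: str) -> str:
    """Hypothetical AI-generated implementation: turn a title into a URL slug.
    Treated as a hypothesis until it passes the acceptance checks below."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Acceptance criteria written *before* accepting the code:
# each assertion encodes part of "what correct looks like".
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  everywhere  ") == "spaces-everywhere"
assert slugify("Already-a-slug") == "already-a-slug"
assert slugify("") == ""                  # edge case: empty input
assert not slugify("a b").endswith("-")   # no trailing separators
```

If the generated code fails a check, you regenerate or fix it; the checks stay. That's where the engineering effort lives in this workflow.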
> I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load.
This needs more attention. There's a lot of inhumanity in the modern workplace and modern economy, and that needs to be addressed.
AI is being dumped into the society of 2026, which is about extracting as much wealth as possible for the already-wealthy shareholder class. Any wealth, comfort, or security anyone else gets is basically a glitch that "should" be fixed.
AI is an attempt to fix the glitch of having a well-compensated and comfortable knowledge worker class (which includes software engineers). They'd rather have what few they need running hot and burning out, and a mass of idle people ready to take their place for bottom-dollar.
This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.
Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.
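One way to picture the "tighter feedback loop" idea: every candidate implementation (e.g. successive AI suggestions) is run against the same fixed checks, and only passing candidates are accepted. This is a hypothetical sketch; the function names and test cases are made up for illustration.

```python
def passes_checks(fn) -> bool:
    """Run fixed acceptance checks; any failure or exception rejects the candidate."""
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    try:
        return all(fn(*args) == want for args, want in cases)
    except Exception:
        return False

# Two hypothetical suggestions for an "add" function:
candidates = [
    lambda a, b: a * b,   # buggy suggestion
    lambda a, b: a + b,   # correct suggestion
]

accepted = [fn for fn in candidates if passes_checks(fn)]
print(len(accepted))  # only the correct candidate survives
```

The loop doesn't replace line-by-line review, but it catches regressions mechanically, which is what makes the reviewing load bearable over time.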
You are correct, but this is not a new role. AI effectively makes all of us tech leads.
Prototyping is a perfectly fine use of LLMs - it's easier to evaluate something that's closer to finished than something that is not.
But that won't generate the returns model producers need :) That's the issue, so they will keep pushing nonsense.