Comment by mdavidn
17 hours ago
It sounds like you're the one in denial? AI makes some things faster, like working in a language I don't know very well. It makes other things slower, like working in a language I already know very well. In both cases, writing code is a small percentage of the total development effort.
No I'm not, I'm just sick of these edgy takes claiming AI does not improve productivity when it obviously does.
Even if you limit your AI use to finding information online through deep research, it's such a time saver and productivity booster that it makes a real difference.
The list of things it can do for you is massive, even if you don't have it write a single line of code.
Yet the counterargument is like "bu..but..my colleague is pushing slop and it's not good at writing code for me". Come on, then use it for the things it's good at, not the things you find unsatisfactory.
I am not even a software engineer, but from using the models so much, I think you are confined to a specific niche that happens to be well represented in the training data, so you have a distorted perspective on the general usefulness of language models.
For some things LLMs are like magic. For other things LLMs are maddeningly useless.
The irony to me is that anyone who says something like "you don't know how to use the LLM" actually hasn't explored the models enough to understand their strengths and weaknesses, and how random and arbitrary those strengths and weaknesses are.
Their use cases happen to line up with the strengths of the model, and they think it's something special they are doing themselves when it is not.
It "obviously" does based on what, exactly? For most devs (and it appears you, based on your comments) the answer is "their own subjective impressions", but that METR study (https://arxiv.org/pdf/2507.09089) should have completely killed any illusions that that is a reliable metric (note: this argument works regardless of how much LLMs have improved since the study period, because it's about how accurate dev's impressions are, not how good the LLMs actually were).
Yes, self-reported productivity is unreliable, but there have been other, larger, more rigorous empirical studies on real-world tasks, which we should be talking about instead. The majority of them consistently show a productivity boost. A thread that mentions and briefly discusses some of them:
https://news.ycombinator.com/item?id=45379452
It's a good study. I also believe it is not an easy skill to learn. I would not say I have 10x output, but easily 20%.
When I was early in my use of it I would have said I sped up 4x, but now, after using it heavily for a long time, some days it's +20% and other days -20%.
It's a very difficult technology to know which of the two you're getting on a given day.
The real thing to note is that when you "feel" lazy while using AI, you are almost certainly in the -20% category. I've had days of not thinking where I had to revert all the code from that day because AI jacked it up so much.
To get that speedup you need to be truly 100% focused, or risk death by a thousand cuts.
Not OP, but I have a hard metric for you.
AI multiplied the amount of code I committed last month by 5x, and it's exactly the code I would have written manually, because I review every line.
Model: Claude 3.5 Sonnet / Claude Sonnet 4.5 in VS Code GitHub Copilot. (GPT Codex and Gemini are good too.)
>No I'm not, I'm just sick of these edgy takes where AI does not improve productivity when it obviously does.
Feel free to cite said data you've seen supporting this argument.