
Comment by lunar_mycroft

17 hours ago

It "obviously" does based on what, exactly? For most devs (and it appears you, based on your comments) the answer is "their own subjective impressions", but that METR study (https://arxiv.org/pdf/2507.09089) should have completely killed any illusions that that is a reliable metric (note: this argument works regardless of how much LLMs have improved since the study period, because it's about how accurate dev's impressions are, not how good the LLMs actually were).

Yes, self-reported productivity is unreliable, but there have been other, larger, more rigorous empirical studies on real-world tasks which we should be talking about instead. The majority of them consistently show a productivity boost. A thread that mentions and briefly discusses some of those:

https://news.ycombinator.com/item?id=45379452

  • Some (partial) counterpoints:

    - I think that, given publicly available metrics, it's clear this isn't translating into more products/apps getting shipped. That could be because devs are now running into other bottlenecks, but it could also indicate that there's something wrong with these studies.

    - Most devs who say AI speeds them up assert numbers much higher than what those studies have shown. Much of the hype around these tools is built on those higher estimates.

    - I won't claim to have read every study, but of the ones I have checked in the past, the more the methodology impressed me, the less effect it showed.

    - Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

    - Review is imperfect, and LLMs produce worse code on average than human developers. That should result in somewhat lower code quality with LLM usage (although that might be an acceptable trade-off for some). The fact that some of these studies didn't find that is another sign of shortcomings in said studies.

    • > - Most devs who say AI speeds them up assert numbers much higher than what those studies have shown.

      I am not sure how much of it is just programmers saying "10x" because that's the meme, but when at all realistic numbers are mentioned, I see people claiming 20-50%, which lines up with the studies above. E.g. https://news.ycombinator.com/item?id=46197037

      > - Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

      Absolutely, and all the largest studies I've looked at mention this clearly and explain how they try to address it.

      > Review is imperfect, and LLMs produce worse code on average than human developers.

      Wait, I'm not sure that can be asserted at all. Anecdotally it's not my experience, and the largest study in the link above explicitly discusses it and finds that proxies for quality (like approval rates) indicate an improvement rather than a decline. The Stanford video accounts for code churn (possibly due to fixing AI-created mistakes) and still finds a clear productivity boost.

      My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)

      That said, I would be very interested in studies you found interesting. I'm always looking for more empirical evidence!


It's a good study. I also believe this is not an easy skill to learn. I would not say I have 10x output, but easily 20%.

When I was early in my use of it, I would have said I sped up 4x, but now, after using it heavily for a long time, some days it's +20% and other days -20%.

With this technology it's very difficult to know which of the two you're getting on a given day.

The real thing to note is that when you "feel" lazy while using AI, you are almost certainly in the -20% category. I've had days of not thinking where I had to revert all the code from that day because AI jacked it up so much.

To get that speedup you need to be truly 100% focused, or you risk death by a thousand cuts.

Not OP, but I have a hard metric for you.

AI multiplied the amount of code I committed last month by 5x, and it's exactly the code I would have written manually, because I review every line.

Model: Claude Sonnet 3.5/4.5 in VS Code GitHub Copilot. (GPT Codex and Gemini are good too.)
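
For anyone who wants to sanity-check a number like that: here's a minimal sketch of computing monthly lines added from git history. It's a crude proxy (it only counts added lines, and the author email and date range below are placeholders), but at least it's objective.

    # Sketch: count lines added over a month via `git log --numstat`.
    # Run inside the repo; the author and date range are illustrative.
    import subprocess

    out = subprocess.run(
        ["git", "log", "--numstat", "--pretty=format:",
         "--author=me@example.com",
         "--since=2025-11-01", "--until=2025-12-01"],
        capture_output=True, text=True, check=True,
    ).stdout

    added = 0
    for line in out.splitlines():
        # numstat lines look like "added<TAB>deleted<TAB>path";
        # binary files report "-" instead of a number, so skip them.
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():
            added += int(parts[0])

    print(f"Lines added: {added}")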

  • I have no reason to think you're lying about the first part (although I'd point out there are several ways that metric could be misleading, and approximately every piece of evidence available suggests it doesn't generalize), but the second part is very fishy. There's really no way for you to know whether you'd have written the same code, or effectively the same code, after reviewing existing code, especially when that review must be fairly cursory (because to get the speedup you claim, you must be spending much less time reviewing the code than it would have taken to write it). Effectively, what you've done is move the subjectivity from "how much does this speed me up?" to "is the output the same as if I had done it manually?"

    • > There's really no way for you to know whether or not you'd have written the same code or effectively the same code after reviewing existing code.

      There is in my case because it's just CRUD code. The pattern looks exactly like the code I wrote the month prior.

      And this is what LLMs excel at, in my experience: "Given these examples, extrapolate to these other cases."
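
      To make that concrete, here's a hypothetical illustration (not my actual codebase; Flask is just a stand-in) of the repetitive CRUD shape that extrapolates well. Once one resource's handlers exist, the next resource is near-mechanical:

          # Hypothetical CRUD handlers. Given `users` as the in-context
          # example, generating the same four handlers for `orders`,
          # `invoices`, etc. is near-mechanical extrapolation.
          from flask import Flask, jsonify, request

          app = Flask(__name__)
          users: dict[int, dict] = {}  # stand-in for a real datastore

          @app.post("/users")
          def create_user():
              user = request.get_json()
              users[user["id"]] = user
              return jsonify(user), 201

          @app.get("/users/<int:user_id>")
          def read_user(user_id: int):
              user = users.get(user_id)
              if user is None:
                  return jsonify({"error": "not found"}), 404
              return jsonify(user)

          @app.put("/users/<int:user_id>")
          def update_user(user_id: int):
              users[user_id] = request.get_json()
              return jsonify(users[user_id])

          @app.delete("/users/<int:user_id>")
          def delete_user(user_id: int):
              users.pop(user_id, None)
              return "", 204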