Comment by keeda

1 day ago

> - Most devs who say AI speeds them up assert numbers much higher than what those studies have shown.

I am not sure how much of this is just programmers saying "10x" because that is the meme, but when realistic numbers are mentioned at all, I see people claiming 20-50%, which lines up with the studies above. E.g. https://news.ycombinator.com/item?id=46197037

> - Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

Absolutely, and all the largest studies I've looked at mention this clearly and explain how they try to address it.

> Review is imperfect, and LLMs produce worse code on average than human developers.

Wait, I'm not sure that can be asserted at all. Anecdotally it's not my experience, and the largest study in the link above explicitly discusses it, finding that proxies for quality (like approval rates) indicate an improvement rather than a decline. The Stanford video accounts for code churn (possibly due to fixing AI-created mistakes) and still finds a clear productivity boost.

My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)

That said, I would be very interested in studies you found interesting. I'm always looking for more empirical evidence!

> I see people claiming 20 - 50%, which lines up with the studies above

Most of those studies either measure productivity using useless metrics like lines of code or number of PRs, or have participants who are working for organizations that are heavily invested in the future success of AI.

One of my older comments addressing a similar list of studies: https://news.ycombinator.com/item?id=45324157

  • As mentioned in the thread I linked, they acknowledge the productivity puzzle and try to control for it in their studies. It's worth reading them in detail; I feel like many of them did a decent job of controlling for confounding factors.

    For instance, when measuring the number of PRs, they ensure that each one goes through the same review process whether AI-assisted or not, so these PRs are held to the same quality standards as human-written ones.

    Furthermore, they did this as a randomized controlled trial comparing engineers without AI to those with AI (in most cases, the same engineers over time!), which does control for a lot of the issues with using PR counts in isolation as a proxy for productivity.

    >... whose participants are working for organizations that are heavily invested in future success of AI.

    That seems pretty ad hominem, unless you want to claim they are faking the data, along with co-authors from premier institutions like NBER, MIT, UPenn, Princeton, etc.

    And here's the kicker: they all converge on a similar range of productivity boost, such as the Stanford study:

    > https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford, not an RCT, but the largest in scale, with actual commits from 100K developers across 600+ companies, and it tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)

    The preponderance of evidence paints a very clear picture. The alternative hypothesis is that ALL these institutes and companies are colluding. Occam's razor and all that.

> if at all realistic numbers are mentioned, I see people claiming 20 - 50%

IME most people claim small integer multiples, 2-5x.

> all the largest studies I've looked at mention this clearly and explain how they try to address it.

Yes, but I think pre-AI virtually everyone reading this would have been very skeptical about their ability to do so.

> My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)

This is pretty obviously incorrect, IMO. To see why, let's pretend it's 2021 and LLMs haven't come out yet. Someone suggests no longer using experienced (and expensive) first-world developers to write code. Instead, they suggest hiring several barely trained boot camp devs (from low-cost-of-living parts of the world, so they're dirt cheap) for every current dev and having the latter just do review. They claim this won't impact quality because of the aforementioned review and their QA process. Do you think that's a realistic assessment? And on the off chance you think it is, why didn't this happen on a larger scale pre-LLM?

The resolution here is that while quality control is clearly important, it's imperfect, ergo the quality of the code going into that process still matters. Pass worse code in, and you'll get worse code out. As such, any team using the method described above might produce more code, but it would be worse code.

> the largest study in the link above explicitly discuss it and find that proxies for quality (like approval rates) indicate more improvement than a decline

Right, but my point is that that's a sanity-check failure. The fact that shoving worse code into your quality control system will lower the quality of the code coming out the other side is IMO very well established, as is the fact that LLM-generated code is still worse than human-generated code (where the human knows how to write the code in question, which they should if they're going to be responsible for it). It follows that more LLM code generation will result in worse code, and if a study finds the opposite, it's very likely that it made some mistake.

As an analogy, when a physics experiment appeared to find that neutrinos travel faster than the speed of light in a vacuum, the correct conclusion was that there had almost certainly been a problem with the experiment, not that neutrinos actually travel faster than light. That was indeed the explanation. (Note that I'm not claiming that "quality control processes cannot completely eliminate the effect of input code quality" and "LLM-generated code is worse than human-generated code" are as well established as relativity.)