Comment by simonw

3 days ago

Here's the full paper, which has a lot of details missing from the summary linked above: https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf

My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

This study had 16 participants, with a mix of previous exposure to AI tools - 56% of them had never used Cursor before, and the study was mainly about Cursor.

They then had those 16 participants work on issues (about 15 each), where each issue was randomly assigned a "you can use AI" vs. "you can't use AI" rule.

So each developer worked on a mix of AI-tasks and no-AI-tasks during the study.

A quarter of the participants saw increased performance; three quarters saw reduced performance.

One of the top performers for AI was also someone with the most previous Cursor experience. The paper acknowledges that here:

> However, we see positive speedup for the one developer who has more than 50 hours of Cursor experience, so it's plausible that there is a high skill ceiling for using Cursor, such that developers with significant experience see positive speedup.

My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.

I find the very popular response of "you're just not using it right" to be a big copout for LLMs, especially at the scale we see today. It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user. Typically if a user doesn't find value in the product, we agree that the product is poorly designed/implemented, not that the user is bad. But AI seems somehow exempt from this sentiment.

  • > It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

    It's completely normal in development. How many years of programming experience do you need for almost any language? How many days or weeks do you need to use debuggers effectively? How long does it take from first contact with version control until you get git?

    I think it's the opposite actually - it's common that new classes of tools in tech need experience to use well. Much less if you're moving to something different within the same class.

    • > LLMs, especially at the scale we see today

      The OP qualifies this by noting that the marketing cycle for this product is beyond extreme and in a category of its own.

      Normal people are being told to worry about AI ending the world, or all jobs disappearing.

      Simply saying “the problem is the user”, without acknowledging the degree of hype and expectation setting, is irresponsible.

      11 replies →

    • It is completely typical, but at the same time abnormal to have tools with such poor usability.

      A good debugger is very easy to use. I remember the Visual Studio debugger or the C++ debugger on Windows were a piece of cake 20 years ago, while gdb is still painful today. Java and .NET had excellent integrated debuggers while golang had a crap debugging story for so long that I don’t even use a debugger with it. In fact I almost never use debuggers any more.

      Version control - same story. CVS for all its problems I had learned to use almost immediately and it had a GUI that was straightforward. git I still have to look up commands for in some cases. Literally all the good git UIs cost a non-trivial amount of money.

      Programming languages are notoriously full of unnecessary complexity. Personal pet peeve: Rust lifetime management. If this is what it takes, just use GC (and I am - golang).

      3 replies →

    • Linus did not show up in front of congress talking about how dangerously powerful unregulated version control was to the entirety of human civilization a year before he debuted Git and charged thousands a year to use it.

      4 replies →

    • Hmmm, I don't see it? Are debuggers hard to use? Sometimes. But the debugger is allowing you to do something you couldn't actually do before. i.e. set breakpoints, and step through your code. So, while tricky to use, you are still in a better position than not having it. Just because you can get better at using something doesn't automatically mean that using it as a beginner makes you worse off.

      Same can be said for version control and programming.

      1 reply →

    • > How many days/weeks you need to use debuggers effectively

      I understand your point, but would counter with: gdb isn't marketed as a cuddly tool that can let anyone do anything.

  • >It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

    Is that perhaps because of the nature of the category of 'tech product'? In other domains, this certainly isn't the case. Especially if the goal is to get the best result instead of the optimum output/effort balance.

    Musical instruments are a clear case where the best results are down to the user. Most crafts are similar. There is the proverb "A bad craftsman blames his tools" that highlights that there are entire fields where the skill of the user is considered to be the most important thing.

    When a product is aimed at as many people as the marketers can find, that focus on individual ability is lost and the product targets the lowest common denominator.

    They are easier to use, but less capable at their peak. I think of the state of LLMs as analogous to home computing at a stage of development somewhere around the Altair to TRS-80 level. These are the first ones on the scene; people are exploring what they are good for and how they work, and sometimes putting them to effective use in new and interesting ways. It's not unreasonable to expect a degree of expertise at this stage.

    The LLM equivalent of a Mac will come, plenty of people will attempt to make one before it's ready. There will be a few Apple Newtons along the way that will lead people to say the entire notion was foolhardy. Then someone will make it work. That's when you can expect to use something without expertise. We're not there yet.

  • > It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

    Maybe, but it isn't hard to think of developer tools where this is the case. This is the entire history of editor and IDE wars.

    Imagine running this same study design with vim. How well would you expect the not-previously-experienced developers to perform in such a study?

    • No one is claiming 10x perf gains in vim.

      It’s just a fun geeky thing to use with a lot of zany customizations. And after two hellish years of memory muscling enough keyboard bindings to finally be productive, you earned it! It’s a badge of pride!

      But we all know you’re still fat fingering ggdG on occasion and silently cursing to yourself.

      9 replies →

    • What I like about IDE wars is that it remained a dispute between engineers. Some engineers like fancy pants IDEs and use them, some are good with vim and stick with that. No one ever assumed that Jetbrains autocomplete is going to replace me or that I am outdated for not using it - even if there might be a productivity cost associated with that choice.

      1 reply →

  • New technologies that require new ways of thinking are always this way. "Google-fu" was literally a hirable career skill in 2004 because nobody knew how to search to get optimal outcomes. They've done alright improving things since then - let's see how good Cursor is in 10 years.

  • >It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

    Apple's Response to iPhone 4 Antenna Problem: You're Holding It Wrong https://www.wired.com/2010/06/iphone-4-holding-it-wrong/

  • Stay tuned, a new study is coming with another revelation: you aren't getting faster by using Vim when you are learning it.

    My previous employer didn't even allow me to use Vim until I learned it properly so it wouldn't affect my productivity. Why would using Cursor automatically make you better at something if it's just new to you and you are already an elite programmer according to this study?

    • How did you measure this? Was the conclusion of your studies that typing/editing speed was the real bottleneck for a SWE becoming 10x?

  • I think the reason for that is maybe you’re comparing to traditional products that are deterministic or have specific features that add value?

    If my phone keeps crashing or if the browser is slow or clunky then yes, it’s not on me, it’s the phone, but an LLM is a lot more open ended in what it can do. Unlike the phone example above where I expect it to work from a simple input (turning it on) or action (open browser, punch in a url), what an LLM does is more complex and nuanced.

    Even the same prompt from different users might result in different output - so there is more onus on the user to craft the right input.

    Perhaps that’s why AI is exempt for now.

  • It's a specialist tool. You wouldn't be surprised that it took a while for someone to get the hang of typed programming, parallel programming, Docker, IaC, etc. either.

    We have 2 sibling teams, one the genAI devs and the other the regular GPU product devs. It is entirely unsurprising to me that the genAI developers are successfully using coding agents with long-running plans, while the GPU developers are still more at the level of chat-style back-and-forth.

    At the same time, everyone sees the potential and, just like in other automation movements, is investing in themselves and the code base.

  • On the other hand if you don't use vim, emacs, and other spawns from hell, you get labeled a noob and nothing can ever be said about their terrible UX.

    I think we can be more open minded that an absolutely brand new technology (literally did not exist 3y ago) might require some amount of learning and adjusting, even for people who see themselves as an Einstein if only they wished to apply themselves.

    • > you get labeled a noob

      No one would call one a noob for not using Vim or Emacs. But they might for a different reason.

      If someone blindly rejects even the notion of these tools without attempting to understand the underlying ideas behind them, that certainly suggests the dilettante nature of the person making the argument.

      The idea of vim-motions is a beautiful, elegant, pragmatic model. Thinking that it is somehow outdated is a misapprehension. It is timeless just like musical notation - similarly it provides compositional grammar and universal language, and leads to developing muscle memory; and just like it, it can be intimidating but rewarding.

      Emacs is grounded on another amazing idea - one of the greatest ideas in computer science, the idea of Lisp. And Lisp is just as everlasting, like math notation or molecular formulas — it has rigid structural rules and uniform syntax, there's compositional clarity, meta-reasoning and universal readability.

      These tools remain in use today despite the abundance of "brand new technology" because time and again these concepts have proven to be highly practical. Nothing prevents vim from being integrated into new tools, and the flexibility of Lisp allows for seamless integration of new tools within the old-school engine.

      2 replies →

  • Not every tool can be figured out in a day (or a week or more). That doesn't mean that the tool is useless, or that the user is incapable.

  • I've spent the last 2 months trying to figure out how to utilize AI properly, and only in the last week do I feel that I've hit upon a workflow that's actually a force multiplier (vs divisor).

    • Cool! Congratulations on the anecdotal feelings of productivity. Super important input to the discussion. Now I can at least confidently say that investors will definitely get the hundreds of billions, trillions spent back with a fat ROI and profits on top!

      Thanks again!

  • > It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

    Sorry to be pedantic but this is really common in tech products: vim, emacs, any second-brain app, effectiveness of IDEs depending on learning its features, git, and more.

    • Well, surely vim is easy to use - I started it and haven't stopped using it yet (one day I'll learn how to exit)

  • Just a few examples: Bicycle. Car(driving). Airplane(piloting). Welder. CNC machine. CAD.

    All take quite an effort to master, until then they might slow one down or outright kill.

Hey Simon -- thanks for the detailed read of the paper - I'm a big fan of your OS projects!

Noting a few important points here:

1. Some prior studies that find speedup do so with developers that have similar (or less!) experience with the tools they use. In other words, the "steep learning curve" theory doesn't differentially explain our results vs. other results.

2. Prior to the study, 90+% of developers had reasonable experience prompting LLMs. Before we found slowdown, the only concern most external reviewers had about experience was about prompting -- as prompting was considered the primary skill. In general, the standard wisdom was/is that Cursor is very easy to pick up if you're used to VSCode, which most developers used prior to the study.

3. Imagine all these developers had a TON of AI experience. One thing this might do is make them worse programmers when not using AI (relatable, at least for me), which in turn would raise the speedup we find (not because AI got better, but because the without-AI baseline got worse). In other words, we're sorta in between a rock and a hard place here -- it's just plain hard to figure out what the right baseline should be!

4. We shared information on developer prior experience with expert forecasters. Even with this information, forecasters were still dramatically over-optimistic about speedup.

5. As you say, it's totally possible that there is a long-tail of skills to using these tools -- things you only pick up and realize after hundreds of hours of usage. Our study doesn't really speak to this. I'd be excited for future literature to explore this more.

In general, these results being surprising makes it easy to read the paper, find one factor that resonates, and conclude "ah, this one factor probably just explains slowdown." My guess: there is no one factor -- there's a bunch of factors that contribute to this result -- at least 5 seem likely, and at least 9 we can't rule out (see the factors table on page 11).

I'll also note that one really important takeaway -- that developer self-reports after using AI are overoptimistic to the point of being on the wrong side of speedup/slowdown -- isn't a function of which tool they use. The need for robust, on-the-ground measurements to accurately judge productivity gains is a key takeaway here for me!

(You can see a lot more detail in section C.2.7 of the paper ("Below-average use of AI tools") -- where we explore the points here in more detail.)

  • Figure 6, which breaks down the time spent on different tasks, is very informative -- it suggests: 15% less active coding, 5% less testing, 8% less research and reading, 4% more idle time, and 20% more AI interaction time.

    The 28% less coding/testing/research is why developers reported 20% less work. You might be spending 20% more time overall "working" while you are really idle 5% more time and feel like you've worked less because you were drinking coffee and eating a sandwich between waiting for the AI and reading AI output.

    I think the AI skill boost comes from having workflows that let you shave half that git-ops time and cut an extra 5% off coding; if you also cut the idle/waiting and do more prompting of parallel agents and a bit more testing, then you really are a 2x dev.
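
    A rough back-of-the-envelope sketch of how percentage shares can hide longer absolute times (hypothetical numbers, assuming roughly the 19% slowdown the study reports, not the actual Figure 6 values):

      # Hypothetical illustration: a smaller *share* of a longer total
      # falls less in absolute hours than the drop in share suggests.
      baseline_hours = 2.0                          # assumed time for a task without AI
      ai_hours = baseline_hours * 1.19              # study reports ~19% longer with AI

      coding_share_no_ai = 0.45                     # assumed share of time spent actively coding
      coding_share_ai = coding_share_no_ai - 0.15   # ~15 points less, per the figure

      print(f"coding without AI: {baseline_hours * coding_share_no_ai:.2f} h")
      print(f"coding with AI:    {ai_hours * coding_share_ai:.2f} h")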

    • > You might be spending 20% more time overall "working" while you are really idle 5% more time and feel like you've worked less because you were drinking coffee and eating a sandwich between waiting for the AI and reading AI output.

      This is going to be interesting long-term. Realistically people don't spend anywhere close to 100% of time working and they take breaks after intense periods of work. So the real benefit calculation needs to include: outcome itself, time spent interacting with the app, overlap of tasks while agents are running, time spent doing work over a long period of time, any skill degradation, LLM skills, etc. It's going to take a long time before we have real answers to most of those, much less their interactions.

    • I just realized the figure shows the time breakdown as a percentage of total time; it would be more useful to show absolute time (hours) for those side-by-side comparisons, since the implied hours would boost the height of the AI bars by 18%.

      1 reply →

  • Thanks for the detailed reply! I need to spend a bunch more time with this I think - above was initial hunches from skimming the paper.

    • Sounds great. Looking forward to hearing more detailed thoughts -- my email's in the paper :)

  • Really interesting paper, and thanks for the follow-on points.

    The over-optimism is indeed a really important takeaway, and agreed that it's not tool-dependent.

  • Were participants given time to customize their Cursor settings? In my experience tool/convention mismatch kills Cursor's productivity - once it gets going with a wrong library or doesn't use project's functions I will almost always reject code and re-prompt. But, especially for large projects, having a well-crafted repo prompt mitigates most of these issues.

  • With today's state of LLMs and agents, they're still not good for all tasks. It took me a couple of weeks before being able to correctly adjust what I can ask and what I can expect. As a result, I don't use Claude Code for everything, and I think I'm able to better pick the right task and the right size of task to give it. These adjustments depend on what you are doing, and on the complexity and maturity of the project at play.

    Very often, I have entire tasks that I can't offload to the Agent. I won't say I'm 20x more productive, it's probably more in the range of 15% to 20% (but I can't measure that obviously).

  • Using devs working in their own repository is certainly understandable, but it might also explain in part the results. Personally I barely use AI for my own code, while on the other hand when working on some one off script or unfamiliar code base, I get a lot more value from it.

  • Your next study should be very experienced devs working in new or early life repos where AI shines for refactoring and structured code suggestion, not to mention documentation and tests.

    It’s much more useful getting something off the ground than maintaining a huge codebase.

  • Did each developer do a large enough mix of AI/non-AI tasks, in varying orders, that you have any hints in your data whether the "AI penalty" grew or shrunk over time?

Well, there are two possible interpretations here of 75% of participants (all of whom had some experience using LLMs) being slower using generative AI:

1. LLMs have a very steep and long learning curve as you posit (though note the points from the paper authors in the other reply).

2. Current LLMs just are not as good as they are sold to be as a programming assistant and people consistently predict and self-report in the wrong direction on how useful they are.

  • Let me bring you a third (not necessarily true) interpretation:

    The developer who has experience using cursor saw a productivity increase not because he became better at using cursor, but because he became worse at not using it.

  • > Current LLMs just are not as good as they are sold to be as a programming assistant and people consistently predict and self-report in the wrong direction on how useful they are.

    I would argue you don't need the "as a programming assistant" phrase as right now from my experience over the past 2 years, literally every single AI tool is massively oversold as to its utility. I've literally not seen a single one that delivers on what it's billed as capable of.

    They're useful, but right now they need a lot of handholding and I don't have time for that. Too much fact checking. If I want a tool I always have to double check, I was born with a memory so I'm already good there. I don't want to have to fact check my fact checker.

    LLMs are great at small tasks. The larger the single task is, or the more tasks you try to cram into one session, the worse they fall apart.

  • > Current LLMs

    One thing that happened here is that they aren't using current LLMs:

    > Most issues were completed in February and March 2025, before models like Claude 4 Opus or Gemini 2.5 Pro were released.

    That doesn't mean this study is bad! In fact, I'd be very curious to see it done again, but with newer models, to see if that has an impact.

    • > One thing that happened here is that they aren't using current LLMs

      I've been hearing this for 2 years now

      the previous model retroactively becomes total dogshit the moment a new one is released

      convenient, isn't it?

      61 replies →

  • The third option is that the person who used Cursor before had some sort of skill atrophy that led to lower unassisted speed.

    I think an easy measure to help identify why a slow down is happening would be to measure how much refactoring happened on the AI generated code. Often times it seems to be missing stuff like error handling, or adds in unnecessary stuff. Of course this assumes it even had a working solution in the first place.

  • > people consistently predict and self-report in the wrong direction

    I recall an adage about work-estimation: As chunks get too big, people unconsciously substitute "how possible does the final outcome feel" with "how long will the work take to do."

    People asked "how long did it take" could be substituting something else, such as "how alone did I feel while working on it."

  • Or a sampling artifact. 4 vs 12 does seem significant within a study, but consider a set of N such studies.

    I assume that many large companies have tested efficiency gains and losses of their programmers much more extensively than the authors of this tiny study.

    A survey of companies and their evaluations and conclusions would carry more weight, excluding companies selling AI products, of course.

    • If you use a binomial test, P(X<=4) is about 0.105, which means p = 0.21.
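
      A minimal sketch of how such a binomial test can be run (the counts here are an assumption -- 4 of 16 developers speeding up under a 50/50 null -- so the output will depend on which counts you plug in, and may differ from the numbers above):

        # Hypothetical sketch: two-sided binomial test for "4 of 16 developers sped up"
        # under a null hypothesis that speedup vs. slowdown is a coin flip per developer.
        from scipy.stats import binomtest

        result = binomtest(k=4, n=16, p=0.5, alternative="two-sided")
        print(result.pvalue)  # swap in the real counts from the study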

> My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

I totally agree with this. Although also, you can end up in a bad spot even after you've gotten pretty good at getting the AI tools to give you good output, because you fail to learn the code you're producing well.

A developer gets better at the code they're working on over time. An LLM gets worse.

You can use an LLM to write a lot of code fast, but if you don't pay enough attention, you aren't getting any better at the code while the LLM is getting worse. This is why you can get like two months of greenfield work done in a weekend but then hit a brick wall - you didn't learn anything about the code that was written, and while the LLM started out producing reasonable code, it got worse until you have a ball of mud that neither the LLM nor you can effectively work on.

So a really difficult skill in my mind is continually avoiding temptation to vibe. Take a whole week to do a month's worth of features, not a weekend to do two month's worth, and put in the effort to guide the LLM to keep producing clean code, and to be sure you know the code. You do want to know the code and you can't do that without putting in work yourself.

  • > Take a whole week to do a month's worth of features

    Everything else in your post is so reasonable and then you still somehow ended up suggesting that LLMs should be quadrupling our output

  • > So a really difficult skill in my mind is continually avoiding temptation to vibe.

    I agree. I have found that I can use agents most effectively by letting them write code in small steps. After each step I review the changes and polish them up (either by doing the fixups myself or by prompting). I have found that this helps me understand the code, but also keeps the model from getting into a bad solution space or producing unmaintainable code.

    I also think this kind of close-loop is necessary. Like yesterday I let an LLM write a relatively complex data structure. It got the implementation nearly correct, but was stuck, unable to find an off-by-one comparison. In this case it was easy to catch because I let it write property-based tests (which I had to fix up to work properly), but it's easy for things to slip through the cracks if you don't review carefully.

    (This is all using Cursor + Claude 4.)
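
    A minimal sketch of what such a property-based test can look like (hypothetical sorted_insert helper and the hypothesis library; not the actual data structure from that session):

      # Property: for any list and value, inserting into the sorted list
      # must give the same result as sorting the list with the value appended.
      from hypothesis import given, strategies as st

      def sorted_insert(xs, x):
          """Insert x into an already-sorted list xs, keeping it sorted (binary search)."""
          lo, hi = 0, len(xs)
          while lo < hi:
              mid = (lo + hi) // 2
              if xs[mid] < x:
                  lo = mid + 1
              else:
                  hi = mid
          return xs[:lo] + [x] + xs[lo:]

      @given(st.lists(st.integers()), st.integers())
      def test_insert_keeps_order(xs, x):
          assert sorted_insert(sorted(xs), x) == sorted(xs + [x])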

  • I feel the same way. I use it for super small chunks, still understand everything it outputs, and often manually copy/paste or straight up write it myself. I don't know if I'm actually faster than before, but it feels more comfy than alt-tabbing to Stack Overflow, which is what I feel like it's mostly replaced.

    Poor stack overflow, it looks like they are the ones really hurting from all this.

  • > but then hit a brick wall

    This is my intuition as well. I had a teammate use a pretty good analogy today. He likened vibe coding to vacuuming up a string in four tries when it only takes one try to reach down and pick it up. I thought that aligned well with my experience with LLM assisted coding. We have to vacuum the floor while exercising the "difficult skill [of] continually avoiding temptation to vibe"

I notice that some people have become more productive thanks to AI tools, while others have not.

My working hypothesis is that people who are fast at scanning lots of text (or code for that matter) have a serious advantage. Being able to dismiss unhelpful suggestions quickly and then iterating to get to helpful assistance is key.

Being fast at scanning code correlates with seniority, but there are also senior developers who can write at a solid pace, but prefer to take their time to read and understand code thoroughly. I wouldn't assume that this kind of developer gains little profit from typical AI coding assistance. There are also juniors who can quickly read text, and possibly these have an advantage.

A similar effect has been around with being able to quickly "Google" something. I wouldn't be surprised if this is the same trait at work.

  • One has to take time to review code and think through different aspects of execution (like memory management, concurrency, etc). Plenty of code cannot be scanned.

    That said, if the language has GC and other helpers, it makes it easier to scan.

    Code and architecture review is an important part of my role, and I catch issues that others miss because I spend more time. I did use AI for review (GPT 4.1), but only as an addition, since it's not reliable enough.

  • Just to thank you for that point. I think it's likely more true than most of us realise. That and maybe the ability to mentally scaffold or outline a system or solution ahead of time.

  • An interesting point. I wonder how much my decades-old habit of watching subtitled anime helps there—it’s definitely made me dramatically faster at scanning text.

I was one of the survey participants, and guessed the result so wrong that I could make a meme out of myself.

We have heard variations of that narrative for at least a year now. It is not hard to use these chatbots and no one who was very productive in open source before "AI" has any higher output now.

Most people who subscribe to that narrative have some connection to "AI" money, but there might be some misguided believers as well.

  > My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

This is what I heard about strong type systems (especially Haskell's) about 15-20 years ago.

"History does not repeat, but it rhymes."

If we rhyme "strong types will change the world" with "agentic LLMs will change the world," what do we get?

My personal theory is that we will get the same: some people will get modest-to-substantial benefits there, but changes in the world will be small if noticeable at all.

  • I don't think that's a fair comparison. Type systems don't produce probabilistic output. Their entire purpose is to reduce the scope of possible errors you can write. They kind of did change the world, didn't they? I mean, not everyone is writing Haskell but Rust exists and it's doing pretty well. There was also not really a case to be made where type systems made software in general _worse_. But you could definitely make the case that LLM's might make software worse.

    • That probabilistic output has to be symbolically constrained - SQL/JSON/other code is generated through syntax constrained beam search.

      You brought up Rust, it is fascinating.

      Rust's type system differs from typical Hindley-Milner by having operations that can remove definitions from the environment of the scope.

      Rust was conceived in 2006.

      In 2006 there already were HList papers by Oleg Kiselyov [1] that had shown how to keep type level key-value lists with addition, removal and lookup, and type-level stateful operations like in [2] were already possible, albeit, most probably, not with nice monadic syntax support.

        [1] https://okmij.org/ftp/Haskell/HList-ext.pdf
        [2] http://blog.sigfpe.com/2009/02/beyond-monads.html
      

      It was entirely possible for prototype Rust to be embedded into Haskell and to have the borrow checker implemented as type-level manipulation over a doubly parameterized state monad.

      But it was not; Rust was not embedded into Haskell, and now it will never get effects (even as weak as monad transformers) and, as a consequence, will never get proper high-performance software transactional memory.

      So here we are: everything in Haskell's strong type system world that would make Rust better was there at the very beginning of the Rust journey, but had no impact on Rust.

      Rhyme that with LLM.

    • It's too bad the management people never pushed Haskell as hard as they're pushing AI today! Alas.

  • Maybe it depends on the task. I'm 100% sure that if you think a type system is a drawback, then you have never coded in a diverse, large codebase. Our 1.5 million LOC, 30-year-old monolith would be completely unmaintainable without it. But seriously, anything above 10 LOC without a formal type system becomes unmaintainable after a few years. An informal one is fine for a while, but not for long. In 30-year-old code, basically every single informal rule has been broken.

    Also, my long experience is that even in PoC phase, using a type system adds almost zero extra time… of course if you know the type system, which should be trivial in any case after you’ve seen a few.

    • It's generally trivial for conventional class-based type systems like those in Java and C#, but TypeScript is a different beast entirely. On the surface it seems similar but it's so much deeper than the others.

      I don't like it. I know it is the way it is because it's supposed to support all the cursed weird stuff you can do in JS, but to me as a fullstack developer who's never really taken the time to deep dive and learn TS properly it often feels more like an obstacle. For my own code it's fine, but when I have to work with third party libraries it can be really confusing. It's definitely a skill issue though.

      1 reply →

    • On the contrary, I believe that a strong type system is a plus. Please look at my other comment: https://news.ycombinator.com/item?id=44529347

      My original point was about history and about how we can extract a possible outcome from it.

      My other comment tries to amplify that too. Type systems have been strong enough for several decades now; they had everything Rust needed and more, years before Rust began, yet they have had little penetration into the real world, the example being that fancy-dandy Rust language.

I'm the developer of txtai, a fairly popular open-source project. I don't use any AI-generated code and it's not integrated into my workflows at the moment.

AI has a lot of potential but it's way over-hyped right now. Listen to the people on the ground who are doing real work and building real projects; none of them are over-hyping it. The over-hyping mostly comes from those who have only tangentially used LLMs.

It's also not surprising that many in this thread are clinging to a basic premise that it's 3 steps backwards to go 5 steps forward. Perhaps that is true but I'll take the study at face value, it seems very plausible to me.

> My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.

Could be the case for some, but I also think that there is not much to climb on the learning curve for AI agents.

In my opinion, it's more interesting that the study also states that AI capabilities may be comparatively lower on existing code:

> Our results also suggest that AI capabilities may be comparatively lower in settings with very high quality standards, or with many implicit requirements (e.g. relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.

This is consistent with my personal/peer experience. On existing code, you have to do trial and error with AI until you get a 'good' result, or heavily modify AI-generated code yourself (which is often slower than writing it yourself from the beginning).

My personal experience was that of a decrease in productivity until I spent significant time with it. Managing configurations, prompting it the right way, asking other models for code reviews… And I still see there is more I can unlock with more time learning the right interaction patterns.

For nasty, legacy codebases there is only so much you can do IMO. With green field (in certain domains), I become more confident every day that coding will be reduced to an AI task. I’m learning how to be a product manager / ideas guy in response

Looking at the example tasks in the pdf ("Sentencize wrongly splits sentence with multiple...") these look like really discrete and well defined bug fixes. AI should smash tasks like that so this is even less hopeful.

I'm sympathetic to the argument re experience with the tools paying off, because my personal anecdata matches that. It hasn't been until the last 6 weeks, after watching a friend demo their workflow, that my personal efficiency has improved dramatically.

The most useful thing of all would have been to have screen recordings of those 16 developers working on their assigned issues, so they could be reviewed for varying approaches to AI-assisted dev, and we could be done with this absurd debate once and for all.

I don't even think we know how to do it yet. I revise my whole attitude and all of my beliefs about this stuff every week: I figure out things that seemed really promising don't pan out, I find stuff that I kick myself for not realizing sooner, and it's still this high-stakes game. I still blow a couple of days and wish I had just done it the old-fashioned way, and then I'll catch a run where it's like, fuck, I was never that good, that's the last 5-10% that breaks a PB.

I very much think that these things are going to wind up being massive amplifiers for people who were already extremely sophisticated and then put massive effort into optimizing them and combining them with other advanced techniques (formal methods, top-to-bottom performance orientation).

I don't think this stuff is going to democratize software engineering at all, I think it's going to take the difficulty level so high that it's like back when Dijkstra or Tony Hoare was a fairly typical computer programmer.

> My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.

Definitely. Effective LLM usage is not as straightforward as people believe. Two big things I see a lot of developers do when they share chats:

1. Talk to the LLM like a human. Remember when internet search first came out, and people were literally "Asking Jeeves" in full natural language? Eventually people learned that you don't need to type, "What is the current weather in San Francisco?" because "san francisco weather" gave you the same, or better, results. Now we've come full circle and people talk to LLMs like humans again; not out of any advanced prompt engineering, but just because it's so anthropomorphized it feels natural. But I can assure you that "pandas count unique values column 'Foo'" is just as effective an LLM prompt as "Using pandas, how do I get the count of unique values in the column named 'Foo'?" The LLM is also not insulted by you talking to it like this.

2. Don't know when to stop using the LLM. Rather than let the LLM take you 80% of the way there and then handle the remaining 20% "manually", they'll keep trying to prompt to get the LLM to generate what they want. Sometimes this works, but often it's just a waste of time and it's far more efficient to just take the LLM output and adjust it manually.

Much like so-called Google-fu, LLM usage is a skill and people who don't know what they're doing are going to get substandard results.
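
For reference, the answer both phrasings of the pandas prompt in point 1 typically converge on is a one-liner (a minimal sketch with made-up data):

  import pandas as pd

  df = pd.DataFrame({"Foo": ["a", "b", "a", "c", "b", "a"]})
  print(df["Foo"].nunique())  # count of distinct values in column 'Foo' -> 3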

  • > Rather than let the LLM take you 80% of the way there and then handle the remaining 20% "manually"

    IMO 80% is way too much. LLMs are probably good for things that are outside your domain knowledge and where you can afford to not be 100% correct, like rendering the Mandelbrot set and simple functions like that.

    LLMs are not deterministic: sometimes they produce correct code and other times they produce wrong code. This means one has to audit LLM-generated code, and auditing code takes more effort than writing it, especially if you are not the original author of the code being audited.

    Code has to be 100% deterministic. As programmers we write code, detailed instructions for the computer (CPU), and we have developed a lot of tools, such as unit tests, to make sure the computer does exactly what we wrote.

    A codebase has a lot of context that you gain by writing the code; some things just look wrong, and you know exactly why because you wrote the code. There is also a lot of context that you should keep in your head as you write the code, context that you miss by simply prompting an LLM.

  • > Effective LLM usage is not as straightforward as people believe

    It is not as straightforward as people are told to believe!

    • ^ this, so much this. The amount of bullshit that gets shoveled into hacker news threads about the supposed capabilities of these models is epic.

  • "But I can assure you that "pandas count unique values column 'Foo'" is just as effective an LLM prompt as "Using pandas, how do I get the count of unique values in the column named 'Foo'?""

    How can you be so sure? Did you compare in a systematic way or read papers by people who did it?

    Now, I surely get results from giving the LLM only snippets and keywords, but for anything complex I do notice differences depending on the way I articulate. I'm not claiming there is a significant difference, but it seems that way to me.

    • > How can you be so sure? Did you compare in a systematic way or read papers by people who did it?

      No, but I didn't need to read scientific papers to figure how to use Google effectively, either. I'm just using a results-based analysis after a lot of LLM usage.

      4 replies →

  • > But I can assure you that "pandas count unique values column 'Foo'" is just as effective an LLM prompt as "Using pandas, how do I get the count of unique values in the column named 'Foo'?"

    While the results are going to be similar, typing a question in full can help you think about it yourself too, as if the LLM is a rubber duck that can respond back.

    I've found myself adjusting and rewriting prompts during the process of writing them, before I ask the LLM anything, because as I was writing the prompt I was thinking about the problem simultaneously.

    Of course for simple queries like "write me a function in C that calculates the length of a 3d vector using vec3 for type" you can write it like "c function vec3 length 3d" or something like that instead and the LLM will give more or less the same response (tried it with Devstral).

    But TBH to me that sounds like programmers using Vim claiming they're more productive than users of other editors because they have to use less keystrokes.

  • > Talk to the LLM like a human

    Maybe the LLM doesn't strictly need it, but typing it out does bring some clarity for the asker. I've found it helps a lot to catch myself - what am I even wanting from this?

  • I'm not sure about your example about talking to LLMs. There is good reason to think that speaking to it like a human might produce better results, as that's what most of the training data is composed of.

    I don't have any studies, but it seems to me reasonable to assume.

    (Unlike google, where presumably it actually used keywords anyway)

    • > I'm not sure about your example about talking to LLMs. There is good reason to think that speaking to it like a human might produce better results, as that's what most of the training data is composed of.

      In practice I have not had any issues getting information out of an LLM when speaking to them like a computer, rather than a human. At least not for factual or code-related information; I'm not sure how it impacts responses for e.g. creative writing, but that's not what I'm using them for anyway.

"My intiution is that..." - AGREED.

I've found that there are a couple of things you need to do to be very efficient.

- Maintain an architecture.md file (with AI assistance) that answers many of the questions and clarifies a lot of the ambiguity in the design and structure of the code.

- A bootstrap.md file(s) is also useful for a lot of tasks.. having the AI read it and start with a correct idea about the subject is useful and a time saver for a variety of kinds of tasks.

- Regularly asking the AI to refactor code, simplify it, modularize it - this is what the experienced dev is for. VIBE coding generally doesn't work as AI's tend to write messy non-modular code unless you tell them otherwise. But if you review code, ask for specific changes.. they happily comply.

- Read the code produced, and carefully review it. And notice and address areas where there are issues, have the AI fix all of these.

- Take over when there are editing tasks you can do more efficiently.

- Structure the solution/architecture in ways that you know the AI will work well with.. things it knows about.. it's general sweet spots.

- Know when to stop using the AI and code it yourself.. particularly when the AI has entered the confusion doom loop. Time wasted trying to get the AI to figure out something it never will is better spent just fixing it yourself.

- Know when to just not ever try to use AI. Intuitively you know there's just certain code you can't trust the AI to safely work on. Don't be a fool and break your software.

----

I've found there's no guarantee that AI assistance will speed up any one project (and in some cases it slows it down).. but measured across all tasks and projects, the benefits are pretty substantial. That's probably others' experience at this point too.

Thank you for the last paragraph.

Same thought came when I was reading the article and glad I am not alone.

Anecdotally, the most common productivity boost comes from cutting down weird slow steps in processes: writing an automation script, a campaign previewer for marketing, etc.

Coding seems to become more efficient (again anecdotally) but not entirely faster. You can do better work on a new feature in the same or slightly less time.

Idle time at 4% was interesting. I think this number goes higher the more you use a specific tool and adjust your workflow to it.

In addition to the learning curve of the tooling, there's also the learning curve of the models. Each have a certain personality that you have to figure out so that you can catch the failure patterns right away.

A friend of mine, complete non-programmer, has been trying to use ChatGPT to write a phone app. I've been as hands off as I feel I can be, watching how the process goes for him. My observations so far is that it's not going well, he doesn't understand what questions he should be asking so the answers he's getting aren't useful. I encourage him to ask it to teach him the relevant programming but he asks it to help him make the app without programming at all.

With more coaching from me, which I might end up doing, I think he would get further. But I expected the chatbot to get him further through the process than this. My conclusion so far is that this technology won't meaningfully shift the balance of programmers to non-programmers in the general population.

> A quarter of the participants saw increased performance, 3/4 saw reduced performance.

The study used 246 tasks across 16 developers, for an average of 15 tasks per developer. Divide that further in half because tasks were assigned as AI or not-AI assisted, and the sample size per developer is still relatively small. Someone would have to take the time to review the statistics, but I don’t think this is a case where you can start inferring that the developers who benefited from AI were just better at using AI tools than those who were not.

I do agree that it would be interesting to repeat a similar test on developers who have more AI tool assistance, but then there is a potential confounding effect that AI-enthusiastic developers could actually lose some of their practice in writing code without the tools.

  • > potential confounding effect that AI-enthusiastic developers could actually lose some of their practice in writing code without the tools

    I don't think this is a confounding effect

    This is something that we definitely need to measure and be aware of, if there is a risk of it

> My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

Yes, and I'll add that there is likely no single "golden workflow" that works for everybody, and everybody needs to figure it out for themselves. It took me months to figure out how to be effective with these tools, and I doubt my approach will transfer over to others' situations.

For instance, I'm working solo on smallish, research-y projects and I had the freedom to structure my code and workflows in a way that works best for me and the AI. Briefly: I follow an ad-hoc, pair-programming paradigm, fluidly switching between manual coding and AI-codegen depending on an instinctive evaluation of whether a prompt would be faster. This rapid manual-vs-prompt assessment is second nature to me now, but it took me a while to build that muscle.

I've not worked with coding agents, but I doubt this approach will transfer over well to them.

I've said it before, but this is technology that behaves like people, and so you have to approach it like working with a colleague, with all their quirks and fallibilities and potentially-unbound capabilities, rather than a deterministic, single-purpose tool.

I'd love to see a follow-up of the study where they let the same developers get more familiar with AI-assisted coding for a few months and repeat the experiment.

  • > I've not worked with coding agents, but I doubt this approach will transfer over well to them.

    Actually, it works well so long as you tell them when you’ve made a change. Claude gets confused if things randomly change underneath it, but it has no trouble so long as you give it a short explanation.

I have been teaching people at my company how to use AI code tools, the learning curve is way worse for developers and I have had to come up with some exercises to try and breakthrough the curve. Some seemingly can’t get it.

The short version is that devs want to give instructions instead of ask for what outcome they want. When it doesn’t follow the instructions, they double down by being more precise, the worst thing you can do. When non devs don’t get what they want, they add more detail to the description of the desired outcome.

Once you get past the control problem, then you have a second set of issues for devs where the things that should be easy or hard don’t necessarily map to their mental model of what is easy or hard, so they get frustrated with the LLM when it can’t do something “easy.”

Lastly, devs keep a shit load of context in their head - the project, what they are working on, application state, etc. - and they need to do that for LLMs too, but you have to repeat yourself often and "be" the external memory for the LLM. Most devs I have taught hate that; they would actually rather have it the other way around, where they get help with context and state but instruct the computer on their own.

Interestingly, the best AI assisted devs have often moved to management/solution architecture, and they find the AI code tools brought back some of the love of coding. I have a hypothesis they’re wired a bit differently and their role with AI tools is actually closer to management than it is development in a number of ways.

  • > Interestingly, the best AI assisted devs have often moved to management/solution architecture, and they find the AI code tools brought back some of the love of coding. I have a hypothesis they’re wired a bit differently and their role with AI tools is actually closer to management than it is development in a number of ways.

    The CTO and VPEng at my company (very small, still do technical work occasionally) both love the agent stuff so much. Part of it for them is that it gives them the opportunity to do technical work again with the limited time they have. Without having to distract an actual dev, or spend a long time reading through the codebase, they can quickly get context for and build small items themselves.

  • > Interestingly, the best AI assisted devs have often moved to management/solution architecture, and they find the AI code tools brought back some of the love of coding

    This suggests to me, though, that they are bad at coding; otherwise they would have stayed longer. And I can't find anything in your comment that would corroborate the opposite. So what gives?

    I am not saying what you say is untrue, but you didn't give any convincing arguments to us to believe otherwise.

    Also, you didn't define the criteria of getting better. Getting better in terms of what exactly???

    • I'm not bad at coding. I would say I'm pretty damned good. But coding is a means-to-an-end. I come up with an idea, then I have the long-winded middle bit where I have to write all the code, spin up a DB, create the tables, etc.

      LLMs have given me a whole new love of coding, getting rid of the dull grind and letting me write code an order of magnitude quicker than before.

    • > This suggests me though that they are bad at coding, otherwise they would have stayed longer.

      Or they care about producing value, not just the code, and realized they had more leverage and impact in other roles.

      > And I can't find anything in your comment that would corroborate the opposite.

      I didn’t try and corroborate the opposite.

      Honestly, I don’t care about the “best coders.” I care about people who do their job well, sometimes that is writing amazing code but most of the time it isn’t. I don’t have any devs in my company who work in a magical vacuum where they are handed perfectly written tasks, they complete them, and then they do the next one.

      If I did, I could replace them with AI faster.

      > Also, you didn't define the criteria of getting better. Getting better in terms of what exactly?

      Delivery velocity - bug fixes, features, etc. that pass testing/QA and goes to prod.

      3 replies →

I can say that in my experience AI is very good at early codebases and refactoring tasks that come with that.

But for very large stable codebases it is a mixed bag of results. Their selection of candidates is valid but it probably illustrates a worst case scenario for time based measurement.

If an AI code editor cannot make more changes quicker than a dev or cannot provide relevant suggestions quick enough/without being distracting then you lose time.

Devil's advocate: it's also possible the one developer hasn't become more productive with Cursor, but rather has atrophied their non-AI productivity due to becoming reliant on Cursor.

  • I suspect you're onto something here but I also think it would be an extremely dramatic atrophy to have occurred in such a short period of time...

>My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

Are we are still selling the "you are an expert senior developer" meme ? I can completely see how once you are working on a mature codebase LLMs would only slow you down. Especially one that was not created by an LLM and where you are the expert.

  • I think it depends on the kind of work you're doing, but I use it on mature codebases where I am the expert, and I heavily delegate to Claude Code. By being knowledgeable of the codebase, I know exactly how to specify a task I need performed. I set it to work on one task, then I monitor it while personally starting on other work.

    I think LLMs shine when you need to write a higher volume of code that extends a proven pattern, quickly explore experiments that require a lot of boilerplate, or have multiple smaller tasks that you can set multiple agents upon to parallelize. I've also had success in using LLMs to do a lot of external documentation research in order to integrate findings into code.

    If you are fine-tuning an algorithm or doing domain-expert-level tweaks that require a lot of contextual input-output expert analysis, then you're probably better off just coding on your own.

    Context engineering has been mentioned a lot lately, but it's not a meme. It's the real trick to successful LLM agent usage. Good context documentation, guides, and well-defined processes (just like with a human intern) will mean the difference between success and failure.

I feel like I get better at it as I use Claude code more because I both understand its strength and weaknesses and also understand what context it’s usually missing. Like today I was struggling to debug an issue and realised that Claude’s idea of a coordinate system was 90 degrees rotated from mine and thus it was getting confused because I was confusing it.

How were "experienced engineers" defined?

I've found AI to be quite helpful in pointing me in the right direction when navigating an entirely new code-base.

When it's code I already know like the back of my hand, it's not super helpful, other than maybe doing a few automated tasks like refactoring, where there have already been some good tools for a while.

  • > To directly measure the real-world impact of AI tools on software development, we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years.

Any "tricks" you learn for one model may not be applicable to another, it isn't a given that previous experience with a company's product will increase the likelihood of productivity increases. When models change out from under you, the heuristics you've built up might be useless.

It seems really surprising to me that anyone would call 50 hours of experience a "high skill ceiling".

I just treat AI as a very long autocomplete. Sometimes it surprises me. On things I do not know, like Windows C calls, I think I ought to just search the documentation.

What you described has been true of the adoption of every technology ever

Nothing new this time except for people who have no vision and no ability to work hard not “getting it” because they don’t have the cognitive capacity to learn

LLMs are good for things you know how to do, but can't be arsed to. Like small tools with extensive use of random APIs etc.

For example I whipped together a Steam API -based tool that gets my game library and enriches it with data available in maybe 30 minutes of active work.

The LLM (Cursor with Gemini Pro + Claude 3.7 at the time IIRC) spent maybe 2-3 hours on it while I watched some shows on my main display and it worked on my second screen with me directing it.

Could I have done it myself from scratch like a proper artisan? Most definitely. Would I have bothered? Nope.

Simon's opinion is unsurprisingly that people need to read his blog and spam on every story on HN lest we be left behind.

> My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

You hit the nail on the head here.

I feel like I’ve seen a lot of people trying to make strong arguments that AI coding assistants aren’t useful. As someone who uses and enjoys AI coding assistants, I don’t find this research angle to be… uh… very grounded in reality?

Like, if you’re using these things, the fact that they are useful is pretty irrefutable. If one thinks there’s some sort of “productivity mirage” going on here, well OK, but to demonstrate that it might be better to start by acknowledging areas where they are useful, and show that your method explains the reality we’re seeing before using that method to show areas where we might be fooling ourselves.

I can maybe buy that AI might not be useful for certain kinds of tasks or contexts. But I keep pushing their boundaries and they keep surprising me with how capable they are, so it feels like it’ll be difficult to prove otherwise in a durable fashion.

  • I think the thing is there IS a learning curve, AND there is a productivity mirage, AND they are immensely useful, AND it is context dependent. All of this leads to a lot of confusion when communicating with people who are having a different experience.

    • Right, my problem is that while some people may be correct about the productivity mirage, many of those people are getting out over their skis and making bigger claims than they can reasonably prove. I’m arguing that they should be more nuanced and tactical.

  • Still odd to me that the only vibe-coded software that gets acquired is acquired by companies selling tools for, or wanting to promote, vibe coding.

    • Pardon my caps, but WHO CARES about acquisitions?!

      You’ve been given a dubiously capable genie that can write code without you having to do it! If this thing can build first drafts of those side projects you always think about and never get around to, that in and of itself is useful! If it can do the yak-shaving required to set up those e2e tests you know you should have but never have time for it is useful!

      Have it try out all the dumb ideas you have that might be cool but don’t feel worth your time to boilerplate out!

      I like to think we’re a bunch of creative people here! Stop thinking about how it can make you money and use it for fun!

      3 replies →

  • Exactly. The people who say that these assistants are useless or "not good enough" are basically burying their heads in the sand. The people who claim that there is no mirage are burying their head in the sand as well...