Comment by wakeywakeywakey

17 hours ago

> ... or how it's supposed to make any programmer worth their weight in salt 10x better.

It doesn't. The only people I've seen claim such speedups are either not generally fluent in programming or stand to benefit financially from reinforcing this meme.

The speedup from AI is in the exponent.

Just the other day ChatGPT implemented something that would have taken me a week of research to figure out: in 10 minutes. What do you call that speedup? It's a lot more than 10x.

On other days I barely touch AI because I can write easy code faster than I can write prompts for easy code, though the autocomplete definitely helps me type faster.

The "10x" is just a placeholder for averaging over a series of stochastic exponents. It's a way of saying "somewhere between 1 and infinity"

  • > Just the other day ChatGPT implemented something that would have taken me a week of research to figure out: in 10 minutes. What do you call that speedup? It's a lot more than 10x.

    Can you share what exactly this was? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me so I find it hard to imagine what this could be.

    Instead of AI companies talking about their products, I think the thing that would really sell it for me would be an 8-hour-long video of an extremely proficient programmer using AI to build something that would have taken them a very long time if they were unassisted.

    • Sure. I needed to draw some parametric and smooth Bézier curves. LLMs are beasts at figuring out the appropriate equations. It would have taken me forever to work out where all the control points should go.
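
      To make that concrete, here is a minimal sketch of the kind of thing being described, not the commenter's actual code: placing the control points of chained cubic Béziers from Catmull-Rom-style tangents so the curve passes smoothly through sample points, plus the parametric evaluation of one segment. The Pt/Seg types, function names, and the 1/6 tangent weighting are illustrative assumptions.

        // Hypothetical sketch: smooth chained cubic Béziers through sample points.
        type Pt = { x: number; y: number };
        type Seg = { p0: Pt; c1: Pt; c2: Pt; p1: Pt };

        const add = (a: Pt, b: Pt): Pt => ({ x: a.x + b.x, y: a.y + b.y });
        const sub = (a: Pt, b: Pt): Pt => ({ x: a.x - b.x, y: a.y - b.y });
        const mul = (a: Pt, s: number): Pt => ({ x: a.x * s, y: a.y * s });

        // Control points come from Catmull-Rom-style tangents: each inner
        // control point is offset by 1/6 of the vector between the two
        // neighbouring sample points (indices clamped at the ends).
        function smoothSegments(pts: Pt[]): Seg[] {
          const segs: Seg[] = [];
          for (let i = 0; i < pts.length - 1; i++) {
            const prev = pts[Math.max(i - 1, 0)];
            const next = pts[Math.min(i + 2, pts.length - 1)];
            segs.push({
              p0: pts[i],
              c1: add(pts[i], mul(sub(pts[i + 1], prev), 1 / 6)),
              c2: sub(pts[i + 1], mul(sub(next, pts[i]), 1 / 6)),
              p1: pts[i + 1],
            });
          }
          return segs;
        }

        // Parametric evaluation of one cubic segment at t in [0, 1]:
        // B(t) = (1-t)^3 p0 + 3(1-t)^2 t c1 + 3(1-t) t^2 c2 + t^3 p1
        function cubicAt(s: Seg, t: number): Pt {
          const u = 1 - t;
          return [
            mul(s.p0, u * u * u),
            mul(s.c1, 3 * u * u * t),
            mul(s.c2, 3 * u * t * t),
            mul(s.p1, t * t * t),
          ].reduce(add);
        }

      Placing the control points this way keeps the sample points on the curve and makes adjacent segments share tangent directions, which is what makes the joins look smooth.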

For every conspicuous vibecoding influencer there are a bunch of experienced software engineers using these tools to get things done. The newest generation of models is actually pretty decent at following instructions and using existing code as a template. Building line-of-business apps is much quicker with Claude Code because once you've nicely scaffolded everything, you can just tell it to build stuff and it'll do so the same way you would have, in a fraction of the time. You can also use it to research alternatives to the architectural approaches and tooling you come up with, so you don't paint yourself into a corner by never having heard of some semi-niche tool that fits your use case perfectly.

Of course I wouldn't use an LLM to #yolo some Next.js monstrosity with a flavor-of-the-week ORM and random Tailwind. I have, however, had it build numerous parts of my apps after telling it up front all about the mise targets, tests, and architecture I came up with for the code. In a way it vindicates my approach to software engineering, because it's able to use the tools available to it to (reasonably) ensure correctness before it says it's done.

I am a professional engineer with around 10 years of experience, and I use AI to work about 5x faster on a site I personally maintain (~100 DAU, so not huge, but also not nothing). I don’t work in AI, so I get no financial benefit from “reinforcing this meme”.

  • Same position, different results. I'm maybe 20% faster. Writing the code is rarely the bottleneck for me, so there's limited potential in that direction. When I am writing the code, things that I'd find easy and fast are a little faster (or I can leave AI doing them). Things that are hard and slow are nearly as hard and nearly as slow when using AI; I still need to hold most of the same code in my head that I would without AI, because it'll get things wrong so quickly.

    I think what you're working on has a huge impact on AI's usability. If you're working on things that are simple conceptually and simple to implement, AI will do very well (including handling edge cases). If it's a hard concept but simple execution, you can use AI to do only the execution and still get a pretty good, though not transformational, speed boost. If it's a hard concept and a hard execution (as my latest project has been), then AI is really just not very good at it.

  • Oh, well if it can generate some simple code for your personal website, surely it can also be the "next level of abstraction" for the entirety of software engineering.

    • Well, I don’t really think it’s “simple”. The code uses React, nodejs, realtime events pushed via SSE, infra managed via Terraform, postgres, a blob store on S3, emails sent with SES… sure, it’s not the next Google, but it’s a bit above, like, a personal blog.

      And in any case, you are moving the goalposts. OP said he had never seen anyone serious claim that they got productivity gains from AI. When I claim that, you say “well it’s not the next level of abstraction for all SWE”. Obviously - I never claimed that?
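
      For readers unfamiliar with the SSE piece mentioned above, here is a generic, hypothetical sketch of what "realtime events pushed via SSE" usually amounts to on the Node side. It is not the commenter's actual code; the /events route, port 3000, and the broadcast helper are made-up illustrations.

        // Hypothetical sketch: a minimal Server-Sent Events endpoint in Node.
        import { createServer, ServerResponse } from "node:http";

        // Long-lived connections that have subscribed to /events.
        const clients = new Set<ServerResponse>();

        createServer((req, res) => {
          if (req.url === "/events") {
            // SSE is a never-ending HTTP response with a special content type.
            res.writeHead(200, {
              "Content-Type": "text/event-stream",
              "Cache-Control": "no-cache",
              Connection: "keep-alive",
            });
            clients.add(res);
            req.on("close", () => clients.delete(res)); // drop disconnected clients
          } else {
            res.writeHead(404).end();
          }
        }).listen(3000);

        // Push an event to every connected browser; the client-side EventSource
        // receives each `data:` frame as a message.
        function broadcast(payload: unknown): void {
          for (const res of clients) {
            res.write(`data: ${JSON.stringify(payload)}\n\n`);
          }
        }

      The server calls something like broadcast() whenever state changes, and the browser just listens on an EventSource; that is the whole "realtime" layer.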

  • > either not generally fluent in programming or stand to benefit financially from reinforcing this meme

    Then figure out which one of the two you are. Years of experience have never equated to competence.

Our ops guy has thrown together several buggy dashboards using AI tools. They're passable but impossible to maintain.

  • I personally think that everyone knows AI produces subpar code, and that the infallible humans are just passing it along because they don't understand/care. We're starting to see the gaslighting now: it's not that AI makes you better, it's that AI makes you ship faster. And shipping faster (with more bugs) is now more important, because "tech debt is an appreciating asset" in a world where AI tools can pump out features 10x faster (with the commensurate bugs/issues). We're entering the era of "move fast and break stuff" on steroids. I miss the era of software that worked.

    • Yep, bugs are already just another cost of doing business for companies that aren’t user-focused. We can expect buggier code from now on. Especially for software where the users aren’t the ones buying it.

      Disclaimer because I sound pessimistic: I do use a lot of AI to write code.

      I do feel behind on using it.

Practically every post on HN that mentions AI now ends up with a thread that is "I get 100X speed-up using LLMs" vs. "It made me slower and I've never met a single person in real life who has worked faster with AI."

I'm a half-decent developer with 40 years' experience. AI regularly gives me somewhere in the range of a 10-100X speed-up in development. I don't benefit from a meme; I do benefit from better code delivered faster.

Sometimes AI is a piece of crap and I work at 0.5X for an hour flogging a dead horse. But those are rarer these days.

  • I've posted this verbatim on another comment that was similar to yours, so apologies for the copy and paste:

    Can you share what exactly this was (that got you the 10-100x speedup)? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me so I find it hard to imagine what this could be.

    Instead of AI companies talking about their products, I think the thing that would really sell it for me would be an 8-hour-long video of an extremely proficient programmer using AI to build something that would have taken them a very long time if they were unassisted.