
Comment by mountainriver

10 days ago

The analogy to the dot-com bubble is leaky at best. AI will hit a point of exponential improvement; we are already in the outer parts of this loop.

It will become so valuable, so fast, that we will struggle to comprehend it.

Then why has my own experience with AI started to show such dramatically diminishing returns?

2022-2023: AI changed enough for me to convert from skeptic to believer. I started working as an AI Engineer and wanted to be on the front lines.

2023-2024: Again, major changes, especially as far as coding goes. I started building very promising prototypes for companies and was able to knock out a laundry list of projects that were just boring to write.

2024-2025: My day-to-day usage has decreased. The models seem better at fact finding but worse for code. None of those "cool" prototypes from me or anyone else I knew seemed able to become more than just that. Many of the cool companies I started learning about in 2022 have begun to reduce staff and are running into financial trouble.

The only area where I've been impressed is the relatively niche improvements in open source text/image-to-video models. It's wild that you can make short animated films on a home computer now.

But even there I'm seeing no signs of "exponential improvement".

  • I vibe-coded 5 deep ML libraries this month. I'm an MLE by trade and it would have taken me ages without AI. This wasn't possible even a year ago. I have no idea how anyone thinks the models haven't improved.

    • > This wasn't possible even a year ago.

      My experience has been that it was. I was using AI last year to build ML models about as well as I have been this year.

      I'm not saying AI isn't useful, just that the progress certainly looks sigmoid, not exponential. By far the biggest year for improvement was 2022-2023. In early 2022 I didn't think any of the code assistants were useful; by 2023 I was able to use them more reliably. 2024 was another big improvement, but since then I honestly haven't felt the change (at least not for the better).

      Some of the tooling may be better, but that has little to do with exponential progress in AI itself.
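
      For what it's worth, here's a toy sketch (made-up numbers, not benchmark data) of why the two are easy to confuse early on: a logistic curve and an exponential start out nearly identical, and only later does the logistic one flatten while the exponential keeps compounding.

          import math

          def exponential(t, rate=1.0):
              # unbounded growth: keeps compounding forever
              return math.exp(rate * t)

          def logistic(t, rate=1.0, ceiling=10.0):
              # S-curve: looks exponential at first, then saturates near `ceiling`
              return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

          for label, t in [("year 0", 0), ("year 1", 1), ("year 2", 2), ("year 3", 3), ("year 4", 4)]:
              print(f"{label}: exp={exponential(t):6.1f}  logistic={logistic(t):5.1f}")

      For the first couple of "years" the two columns track each other closely; after that they diverge, which is roughly what the last year has felt like to me.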


Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.

It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little else than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.

  • If those statistical models are helping you do better research, or basically doing most of it better than you can, does it matter? People act like models are implicitly bad because they are statistical, which makes no sense at all.

    If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self improvement. And yes, it will progress, because that's the nature of it doing meaningful work.

    • > If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self improvement.

      That's a lot of vague language. I don't really see any way to respond. I suppose I can say this much: the usefulness of a tool is not proof of the correctness of the predictions we make about it.

      > And yes, it will progress, because that's the nature of it doing meaningful work.

      This is a non sequitur. It makes no sense.

      And I never said there's anything bad about or wrong with statistical models.

While this remains possible, my main impression now is that progress seems to be slowing down rather than accelerating.

  • Not even remotely. In LLM land, progress has seemed slow over the past few years, but a lot has happened under the hood.

    Elsewhere in AI however progress has been enormous, and many projects are only now reaching the point where they are starting to have valuable outputs. Take video gen for instance - it simply did not exist outside of research labs a few years ago, and now it’s getting to the point where it’s actually useful - and that’s just a very visible example, never mind the models being applied to everything from plasma physics to kidney disease.

    • > progress has seemed slow over the past few years, but a lot has happened under the hood.

      The claim is "exponential" progress, and exponential progress never seems "slow" once it has started to become visible.

      I've worked in the research part of this space; there's neat stuff happening, but we are very clearly in the diminishing-returns phase of development.


  • If you keep up with the research, this isn't the case; ML timelines have always been slower than anyone likes.

  • I'm not so sure about this.

    First were the models. Then the APIs. Then the cost efficiencies. Right now it's the tooling and automated workflows. Next will be a frantic effort to "AI-Everything". A lot of things won't make the cut, but many tasks, whole jobs, and perhaps entire subsets of industries absolutely will flip over.

    For example, you might say no AI can write a completely tested, secure, fully functional mobile app with one prompt (yet). But look at the advances in Cline, Claude Code, MCPs, code execution environments, and other tooling in just the last six months.

    The whole monkeys-at-typewriters-producing-Shakespeare thing starts to become viable.