
Comment by shiandow

10 days ago

> While this remains possible my main impression now is that progress seems to be slowing down rather than accelerating.

Not even remotely. In LLM land, progress has seemed slow over the past few years, but a lot has happened under the hood.

Elsewhere in AI, however, progress has been enormous, and many projects are only now reaching the point where they are starting to produce valuable outputs. Take video generation, for instance - it simply did not exist outside of research labs a few years ago, and now it's getting to the point where it's actually useful. And that's just a very visible example, never mind the models being applied to everything from plasma physics to kidney disease.

  • > the progress seems slow the past few years, but a lot has happened under the hood.

    The claim is "exponential" progress, and exponential progress never seems "slow" once it has become visible.

    I've worked in the research part of this space; there's neat stuff happening, but we are very clearly in the diminishing-returns phase of development.

If you keep up with the research, this isn't the case; ML timelines have always been slower than anyone would like.

I'm not so sure about this.

First came the models. Then the APIs. Then the cost efficiencies. Right now it's the tooling and automated workflows. Next will be a frantic effort to "AI-everything". A lot of things won't make the cut, but many tasks, whole jobs, and perhaps entire subsets of industries will flip over.

For example, you might say no AI can write a completely tested, secure, fully functional mobile app from one prompt (yet). But look at the advancements in Cline, Claude Code, MCPs, code execution environments, and other tooling in just the last six months.

The whole monkeys-at-typewriters-producing-Shakespeare thing starts to become viable.