
Comment by noodletheworld

20 hours ago

I know it seems like forever ago, but Claude Code only came out in 2025.

It's very difficult to dispute that Claude Code:

1) was a paradigm shift in functionality, despite (to be fair) at best incremental improvements in the underlying models, and

2) produces results that are, I estimate, an order of magnitude better in terms of output.

I think it's very fair to distill "AI progress 2025" to: you can get better results without better models, just with clever tools and loops (up to a point; better than raw output, anyway; scaling to multiple agents has not worked). (…and video/image slop infests everything :p)

Did more software ship in 2025 than in 2024? I'm still looking for some actual indication of output here. I get that people feel more productive, but the actual metrics don't seem to agree.

  • I'm still waiting for the Linux drivers to be written thanks to all the 20x improvements AI hypers are touting. I would even settle for Asahi supporting Apple M3 and M4 machines.

  • I am not making any argument about the productivity of using AI vs. not using AI.

    My point is purely that, compared to 2024, the quality of the code produced by LLM inference agent systems is better.

    To say that 2025 was a nothingburger is objectively incorrect.

    Will it scale? Is it good enough to use professionally? Is this like self-driving cars, where the best they ever get is still stumped by an odd-shaped traffic cone? Is it actually more productive?

    Who knows?

    I'm just saying… LLM coding in 2024 sucked. 2025 was a big year.