
Comment by mcny

2 days ago

That would be great if that were the case, but my understanding is that progress is plateauing. I don't know how much of this is Anthropic / Google / OpenAI holding themselves back to save money and how much is state-of-the-art improvement slowing down, though. I can imagine there could be a 64 GB GPU in five years, as absurd as it feels to type that today.

What gives you the impression that progress is plateauing?

I'm finding the difference just between Sonnet 4 and Sonnet 4.5 to be meaningful in terms of the complexity of tasks I'm willing to use them for.

  • > I'm finding the difference just between Sonnet 4 and Sonnet 4.5 to be meaningful in terms of the complexity of tasks I'm willing to use them for.

    That doesn't mean "not plateauing".

    It's better, certainly, but the difference between SOTA now and SOTA 6 months ago is a fraction of the difference between SOTA 6 months ago and SOTA 18 months ago.

    It doesn't mean that the models aren't getting better; it means that the improvement in each generation is smaller than the improvement in the previous generation.

    • 18 months ago to 6 months ago was indeed a busy period - both multimodal image input and reasoning models were rare at the start of that time period and common by the end of it.

      Comparing a 12-month period to a 6-month period feels unfair to me, though. I think we will have a much fuller picture by the end of the year - I have high expectations for the next wave of Chinese models and for Gemini 3.


> a 64 GB GPU in five years

Is there a digit missing? I don't understand why it would be absurd for this to exist in 5 years.

  • I meant that it feels absurd to me today, but it will likely happen in five years.