Comment by dualvariable

21 hours ago

However, inference costs for models that are entirely good enough are likely to keep declining. We're probably hitting diminishing returns on model size and training: the new generations aren't quantum leaps anymore, and newer open-source models like DeepSeek are likely to reach "good enough" soon.

There's going to be a limit to how much they can raise prices, because someone can always build out a datacenter, fill it with open-source DeepSeek inference, and undercut your prices by 10x while still making a very good ROI--that's a business model right there. Right now plenty of people will protest that they couldn't do their jobs with lesser models, but over time that group will shrink. Consumers who use AI for writing presentations, generating cooking recipes, and getting ELI5 answers to common questions already aren't missing much with a lesser model, and that tier will only get cheaper over time.
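The 10x-undercut claim is really just break-even arithmetic. Here's a rough sketch of it; every number below (prices, GPU cost, throughput) is a purely illustrative assumption, not a real figure for any provider:

```python
# Hypothetical back-of-envelope: can a commodity open-weights host undercut
# a frontier API by 10x and still profit? All numbers are assumptions.

FRONTIER_PRICE = 10.00            # assumed frontier API price, $ per 1M tokens
OUR_PRICE = FRONTIER_PRICE / 10   # undercut by 10x -> $1.00 per 1M tokens

GPU_COST_PER_HOUR = 2.00          # assumed all-in hourly cost per GPU (capex + power)
TOKENS_PER_SECOND = 2500          # assumed aggregate throughput per GPU, batched serving

tokens_per_hour = TOKENS_PER_SECOND * 3600
revenue_per_hour = OUR_PRICE * tokens_per_hour / 1_000_000
margin = (revenue_per_hour - GPU_COST_PER_HOUR) / revenue_per_hour

print(f"revenue per GPU-hour: ${revenue_per_hour:.2f}")
print(f"gross margin: {margin:.0%}")
```

Under these made-up numbers the host still clears a healthy margin at a tenth of the frontier price; the real question is whether throughput and utilization assumptions like these hold in practice.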

Also, for business needs: as AI inference costs escalate, there comes a point where businesses rediscover human intelligence and start hiring and training people to do more of the work with lesser models--if that ends up more productive than shelling out large amounts of cash for inference on the latest models. [Although given how much companies waste on AWS, there's a lot of tolerance for overspending in corporations...]

> because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI

Not sure how it all works out. Currently, trillion-dollar companies can't even ship native apps for each platform; everything is JS/Electron because the economics don't work for them.

And yet here, companies can supposedly build gigawatt data centers running very expensive GPUs and sell inference at 1/10th of current prices? Sounds a little fanciful to me.

  • The price you pay Anthropic has to cover the cost of training new and better models, which is incredibly expensive. If you use a model someone else already spent the money to develop, you don't pay that price.

I guess the new models will still be quantum leaps, but in the literal sense: "the smallest possible change in a system."

  • They've been like that for a while actually, I think at least since the big hype around ChatGPT 4.5 (or was it 5?) and that underwhelming, lukewarm, oversanitised presentation by Altman and his team.

  • Yups... Mythos is the smallest possible leap. Not a standard model generation advance, not even a version point advance. Just the smallest possible quantum of a change. We are absolutely hitting a plateau any day now. Any day. Any time. Any second now. Yup. Right now! Surely!

    • I mean, let's be realistic: all we know about the "mythical" Mythos is the carefully curated stuff released by Anthropic's PR team. Is it really the huge leap they're making it out to be? I doubt it. In fact, I bet that if it were as powerful and dangerous as they imply, they'd find a way to release it immediately, devastate OpenAI and DeepSeek, and secure a leading position in the market. Why isn't that happening? I suspect because Dario is at it again, peddling his bullshit.

    • Yeah. AI progress is insanely fast if you compare it to anything else. Where else is a one-year-old technology already hopelessly outdated? Ten years ago is basically the stone age.


I think so too.

And at some point even frontier model costs will hopefully come down (if there's still a meaningful difference between closed and open-source models by then), as all of the compute being built out right now comes online.