Comment by giwook
18 hours ago
Lots of us have noticed that usage limits for Claude have been nerfed in recent weeks/months.
If anything, these new multipliers are more transparent than anything OpenAI or Anthropic have communicated regarding actual costs and give us a more realistic understanding of what it's costing these providers.
The fact that we were able to get such a substantial amount of usage for $20/$100/$200 a month was never meant to last, and to think otherwise was perhaps a bit naive.
This feels like a strategy from the ZIRP era of tech growth where companies burned investor capital and gave away their products and services for free (or subsidized them heavily) in order to prioritize user acquisition initially. Then once they'd gained enough traction and stickiness they'd then implement a monetization strategy to capitalize on said user base.
However, inference costs for models that are entirely good enough are likely to keep declining. We're probably hitting diminishing returns on model size and training: the new generations aren't quantum leaps anymore, and newer generations of open-source models like DeepSeek are likely to start being good enough.
There's going to be a limit to how much they can raise prices, because someone can always build out a datacenter, fill it with open-source DeepSeek inference, and undercut your prices by 10x while still making a very good ROI--and that's a business model right there. Right now I'm sure there are a lot of people who will protest that they couldn't do their jobs with lesser models, but as time goes on there will be fewer and fewer of them. Already, the consumers who are using AI for writing presentations, generating cooking recipes, and getting ELI5 answers to common questions aren't going to miss much from a lesser model. That tier will actually only get cheaper over time.
Also for business needs, as AI inference costs escalate there comes a point where businesses rediscover human intelligence again, and start hiring/training people to do more work to use lesser models--if that is more productive in the end than shelling out large amounts of cash for inference on the latest models. [Although given how much companies waste on AWS, there's a lot of tolerance for overspending in corporations...]
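The undercutting argument above can be put in rough numbers. This is a purely illustrative back-of-the-envelope sketch; every figure here is a made-up placeholder, not a real price or cost:

```python
# Hypothetical numbers only: what the "undercut by 10x with open
# models" business case looks like in shape, not in fact.
incumbent_price = 10.0            # $/M output tokens charged by a frontier lab (assumed)
undercut_price = incumbent_price / 10   # sell the same tokens at 1/10th the price

serving_cost = 0.25               # $/M tokens to serve an open-weight model
                                  # on your own hardware (assumed)
tokens_sold = 50_000_000_000      # 50B output tokens/month of demand (assumed)

revenue = undercut_price * tokens_sold / 1e6
cost = serving_cost * tokens_sold / 1e6
margin = revenue - cost
print(revenue, cost, margin)      # 50000.0 12500.0 37500.0
```

The point of the sketch is only that if serving an open model costs well under a tenth of the incumbent's price, undercutting at 1/10th still leaves a positive margin; the real question is what `serving_cost` actually is at scale.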
> because someone can always build out a datacenter and fill it up with open source DeepSeek inference and undercut your prices by 10x while still making a very good ROI-
Not sure how it all works out. Currently, trillion-dollar companies can't even make native apps for each platform; everything is just JS/Electron because the economics don't work for them.
And yet here, companies are supposed to build GW data centers running very expensive GPUs and charge 1/10th of current prices. Sounds a little fanciful to me.
The price you pay Anthropic must include the cost of training new and better models, which is incredibly expensive. If you use models someone else already spent money to develop, you don't need to pay that price.
I guess the new models will still be quantum leaps, but literally: "The smallest possible change in a system"
They've been like that for a while actually, I think at least since the big hype around ChatGPT 4.5 (or was it 5?) and that underwhelming, lukewarm, oversanitised presentation by Altman and his team.
Yups... Mythos is the smallest possible leap. Not a standard model generation advance, not even a version point advance. Just the smallest possible quanta of a change. We are absolutely hitting a plateau any day now. Any day. Any time. Any second now. Yup. Right now! Surely!
I think so too.
And at some point even frontier model costs will hopefully come down (if there is still a meaningful difference between closed and open source models at that point) as all of the compute that's being built out right now comes online.
I hope it's true, but right now hardware prices are insane
It does feel like the music is about to stop.
It has been years of cash injections now; investors can't keep feeding the beast forever.
This is the best AI programming will ever be. From here on the enshittification starts and the prices go up.
As predicted by many. The math is, as usual, mathing.
It has been years now of reading this same comment... Surely people can't keep typing it forever.
But the prices haven't been going up by multiples of 6 for the past few years. Things are actually changing now. I don't think it's over, but in the short term, it's going to be considerably more expensive.
The difference is we're now in a world where Disney has pulled out of OpenAI without comment, and Sora was dropped in a ditch.
In other words: the bubble has burst. You're just in denial.
I'm not willing to, but I can set up a cron job to `claude -p` the task.
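For instance, something like this sketch of a crontab entry (the repo path, prompt, and log location are placeholders; adjust to your setup):

```shell
# Run a headless Claude Code prompt every night at 02:00.
# Add via `crontab -e`. Paths and the prompt text are placeholders.
0 2 * * * cd /path/to/repo && claude -p "run the nightly cleanup task" >> /tmp/claude-cron.log 2>&1
```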
Dunno, if in this day and age you are making inference more expensive and more scarce, you are honestly moving in the wrong direction, and DeepSeek and others will gladly take your lunch.
The hardware to run deepseek is still incredibly expensive.
Have you seen the news about qwen3.6? People are running it on sub-1000-euro hardware. Apparently it's about as good as Claude Sonnet.
> The hardware to run deepseek is still incredibly expensive.
Deepseek API pricing is very low compared to Anthropic/OpenAI API pricing.
For many, a 300% difference in price may be difficult to justify if the quality difference is very small. And there will be many tasks where the most expensive/best model is simply not needed. Currently, many people end up using Opus 4.7/GPT 5.5 for such tasks without thinking about it.
That is folly, because there is minimal cost to switching providers, let alone models.
Did anyone really expect AI to be cheap?
If/when it gets to the point where it can replace a skilled worker, the service can be sold for close to the same price as that skilled labour. But the AI can run 24/7, reliably, and scale up/down at a moments notice.
There's not going to be much competition to drive prices down; the barriers to entry are already huge. There's likely to be one clear winner becoming a near-monopoly, or maybe we'll get a duopoly at best.
> Did anyone really expect AI to be cheap?
Yes, a lot of people (not me). Why? Well, because that was the whole value proposition of these companies, relentlessly pushed by their PR and most of the media. Remember? It was something something Pocket PhDs, massive unemployment, etc.
> There's not going to be much competition to drive prices down, the barriers to entry are already huge. There'll likely be one clear winner, becoming a near-monopoly, or maybe we'll get a duopoly at best.
Based on what exactly? So far every time OpenAI, Anthropic or whatever has released a new top performing model, competitors have caught up quickly. Open source models have greatly improved as well.
I expect AI to be just like cloud computing in general - AWS, Azure, GCP being the main providers, with dozens of smaller competitors offering similar services as well.
Right now China is flexing the future in my opinion. Smaller, widely available, frontier models for pennies on the dollar.
I think the future of AI will be breakthroughs that let it run on commodity hardware, and the average person will not be paying for it from the cloud unless they want to be surveilled or are on older hardware.
Right now I am running roughly what was a frontier model 1-2 years ago on a junk machine. Some people are running what was a frontier model 4 months ago on PCs and laptops that cost 5,000. In a year I think the landscape will be even better.
I do. "Commoditize your complement". Want to sell lots of silicon? Give away good local models to run on that silicon.
Even if SOTA models in the cloud are a few percentage points better, most work can be routed to local models most of the time. That leaves the cloud providers fighting over the most computationally intensive tasks. In the long term, I think models are going to be local-first.
(Unless providers can figure out a network effect that local models can't replicate).
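The routing idea above could look something like this sketch. Everything here is hypothetical: the heuristic, both model stubs, and all names are placeholders for illustration, not any real API:

```python
# Hypothetical local-first router: send most requests to a cheap local
# model, escalate to a paid cloud model only for "hard" tasks.

def looks_hard(prompt: str) -> bool:
    # Toy heuristic (assumed): very long prompts or requests containing
    # certain keywords escalate; everything else stays local.
    hard_words = {"prove", "refactor", "architecture"}
    return len(prompt) > 2000 or any(w in prompt.lower() for w in hard_words)

def local_model(prompt: str) -> str:   # stand-in for a local LLM call
    return f"[local] {prompt[:20]}"

def cloud_model(prompt: str) -> str:   # stand-in for a cloud API call
    return f"[cloud] {prompt[:20]}"

def route(prompt: str) -> str:
    return cloud_model(prompt) if looks_hard(prompt) else local_model(prompt)

print(route("write a haiku about ducks"))         # stays local
print(route("refactor this module for clarity"))  # escalates to cloud
```

In practice the interesting design question is the heuristic: a real router might use prompt length, a small classifier, or a first local attempt with fallback on low confidence.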
> I think models are going to be local-first.
Why on earth would that happen when everything else is moving into the cloud to tie it to ever-escalating subscription fees and prevent piracy?
Even with gaming, where running high-end 3D games in the cloud seems like madness and inevitably degrades the quality of the experience, they won't stop trying.
> In the long term, I think models are going to be local-first.
Why? There's an inherent efficiency advantage to scale, while the only real advantage for local models (privacy/secrecy) hasn't proven convincing for broader IT either.
> Did anyone really expect AI to be cheap?
Considering most of the cost of producing a model is the upfront training cost rather than the running cost, I kinda still do.
The point was never to produce four frontier models per company per year.