Comment by extr
11 hours ago
I think you're kidding yourself if you think you're going to get remotely close, in quantity or quality of output, to a $100 Max sub by using Zed/OpenRouter. I easily get $1K+ of usage out of my $100 Max sub. And that's with Opus 4.6 on high thinking.
For personal use I've noticed Claude (via the web-based chat UI) making really bizarre mistakes lately like ignoring input or making completely random assumptions. At work Claude Code has turned into an absolute dog. It fails to follow instructions and builds stuff like a lazy junior developer without any architecture, tests, or verification. This is even with max effort, Opus 4.6, multiple agents, early compaction, etc. I don't know what they did but Anthropic's quality lead has basically evaporated for me. I hope they fix it because I've since adapted my project's Claude artifacts for use with Codex and started using it instead - it feels like Claude Code did earlier this year.
I'd like to give the new GLM models a try for personal stuff.
> At work Claude Code has turned into an absolute dog.
Could it be related to this? https://news.ycombinator.com/item?id=47660925
I've noticed the same thing, and even done side by side tests where I compare Claude Code with Cursor both running Opus 4.6.
It seems Cursor somehow builds a better contextual description of the workspace, so the model knows what I'm actually trying to achieve.
The problem is that with Cursor I'm paying per-token, so as GP suggested you can easily spend $100+ per month vs $20 on Claude Code.
Same, I'm looking hard for an alternative to what I had.
And I'm seeing the same thing in my sphere: everyone has been bailing on Anthropic over the past few weeks. I figure that's why we're seeing more posts like this.
I hope they're paying attention.
> I easily get $1K+ of usage out of my $100 max sub. And that's with Opus 4.6 on high thinking.
And people keep claiming the token providers are running inference at a profit.
>And people keep claiming the token providers are running inference at a profit.
Not everyone gets $1K of usage, and you don't know how fat the per-token margins are. It's like saying the local buffet place is losing money because you eat $100 worth of food for $30.
> Not everyone gets $1K of usage, and you don't know how fat the per-token margins are.
Well, we're going to find out sooner rather than later. Right now you don't know how thin (or negative) the margins are, either, after all.
All we know for certain is how much VC cash they got. Revenue, spend, profit, etc., calculated according to GAAP, are still a secret.
In addition to the usage-distribution aspects others called out: $1K is not actual cost, just API pricing compared to subscription pricing. It's quite possible the API has large operating margins, and that it costs, say, only $100 to deliver $1K worth of API credits.
Yes, and when we say things like that, we're not talking about plans. "Running inference at a profit" means API token usage is run profitably. What's happening at the plan level is a huge unknown; we know there's a subsidy happening, but in aggregate it's impossible to know whether it's profitable or not.
The model developers across the board maintain that most/all models are profitable by end of life, and that the losses come from R&D/training.
Out of curiosity, how many tokens are people using? I checked my OpenRouter activity: I used about 560 million tokens in the last month, 320M with Gemini and 240M with Opus. This cost me $600 in the past 30 days: $200 on Gemini, $400 on Opus.
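Those figures imply the following blended rates. This is just a sanity check on the quoted numbers; the input/output token split isn't given, so these are rough averages over rounded monthly totals:

```python
# Blended cost per million tokens implied by the figures above
# (rounded monthly totals; input vs. output token mix is unknown).
gemini_tokens_m = 320   # millions of tokens in the last month
opus_tokens_m = 240
gemini_cost_usd = 200
opus_cost_usd = 400

gemini_rate = gemini_cost_usd / gemini_tokens_m   # USD per 1M tokens
opus_rate = opus_cost_usd / opus_tokens_m

print(f"Gemini: ${gemini_rate:.3f}/1M tokens")   # ~$0.625
print(f"Opus:   ${opus_rate:.3f}/1M tokens")     # ~$1.667
print(f"Total:  {gemini_tokens_m + opus_tokens_m}M tokens for "
      f"${gemini_cost_usd + opus_cost_usd}")
```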
Some of the newer models available on OpenRouter are good, but I agree that none of them are a replacement for Opus 4.6 for coding.
If you're trying to minimize cost, then having one of the inexpensive models do exploratory work and simple tasks, while going back to Opus for the serious thinking and review, is a good hybrid approach. Keeping the $20/month Claude plan available is a good idea even if you're primarily using OpenRouter-available models.
I think trying to use anything other than the best available SOTA model for important work is not a good tradeoff, though.
I've been thinking of doing this — using one of the "pretty good but not Opus 4.6-good, YET very cheap" models for the implementation part of more basic code features, AFTER first using Opus 4.6 high for the planning stage.
Do you think this would be a decent approach?
Also, which client would I use for this? OpenCode? I don't think Claude Code supports using other models. Thoughts?
I have been doing this and the results have been fairly good.
I use Claude to build requirements.md -> implementation.md -> todo.md. Then I tell opencode + OpenRouter to read those files and work through the todo list using a cheap (often free) model.
It works 90% of the time. The other 10% it gets stuck, in which case I fall back to Claude.
That has let me stay on the $20/month Claude subscription instead of the $100 one.
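The plan-then-implement split above can be sketched as a small script. This is only a sketch under assumptions: the `plan/` layout and file contents are stand-ins, and the commented-out `opencode run --model ...` invocation (including the GLM model slug) is illustrative rather than opencode's confirmed CLI surface, so check `opencode --help` for your installed version.

```shell
#!/bin/sh
# Stage 1 (normally done in Claude Code with Opus): the planning artifacts.
# Stand-in contents here so the script is self-contained:
mkdir -p plan
printf 'Build a CLI todo app.\n'                        > plan/requirements.md
printf 'Single Python file using argparse.\n'           > plan/implementation.md
printf '1. parse args\n2. store todos in a JSON file\n' > plan/todo.md

# Stage 2: assemble one prompt from the artifacts for the cheap model.
prompt="Read the plan below and complete each todo item in order.

$(cat plan/requirements.md plan/implementation.md plan/todo.md)"
printf '%s\n' "$prompt"

# Hand-off to opencode + OpenRouter (hypothetical flags and model slug;
# verify against your installed version):
#   opencode run --model openrouter/z-ai/glm-4.6 "$prompt"
```

The point of concatenating the files into one prompt is that the cheap model never has to plan; it only executes an already-ordered todo list, which is the 90%-success regime described above.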
Yeah, I just created an Anthropic API key to experiment with pi, and managed to spend $1 in about 30 minutes doing some basic work with Sonnet.
Extrapolating that out, the subscription pricing is HEAVILY subsidized. For similar work in Claude Code, I use a Pro plan for $20/month, and rarely bang up against the limits.
And it scales up - the $200 plan gets you something like 20x what the Pro plan gets you. I've never come close to hitting that limit.
It's obviously capital-subsidized and so I have zero expectation of that lasting, but it's pretty anti-competitive to Cursor and others that rely on API keys.
Ignoring training costs, the marginal cost of inference is pretty low for providers. They are estimated to break even or better on their $20/month subscriptions.
That said, they can't stop launching new models, so training is not a one-time expense. One might therefore argue that it is part of the marginal cost.
I ran ccusage on my work Max account, and I consume what would cost $300 a week if it were billed at API rates.
According to the meter, I used $15k worth of tokens on my Max plan (along with $5k of Codex tokens) in the last 30 days. That built, among other things, an entire working and (lightly) optimized language toolchain: parser, compiler, runtime.
Not everyone is just vibe-coding everything and relying on agents running SOTA models for every task, though.