Comment by dfabulich
1 day ago
For folks not following the drama: Anthropic's $200/month subscription for Claude Code is much cheaper than Anthropic's pay-as-you-go API. In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API.
Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI. They want OpenCode users to pay API prices, which could be 5x or more.
So, of course, OpenCode has implemented a workaround, so that folks paying "only" $200/month can use their preferred OpenCode CLI at Anthropic's all-you-can-eat token buffet.
https://github.com/anomalyco/opencode/issues/7410#issuecomme...
Everything about this is ridiculous, and it's all Anthropic's fault. Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.
More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)
"Should have" for what reason? I would be happy if they open sourced Claude Code, but the reality is that Claude Code is what makes Anthropic so relevant in the programming more, much more than the Claude models themselves. Asking them to give it away for free to their competitors seems a bit much.
Well, OpenCode already exists and you can connect it to multiple providers, so you could just say that the business model of the agentic CLI harness as a service/billable feature is no more. In hindsight I would say it never made sense in the first place.
Branding and customer relationships matter as much or more than the "billable service" part of Claude Code.
It's not unheard of for companies that have strong customer mindshare to find themselves intermediated by competitors or other products to the point that they just become part of the infrastructure and eventually lose that mindshare.
I doubt Anthropic wants to become a swappable backend for the actual thing that developers reach for to do their work (the CLI tool).
Don't get me wrong, I think developers should 100% have the choice of tooling they want to use.
But from a business standpoint I think maintaining that direct or first-party connection to the developer is what Anthropic are trying to protect here.
When I compared OpenCode and Claude Code head to head a couple of months ago, Claude Code worked much better for me. I don't know if they closed the gap in the meantime, but for sure Claude Code has improved since then.
4 replies →
Disagree, this is like Terraform for HashiCorp. Give the cow away for free and no one will want to buy the milk. Claude Code is a golden cow they should not give away.
The above does not prove that it is irrational for Anthropic to keep the Claude Code source code closed. There are many reasons I can see (and probably some I can't) for why closed source is advantageous for Anthropic. One such reason (mentioned in various places) is the value-add of certain kinds of analytics and/or telemetry.
Aside: it is pretty easy to let our appreciation* of OSS turn into a kind of confirmation bias about its value to other people/orgs.
* I can understand why people promote OSS from various POVs: ethics, security, end user control, ecosystem innovation, sheer generosity, promotion of goodwill, expressions of creativity, helping others, the love of building things, and much more. I value all of these things. But I’m wary of reasoning and philosophies that offer merely binary judgments, especially ones that try to claim what is best for another party. That's really hard to know so we do well to be humble about our claims.**
**: Finally, being humble about what one knows does not mean being "timid" with your logic or reasoning. Just be sure to state it as clearly as you can by mentioning your premises and values.
Except that the cost is better with their harness, and it looks like people don't want to fork out 5x the price.
Adoption is how one wins. Look at all the crappy solutions out there that are still around.
Nah, I think Opus is fantastic but not Claude Code. Their models are way better.
Claude Code is nothing more than a loop around Opus.
I use Q (aka kiro-cli) at work with Opus and it's clearly inferior to CC within the first 30 seconds or so of usage. So no, not quite.
2 replies →
> the reality is that Claude Code is what makes Anthropic so relevant in the programming world, much more than the Claude models themselves
but Claude Code cannot run without Claude models? What do you mean?
Relative to their competitors, who also have comparable models, Anthropic's well-thought-out and coherent design choices for effectively managing context make them stand out.
5 replies →
https://github.com/musistudio/claude-code-router
Yeah, I've heard of people swapping out the model that Claude Code calls, and apparently it's not THAT much of a difference. What I'd love to see from Anthropic instead is smaller LLM models. I don't even care if they're "open source" or not; just pull down a model that takes maybe 4 or 6 GB of VRAM onto my local box and use it for the coding agents, since you can direct and guide it with Opus anyway. Why not cut down on costs for everyone (consumer and Anthropic themselves!) by letting users who can run some of the compute locally do so? I've got about 16GB of VRAM I can juice out of my MacBook Pro, and I'm okay running a few smaller models locally with the guiding hand of Opus or Sonnet for less compute on the API front.
So, like, why don’t people just use the better-than-Claude OpenCode CLI with these other just-as-good-as-Claude models?
Anthropic might have good models, but they are the worst. I mentioned in another thread how they do whatever they can to bypass bot detection protections to scrape content.
not sure there are any models yet that can give you the quality you need for this while running on your MBP
What part of a TOS is ridiculous? Claude Code is obviously a loss leader to them, but developer momentum / market share is important to them and they consider it worth it.
What part of “OpenCode broke the TOS of something well defined” makes you think it’s all Anthropic’s fault?
It's probably not a "loss-leader" so much as "somewhat lower margin". Their bizdev guys are doubtless happy to make a switch between lower-margin, higher-multiple recurring revenue versus higher-margin, lower-multiple pay-as-you-go API billing. Corporate customers with contracts doubtless aren't paying like that for the API either. This is not uncommon.
When you have a "loss leader" whose sole purpose is to build up market share (e.g. put competitors out of business) that's called predatory pricing.
Every loss leader's purpose is to build up market share.
My guess is that ultimately the use of Claude code will provide the training data to make most of what you do now in Claude code irrelevant.
My guess is that ultimately the use of Claude code will provide the training data to make most of what you do irrelevant.
FTFY.
I keep hearing claims that they lose money on it, but I have more and more doubts about this being true.
GPU compute costs have fallen a lot in the last two years.
Do you think Anthropic followed all the ToS of every website on the internet when scraping them for training data?
You justify a wrong thing by attacking something else? Is that the only argument?
1 reply →
Poor behavior is still poor behavior even if the relevant ToS aligns with it.
Why is it poor behavior though?
1 reply →
I guess one issue is that you pay $200/month whether you use it or not. Potentially this could be better for Anthropic. What was not necessarily foreseeable (ok maybe it was) back when that started was that users have invented all kinds of ways to supervise their agents to be as efficient as possible. If they control the client, you can't do that.
I can easily get Claude Code to run for 8-10 hours unsupervised without stopping with sub-agents entirely within Claude Code.
I think it is more likely that if you stick with Claude Code, then you are more likely to stick with Opus/Sonnet, whereas if you use a third party CLI you might be more likely to mix and match or switch away entirely. It's in their interest to get you invested in their tooling.
> if you use a third party CLI you might be more likely to mix and match or switch away entirely.
I really like doing this, be it with OpenCode or Copilot or Cline/RooCode/KiloCode: I do have a Cerebras Code subscription (50 USD a month for a lot of tokens but only an okayish model) whereas the rest I use by paying per-token.
Monthly spend ends up being somewhere between 100-150 USD total, obviously depending on what I do and the proportion of simple vs complex tasks.
If Sonnet isn’t great for a given task, I can go for GPT-5 or Gemini 3.
1 reply →
I've yet to come up with a workflow where I would want Claude to do this much work... unless I had an extremely detailed spec defined for it. How do you ensure it doesn't go off the rails?
1 reply →
On the flip side I started using Claude with other LLMs (openai) because my Pro sub gets maxed out quickly and I want a cheaper alternative to finish a project.
I just use claude-code-proxy or LiteLLM, set ANTHROPIC_BASE_URL to my proxy, and choose another LLM.
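A minimal sketch of that kind of setup, not the commenter's exact config: it assumes the claude CLI is on your PATH and that a LiteLLM (or similar) proxy is already running locally; the port 4000 and the proxy choice are placeholders.

    # Point Claude Code at a local proxy instead of Anthropic's API by overriding
    # ANTHROPIC_BASE_URL before launching the CLI (hypothetical local setup).
    import os
    import subprocess

    env = os.environ.copy()
    env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"  # proxy forwards requests to whichever LLM it's configured for

    # Launch the claude CLI with the overridden base URL.
    subprocess.run(["claude"], env=env)

The same thing can of course be done by exporting the variable in your shell before running the CLI.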
1 reply →
Multi-model is the way of the future though, as much as I like and prefer Anthropic.
> I guess one issue is that you pay $200/month whether you use it or not.
I can easily churn through $100 in an 8 hour work day with API billing. $200/month seems like an incredibly good deal, even if they apply some throttling.
Why is supervising one's agents to be as efficient as possible a problem for Anthropic?
When people say efficient here, they mean cost efficient, extracting as much work per dollar from Anthropic as possible. This is the opposite of Anthropic’s view of efficiency, which would be providing the minimal amount of service for the most amount of money.
More inference = more cost (to Anthropic)
What kind of ways to supervise?
ralph-loop & co
To extend your all-you-can-eat analogy: it's similar to how all-you-can-eat restaurants allow you to eat all you can within the bounds of the restaurant, but you aren't allowed to take the food out with you.
Another analogy is that it's takeout, but Anthropic is insisting you only eat at home with the plastic utensils they've provided rather than the nice metal utensils you already own.
Another analogy is that it’s a restaurant that offers delivery and they’re insisting you use their own in house delivery service instead of placing a pickup order and asking your friendly neighbor to pick it up for you on their way back from the office.
The all you can eat buffet analogy makes way more sense to me, because it speaks to the aspect of it where the customer can take a lot of something without restriction. That's the critical thing with the Anthropic subscription, and the takeout analogy or delivery service don't contain any element of it.
It's not really a fair analogy. Restaurants don't want you taking food away because they want to limit the amount you eat to a single meal, knowing that you'll stop when you get full. If you take food out you can eat more by waiting until the next meal when you're hungry again.
You don't "get full" and "get hungry again" by switching UIs. You can consume the same amount whether you switch or you don't switch.
> You don't "get full" and "get hungry again" by switching UIs. You can consume the same amount whether you switch or you don't switch.
This is actually a compelling argument for Claude Code getting the discount but not extending it to other cases. Claude Code, being subsidized by the company, is incentivized to minimize token usage. Third parties that piggyback on the same flat-rate subscription are not. I.e., Claude Code wants you to eat less.
Of course, I don’t believe at all that this is why Anthropic has blocked this use case. But it is a reasonable argument.
Claude Code does a lot of work in optimizing context usage, how much output is included by tools and how that's done, and when to compact. This very well may make the cost of providing the subscription lower to Anthropic when Claude Code is used. It's well within the realm of possibility if not likelihood that other tools don't have the same incentive to optimize the buffet usage.
Not sure where that goes in the analogies here but maybe something about smaller plates.
The UI absolutely could influence the backend usage.
Think about a web browser that respects cache lifetimes vs one that downloads everything everytime. As an ISP I'd be more likely to offer you unlimited bandwidth if I knew you were using a caching browser.
Likewise Claude code can optimize how it uses tokens and potentially provide the same benefit with less usage.
Not really. At a buffet restaurant, if you could take the food out with you, you'd take away more food than you can eat at one sitting. OpenCode users and Claude Code™ CLI users use tokens at approximately the same rate.
This is more like an all-you-can-eat restaurant requiring you to eat with their flimsy plastic forks, forbidding you to bring your own utensils.
Claude Code does a lot to optimize context usage, tool output, sub-agent interactions, context compaction, and things like that. I don't imagine OpenCode has the same financial incentive to decrease the token cost Anthropic takes on under the subscriptions.
Yes, with the whole goal of making the utensils better.
Why is this being downvoted? This is the perfect analogy.
...no, that's more like "but you can't bring your own fork"
anthropic should not be criticizing the gluttony of others whilst licking its fingers surrounded by buckets full of fried chicken
Aren't you happy that you can use Claude Code unlimited for only $200/month? I don't really get your point tbh.
I’d bet almost everyone who opts to buy the $200 plan is happy with the deal they’re getting relative to API pricing.
I think some people get triggered by the inconsistency in pricing or the idea of having a fixed cost for somewhat vague usage limits.
In practice it’s a great deal for anyone using the model at that level.
Even at the Max plan it's not unlimited. IIRC they say the limit is something like 20x the $20 plan.
With "normal use" you're unlikely to hit the limit though, unless you only use the Opus model and do things in parallell.
It is not unlimited; if you're not careful with your context management, you hit the limits quickly.
Isn't the context window the same for all plans, 200k? You would hit usage limits?
1 reply →
> Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI.
Because they are harvesting all the data they can through the CLI to train further models. API access, in contrast, provides much more limited data.
As far as I know, OpenCode sends (has to send) the same data to Anthropic as Claude Code™ CLI (especially if they're going to successfully imitate CC™ in order to access cheap subscription pricing).
There are additional signals that a client can send as telemetry that they lose if you use a 3rd party app. Things like accepted vs rejected sessions and so on.
But I doubt you can opt in to them training on that data coming in via OpenCode.
2 replies →
Claude Code only trains on data if you opt in
They've recently switched to opt-out instead. And even then, if you read the legalese they say "train frontier models". That would (probably) allow them to train a reward model or otherwise test/validate on your data / signals without breaking the agreement. There's a lot of signal in how you use something (e.g. accepted vs. rejected rate) that they can use without strictly including it in the dataset for training their LLMs.
They switched to opt out, with some extra dark patterns to convert people who already opted out into opting in.
2 replies →
I used to be less cynical, but I could see them not honoring that, legal or not. The real answer, regardless of how you feel about that conversation, is that Claude Code, not any model, is the product.
6 replies →
That is not true, though. You have to opt in for them to train on your data
The Claude Code CLI and the API-vs-subscription question are tangential. You can use Claude Code with an API token.
> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off
I have the "all-you-can-eat" plan _because_ I know what I'm getting and how much it'll cost me.
I don't see anything wrong with this. It's just a big time-limited amount of tokens you can use. Of course it sucks that it's limited to Claude Code and Claude.ai. But the other providers have very similar subscriptions. Even the original ChatGPT pro subscription gives you a lot more tokens for the $20 it costs compared to the API cost.
I always assumed tokens over the API cost that much, because that's just what people are willing to pay. And what people are willing to pay for small pay-as-you-go tasks vs large-scale agentic coding just doesn't line up.
And then there's the psychological factor: if Claude messed up and wasted a bunch of tokens, I'm going to be super pissed that those specific tokens will have cost me $30. But when it's just a little blip on my usage limit, I don't really mind.
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)
Isn't the whole thesis behind LLM coding that you can easily clone the CLI using an LLM? Otherwise what are you paying $200/mo for?
In some sense that's what OpenCode is, and Anthropic's not having that.
This whole thing just seems like a "I never thought the leopards would eat my face" by all the people who have been shilling LLMs non-stop.
1 reply →
It's hard to understand what Anthropic are getting from forcing more people to use Claude Code vs any other tools via the API. Why do they care? Do they somehow get better analytics or do they dream that there's a magical lock-in effect... from a buggy CLI?
I suspect that they lose control over the cheaper models CC can choose for you, e.g. for file summaries or web fetch. Indeed, they lose web fetch, and whatever telemetry it gives them, completely.
It's not unreasonable to assume that without the ability to push Haiku aggressively for summarization, the average user costs more on OC than on CC.
This is a very good point. If the 3rd-party tools are using Opus for compacting/summarizing, that would increase inference costs for Anthropic.
Not that hard to understand, they want to control how their users use their product. A CLI they built, even acquiring the framework it was built in, is a way to achieve that.
But to make a "gauche" analogy, that would be like Microsoft not letting you browse their websites without using their browser (Edge).
2 replies →
It's because the model companies believe there's no way to survive just selling a model via an API. That is becoming a low margin, undifferentiated commodity business that can't yield the large revenue streams they need to justify the investments. The differences between models just aren't large enough and the practice of giving model weights away for free (dumping) is killing that business.
So they all want to be product companies. OpenAI is able to keep raising crazy amounts of capital because they're a product company and the API is a sideshow. Anthropic got squeezed because Altman launched ChatGPT first for free and immediately claimed the entire market, meaning Anthropic became an undifferentiated Bing-like also-ran until the moment they launched Claude Code and had something unique. For consumer use Claude still languishes but when it comes to coding and the enormous consumption programmers rack up, OpenAI is the one cloning Claude Code rather than the other way around.
For Claude Code to be worth anything to Anthropic's investors it must be a product and not just an API pricing tier. If it's a product they have so many more options. They can e.g. include ads, charge for corporate SSO integrations, charge extra for more features, add social features... I'm sure they have a thousand ideas, all of which require controlling the user interface and product surface.
That's the entire reason they're willing to engage in their own market dumping by underpricing tokens when consumed via their CLI/web tooling: build up product loyalty that can then be leveraged into further revenue streams beyond paying for tokens. That strategy doesn't work if anyone can just emulate the Claude Code CLI at the wire level. It'd mean Anthropic buys market share for their own competitors.
N.B. this kind of situation is super common in the tech industry. If you've ever looked at Google's properties you'll discover they're all locked behind Javascript challenges that verify you're using a real web browser. The features and pricing of the APIs is usually very different to what consumers can access via their web browser and technical tricks are used to segment that market. That's why SERP scraping is a SaaS (it's far too hard to do directly yourself at scale, has to be outsourced now), and why Google is suing them for bypassing "SearchGuard", which appears to just be BotGuard rebranded. I designed the first version of BotGuard and the reason they use it on every surface now, and not just for antispam, is because businesses require the ability to segment API traffic that might be generated by competitors from end user/human traffic generated by their own products.
If Anthropic want to continue with this strategy they'll need to do the same thing. They'll need to build what is effectively an anti-abuse team similar to the BotGuard team at Google or the VAC team at Valve, people specialized in client integrity techniques and who have experience in detecting emulators over the network.
DAU/MAU for IPO.
> let's sell a loss leader
> oh no, people are actually buying the loss leader
I'm looking forward to the upcoming reckoning when all these AI companies start actually charging users what the services cost.
I see zero reason to believe the $200 subscription is losing money. Anthropic makes subscriptions cheaper because: 1. most users don't use all their allocated tokens; 2. subscriptions create a lock-in effect, even if it's a weak one; 3. it's easier to raise money when you can point to your ARR from subscriptions; 4. lowering revenue variance month to month is very valuable for businesses.
> They can and should just open source it now
Why do you have this idea? Why should they open source it now?
That's the entire reason I don't use Claude's models. I don't want to use Claude Code. I want to use their models, just not their crappy software.
Why should anthropic open source Claude Code CLI? I understand you and some others want it, maybe it would be better for the community, but is it what’s best for anthropic?
Why should subscribers get your specific discount rather than what anthropic has calculated the discount should be?
What can we learn from this?
The model is not a moat
They need to own the point of interaction to drive company valuation. Users care more about tool switching costs than about the particular model they use.
I believe there are a number of cli tools which also use Anthropic's Max plan (subscription) - this isn't just an OpenCode issue.
I had the TaskMaster AI tool hooked up to my Anthropic sub, as well as a couple of other things - Kilo Code and Roo Code iirc?
From discussions at the time (6 months ago) this "use your Anthropic sub" functionality was listed by at least one of the above projects as "thanks to the functionality of the Anthropic SDK you can now use your sub...." implying it was officially sanctioned rather than via a "workaround".
I agree with the principle, but reality dictates that users and exposure is the real currency. So while annoying it is understandable that Anthropic subsidizes their own direct users.
They are subsidizing Claude code so they can use your data to train better coding models. You’re paying them to show their models how to code better.
If true I wonder what kind of feedback loop is happening by training on human behavior that's directly influenced by the output of the same model
We build our fine-tuning and reinforcement pipeline at cortex.build by synthesizing interactions between a user, the agent loops, and a codebase: the exact data they get from users in Claude Code.
That data is critical to improve tool call use (both in correctness but also to improve when the agent chooses to use that tool). It's also important for the context rewrites Claude does. They rewrite your prompt and continuously manage the back-and-forth with the model. So does Cortex, just more aggressively with a more powerful context graph.
I tend to think that their margins on API pricing are significantly higher. They likely gave up some of that margin to grow the Claude Code user base, though it probably still runs at a thin profit. Businesses are simply better customers than individuals and are willing to pay much more.
I guess we will find out in the updated terms very soon.
Sorry, Claude Code is $200/mo? I'm not using it now, but was thinking about giving it a try. The website shows $200/year for Pro:
“$17 Per month with annual subscription discount ($200 billed up front). $20 if billed monthly.”
https://claude.com/pricing
What are you referring to that’s 10x that price? (Conversely, I’m wondering why Pro costs 1/10 the value of whatever you’re referring to?!?)
They don't exactly put it front and center. Click "usage limits" (at the bottom; https://support.claude.com/en/articles/9797557-usage-limit-b...) then "Max plan" in the first list (https://support.claude.com/en/articles/11014257-about-claude...). There is a $200/mo price which people are likely referring to with "20x more usage per session" (which kinda bothers me because I'd bet my bottom dollar it's 20x "as much" but that's a lost cause).
I got a pro subscription yesterday. With it you get a certain amount of tokens and you have a certain limit every 5 hours and every week.
Once the limit is reached, you can choose to pay-per-token, upgrade your plan, or just wait until it refreshes. The more expensive subscription variants just contain more tokens, that’s all.
Keep scrolling down, there is a Max option
By this logic ChatGPT shouldn't exist either and should be charged by API pricing
> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.
Sorry, I don't understand this. Either you're saying
A) Everyone paying $200/mo should now pay $800/mo to match this 20% off figure you're theorizing... or B) Maybe you're implying that the $1,000+ costs are too high and they should be lowered, to like, what, $250/mo? (250 - 20% = $200)
Which confuses me, because neither option is feasible or ever gonna happen.
Not the OP, but it seems pretty clear to me - they're suggesting that fixed per-month pricing with unlimited usage shouldn't exist at all, as it doesn't really make sense for a product that has per-token costs.
Instead, they're saying that a $200/month subscription should pay for something like $250 worth of pay-per-token API usage, and additionally give preferential pricing for using more tokens than that.
So, if the normal API pricing were $10 per million tokens, a $200 per month subscription should include 25M tokens and allow you to use more at a $9/1M token rate. This way, if you used 50M tokens in a month, you'd pay $425 with a subscription versus $500 with pay-as-you-go. That's still a good discount, but it doesn't create perverse incentives the way the current model does.
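To make the numbers concrete, here's a tiny sketch of that tiered-pricing idea; all the rates ($10/1M API, 25M tokens included, $9/1M overage) are the hypothetical figures from the comment above, not Anthropic's actual pricing.

    # Hypothetical tiered pricing: $200/month includes 25M tokens, overage billed
    # at a discounted $9 per 1M tokens vs. a $10 per 1M pay-as-you-go API rate.
    def monthly_cost(tokens_millions: float) -> tuple[float, float]:
        api_rate, sub_rate = 10.0, 9.0      # $ per 1M tokens
        included, sub_fee = 25.0, 200.0     # M tokens included, flat monthly fee
        pay_as_you_go = tokens_millions * api_rate
        overage = max(0.0, tokens_millions - included)
        subscription = sub_fee + overage * sub_rate
        return pay_as_you_go, subscription

    for m in (10, 25, 50, 100):
        api, sub = monthly_cost(m)
        print(f"{m}M tokens: API ${api:.0f} vs subscription ${sub:.0f}")

Run it and the subscriber's discount stays roughly proportional to usage instead of flattening out at $200, which is the incentive structure being argued for.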
Anthropic badly want you to use the Claude Code CLI and are prepared to be very generous if you do. People want to take that generosity without the reciprocity.
I don't normally like to come down on the side of the megabigcorp but in this case anthropic aren't being evil. Not yet anyway.
I think they are.
The key question is about why they want to you to use the CLI. If you're not the customer, you're the product.
There's also a monopolistic aspect to this. Having the best model isn't something one can legally exploit to gain advantage in adjacent markets.
It reeks of "Windows isn't done until Lotus won't run," Windows showing spurious error messages for DR-DOS, and Borland C++ losing to the then-inferior Visual C++ due to late support of new Windows features. And Internet Explorer bundling versus Netscape.
Yes, Microsoft badly wanted you to use Office, Visual C++, MS-DOS, and IE, but using Windows to get that was illegal.
Microsoft lost in court, paid a nominal fine, and executives were crying all the way to the bank.
If you're not the customer, you're the product.
You are the customer, you're paying them directly.
1 reply →
Well they are doing the same to website owners who rely on human visitors for their revenue streams.
Both scraping and on-demand agent-driven interactions erode that. So you could look at people doing the same to them as a sort of poetic justice, from a purely moral standpoint at least.
Assuming the actual cost for many users is closer to 1k USD/month than to 200 USD/month, and that the higher figure is closer to the margin they need to be a viable business, they're practically subsidising usage beyond 200 USD/month. Together with other AI tech companies doing the same, they fabricate a false sense of "AI is capable AND affordable", which imo is evil.
There is nothing evil about prioritizing customer acquisition over immediate profit.
5 replies →
It’s all about the data.
They want you to use their tool so they can collect data.
>But they really want you to use the Claude Code™
They definitely want their absolutely proprietary software with sudo privilege on your machine. I wonder why they would want that geeez
it doesn't need sudo privileges...
Yeah, it is not necessary. Being able to run unprivileged commands on millions of machines is already a stupidly powerful thing
I wonder how this pricing compares to running Claude over Bedrock.
The price is the same whether you run it on Bedrock or Anthropic....
Anthropic and all the AI companies are playing chicken with each other. You need to win user base, and that is worth losing money for, but selling discount tokens for Loveable clones to profit from is not in your interest.
Anthropic is further complicated by its mission.
It is and was open source from the start?
https://github.com/anthropics/claude-code
Unless I'm severely mistaken that's not the source code for claude-code. It's a few official plugins and some helper scripts
Look a bit closer to the contents of this repo. There is basically no code.
> Why is Anthropic offering such favorable pricing to subscribers?
Most subscribers don't use up all their allocated tokens. They're banning these third parties because they consistently do use all their allocated tokens.
My problem with CC is that it tries to be very creative. I ask it to fix some test, or create a new test. What does it do? It runs grep to find all tests in the code base and parses them. This eats a lot of tokens.
Then it runs the test, as if I could not do this myself, reads the output, which is sometimes very long (so more and more tokens are burned), and so on.
If people had to pay API prices for this cleverness and creativity, they would be a bit shocked and would give up on CC quickly.
Using Aider with Claude Sonnet I burn far fewer tokens than CC does.
> Why is Anthropic offering such favorable pricing to subscribers? I dunno
I do, it's called vendor lock-in. The product they're trying to sell is not the $200 subscription, it's the entire Claude Code ecosystem.
For the average person, the words "AI" and "ChatGPT" are synonyms. OpenAI's competitors have long conceded this loss, and for the most part, they're not even trying to compete, because it's clear to everyone that there is no clear path to monetization in this market - the average joe isn't going to pay for a $100/mo subscription to ask a chatbot to do their homework or write a chocolate cake recipe, so good luck making money there.
The programming market is an entirely different story, though. It's clear that corporations are willing to pay decent money to replace human programmers with a service that does their work in a fraction of the time (and even the programmers themselves are willing to pay independently to do less work, even if it will ultimately make them obsolete), and they don't care enough about quality for that to be an issue. So everyone is currently racing to capture this potentially profitable market, and Claude Code is Anthropic's take on this.
Simply selling the subscription on its own without any lock-in isn't the goal, because it's clearly not profitable, nor is it currently meant to be, it's a loss leader. The actual goal is to get people invested long-term in the Claude Code ecosystem as a whole, so that when the financial reality catches up to the hype and prices have to go up 5x to start making real money, those people feel compelled to keep paying, instead of seeking out cheaper alternatives, or simply giving up on the whole idea. This is why using the subscription as an API for other apps isn't allowed, why Claude Code is closed source, why it doesn't support third party OpenAI-compatible APIs, and why it reads a file called CLAUDE.md instead of something more generic.
Claude Code is unusually efficient in its use of tokens, on top of it all.
How is it different from what OpenAI and Codex, and Gemini offer?
I'm baffled that people, unknown to me, have apparently been considering Claude Code, the program, some kind of "secret sauce". It's a tool harness. Claude could one-shot write it for you, lol.
I guess it's another case of:
- the effective monetizability of a lot of AI products seems questionable
- so AI costs are strongly subsidized in all kinds of ways
- which is causing all kinds of strange dynamics and is very much incompatible with "free market self regulation" (hence why a company running long term on investor money _and_ under-pricing any competition which isn't subsidized is theoretically not legal in the US. Not that the US seems to care to actually run a functioning self-regulating free market, even going back as far as Amazon. Turns out moving from "state subsidized" to "subsidized by the rich" somehow makes it no longer problematic / anti-free-market / non-liberal ... /s)
[flagged]
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)
I assume they're embarrassed by it. Didn't one of their devs recently say it's 100% vibe coded?
What an incredibly entitled message. If you know what anthropic should and shouldn't do, go start your own AI company.
Right? Lots of "Anthropic should do x because that's what makes sense to me".
What's ridiculous is that the subscription at 180€/month (excl. VAT) is already absurdly expensive for what you get. I doubt many would sign up for the per-API usage as it's just not sustainable pricing (as a user).
For the bizarre amount of work that gets done for that 180 euro, it is really cheap. We are just getting used to it and to prices sinking everywhere; it is just that CC is the best (might be taste or bias, but I at least think so), so we are staying with it for now. If it gets more expensive, we will go and try others for production instead of just trying them to get a feel for the competition as we do now.
This take is ridiculous. Nearly everyone who uses Max agrees that what they get for the money paid is an amazing deal. If you don't use or understand how LLMs fit in your workflows, you are not the target customer. But for people who use it daily, it is a relatively small investment compared to the time saved.
> If you don't use or understand how LLMs fit in your workflows, you are not the target customer.
I feel like this is a major area of divergence. The "vibes" are bifurcating between "coding agents are great!" and "coding agents are all hype!", with increasing levels of in-group communication.
How should I, an agent-curious user, begin to unravel this mess if $200 is significantly more than pocket change? The pro-agent camp remarks that these frontier models are qualitatively better and using older/cheaper approaches would give a misleading impression, so "buy the discount agent" doesn't even seem like a reasonable starting point.
2 replies →
That entirely depends on your business case. If a call costing 50 cents has done something for me that would have taken me more than 1 minute of paid working time, it's sustainable.
It pays for itself in a day for some folks. It is a lot but it’s still cheap.
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)
It's not as if it houses some top-secret AI models inside it; open sourcing it would make way more sense and would probably expand the capabilities of Claude Code itself. Would they lose out by having OpenAI or other competitors basically steal their approach?
Update: Touché. The repo is just plugins and skills, not the meat.
In any case, another workaround would be using ACP, which is supported by Zed. It lets editing tools access the power of CLI agents directly.
———
> Anthropic should have open sourced their Claude Code CLI a year ago
https://github.com/anthropics/claude-code
It has been open source for a while now. Probably 4-6 months.
> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.
That's a very odd thing to wish for. I love my subscriptions and wouldn't have it any other way.
If you're going to link a repository, you should read it first. That repository is just a couple plugins and community links. Claude Code is, and always has been, completely closed source.
That repo does not contain the source code for Claude Code.
You can't use it as an SDK though, unlike codex
You can though?
https://platform.claude.com/docs/en/agent-sdk/overview
3 replies →