Anthropic blocks third-party use of Claude Code subscriptions

16 hours ago (github.com)

For folks not following the drama: Anthropic's $200/month subscription for Claude Code is much cheaper than Anthropic's pay-as-you-go API. In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API.

Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI. They want OpenCode users to pay API prices, which could be 5x or more.

So, of course, OpenCode has implemented a workaround, so that folks paying "only" $200/month can use their preferred OpenCode CLI at Anthropic's all-you-can-eat token buffet.

https://github.com/anomalyco/opencode/issues/7410#issuecomme...

Everything about this is ridiculous, and it's all Anthropic's fault. Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

  • > More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

    "Should have" for what reason? I would be happy if they open sourced Claude Code, but the reality is that Claude Code is what makes Anthropic so relevant in programming, much more than the Claude models themselves. Asking them to give it away for free to their competitors seems a bit much.

    • Well OpenCode already exists and you can connect it to multiple providers, so you could just say that the agentic CLI harness business model as a service/billable feature is no more. In hindsight I would say it never made sense in the first place.

      8 replies →

    • > the reality is that Claude Code is what makes Anthropic so relevant in the programming more, much more than the Claude models themselves

      but Claude Code cannot run without Claude models? What do you mean?

      6 replies →

    • Yeah, I've heard of people swapping out the model that Claude Code calls, and apparently it's not THAT much of a difference. What I'd love to see from Anthropic instead: give me smaller LLM models. I don't even care if they're "open source" or not; just let me pull down a model that takes maybe 4 or 6 GB of VRAM onto my local box and use it for the coding agents. You can direct and guide it with Opus anyway, so why not cut down on costs for everyone (consumers and Anthropic themselves!) by letting users run some of the compute locally? I've got about 16GB of VRAM I can juice out of my MacBook Pro; I'm okay running a few smaller models locally with the guiding hand of Opus or Sonnet for less compute on the API front.

      3 replies →

  • What part of a TOS is ridiculous? Claude Code is obviously a loss leader to them, but developer momentum / market share is important to them and they consider it worth it.

    What part of “OpenCode broke the TOS of something well defined” makes you think it’s all Anthropic’s fault?

    • It's probably not a "loss-leader" so much as "somewhat lower margin". Their bizdev guys are doubtless happy to make a switch between lower-margin, higher-multiple recurring revenue versus higher-margin, lower-multiple pay-as-you-go API billing. Corporate customers with contracts doubtless aren't paying like that for the API either. This is not uncommon.

    • My guess is that ultimately the use of Claude Code will provide the training data to make most of what you do now in Claude Code irrelevant.

    • When you have a "loss leader" whose sole purpose is to build up market share (e.g. put competitors out of business) that's called predatory pricing.

      1 reply →

  • I guess one issue is that you pay $200/month whether you use it or not. Potentially this could be better for Anthropic. What was not necessarily foreseeable (ok maybe it was) back when that started was that users have invented all kinds of ways to supervise their agents to be as efficient as possible. If they control the client, you can't do that.

    • I can easily get Claude Code to run for 8-10 hours unsupervised without stopping with sub-agents entirely within Claude Code.

      I think it is more likely that if you stick with Claude Code, then you are more likely to stick with Opus/Sonnet, whereas if you use a third party CLI you might be more likely to mix and match or switch away entirely. It's in their interest to get you invested in their tooling.

      5 replies →

    • > I guess one issue is that you pay $200/month whether you use it or not.

      I can easily churn through $100 in an 8 hour work day with API billing. $200/month seems like an incredibly good deal, even if they apply some throttling.
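
      As a rough sanity check on that burn rate (all prices below are hypothetical placeholders for illustration, not Anthropic's actual rates), agentic usage adds up fast because input context dwarfs output:

```python
# Hypothetical per-token prices, for illustration only (not real Anthropic rates).
INPUT_PER_M = 15.0   # dollars per 1M input tokens
OUTPUT_PER_M = 75.0  # dollars per 1M output tokens

def session_cost(input_m: float, output_m: float) -> float:
    """Dollar cost of a day's agentic session, given token counts in millions."""
    return input_m * INPUT_PER_M + output_m * OUTPUT_PER_M

# An agent that reads 5M tokens of context and emits 0.35M tokens of code
# lands right around the $100/day figure mentioned above:
print(round(session_cost(5.0, 0.35), 2))  # 101.25
```

      At those assumed rates, a single long agent session re-reading large files easily crosses $100/day, which is why a flat $200/month reads as a steep discount.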

  • To extend your all-you-can-eat analogy: it's similar to how all-you-can-eat restaurants allow you to eat all you can within the bounds of the restaurant, but you aren't allowed to take the food out with you.

    • Another analogy is that it’s a takeout but anthropic is insisting you only eat at home with the plastic utensils they’ve provided rather than the nice metal utensils you have at home.

      Another analogy is that it’s a restaurant that offers delivery and they’re insisting you use their own in house delivery service instead of placing a pickup order and asking your friendly neighbor to pick it up for you on their way back from the office.

      1 reply →

    • It's not really a fair analogy. Restaurants don't want you taking food away because they want to limit the amount you eat to a single meal, knowing that you'll stop when you get full. If you take food out you can eat more by waiting until the next meal when you're hungry again.

      You don't "get full" and "get hungry again" by switching UIs. You can consume the same amount whether you switch or you don't switch.

      3 replies →

    • Not really. At a buffet restaurant, if you could take the food out with you, you'd take away more food than you can eat at one sitting. OpenCode users and Claude Code™ CLI users use tokens at approximately the same rate.

      This is more like an all-you-can-eat restaurant requiring you to eat with their flimsy plastic forks, forbidding you to bring your own utensils.

      3 replies →

    • anthropic should not be criticizing the gluttony of others whilst licking its fingers surrounded by buckets full of fried chicken

  • Aren't you happy that you can use Claude Code unlimited for only $200/month? I don't really get your point, tbh.

    • I’d bet almost everyone who opts to buy the $200 plan is happy with the deal they’re getting relative to API pricing.

      I think some people get triggered by the inconsistency in pricing or the idea of having a fixed cost for somewhat vague usage limits.

      In practice it’s a great deal for anyone using the model at that level.

  • > Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI.

    Because they are harvesting all the data they can through the CLI to train further models. API access, in contrast, provides much more limited data.

  • > Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off

    I have the "all-you-can-eat" plan _because_ I know what I'm getting and how much it'll cost me.

    I don't see anything wrong with this. It's just a big time-limited amount of tokens you can use. Of course it sucks that it's limited to Claude Code and Claude.ai. But the other providers have very similar subscriptions. Even the original $20 ChatGPT subscription gives you a lot more tokens than the same money spent on the API.

    I always assumed tokens over the API cost that much, because that's just what people are willing to pay. And what people are willing to pay for small pay-as-you-go tasks vs large-scale agentic coding just doesn't line up.

    And then there's the psychological factor: if Claude messed up and wasted a bunch of tokens, I'm going to be super pissed that those specific tokens will have cost me $30. But when it's just a little blip on my usage limit, I don't really mind.

  • What can we learn from this?

    The model is not a moat

    They need to own the point of interaction to drive company valuation. Users care more about tool-switching costs than about the particular model they use.

  • > More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

    Isn't the whole thesis behind LLM coding that you can easily clone the CLI using an LLM? Otherwise what are you paying $200/mo for?

  • It's hard to understand what Anthropic are getting from forcing more people to use Claude Code vs any other tools via the API. Why do they care? Do they somehow get better analytics or do they dream that there's a magical lock-in effect... from a buggy CLI?

    • I suspect that they lose control over the cheaper models CC can choose for you, e.g. for file summaries or web fetch. Indeed, they lose web fetch, and whatever telemetry it gives them, completely.

      It's not unreasonable to assume that without the ability to push Haiku aggressively for summarization, the average user costs more in OC than in CC.

      1 reply →

    • Not that hard to understand, they want to control how their users use their product. A CLI they built, even acquiring the framework it was built in, is a way to achieve that.

      3 replies →

    • It's because the model companies believe there's no way to survive just selling a model via an API. That is becoming a low margin, undifferentiated commodity business that can't yield the large revenue streams they need to justify the investments. The differences between models just aren't large enough and the practice of giving model weights away for free (dumping) is killing that business.

      So they all want to be product companies. OpenAI is able to keep raising crazy amounts of capital because they're a product company and the API is a sideshow. Anthropic got squeezed because Altman launched ChatGPT first for free and immediately claimed the entire market, meaning Anthropic became an undifferentiated Bing-like also-ran until the moment they launched Claude Code and had something unique. For consumer use Claude still languishes but when it comes to coding and the enormous consumption programmers rack up, OpenAI is the one cloning Claude Code rather than the other way around.

      For Claude Code to be worth anything to Anthropic's investors it must be a product and not just an API pricing tier. If it's a product they have so many more options. They can e.g. include ads, charge for corporate SSO integrations, charge extra for more features, add social features... I'm sure they have a thousand ideas, all of which require controlling the user interface and product surface.

      That's the entire reason they're willing to engage in their own market dumping by underpricing tokens when consumed via their CLI/web tooling: build up product loyalty that can then be leveraged into further revenue streams beyond paying for tokens. That strategy doesn't work if anyone can just emulate the Claude Code CLI at the wire level. It'd mean Anthropic buys market share for their own competitors.

      N.B. this kind of situation is super common in the tech industry. If you've ever looked at Google's properties you'll discover they're all locked behind Javascript challenges that verify you're using a real web browser. The features and pricing of the APIs are usually very different to what consumers can access via their web browser, and technical tricks are used to segment that market. That's why SERP scraping is a SaaS (it's far too hard to do directly yourself at scale, has to be outsourced now), and why Google is suing them for bypassing "SearchGuard", which appears to just be BotGuard rebranded. I designed the first version of BotGuard, and the reason they use it on every surface now, and not just for antispam, is because businesses require the ability to segment API traffic that might be generated by competitors from end user/human traffic generated by their own products.

      If Anthropic want to continue with this strategy they'll need to do the same thing. They'll need to build what is effectively an anti-abuse team similar to the BotGuard team at Google or the VAC team at Valve, people specialized in client integrity techniques and who have experience in detecting emulators over the network.

  • > They can and should just open source it now

    Why do you have this idea? Why should they open source it now?

  • > More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

    It's not as if it houses some top-secret AI models inside of it. Open sourcing it would make way more sense, and probably expand the capabilities of Claude Code itself. Do they really lose out by having OpenAI or other competitors basically copy their approach?

  • Sorry, Claude Code is $200/mo? I’m not using it now, but was thinking about giving it a try. The website shows $200/year for Pro:

    “$17 Per month with annual subscription discount ($200 billed up front). $20 if billed monthly.”

    https://claude.com/pricing

    What are you referring to that’s 10x that price? (Conversely, I’m wondering why Pro costs 1/10 the value of whatever you’re referring to?!?)

  • I believe there are a number of cli tools which also use Anthropic's Max plan (subscription) - this isn't just an OpenCode issue.

    I had the TaskMaster AI tool hooked up to my Anthropic sub, as well as a couple of other things: Kilo Code and Roo Code, iirc?

    From discussions at the time (6 months ago) this "use your Anthropic sub" functionality was listed by at least one of the above projects as "thanks to the functionality of the Anthropic SDK you can now use your sub...." implying it was officially sanctioned rather than via a "workaround".

  • Anthropic badly want you to use the Claude Code CLI and are prepared to be very generous if you do. People want to take that generosity without the reciprocity.

    I don't normally like to come down on the side of the megabigcorp but in this case anthropic aren't being evil. Not yet anyway.

    • I think they are.

      The key question is why they want you to use the CLI. If you're not the customer, you're the product.

      There's also a monopolistic aspect to this. Having the best model isn't something one can legally exploit to gain advantage in adjacent markets.

      It reeks of "Windows isn't done until Lotus won't run," Windows showing spurious error messages for DR-DOS, and Borland C++ losing to the then-inferior Visual C++ due to late support of new Windows features. And Internet Explorer bundling versus Netscape.

      Yes, Microsoft badly wanted you to use Office, Visual C++, MS-DOS, and IE, but using Windows to get that was illegal.

      Microsoft lost in court, paid a nominal fine, and executives were crying all the way to the bank.

      1 reply →

    • Well they are doing the same to website owners who rely on human visitors for their revenue streams.

      Both scraping and on-demand agent-driven interactions erode that. So you could look at people doing the same to them as a sort of poetic justice, from a purely moral standpoint at least.

    • Assuming the actual price for many users is closer to 1k USD/mth than to 200 USD/mth, and that the actual price reflects the margin they need to be a viable business, they're practically subsidising usage beyond 200 USD/mth. Together with other AI companies doing the same, they fabricate a false sense of "AI is capable AND affordable", which imo is evil.

      4 replies →

  • They are subsidizing Claude Code so they can use your data to train better coding models. You’re paying them to show their models how to code better.

    • If true I wonder what kind of feedback loop is happening by training on human behavior that's directly influenced by the output of the same model

      1 reply →

  • I tend to think that their margins on API pricing are significantly higher. They likely gave up some of that margin to grow the Claude Code user base, though it probably still runs at a thin profit. Businesses are simply better customers than individuals and are willing to pay much more.

  • > More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

    Who cares? Just have Claude vibe code it in an afternoon...

  • >But they really want you to use the Claude Code™

    They definitely want their absolutely proprietary software with sudo privilege on your machine. I wonder why they would want that geeez

  • > Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

    Sorry, I don't understand this. Either you're saying

    A) Everyone paying $200/mo should now pay $800/mo to match this 20% off figure you're theorizing... or B) Maybe you're implying that the $1,000+ costs are too high and they should be lowered, to like, what, $250/mo? (250 - 20% = $200)

    Which confuses me, because neither option is feasible or ever gonna happen.

    • Not the OP, but it seems pretty clear to me: they're suggesting that fixed per-month pricing with unlimited usage shouldn't exist at all, as it doesn't really make sense for a product that has per-token costs.

      Instead, they're saying that a $200/month subscription should pay for something like $250 worth of pay-per-token API usage, and additionally give preferential pricing on tokens beyond that.

      So, if the normal API pricing were $10 per million tokens, a $200 per month subscription should include 25M tokens, and let you use more tokens at a $9/1M rate. This way, if you used 50M tokens in a month, you'd pay $425 with the subscription, versus $500 with pay-as-you-go. That's still a good discount, but it doesn't create the perverse incentives the current model does.
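
      The tiered scheme being proposed (using the same illustrative numbers: a hypothetical $10/1M base rate, 25M included tokens, and a $9/1M overage rate) can be sketched as:

```python
def monthly_cost(tokens_m: float,
                 sub_fee: float = 200.0,    # hypothetical flat monthly fee
                 included_m: float = 25.0,  # millions of tokens the fee covers
                 overage: float = 9.0,      # discounted $/1M past the allowance
                 api_rate: float = 10.0):   # hypothetical pay-as-you-go $/1M
    """Return (subscription cost, pay-as-you-go cost) for a month's usage."""
    extra_m = max(tokens_m - included_m, 0.0)
    return sub_fee + extra_m * overage, tokens_m * api_rate

# At 50M tokens the subscriber pays 200 + 25*9 = $425 vs $500 pay-as-you-go:
# a real discount, but every marginal token still has a price.
print(monthly_cost(50.0))  # (425.0, 500.0)
```

      The design choice here is that usage past the allowance is discounted rather than free, so heavy users still face marginal costs instead of an all-you-can-eat buffet.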

  • Anthropic and all the AI companies are playing chicken with each other. You need to win userbase, and that is worth losing money for, but selling discount tokens for Lovable clones to profit from is not in your interest.

    Anthropic is further complicated by its mission.

  • My problem with CC is that it tries to be very creative. I ask it to fix some test, or create a new test. What does it do? It runs grep to find all tests in the code base and parses them. This eats a lot of tokens.

    Then it runs the test, as if I could not do this myself, reads the output, which is sometimes very long (so more and more tokens are burned), and so on.

    If people had to pay API prices for this cleverness and creativity, they would be a bit shocked and quickly give up on CC.

    Using Aider with Claude Sonnet, I burn far fewer tokens than CC does.

  • Claude Code is unusually efficient in its use of tokens, on top of it all.

  • > Why is Anthropic offering such favorable pricing to subscribers? I dunno

    I do, it's called vendor lock-in. The product they're trying to sell is not the $200 subscription, it's the entire Claude Code ecosystem.

    For the average person, the words "AI" and "ChatGPT" are synonyms. OpenAI's competitors have long conceded this loss, and for the most part, they're not even trying to compete, because it's clear to everyone that there is no clear path to monetization in this market: the average joe isn't going to pay for a $100/mo subscription to ask a chatbot to do their homework or write a chocolate cake recipe, so good luck making money there.

    The programming market is an entirely different story, though. It's clear that corporations are willing to pay decent money to replace human programmers with a service that does their work in a fraction of the time (and even the programmers themselves are willing to pay independently to do less work, even if it will ultimately make them obsolete), and they don't care enough about quality for that to be an issue. So everyone is currently racing to capture this potentially profitable market, and Claude Code is Anthropic's take on this.

    Simply selling the subscription on its own without any lock-in isn't the goal, because it's clearly not profitable, nor is it currently meant to be, it's a loss leader. The actual goal is to get people invested long-term in the Claude Code ecosystem as a whole, so that when the financial reality catches up to the hype and prices have to go up 5x to start making real money, those people feel compelled to keep paying, instead of seeking out cheaper alternatives, or simply giving up on the whole idea. This is why using the subscription as an API for other apps isn't allowed, why Claude Code is closed source, why it doesn't support third party OpenAI-compatible APIs, and why it reads a file called CLAUDE.md instead of something more generic.

  • I'm baffled that people, unknown to me, have apparently been considering Claude Code, the program, some kind of "secret sauce". It's a tool harness. Claude could one-shot write it for you, lol.

  • I guess it's another case of:

    - the effective monetizability of a lot of AI products seems questionable

    - so AI costs are strongly subsidized in all kinds of ways

    - which is causing all kinds of strange dynamics and is very much incompatible with "free market self regulation" (hence why a company running long term on investor money _and_ under-pricing any competition that isn't subsidized is theoretically not legal in the US. Not that the US seems to care about actually running a functioning self-regulating free market, going back as far as Amazon. Turns out moving from "state subsidized" to "subsidized by the rich" somehow makes it no longer problematic / anti-free-market / non-liberal... /s)

  • > More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

    I assume they're embarrassed by it. Didn't one of their devs recently say it's 100% vibe coded?

  • What an incredibly entitled message. If you know what anthropic should and shouldn't do, go start your own AI company.

  • What's ridiculous is that the subscription at 180€/month (excl. VAT) is already absurdly expensive for what you get. I doubt many would sign up for pay-as-you-go API usage, as it's just not sustainable pricing (as a user).

    • For the bizarre amount of work that gets done for that 180 euro, it is really cheap. We are just getting used to it, and prices are sinking everywhere; it is just that CC is the best (might be taste or bias, but I at least think so), so we are staying with it for now. If it gets more expensive, we will go and try others for production, instead of just trying them to get a feel for the competition as we do now.

    • This take is ridiculous. Nearly everyone who uses Max agrees that what they get for the money paid is an amazing deal. If you don't use or understand how LLMs fit in your workflows, you are not the target customer. But for people who use it daily, it is a relatively small investment compared to the time saved.

      2 replies →

    • That entirely depends on your business case. If a call costing 50 cents has done something for me that would have taken more than 1 minute of paid working time, it's sustainable.

    • It pays for itself in a day for some folks. It is a lot but it’s still cheap.

  • Update: Touché. The repo is just plugins and skills, not the meat.

    In any case, another workaround would be using ACP, which is supported by Zed. It lets editing tools access the power of CLI agents directly.

    ———

    > Anthropic should have open sourced their Claude Code CLI a year ago

    https://github.com/anthropics/claude-code

    It has been open source for a while now. Probably 4-6 months.

    > Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

    That's a very odd thing to wish for. I love my subscriptions and wouldn't have it any other way.

    • If you're going to link a repository, you should read it first. That repository is just a couple plugins and community links. Claude Code is, and always has been, completely closed source.

Lots of arguing about semantics of what the subscription is actually intended for.

Claude Code, as a coding assistant, isn't even mediocre; it's kind of crap. The reason it's at all good is the model underneath: there are tons of free and open agent tools that are far better than Claude Code. Regardless of what they say you're paying the subscription for, the truth is the only thing of value to developers is the underlying AI and API.

I can only think of a few reasons why they'd do this: 1. Their Claude Code tool is not simply an agent assistant; perhaps it's feeding data for model training purposes, or something of the sort where they gain value from it. 2. They don't want developers to use competitor models in any capacity. 3. They're offloading processing or doing local context work to drive down the API usage, making it minimal. This is very unlikely.

I currently use Opus 4.5 for architecting, which then feeds into Gemini 3 Flash with medium reasoning for coding. It's only a matter of time before Google competes with Opus 4.5, and when they do, I won't have any loyalty to Anthropic.

  • For AI companies, access to the interaction is very valuable; that explains the price difference. It is data that the competition does not have access to. Of course they are storing that data for model training purposes; that's the whole reason this exists in the first place. They are subsidizing until they get their quality up to the point that the addiction is so strong you won't be able to get through your workday without it. And then, surprise, the per-month access fee will start to rise.

This is an unusual L for Anthropic. The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code. Obviously, CC is a great tool, but that's more about the magic of the model than the engineering of the CLI.

The opencode team[^1][^2] built an entire custom TUI backend that supports a good subset of HTML/CSS and the TypeScript ecosystem (i.e. a generic TUI renderer, not tied to opencode). Then, they built the product as a client/server, so you can use the agent part of it for whatever you want, separate from the TUI. And THEN, since they implemented the TUI as a generic client, they could also build a web view and a desktop view over the same server.

It also doesn't flicker at 30 FPS whenever it spawns a subagent.

That's just the tip of the iceberg. There are so many QoL features in opencode that put CC to shame. Again, CC is a magical tool, but the actual nuts-and-bolts engineering of it is pretty damning for "LLMs will write all of our code soon". I'm sorry, but I'm a decent-systems-programmer-but-terminal-moron and I cranked out a raymarched 3D renderer in the terminal for a Claude Wrapped[^3] in a weekend that...doesn't flicker. I don't mean that in a look-at-me way. I mean it in a "a mid-tier systems programmer isn't making these mistakes" kind of way.

Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.

[^1] https://github.com/anomalyco/opentui

[^2] From my loose following of the development, not a monolith, and the person mostly responsible for the TUI framework is https://x.com/kmdrfx

[^3] https://spader.zone/wrapped/

  • My favorite is running CC in a screen session. If I type out a prompt there and then just hold down the backspace key to delete a bunch of characters, at some point the key-press refresh rate outruns CC’s brains and it starts acting like it moved the cursor but didn’t delete anything. It is an embarrassing bug, but one that I suspect wouldn’t be found in automated testing.

    • Talking about embarrassing bugs: the Claude chat apps (both web and iOS) lately tend to lose the user's message when there is a network error. This happens to me every day now. It is frustrating to retype a message from memory; the first time you are "in the flow", the second time it feels like unjust punishment.

      With all the Claude Code in the world, how come they don't write good enough tests to catch UI bugs? I have come to the point where I preemptively copy the message to the clipboard to prevent retyping.

      2 replies →

    • If you want to work around this bug, Claude Code supports all the readline shortcuts such as Ctrl-W and Ctrl-U.

  • > Anyway, this is embarrassing for Anthropic.

    Why? A few times in this thread I hear people saying "they shouldn't have done this" or something similar, but not giving any reason why.

    Listing features you like of another product isn't a reason they shouldn't have done it. It's absolutely not embarrassing, and if anything it's embarrassing they didn't catch and do it sooner.

    • Because the value proposition that has people paying Anthropic is that they have the best LLM-coding tool around. When you're competing on "we can ban you from using the model we use, with the same rate limits we use", everyone knows you have failed at that.

      They might or might not currently have the best coding LLM - but they're admitting that whatever moat they thought they were building with claude code is worthless. The best LLM meanwhile seems to change every few months.

      They're clearly within their rights to do this, but it's also clearly embarrassing and calls into question the future of their business.

      8 replies →

    • It is embarrassing to block an open source tool that is (IMO) a strictly superior piece of software from using your model. It is not immoral, like I said, because it's clearly against the ToS; but it's not like OC is stealing anything from Anthropic by existing. It's the same subscription, same usage.

      Obviously, I have no idea what's going on internally. But it appears to be an issue of vanity rather than financials or theft. I don't think Anthropic is suffering harm from OC's "login" method; the correct response is to figure out why this other tool is better than yours and create better software. Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.

      8 replies →

    • As a user it is because I can no longer use the subscription with the greater tooling ecosystem.

      As for Anthropic, they might not want to do this as they may lose users who decide to use another provider, since without the cost benefit of the subscription it doesn't make sense to stay with them and also be locked into their tooling.

      7 replies →

    • The Claude plans allow you to send a number of messages to Anthropic models in a specific interval without incurring any extra costs. From Anthropic's "About Claude's Max Plan Usage" page:

      > The number of messages you can send per session will vary based on the length of your messages, including the size of files you attach, the length of current conversation, and the model or feature you use. Your session-based usage limit will reset every five hours. If your conversations are relatively short and use a less compute-intensive model, with the Max plan at 5x more usage, you can expect to send at least 225 messages every five hours, and with the Max plan at 20x more usage, at least 900 messages every five hours, often more depending on message length, conversation length, and Claude's current capacity.

      So it's not a "Claude Code" subscription, it's a "Claude" subscription.

      The only piece of information that might suggest that there are any restrictions to using your subscription to access the models is the part of the Pro plan description that says "Access Claude Code on the web and in your terminal" and the Max plan description that says "Everything in Pro".

    • It is embarrassing, because it means they’re afraid of competition. If CC were even a fraction as great as they sell it, they wouldn’t need to do this.

    • It's embarrassing because they use Claude Code to build all of their software and they can't write decent software to save their lives. Their software quality is embarrassing, basically Microsoft tier, which calls into question both the effectiveness of their AI products and Agentic workflows.

      Like seriously, the creator of CC claims to run 10 simultaneous agents at once. We sure can tell bud.

  • I've used both CC and OpenCode quite a bit and while I like both and especially appreciate the work around OpenTUI, experience-wise I see almost no difference between the two. Maybe it's because my computer is fast and I use Ghostty, but I don't experience any flickering in CC. Testing now, I see typing is slightly less responsive in CC (very slightly: I never noticed until I was testing it on purpose).

    We will see whether OpenCode's architecture lets them move faster while working on the desktop and TUI versions in parallel, but it's so early — you can't say that vision has been borne out yet.

  • I am curious, I haven't faced any major issues using claude code in my daily workflow. Never noticed any flickering either.

    Why do you think opencode > CC? what are some productivity/practical implications?

    • Opencode has a web UI, so I can open it on my laptop and then resume the same session on the web from my phone through Tailscale. It’s pretty handy from time to time and takes almost zero effort from me.

      The flickering is still happening to me. It's less frequent than before, but it still happens in long/big sessions.

  • > The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code

    I'm curious, what made you think of that?

  • > Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.

    This is nothing new, they pulled Claude models from the Trae editor over "security concerns." It seems like Anthropic are too pearl-clutching in comparison to other companies, and it makes sense given they started in response to thinking OpenAI was not safety oriented enough.

  • > The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code.

    If only Claude Code developers had access to a powerful LLM that would allow them to close the engineering gap. Oh, wait...

  • Update: Ah, I see this part: "This credential is only authorized for use with Claude Code and cannot be used for other API requests."

    Old comment for posterity: How do we know this was a strategy/policy decision versus just an engineering change? (Maybe the answer is obvious, but I haven't seen the source for it yet.) I skimmed the GitHub issue, but I didn't see discussion about why this change happened. I don't mean just the technical change; I mean why Anthropic did it. Did I miss something?

  • Or just maybe submit feature requests instead of backdooring a closed source system.

    • All the TUI agents are awful at scrolling. I'm on Ubuntu 24.04 and both Claude Code and Gemini CLI absolutely destroy scrolling. I've tested Claude Code in the VS Code and it's better there, but in the Gnome Terminal it's plain unusable.

      And a lot of people are reporting scrolling issues.

      As someone was saying, it's like they don't have access to the world's best coding LLM to debug these issues.

      1 reply →

This headline is misleading. EDIT: Or rather was, as it has now been edited to be accurate.

You can still bring your own Anthropic API key and use Claude in OpenCode.

What you can no longer do is reverse engineer undocumented Anthropic APIs and spoof being a Claude Code client to use an OAuth token from a subscription-based Anthropic account.

This really sucks for people who want a thriving competitive market of open source harnesses since BYOK API tokens mean paying a substantial premium to use anything but Anthropic's official clients.

But it's hard to say it's surprising or a scandal, or anything terribly different from what tons of other companies have done in the past. I'd personally advise people to expect everything about using frontier coding models becoming much more pay-to-play.

  • The API key is not a subscription. The title says subscriptions are blocked from using third-party tools. Or am I misunderstanding?

    • Headline's been edited since my post. It previously said something along the lines of "Anthropic bans API use in OpenCode CLI"

  • The ideal endgame is that AI lets us build tools that make it impossible to tell what application or device is using their APIs and everything becomes open to third party clients whether they like it or not.

This will piss a lot of people off, and seems like a strange move. I get that this was always a hack and against the ToS. But I've been paying Anthropic money every month to do exactly what I would have done with Claude Code, just in another harness that I like better. All they've achieved here is that I am no longer giving them money. Their per-token pricing is really expensive compared to OpenAI, and I like the results from the OpenAI models better too; they're just very slow.

Here's a good benchmark from the Brokk team showing performance per dollar: GPT-5.1 is around half the price of Opus 4.5 for the same performance; it just takes twice as long.

https://brokk.ai/power-ranking?dataset=openround&models=flas...

So as of today, my money is going to OpenAI instead of Anthropic. They probably don't care though, I suspect that not many users are sufficiently keen on alternative harnesses to make a difference to their finances. But by the same token (ha ha), why enforce this? I don't understand why it's so important to them that I'm using Claude Code instead of something else.

  • Presumably Claude Code is a loss leader to try to lock you into their ecosystem, or at least get you to exclusively associate “AI” with “Claude”. So if it’s not achieving those goals, they’d prefer you use OpenAI instead.

    • That's my understanding and that's what I see happening at some places.

      People get a CC sub, invest in the whole tooling around CC (skills and whatnot), and once they're a few weeks or months in, they'll need a lot of convincing to even try something else.

      And given how often CC itself changes and how much work it is to keep up with it, that's even worse. It's not just reluctance to leave your comfort zone; it's the effort of keeping up with your current tools. If you also have to try a new tool every other day, the claimed 10x productivity improvements won't be enough to cover the lack of actual working hours you'll be left with in a week.

The API is not banned; only using the Claude Code subscription is.

I actually tried this several months back: doing a regular API request with the CC subscription token gave the same error message.

So this software must have been pretending to be Claude Code in order to get around that.

A Claude Code subscription should not work with other software, I think this is totally fair

  • > A Claude Code subscription should not work with other software, I think this is totally fair

    Why the hell not? What an L take. If I pay a subscription fee for an API, I should be able to use that API however I want. If they're forcing users to consume their APIs only through a proprietary piece of software, it raises the question of what's in that software that makes it so valuable to them. Seems like there's something nefarious involved.

  • > A Claude Code subscription should not work with other software.

    why not though? aren't you paying for the model usage regardless of the client you use?

    • No, you are paying to use Claude code… it uses the model underneath, but you aren’t paying for raw model usage. For whatever reason, Anthropic thinks this is the best way to divide up their market.

      They want to charge more for direct access to the model.

      1 reply →

    • That's not up to you or me. I think it's pretty clear from the phrase "Claude Code subscription" that it's meant only for Claude Code. Why are you confused?

      This could be so easily abused by companies that spend thousands of dollars per month on API costs: just reverse engineer it and use the subscription tokens to get that down to a few hundred.

      18 replies →

    • > aren't you paying for the model usage

      No, you’re paying for “Claude Code” usage.

  • >A Claude Code subscription should not work with other software, I think this is totally fair

    Strongly disagree. They are just trying to build a moat.

    • It’s a private API. What part of this is hard to understand? This is why you don’t code against undocumented APIs with no contract. It’s self destructive.

  • Is Claude Code still available on IDEs through ACP?

    Like https://zed.dev/docs/ai/external-agents

    • Should be, yes - ACP is basically just a different way of invoking the agent, so you're still using Claude Code. It's alternative clients like OpenCode, the CharmBracelet one and pi which will be affected - they basically reimplement the agent part and just call the API directly.

    • Yes. I've been using it today with Zed (a mind-blowing editor, by the way).

      One must use an API key to work through Zed, but my Max subscription can be used with Claude Code as an external agent via Zed ACP. And there's some integration; it's a better experience than Claude Code in a terminal next to file viewing in an editor.

  • But if something is meant to be used in the CLI, it makes sense that it would work with other things in the CLI.

Engineer working on Amp here.

I'm very surprised that it took them this long to crack down on it. It's been against the terms of service from the start. When I asked them back in March last year whether individuals can use the higher rate limits that come with the Claude Code subscription in other applications, that was also a no.

Question is: what changed? New funding round coming up, end of fiscal year, planning for an IPO? Do they have to cut losses?

Because the other surprise here is that apparently most people don't know the true cost of tokens and how much money Anthropic is losing with power users of Claude Code.

  • > Question is: what changed? New founding round coming up, end of fiscal year, planning for IPO? Do they have to cut losses?

    I'm gonna say IPO, considering their recent aggressive stealth marketing campaign on X, Reddit, and HN.

  • Yeah. If my Claude Code usage were billed via the API directly, it would be in the thousands. I know this because I have add-on credits on top of the Max plan, since I often run into the weekly limits.

  • You think Anthropic is losing money now, with the weekly limits? And while hitting the gas on the mass market?

I feel like I'm the only person on this site that doesn't use AI for coding. I guess there's probably a lot of other people that haven't commented on this story who don't use it either. But when I read about how much hype and all that sort of stuff there is in the AI industry, and then I see the amount of posts and commentary and deep technical discussion about how this feature has affected people, I'm not so sure. Everyone I know hates AI and how it's been shoved into every corner of our lives, but I look here and it's insanely popular. Anyway, sorry this was a very off topic comment. It's just very interesting to me that the hype isn't all just hype.

  • I also don’t use AI for coding. I tried, I explored, I learned how it works.

    At the end, “maybe-sometimes works” and “sends a copy of all your code to some server in the US” are just incompatible with the kind of software I create.

    Regarding the post, I think it’s telling that Anthropic is trying to force people into using their per-usage billing more than the subscription. My take is that the subscription offers a lot as a way of hooking developers into it and is not sustainable for Anthropic if people end up actually maxing their usage.

    Given how much money is being wasted on the LLM craze, I can imagine there will be more “tightening of the belt” from the AI corps going forward.

    For the five coders out there, maybe it’s time to use your tokens to get back control of your codebases … you may have to “meat code” them soon.

    • I'll say "maybe-sometimes works" is a misunderstanding.

      It feels like that initially, but that's no different from any new tool you adopt. A jackhammer also "maybe-sometimes works" as a hammer replacement.

      14 replies →

  • > I feel like I'm the only person on this site that doesn't use AI for coding.

    I’m surprised by that. One reason I follow discussions here about AI and coding is that strong opinions are expressed by professionals both for and against. It seems that every thread that starts out with someone saying how AI has increased their productivity invites responses from people casting doubt on that claim, and that every post about the flaws in AI coding gets pushback from people who claim to use it to great effect.

    I’m not a programmer myself, but I have been using Claude Code to vibe-code various hobby projects and I find it enormously useful and fun. In that respect, I suppose, I stand on the side of AI hype. But I also appreciate reading the many reports from skeptics here who explain how AI has failed them in more serious coding scenarios than mine.

  • I feel the same. I don't want to hear about it all the time (although I welcome discussion). I wish this site would go back to talking about other tech things.

  • I don't use it at all for a variety of reasons, but I rarely bother to get into discussions on HackerNews.

    Looking at how new it is, and how quickly things are changing, it seems likely that I could adopt it into my workflow in a month or two if it turns out that that's necessary.

    On the other hand, I've spent the last 2 decades building skills as a developer. I'm far more worried that becoming a glorified code reviewer will atrophy those skills than I am about falling behind. Maybe it will turn out that those skills are now obsolete, but that feels unlikely to me.

    • > I'm far more worried that becoming a glorified code reviewer will atrophy those skills

      A co-worker who went all-in around a year ago admitted a few months ago he's noticed this in himself, and was trying to stop using the code-generating functionality of any of these tools. Emphasis on "try": apparently the times it does work amazingly makes it addictive like gambling, and it's far too easy to reach for.

  • It will be shoved into your life anyway. You might like it or not, but the only safe choice is to learn and understand it IMHO.

    About usage: it looks like web development benefits here, but other areas are somehow not as successful. Meanwhile, I use it successfully for Neovim Lua plugin development, CLI apps (in JS), and shell development (WezTerm Lua + fish shell). So I don't know if:

    a) it simply has clicked for me and it will click for everyone who invests into it;

    b) it is not for everybody because of tech;

    c) it is not for everybody because of mindset.

  • AI is indeed just hype in a lot of cases, but it also has revolutionary value in other cases. Trying it is the only way you'll be able to differentiate the latter from the former.

  • I share your experience. Additionally, I am surprised anyone on this site did not see this progression coming. Between costs, the race to be THE provider, and anyone who has an awareness of how the tech industry has been operating the last 15 years, this move by Anthropic was so laughably predictable that the discourse in this thread is pretty disappointing.

  • All those people are on the drug they got on the cheap during the fun party nights.

    They are, or soon will be, surprised that the price is going to increase, and they are the only losers in that great story of theirs...

  • I hate how AI is being shoved in most things, but I do love AI in a few of those places (ai coding and google search replacement)

    • Have you noticed how Google search summaries have taken the shape of those annoying blogposts that take you through several “What is a computer program” explainers before answering the question?

  • I use it sparingly. I do still have to produce boilerplate and don't have the time/will to engineer a better solution. But any actual logic etc I do myself. Why would I take a chance on an LLM doing it wrong when I know exactly how I want it and am perfectly capable of doing it myself. Also, what the hell am I going to do in the minutes it takes to generate, just sit there and watch it? No thanks.

The fix has been merged in https://github.com/anomalyco/opencode-anthropic-auth/pull/11, and PR https://github.com/anomalyco/opencode/pull/7432 is open to bump the version.

Until it's released, here's a workaround:

1. git clone https://github.com/anomalyco/opencode-anthropic-auth.git

2. Add to ~/.config/opencode/opencode.json: "plugin": ["file:///path/to/opencode-anthropic-auth/index.mjs"]

3. Run: OPENCODE_DISABLE_DEFAULT_PLUGINS=true opencode

  • Anthropic shot themselves in the foot with this decision. It's a PR nightmare, and at the same time the open source community will always find a way. They just wasted everyone's time and likely lost a bunch of users while doing so.

    Thank you for sharing this!

    • The open source community won't always find a way. Remote attestation isn't a new concept (it doesn't have to be hardware backed, the concept is general).

      The industry has enough experience with this by now to know how it goes, and open source projects are always the first to drop out of the race. The time taken to keep up becomes much too high to justify doing on a voluntary basis or giving away the results, so as the difficulty of bypassing checks goes up the only people who can do it become SaaS providers.

      Blu-ray BD+ was a good example of that back in the day. AACS was breakable by open source players. Once BD+ came along, the open source doom9 crowd were immediately wiped out. For a long time the only breaks came from a company in Antigua that sold a commercial ripper, which was protected from US law enforcement by a WTO decision specific to that island.

      You also see this with stuff like Google YouTube/SERP scraping. There currently aren't any open source solutions that don't get rapidly blocked server side, AFAIK. Companies that know how to beat it keep their solutions secret and sell bypasses as a service.

I know this will sound strange, but SOTA model companies will eventually allow subscription based usage through third-party tools. For any usage whatsoever.

Models are pretty much democratized. I use Claude Code and opencode and I get more work done these days with GLM or Grok Code (using opencode). Z.ai (GLM) subscription is so worth it.

Also, mixing models, small and large ones, is the way to go. Different models from different providers. This is not like cloud infra where you need to plan the infra use. Models are pretty much text in, text out (let's say for text only models). The minor differences in API are easy to work with.
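To illustrate that last point, here's a minimal sketch of how little changes between providers when they expose OpenAI-compatible chat endpoints. The base URLs and model names below are illustrative placeholders, not verified values:

```python
# Sketch: mixing providers is mostly swapping a base URL, key, and model name.
# Base URLs and model names here are illustrative assumptions, not verified.

PROVIDERS = {
    "zai":    {"base_url": "https://api.z.ai/v1",      "model": "glm-4"},
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
}

def build_chat_request(provider: str, prompt: str, api_key: str):
    """Return (url, headers, payload) for an OpenAI-style chat completion call."""
    cfg = PROVIDERS[provider]
    url = f"{cfg['base_url']}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": cfg["model"],
        # Same "text in, text out" shape regardless of which provider serves it.
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload

url, headers, payload = build_chat_request("openai", "hello", "sk-test")
print(url)
```

An agent harness can then route small tasks to cheap models and hard tasks to large ones by changing only the `provider` argument.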

  • Wouldn't this mean SOTA model companies are incentivized not to allow subscriptions through third parties?

    If all the models are interchangeable at the API layer, wouldn't they be incentivized to add value at the next level up and lock people in there to prevent customers from moving to competitors on a whim.

    • > If all the models are interchangeable at the API layer, wouldn't they be incentivized to add value at the next level up

      Just the other day, a 2016 article was reposted here [https://news.ycombinator.com/item?id=46514816] on the 'stack fallacy', where companies who are experts in their domain repeatedly try and fail to 'move up the value chain' by offering higher-level products or services. The fallacy is that these companies underestimate the essential complexities of the higher level and approach the problem with arrogance.

      That would seem to apply here. Why should a model-building company have any unique skill at building higher-level integration?

      If their edge comes from having the best model, they should commoditize the complement and make it as easy as possible for everyone to use (and pay for) their model. The standard API allows them to do just this, offering 'free' benefits from community integrations and multi-domain tasks.

      If their edge does not come from the model – if the models are interchangeable in performance and not just API – then the company will have deeper problems justifying its existing investments and securing more funding. A moat of high-level features might help plug a few leaks, but this entire field is too new to have the kind of legacy clients that keep old firms like IBM around.

    • I do not know what that next level is, to be honest. Web search, crawlers, code execution, etc. can all be easily added on the agent side. And some of the small models are so good when the context is small that being locked into one provider makes no sense. I would rather build a heavy multi-agent solution using Gemini, GLM, Sonnet, Haiku, GPT, and even BERT, GLiNER, and other models for specific tasks. Low cost, no lock-in, still high quality output.

  • AI labs are not charities and there is no way to make money offering unlimited access to SOTA LLMs. Even as costs drop, that will continue to be true for the best models in 2027, 2028 etc. - as demonstrated by the fact that CPU time still costs money. The current offerings are propped up by a VC bubble and not sustainable.

    • I agree but that is not the issue. See the really "large" models are great at a few things but they are not needed for daily tasks, including most coding tasks. Claude Code itself uses Haiku for a lot of tasks.

      The non-SOTA companies will eat more of this pie and squeeze more value out of the SOTA companies.

FWIW this isn’t new, using a Claude/Max subscription auth token as a general-purpose “API key” has been known (and blocked) for ages. OpenCode basically had to impersonate the official Claude Code client to make that work, and it always felt like a loophole that would get patched eventually.

This is exactly why (when OpenCode and Charm/Crush started diverging) Charm chose not to support “use your Claude subscription” auth and went in a different direction (BYOK / multi-provider / etc). They didn’t want to build a product on top of a fragile, unofficial auth path.

And I think there’s a privacy/policy reason tightening this now too: the recent Claude Code update (2.1-ish) pops a “Help improve Claude” prompt in the terminal. If you turn that ON, retention jumps from 30 days to up to 5 years for new/resumed chats/coding sessions (and data can be used for model improvement). If you keep it OFF, you stay on the default 30-day retention. You can also delete data anytime in settings. That consent + retention toggle is hard to enforce cleanly if you’re not in an official client flow, so it makes sense they’re drawing a harder line.

  • Yea exactly, I’m surprised people are calling this “drama”. It was against the ToS from the beginning; all the stuff supporting it just reverse engineered what Claude Code does and spoofed being a client.

    I tried something similar a few months back, and Anthropic already had restrictions against this in place. You had to very specifically pretend to be the real Claude Code (by copying system prompts etc.) to get around it, not just set a header.

I’m not surprised they closed the loophole, it always felt a little hacky using an Anthropic monthly sub as an API with a spoofed prompt (“You are Claude Code, Anthropic's official CLI for Claude”) with OpenCode.

Google will probably close off their Antigravity models to 3P tools as well.

Funnily enough, I didn't know about opencode and will now test it out and likely use it instead.

Improve your client so people prefer it? Nah.

Try to force people to use your client by subsidizing it? Now that's what I'm talking about.

As others said, why not just run a bunch of agents on Claude Code to surpass Opencode? I'm sure that's easy with their unlimited tokens!

  • Lol this is my exact thought as well. Just downloaded it now and taking it for a spin...pretty good so far!

Honest question: Why would I use Claude with OpenCode if I have a Claude Max subscription? Why not Claude Code?

Cancelled my Claude subscription over it. OpenCode is miles ahead of any other coding tool. I'll stick to using it rather than Claude. Other models and other ways to access Claude exist.

  • Same here, will cancel the subscription and move away from this nonsense. I want to use their LLM, not their CLI.

Ugh, well at least this was the nudge I needed to cancel my Claude Pro subscription... I've already had a bad taste in my mouth watching the rate limits on the plan get worse and worse since I first subscribed and I have a few other subscriptions to fall back on while I've been evaluating different options. I literally never use the regular Claude Chat web UI either, that's pretty much 100% Gemini since I get it via my Google One plan.

OpenCode makes me feel a lot better knowing that my workflow isn't completely dependent on single vendor lock-in, and I generally prefer the UX to Claude Code anyway.

This appears to be a part of a crackdown on third-party clients using Claude Code's credentials/subscriptions but not through Claude Code.

Not surprising as this type of credential reuse is always a gray area, but weird Anthropic deployed it on a Thursday night without any warning as the inevitable shitstorm would be very predictable.

  • Yes, it appears they've been cracking down elsewhere as well: https://github.com/charmbracelet/crush/pull/1783

    Are they really that strapped already? It took Netflix like 20 years before they began nickel and diming us.. with Anthro it's starting after less than 20 months in the spotlight.

    I suspect it's really about control and the culture of Anthropic, rather than only finances. The message is: no more funtime, use Claude CLI, pay a lot for API tokens, or get your account banned.

  • The "crackdown" is really mild though. To be fair to Anthropic, I don't think they have been committed to banning third-party tools.

    github:anomalyco/opencode?rev=5e0125b78c8da0917173d4bcd00f7a0050590c55 (a trivial patch that works for now)

  • They've made this change at the same time as adding random trick prompts trying to get you to hit enter on the training opt-in from late last year. I've gotten three popups inside Claude Code today, at random times, trying to trick me into letting them train on my data, with a different selection defaulted than the one I'd already chosen.

    (edit: four times now, just today)

    • More evidence the EU solved the wrong problem. Instead of mandating cookie banners, mandate a single global “fuck off” switch: one-click, automatic opt-out from any feature/setting/telemetry/tracking/training that isn’t strictly required or clearly beneficial to the user as an individual. If it’s mainly there for data collection, ads, attribution, “product improvement”, or monetization, it should be off by default and remain that way so long as the “fuck off” option is toggled. Burden of proof on the provider. Fines exceeding what it takes to get growth teams and KPI hounds to have legal coach them on what “fuck off” means and why they need to.

      4 replies →

  • They're losing money on every inference, so of course they want as many banned users as they can get away with.

Here's how to get a refund on the website (all automated):

1. Profile Icon -> Get Help

2. Send us a Message

3. Click 'Refund'

Big corpos only talk money, so it's the best you could do in this situation.

If you can't refund and need to wait until the sub runs out after cancelling, go to the OpenCode repo and rename your tools so they start with capital letters. That'll work around it; they just match on lowercase tool names of standard tools.

  • Really useful thanks!

    I signed up thinking Claude Code was an IDE and was really disappointed with it. Their plugin for VS Code is complete trash. Way overhyped. Their models are good, but I can get those through other means.

  • That actually worked since I subscribed a few days ago specifically to try open code.

    "Your subscription has been canceled and your refund is on the way. Please allow 5-10 business days for the funds to appear in your account."

If this helps to keep the $200 around longer, I’m happy.

The thing I most fear is them banning multiple accounts. That would be very expensive for a lot of folks.

This makes total sense to me. Limiting the usage to their tooling means they can place reasonable limits on usage by controlling how the client interacts with the LLM and making those calls as efficient as possible. The current state of things didn't really feel sustainable.

Wow, I sat down to do a little bit of late night coding and ended up running into this nightmare. Just canceled my Anthropic subscription and started paying for OpenCode Zen. Unfortunately, OpenCode is enough of a better product that I will indeed pay ten times the price to use it.

I know this is somewhat unreasonable but watching "devs" unable to work because "faceless corp 1007" cut their access definitely has a level of schadenfreude to it.

Not strictly related, but since Copilot could be the next to violate the TOS, I've asked for an official response here: https://github.com/orgs/community/discussions/183809. If someone can help raise this question, it's more than welcome.

  • Copilot in VSCode is integrated with VSCode's LLM provider API, which means any plugin that needs LLM capabilities can submit requests to Copilot. Roo Code supports that as an option. And of course, there's a plugin that starts an OpenAI/Anthropic-compatible web server inside VSCode that just calls the LLM provider API. It seems that if you use unlimited models (like GPT-4.1) you probably get unlimited API calls. However, those models don't seem to be very agentic when used in Claude Code.

  • GitHub doesn't offer any unlimited style AI model plans so I don't think they'll care. Their pricing is fairly aligned with their costs.

    This only affects Claude, as they market their plan as unlimited (with various usage rate limits), but it's clearly costing them a lot more than what they sell it for.

    • Copilot plan limits are however "per prompt", and prompts that ask the agent to do a lot of stuff with a large context are obviously going to be more expensive to run than prompts that don't.

For anyone coming here looking for a solution: I peeked around the OC repository, and a few PRs got merged in. Add this to $HOME/.config/opencode/opencode.json: "plugin": ["opencode-anthropic-auth"]

That is, if it's not already pulled into the latest OC by the time I post this. Not sure what the release cycle is for built-in plugins like that, but by force-specifying it, it definitely pulls master, which has a fix.
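For reference (and assuming the plugin name mentioned above is still current), the resulting `$HOME/.config/opencode/opencode.json` would look roughly like this:

```json
{
  "plugin": ["opencode-anthropic-auth"]
}
```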

https://opencode.ai/docs/plugins/

Why act like it’s a mystery when the Claude Code repo clearly explains:

> When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the /bug command.

They subsidize Claude Code because it gives them your codebase and chat history.

  • They should be getting most of it from third party clients too. At least the chat and the files are being sent to or from Anthropic's own servers.

This situation feels like a +1 for Agent Client Protocol (ACP) [1].

In ACP, you auth directly to the underlying agent (e.g. the Claude Code SDK) rather than to a third-party tool (e.g. OpenCode) that then calls an inference endpoint on your behalf. If you're logged into Claude Code, you're already logged in through any ACP client.

[1] https://agentclientprotocol.com/overview/agents

OpenCode brought this on themselves and their users. Plugging Claude Max subscriptions into other agents has been against the terms of service basically since the start and I imagine Anthropic must have issued plenty of warnings here that were ignored. They wouldn’t do this unless they really had to. If folks are mad about being rugged, blame OpenCode for misleading their users when they’ve long known this day was coming. Brilliant cynical strategy though to exploit soft enforcement for growth and lay the blame at the company that provided them cheap tokens.

Inference costs nothing in comparison to training (at their scale you can batch so many requests in parallel); on inference alone they should be profitable even when you drain the whole weekly quota every week.

But of course they have to pay for training too.

This looks like a short-sighted money grab (do they need it?) that trades short-term profit for trust and customer base (again), as people will cancel their now-unusable subscriptions.

Changing model families when you have instructions tuned for one of them is tricky and takes a long time, so people will stick with one family for a while; but at API prices you quickly start looking for alternatives, and OpenAI's GPT-5 family is also fine for coding once you spend some time tuning for it.

Another pain is switching your agent software: moving from CC to Codex is more painful than just picking a different model in something like OC, which is a plausible argument for why they are doing this.

It doesn't mean much but I cancelled my 5x Max subscription to Claude. Only way how I can tell them what I think about this change.

Woke up and everything is on fire. I thought OpenCode had some bug because it updated itself this morning, but realized it's Claude that blocked third-party clients :( An L for Anthropic indeed; OpenCode had a way better experience than Claude Code.

Genuine question, as someone who never used Claude Code, but used OpenCode/Aider/GeminiCli - as many here say Opencode is better, mind sharing why (from end user perspective)?

I was thinking to try Claude Code later and may reconsider doing so.

  • I experimented with Claude Code but returned to the familiar Aider which existed before all of these tools AFAIK.

    You’ll notice people in Aider GitHub issues being concerned about its rather conservative pace of change, lack of plug-in ecosystem. But I actually started to appreciate these constraints as a way to really familiarise myself with the core “edit files in a loop with an end goal” that is the essence of all agent coding.

    Anytime I feel a snazzy feature is lacking from Aider I think about it and realise I can already solve it in Aider by changing the problem to editing a file in a loop.

    • Well, there is Aider-CE, aka Cecli, which moves and updates almost every day (I've tried it, but not much).

      OpenCode is a totally different beast compared to Aider, and I mostly stopped using Aider two months or so ago; it just iterates simpler and faster with OpenCode for me.

The TOS, which is a contract of adhesion for consumer-facing products, does not really matter that much in my opinion since "we have to lock you in to our specific interface on our public offering" is not a cognizable interest. SCOTUS is also very clear in requiring actual damages (in incremental harms) to establish a CFAA violation. At any rate, opencode is essentially providing equitable estoppel as a service by being open and popular - cannot go after me without first dealing with the "unionized" project (last words)! I don't think they get to conflate the issues of alternative interface dispute and their intentional pricing strategy losing money on heavy users.

Of course, they are banning for economic interests, not the nominal alleged contractual violations, so Anthropic is not sympathetic.

// NOT LEGAL ADVICE

Obviously, I think it can make sense for Anthropic, since opencode users likely disproportionately cost them money with little lock-in: you can switch the moment a comparable model is available elsewhere. It does not (necessarily) mean there are any legal or ethical issues barring us from continuing to use the built-in opencode OAuth, though.

I have a background-agents app I'm running, https://claudecontrol.com, and it seems I am not impacted by this change. My Anthropic sub still works fine.

I believe this is because I am using Claude Code as a CLI for SDK purposes rather than using it as a TypeScript library. Quite a fortunate choice at the time!

Unsure of the other competition, but I can vouch for synthetic.new's subscription for GLM (+ other open models). Not quite as accurate as Anthropic's models, but good enough for basically everything I do.

Honestly with how good OpenCode is, this really just makes GitHub copilot the best subscription for the average user. It’s the cheapest. It’s free for students. You get access to all of OpenAI models AND Anthropic models AND Gemini models and you still have a pretty dang good CLI/TUI (OC, not Copilot CLI). And the limits are pretty reasonable. I’ve never hit the limits in a month though admittedly I am not a “five agents at once” kind of vibe coder.

Curious about portability of CC -> OpenCode. I wonder how much of my CC setup (skills, commands, agents, hooks etc) will work if I were to switch to OpenCode.

I’m curious whether this is related to the recent update. When I opened Claude Code, I was greeted with a “Help improve Claude” message that changes the retention policy from 30 days to 5 long years.

They can't apply those changes or update parts of the flow for non-Claude CLIs, which would explain their latest move.

A crucial piece of context is that this "block" is resolved (for now) by bumping version numbers. It is almost as if Anthropic deployed this to test the waters on community reaction... Right now it is trivial to fingerprint opencode users without deep inspection of the conversations (a privacy concern), but Anthropic is not doing that.
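A version gate like this is usually just a header check on the server. Here is a minimal sketch of the idea in Python; the header format, client names, and version cutoffs are all hypothetical, not Anthropic's actual logic:

```python
# Hypothetical server-side client gating keyed on a User-Agent-style
# header of the form "<client>/<version>". Names and versions are
# illustrative only.
MIN_ALLOWED = {"claude-cli": (2, 0, 14)}  # assumed minimum version

def parse_version(v: str) -> tuple:
    """Parse "2.0.14" into a comparable tuple (2, 0, 14)."""
    return tuple(int(p) for p in v.split("."))

def is_allowed(user_agent: str) -> bool:
    """Accept only known clients at or above the minimum version."""
    try:
        name, version = user_agent.split("/", 1)
        return name in MIN_ALLOWED and parse_version(version) >= MIN_ALLOWED[name]
    except ValueError:
        return False

print(is_allowed("claude-cli/2.0.14"))  # True
print(is_allowed("claude-cli/1.0.0"))   # False: too old
print(is_allowed("opencode/1.2.3"))     # False: unfamiliar client
```

The point above is that a third-party client can sidestep a check like this simply by sending the expected client name with a high enough version string.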

https://github.com/anomalyco/opencode/commit/5e0125b78c8da09...

So is this the Bezos play of depressing the acquisition price? iirc Bezos froze the Amazon referral program of GoodReads.com to force them to take a lower price. If so, shame on them!

Hopefully this doesn't happen with GitHub Copilot. OpenCode is fantastic. They offer a server and an SDK, which means I can build amazing personal tools. GitHub Copilot's low price + OpenCode is just amazing.

I understand them not wanting to allow non-coding agents to use the subscription, but why specifically block another coding agent? Is the value Anthropic gets from users specifically using claude code that high? Is it about the training data opt-ins?

Maybe a subscription-based payment model would also work in general?

Similar to a gym membership where only a small part of the paying users actually show up.
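The gym-membership economics only work if light users subsidize heavy ones. A toy break-even calculation using the thread's rough numbers ($200/month flat vs. $1,000+ of equivalent API usage for a heavy user; the light-user figure is an assumption):

```python
# Toy break-even model; all figures are illustrative, not Anthropic's
# actual costs or prices.
subscription_price = 200     # $/month per subscriber
heavy_user_cost = 1000       # $/month of inference a heavy user consumes
light_user_cost = 50         # assumed $/month for a light user

def light_users_per_heavy(price, heavy, light):
    """Light users needed to offset each heavy user's monthly loss."""
    return (heavy - price) / (price - light)

ratio = light_users_per_heavy(subscription_price, heavy_user_cost, light_user_cost)
print(round(ratio, 2))  # 5.33 light users needed per heavy user
```

Under these assumed numbers, roughly five light subscribers are needed to cover each heavy one, which is why steering heavy users toward the house client (or off the flat plan) matters so much.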

Why don't you just ask Claude Code to write you a workaround? I'm sure if you say "fix plz" enough times, it'll work eventually.

Just use a free Antigravity subscription with Opus 4.5 and the reverse-engineered API, plus a bunch of cheap Google accounts.

It’s the standard enshittification lifecycle: subsidize usage to get adoption, then lock down the API to force users into a controlled environment where you can squeeze them.

Like Reddit, they realized they can't show ads (or control the user journey) if everyone is using a third-party client. The $200 subscription isn't a pricing tier. It's a customer acquisition cost for their proprietary platform. Third-party clients defeat that purpose.

Switched to the z.ai coding plan, and used the GLM 4.7 model for a few complex changes since posting this, and it works really well.

I don't think I will renew Anthropic, the open models have reached an inflection point.

A rare glimpse into the enshittification that is to come to these tools. It’s only a matter of time.

Meanwhile, OpenAI co-signs https://github.com/steipete/oracle which lets you use your ChatGPT subscription to gain programmatic/agentic access to 5.2 Pro via automating browser access to the web frontend. Karpathy and other leaders have praised this feature on X.

If that is indeed so welcome, imagine what else you could script via their website to get around Codex rate limits or other such things.

After all, what could be so different about this from what browsers like Atlas do already?

  • Codex requires stuffing a very specific system prompt otherwise the custom endpoint will reject you

So, models are officially a commodity now.

The battle is for the harness layer. And it's quickly going the commodity way as well.

What's left for boutique-style AI companies?

As much as I love Opus, I hate this company (not for the reasons you'd think, though). I just have a proxy that exposes an unauthenticated endpoint and bypasses all their attempts at banning OpenCode usage, since I was already on something like my fifth Claude account trying to get around random bans.

Why is OpenCode better than Claude Code?

  • I wouldn’t say it’s better, but it does have some nice features. Opencode has a web UI, so I can open it on my laptop and then resume the same session on the web from my phone through Tailscale. It’s pretty handy from time to time and takes almost zero effort from me.

  • Works with several providers (e.g. GitHub Copilot or bring your own key). They offer a server and an SDK, so you can build all kinds of personal tools. It's amazing.

  • It is open source, to start with, and works with a lot of LLM providers instead of being vendor-locked into one.

Just open source Claude Code and maybe it gets supported by fostering a community... Oh wait, no lock in? Sorry there's no stakeholder value in that.

No. Do you realize how much of a joke Claude Code is under the hood? How they implemented client auth?

Well let me tell you

https://github.com/anomalyco/opencode/blob/dev/packages/open...

You literally send, as your first message, "You are Claude Code."

The fact that this ever worked was insane.
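The in-band identification described above amounts to the client declaring itself in the request body. A hedged sketch of what such an Anthropic Messages API call could look like (the exact system string, model id, and field values here are illustrative, taken from the thread's description rather than Anthropic's spec):

```python
import json

# Illustrative request body in the Anthropic Messages API shape; the
# "system" string is the point: the client self-identifies in-band.
request_body = {
    "model": "claude-sonnet-4",  # hypothetical model id
    "max_tokens": 1024,
    "system": "You are Claude Code, Anthropic's official CLI for Claude.",
    "messages": [
        {"role": "user", "content": "List the files in this repo."},
    ],
}

print(json.dumps(request_body, indent=2))
```

Anything that can construct this JSON can present itself as Claude Code, which is why enforcement purely at this layer is so easy to work around.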

The headline is more like: Anthropic vibe-codes a bug and finally catches it.

Anthropic is having its Apple moment: too many customers means the company is always in the news, for better or worse.

When iPhones receive negative reviews it's not like only Apple screwed up; others did too, but they sell so much less than Apple that no one hears about them:

    "Apple violated my privacy a tiny bit" makes the news;
    "Xiaomi sold my fingerprint info to 3rd party vendors" doesn't.

Similarly, Anthropic is under heavy fire recently because frankly, Claude Code is the best coding agent out there, and it's not even close.