Anthropic Explicitly Blocking OpenCode

7 days ago (gist.github.com)

The title is misleading if you don’t read the whole text: Anthropic is not blocking OpenCode from the API that they sell.

They’ve blocked OpenCode from accessing the private Claude Code endpoints. These were not advertised or sold as usable with anything else. OpenCode reverse engineered the API and was trying to use it.

The private API isn’t intended for use with other tools. Any tool that used it would get blocked.

  • > Any tool that used it would get blocked.

    Isn't that misleading from Anthropic's side? The gist shows that only certain tools are blocked, not all. They're selectively enforcing their ToS.

    • The gist shows that the first line of the system prompt must be "You are Claude Code, Anthropic's official CLI for Claude."

      That’s a reasonable attempt to enforce the ToS. For OpenCode, they take the additional step of blocking a second line of “You are OpenCode.”

      There might be more thorough ways to effect a block (e.g. requiring signed system prompts), but Anthropic is clearly making its preferences known here.
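
      One hypothetical shape for such a "signed system prompt" scheme (nothing suggests Anthropic actually does this; the key and names below are made up): the official client attaches an HMAC over the prompt using a key baked into the client, and the server rejects anything whose signature doesn't verify.

        import hashlib
        import hmac

        CLIENT_KEY = b"secret-baked-into-the-official-client"  # hypothetical shared secret

        def sign(system_prompt: str) -> str:
            # Client side: attach a signature computed over the system prompt.
            return hmac.new(CLIENT_KEY, system_prompt.encode(), hashlib.sha256).hexdigest()

        def server_accepts(system_prompt: str, signature: str) -> bool:
            # Server side: recompute and compare in constant time.
            expected = hmac.new(CLIENT_KEY, system_prompt.encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, signature)

        official = "You are Claude Code, Anthropic's official CLI for Claude."
        assert server_accepts(official, sign(official))
        assert not server_accepts("You are OpenCode.", sign(official))

      A key shipped inside a client can of course be extracted, so this would only raise the bar, not end the cat-and-mouse game.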

    • What do you mean by "not all"? They aren't obligated to block every tool/project trying to use the private API, all the way down to a lone coder making their own closed-source tool. That's just not feasible. Or did you have a way to do that?

      > The gist shows that only certain tools are blocked, not all.

      Are those other phrases actually used by any tools? I thought they were just putting phrases into the LLM arbitrarily. When misuse of the endpoint is detected at scale, they probably add more triggers for that abuse.

      Expecting it to magically block different phrases is kind of silly.

      > They're selectively enforcing their ToS.

      Do you have anything to support that? Not a gist of someone putting arbitrary text into the API, but links to another large scale tool that gets away with using the private API?

      Seems pretty obvious that they’re just adding triggers for known abusers as they come up.

I do admit to feeling some schadenfreude over them reacting to their product being leeched by others.

I get it though, Anthropic has to protect their investment in their work. They are in a position to do that, whereas most of us are not.

  • They’re not literally blocking OpenCode. You can use OpenCode with their API like any other tool.

    They’ve blocked the workaround OpenCode was using to access a private API that was metered differently.

    Any tool that used that private endpoint would be blocked. They’re not pushing an agenda. They’re just enforcing their API terms like they would for any use.

    • After they exploited us by training, without any limits, on code they never licensed (including GPLed code), they now scramble to ban and restrict when we want to do the same to them. That's the schadenfreude...

  • Hey! It was a lot of work stealing everything from you, of course you have to pay me a premium to get access to it!

  • > protect their investment

    Viewed another way, the preferential pricing they're giving to Claude Code (and only Claude Code) is anticompetitive behavior that may be illegal.

    • Are you suggesting Anthropic has a “duty to deal” with anyone who is trying to build competitive products to Claude Code, beyond access to their priced API? I don’t think so. Especially not to a product that’s been breaking ToS.

      3 replies →

  • Seems like another donation to Python is coming to mitigate this PR scandal

Obviously Anthropic are within their rights to do this, but I don’t think their moat is as big as they think it is. I’ve cancelled my max subscription and have gone over to ChatGPT pro, which is now explicitly supporting this use case.

  • Is OpenCode that much better than Codex / Claude Code for CLI tooling that people are prepared to forsake[1] Sonnet 4.5/Opus 4.5 and switch to GPT 5.2-codex?

    The moat is Sonnet/Opus, not Claude Code; the moat can never be a client-side app.

    Cost arbitrage like this is short-lived, lasting only until the org changes pricing.

    For example, Anthropic could release, say, an ultra plan at $500-$1000 with these restrictions removed or relaxed, reflecting the true cost of the consumption. Or they could get the cost of inference down enough that even at $200 it is profitable for them, and then stop caring if the higher bracket doesn't sell well. In that case $200 is what the market is ready to pay, and there will be a % of users who use it more than the rest, as is the case with any software.

    Either way, the only money here, i.e. the $200 (or more), is going only to Anthropic.

    [1] Perceived or real, there is a huge gulf in how Sonnet 4.5 is seen versus GPT 5.2-codex.

    • The combination of Claude Code and models could be a moat of its own; they are able to use RL to make their agent better - tool descriptions, reasoning patterns, etc.

      Are they doing it? No idea, it sounds ridiculously expensive; but they did buy Bun, maybe to facilitate integrating around CC. Cowork, as an example, uses CC almost as an infrastructure layer, and the Claude Agent SDK is basically LiteLLM for your Max subscription - also built on/wrapping the CC app. So who knows, the juice may be worth the RL squeeze if CC is going to be foundational to some enterprise strategy.

      Also IMO OpenCode is not better, just different. I’m getting great results with CC, but if I want to use other models like GLM/Qwen (or the new Nvidia stuff) it’s my tool of choice. I am really surprised to see people cancelling their Max subscriptions; it looks performative and I suspect many are not being honest.

      2 replies →

    • I’ve used both Claude and Codex extensively, and I already preferred Codex the model. I didn’t like the harness, but recently pi got good enough to be my daily driver, and I’ve since found that it’s much better than either CC or Codex CLI. It’s OSS, very simple and hackable, and the extension system is really nice. I wouldn’t want to go back to Claude Code even if I were convinced the model were much better - given that I already preferred the alternative it’s a no-brainer. OpenAI have officially allowed the use of pi with their sub, so at least in the short term the risk of a rug pull seems minimal.

      2 replies →

  • I hope the upcoming DeepSeek coding model puts a dent in Anthropic’s armor. Claude 4.5 is by far the best/fastest coding model, but the company is just too slimy and burning enough $$$ to guarantee enshittification in the near future.

  • Honestly, I'm a big Claude Code fan, despite how bad its CLI application is, because it was so much better than other models. Anthropic's move here pretty much signals to me that the model isn't much better than other models, and that other models are due for a second chance.

    If their model were truly ahead of the game, they wouldn't lock down the subsidized API in the same week they ask for 5-year retention on my prompts and permission to use them for training. Instead, they would be focusing on delivering the model more cheaply and broadly, regardless of which client I use to access it.

Neither OpenCode nor Anthropic is in the wrong IMO. OpenCode is trying to do a good thing by letting people choose which CLI to use with their subscription, and Anthropic is perfectly within their rights to enforce their own TOS. I don't have a problem with OpenCode breaking TOS, but then it's up to them to deal with the problems that stem from that. I use OpenCode with my Claude Code subscription, and it's really good, so personally I hope OpenCode continues to find ways around it.

Everyone goes on and on about how "Anthropic has the right to do this". Sure, but we also have the right to work around these blocks and to fight against behavior that uses their position to create a walled garden and vendor lock-in, using anti-competitive pricing and a temporary monopoly on the 'best' model.

  • You're not wrong, but most people on this forum are generally positive about companies using private APIs (which this is) for a competitive advantage.

    This is pretty undisputed, I think... So if we're going to condemn Anthropic for it, it'd be pretty one-sided unless we also took it up with any other companies doing so, like Apple, Google, ... and frankly basically all closed-source companies.

    It's just coincidentally more obvious with this Claude code API because the only difference between it and the public one is the billing situation...

    The only basis we'd have to argue otherwise is that the subscription predates Claude Code:

    https://www.anthropic.com/news/claude-pro (years ago)

    But I don't think we're strangers to companies pivoting the narrative like this.

OpenCode is doing nothing wrong and adversarial interoperability is the cornerstone of hacker ethos.

As such, the sentiment in this thread is chilling.

This is definitely Barbra Streisanding right now. I had never heard of OpenCode. But I sure have now! Will have to check it out. Doubt I’ll end up immediately canceling Claude Code Max, but we’ll see.

  • I don’t know if the Streisand Effect is relevant here since Anthropic will block any other uses of their private APIs, not just OpenCode. The private Claude Code API was never advertised nor sold as a general purpose API for use with any tool.

    OpenCode is an interesting tool but if this is your first time hearing of it you should probably be aware of their recent unauthenticated RCE issues and the slow response they’ve had to fixing it: https://news.ycombinator.com/item?id=46581095 They say they’re going to do better in the future but it’s currently on my list of projects to keep isolated until their security situation improves.

    • Imo I don't trust ANY of these tools to run in non-isolated environments.

      All of these tools are either

      - created by companies powered by VC money that never face consequences for mishandling your data

      - community vibecoded with questionable security practices

      These tools also need a substantial amount of access to be useful, so they are really hard to secure even if you try. Constantly prompting for approval leads to alert fatigue and eventually to a mistake leading to exfiltration.

      I suggest just sticking to LXC or a VM. Desktop (including Linux) userland security is just bad in general. I try to keep most random code I download for one-off tasks in containers.
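
      For example, a throwaway container per agent session is easy to script; here's a rough sketch using Docker (the image and agent command are placeholders, not any specific tool):

        import pathlib
        import subprocess

        project = pathlib.Path.cwd()

        # Run the agent in a disposable container: only the project directory is
        # mounted, everything else is ephemeral and discarded on exit.
        subprocess.run(
            [
                "docker", "run", "--rm", "-it",
                "-v", f"{project}:/work",
                "-w", "/work",
                "agent-sandbox-image",  # placeholder image with the agent preinstalled
                "my-agent",             # placeholder command for whatever agent you run
            ],
            check=True,
        )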

      4 replies →

    • A coding agent is just a massive RCE; what do you think happens when Claude gets prompt-injected? Although I don't defend not fixing an RCE.

      Absolutely all coding agents should be run in sandboxed containers, 24/7. If you do otherwise, please don't cry when you're pwned.

  • OpenCode is kind of a security disaster though: https://news.ycombinator.com/item?id=46581095. To be clear, I know all software has bugs, including security bugs. But that wasn't an obscure vulnerability, that was "our entire dev team fundamentally has no fucking clue what they're doing, and our security reporting and triage process is nonexistent". No way am I entrusting production code and secrets to that.

    • So is Claude. They nuked everyone's Claude app a few days ago by pushing a shoddy changelog that crashed the app during init. The team literally doesn't understand how to implement try...catch. The thing was clearly vibe-coded into existence.

    • Last week Claude Code (CC) had a bug that completely broke the app because of a change in the CC changelog markdown file.

      Claude Code’s creator has also said that CC is 100% AI generated these days.

  • Agreed. This is definitely free PR for OpenCode. I didn't try it myself until I heard the kerfuffle around Anthropic enforcing their ToS. It definitely has a much nicer UX than claude-code, so I might give the GPT subscription a shot sometime, given that it's officially supported with 3rd-party harnesses and GPT 5.2 doesn't appear to be that far behind Opus (based on what other people say).

I've been on Claude Code since before they even HAD subscriptions (API only), and since getting Max from day 1 I haven't once assumed that access was allowed outside of CC. Anyone who thinks otherwise is leaning into cognitive dissonance.

When using their web UI with Firefox and uBlock Origin, it regularly freezes the tab while the answer is being written out. Someone at Anthropic had to go and create a letter-by-letter typing animation with a GIF image and Sentry callbacks every five seconds, which ends up in an infinite loop.

I've seen reports about this bug affecting Firefox users since Q3 2025; they were reported through various channels.

Not a fan of them prioritizing the fight against OpenCode over fixing issues that affect paying users.

  • How can you be sure the issue is not with uBlock?

    • It also happens with extensions and the Firefox adblocker disabled. It might be connected to one of the Firefox anti-tracking features, but I was unable to figure it out. The profiler shows an infinite loop.

      I've found several reports about this issue. Seems they don't care about Firefox.

Asked Opus a question on OpenRouter. $0.30.

Asked MiniMax 2.1 that question. $0.008.

At some point it stops making sense. You cannot use "the good model" just for the hard bits without basically hand-writing your own harness. Even then, it will need full, uncached context.

Feels like consulting a premium lawyer to ask what time it is.

Soft plug: the team at nori just announced our own CLI today. Most people build on top of the provider layer, but we build on top of the agent layer. This means you can use your subscriptions, and you get the benefit of the best system prompts and tools that the base models were fine-tuned with.

Cliff posted a Show HN earlier today here: https://news.ycombinator.com/item?id=46616562

It’ll be interesting to see how far they take this cat and mouse game. Will “model attestation” become a new mechanism for enforcing tight coupling between client and inference endpoint? It could get weird, with secret shibboleths inserted into model weights…

  • Cat and mouse indeed... such is the way of the internet nomad

    There ain't no client validation mechanism you can't fake with enough time, patience, reverse-engineering, and good-old-fashioned stubborn hacker ethos.

  • I would be so furious if fucking LLM agents are what finally give browser attestation a foothold on our hardware.

Given that Claude Code is a scriptable CLI tool with an SDK, why can't OpenCode just call Claude instead of reusing its auth tokens?

  • You can't control it to the level of individual LLM requests and orchestration of those. And that is very valuable, practically required, to build a tool like this. Otherwise, you just have a wrapper over another big program and can barely do anything interesting/useful to make it actually work better.

    • What can't you do exactly? You can send Claude arbitrary user prompts—with arbitrary custom system prompts—and get text back. You can then put those text responses into whatever larger system you want.
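
      For example, something like this minimal wrapper over Claude Code's non-interactive print mode (flag names are from memory and may differ between versions; check claude --help):

        import json
        import subprocess

        def ask_claude_code(prompt: str, extra_system: str = "") -> str:
            # -p runs a single non-interactive turn; the JSON output format,
            # the "result" field, and --append-system-prompt are from memory.
            cmd = ["claude", "-p", prompt, "--output-format", "json"]
            if extra_system:
                cmd += ["--append-system-prompt", extra_system]
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return json.loads(out.stdout)["result"]

        print(ask_claude_code("Summarize what src/main.rs does in two sentences."))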

      3 replies →

  • This is what ACP and https://github.com/zed-industries/claude-code-acp enable. ACP controls agents: there is native support in Copilot CLI and Gemini, and adapters for Claude Code and Codex.
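
    Rough TLDR: the client spawns the agent as a subprocess and the two speak JSON-RPC over stdio. A sketch of the shape of an exchange (the binary name and method names are approximations from memory; check the ACP spec and the adapter README):

      import json
      import subprocess

      # Spawn the adapter as a subprocess; the binary name here is an assumption.
      agent = subprocess.Popen(["claude-code-acp"], stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE, text=True)

      def send(msg: dict) -> None:
          agent.stdin.write(json.dumps(msg) + "\n")
          agent.stdin.flush()

      send({"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": 1}})
      send({"jsonrpc": "2.0", "id": 2, "method": "session/new", "params": {"cwd": "."}})
      # A real client would read the session id out of the session/new response first.
      send({"jsonrpc": "2.0", "id": 3, "method": "session/prompt",
            "params": {"prompt": [{"type": "text", "text": "Rename foo() to bar() across the repo"}]}})

      # The agent streams updates (plans, tool calls, diffs, text) back as
      # notifications on stdout until the turn completes.
      for line in agent.stdout:
          print(line.rstrip())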

    • https://platform.claude.com/docs/en/agent-sdk/overview#get-s... reads to me like you have to use the public API for the Claude Agent SDK, not a Claude Code plan:

      > Unless previously approved, we do not allow third party developers to offer Claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.

    • Wow. ACP is used within Zed, so I guess Zed is safe with ACP using Claude Code.

      I wonder if OpenCode could use the ACP protocol as well. ACP seems to be a good abstraction; I should probably learn more about it. Any TLDRs on how it works?

      4 replies →

I believe LLM providers should ultimately be utilities from a consumer perspective, like water suppliers. I own the faucet, washer, bathtub, and can switch suppliers at will. I’ve been working on a FOSS client for them for nearly three years.

I hope that explains why the following is purely a factual distinction, not an excuse or an attempt to empathize.

The difference between the other entities named and OpenCode is this:

OpenCode uses people’s Claude Code subscriptions. The other entities use the API.

Specifically, OpenCode reverse‑engineers Claude Code’s OAuth endpoints and API, then uses them. This is harmful from Anthropic's perspective because Claude Code is subsidized relative to the API.

Edit: I’m getting “You’re posting too fast” when replying to mr_mitm. For clarity, there is no separate API subscription. Anthropic wants you to use one of two funnels for coding with their LLMs: 1. The API (through any frontend), or 2. A subscription through an Anthropic‑owned frontend.

  • You're hitting an important point. I might go on a tangent here.

    It's up to operating systems to offer a content-consumption experience for end users that reduces platforms back to their original, most basic offerings. They all try to force you into their applications, which are full of tracking, advertisements, upsells, and anti-consumer interface design decisions.

    Ideally the operating system would untangle the content from these applications and allow the end user to consume the content in the way that they want. For example, YouTube offers search, video, and comments; the operating system should extract these three things and create a good UI around them, while discarding the rest. Playlists and viewing history can all be managed in the offline part of the application. Spotify offers music, search, and lyrics, but they want you to watch videos and use social media components in their very opinionated UIs, while actively fighting your attempts to create a local backup of your music library.

    Software like adblockers, yt-dlp and streamlink are already solving parts of these issues by untangling content from providers for local consumption in a trusted environment. For me the fight by Anthropic against OpenCode fits into this picture.

    These companies are acting hostile even towards paying customers, each of them trying to build their walled gardens.

  • I believe they want you to use the API subscription if you want to use their service with OpenCode. It's possible, just more expensive.

    • That is analogous to the water company charging you more if you use a faucet from another company. It's not fair competition.

      That's why we are supposed to have legislation ensuring that utilities and common carriers can't behave that way.

      3 replies →

  • Fwiw, your main point seems scattered across your post where sentences refer to supposed context established by other sentences. It's making it hard to understand your position.

    Maybe try the style where you start off with your position in a self-contained sentence, and then write a paragraph elaborating on it.

    • Also, they should try editing their post less frequently. Hard to have a discussion this way.

  • It's exactly like water. Use their API, and you pay for as much water as you drink. But visit them in their pub, and you get a pretty big buffet with lots of water for a one-time price.

  • This is what the APIs are for. You pay for what you use, just like water.

    • We have a flat-rate minimum charge or a minimum tariff for water service here.

      It means that even though the cost depends on usage, you are billed at least a fixed minimum amount, regardless of how little water you actually use.

I don't understand: what's the threat from a CLI that is useless without AI models, when Anthropic could be one of the models it supports?

  • Switching models is too easy and the models are turning into commodities. They want to own your dev environment, for which they can ultimately charge more than for access to their model alone.

  • I think the focus on OpenCode is distorting the story. If any tool tried to use the CC API instead of the regular API they’d block it.

    Claude Code as a product doesn’t use their pay-per-call API, but they’ve never sold the Claude Code endpoint as a cheaper way to access their models without paying for the normal API.

While Anthropic can choose which tools use their API or subscription, I never fully understood what they gain from having the subscription explicitly work only with Claude Code. Is the issue that it disincentivizes the use of their API?

  • It’s basic market segmentation.

    They gave Claude Code a discount to make it work as a product.

    The API is priced for all general purpose usage.

    They never sold the Claude Code endpoint as a cheaper general purpose API. The stories about “blocking OpenCode” are getting kind of out of hand because they’d block any use of the Claude Code endpoint that wasn’t coming from their Claude Code tool.

  • Perhaps concentrated use of Claude Code increases their perceived market value.

    It also perhaps helps preserve some moat around their product/service.

    • And telemetry, tooling reports, usage signals like Claude Code signing PRs on GitHub, and things like that.

  • Are they ZDR (zero data retention) with prompts and completions, possibly relying on usage statistics from their CLI to infer how people are using it?

  • Owning the client gives them full control over which model to use for which query, prompt caching, rate limiting and lots more. So they can drive massive savings for the ~same output over just giving unrestricted access to the API.

    • Wouldn’t most of the savings happen on the server side anyway? I would be very surprised if Claude Code did those on the client side.

  • The issue is that Claude Code is cheap because it uses the API's unused capacity. These kinds of circumventions hurt them both ways: one, they don't know how to estimate API demand, and two, other harnesses are burstier by nature (e.g. parallel calls) than Claude Code, so it screws over other legit users. Claude Code very rarely makes parallel calls for context commands etc., but these ones do.

    Re the whole unused-capacity point: that's the nature of inference on GPUs. In any cluster you can batch inputs (i.e. it takes roughly the same time for, say, 1 query as for 100, since they can be parallelized), and now continuous batching[1] exists. Given the bursty nature of API requests, clusters would sit at 40%-50% of peak API capacity, so it makes sense to divert the slack to subscriptions: it reduces API costs in the future and gives Anthropic a way to monetize unused capacity (a toy sketch of the idea follows below). But if everyone does it, then there is no unused capacity to manage and everyone loses.

    [1]: https://huggingface.co/blog/continuous_batching
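
    The toy sketch mentioned above: bursty request arrivals against a fixed number of decode slots, with continuous batching refilling free slots each step. The numbers are made up purely for illustration:

      import random

      random.seed(0)
      SLOTS, STEPS = 8, 10_000   # concurrent decode slots on one "GPU", simulated steps
      queued, running, busy = [], [], 0

      for _ in range(STEPS):
          # Bursty arrivals: usually nothing, occasionally a burst of 4 requests.
          if random.random() < 0.01:
              queued += [random.randint(50, 150) for _ in range(4)]  # decode steps each needs

          # Continuous batching: refill free slots from the queue every step,
          # instead of waiting for the whole batch to drain first.
          while queued and len(running) < SLOTS:
              running.append(queued.pop(0))

          busy += len(running)
          running = [t - 1 for t in running if t > 1]

      print(f"average slot utilization: {busy / (SLOTS * STEPS):.0%}")

    Even with per-step refill, average utilization lands around half of capacity here, which is exactly the slack a predictable subscription workload can soak up.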

    • Your suggested functionality is server side, not client side.

      > it uses the API's unused capacity

      I see no waiting or scheduling on my usage; it runs at what appears to be full speed until I hit my 4-hour / 7-day limit, and then it stops.

      Claude Code is cheap (via a subscription) because it is burning piles of investor cash, while making a bit back on API / pay-per-token users.

      1 reply →

    • They have rate limits for this purpose. Many folks run Claude Code instances in parallel, which has roughly the same characteristics.

      1 reply →

This is exactly as if an open-source project called OpenVideo pretended to be a Netflix/Prime/HBO/AppleTV+ client and allowed access to content that way, skipping the official clients.

Then they get angry when their use is blocked.

Only in this case they can 100% use the service via a paid API.

Have had Max for a while; funny thing, OpenCode still sorta works with my CC Max subscription. That said, after a while OpenCode just hangs. My workflow involves saving state frequently: I cancel, open back up, and continue; then it's performant for maybe 2-3 token context windows, repeat.

Yeah, Pro/Max access requires Claude Code. You should use the API if you want to build a tool on it.

I didn't know, guessing some others don't either:

"The open source AI coding agent

Free models included or connect any model from any provider, including Claude, GPT, Gemini and more."

This is ironic timing, given that I was just banned for vibe coding and abusing my own desktop Hinge client that relied on their API.

> This script demonstrates that Anthropic has specifically blocked

> the phrase "You are OpenCode" in system prompts

You can get around this by making an agent in OpenCode whose prompt doesn't mention OpenCode at all, e.g. "You're an agent that uses Claude Opus...", and it will just work.

Well, using the Claude Pro/Max Claude Code API without Claude Code, instead of the actual API they monetize, goes against their ToS.

I don't like it either, but it is what it is.

If I gave out free water refills when you used my brand XYZ water bottle, you should not cry that you don't get free refills for your ABC-branded bottle.

It may be scummy, but it does make sense.

Meh, if you want access to the API then pay for the API. It's as simple as that.

  • Well, they are paying. Just not for the product Anthropic wants to sell. Really, at root this is a marketing failure. They really, really want to push the Claude CLI as a loss leader, and are having to engage in this disaster of an anti-PR campaign to plug all the leaks from people sneaking around.

    The root cause is and remains their pricing: the delta between their token billing and their flat fee is just screaming to be exploited by a gray market.

  • It’s because their models burn tokens like crazy. API use is way too expensive

    Edit: or should I say, the subscription is artificially cheap

    • While the subscription is definitely subsidized (technically cross-subsidized, because the subsidy comes from users who pay but barely use it), Claude Code also does a ton of prompt caching that cuts its LLM costs. I have done many hours-long coding sessions and built entire websites using the latest Opus, and the final tally came to something like $4, whereas without caching it would have been $25-30.
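
      For what it's worth, the same caching is available over the raw API too; it's opt-in per content block, roughly like this (model name and file are placeholders; see the prompt-caching docs for current details):

        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        resp = client.messages.create(
            model="claude-opus-4-5",  # placeholder; use whatever model you're actually on
            max_tokens=1024,
            system=[{
                "type": "text",
                "text": open("project_context.md").read(),  # the large, stable prefix
                "cache_control": {"type": "ephemeral"},      # cache it; later reads are billed at a fraction of the input price
            }],
            messages=[{"role": "user", "content": "Why does the login flow 500 on empty passwords?"}],
        )
        print(resp.usage)  # cache_creation_input_tokens vs cache_read_input_tokens shows the savings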

      1 reply →

    • > API use is way too expensive

      Cry me a river - I never stop hearing how developers think their time is so valuable that no amount of AI use could possibly not be worth it. Yet suddenly, paying for what you use is "too expensive".

      I'm getting sick of costs being distorted. It's resulting in dysfunctional methodologies where people are spinning up a ridiculous number of agents in the background, burning tokens to grind out solutions where a modicum of oversight or direction from a human would result in 10x less compute. At the very least the costs should be realised by the people doing this.

      1 reply →

  • This level of hypocrisy is comical. Exploiting the pricing gap between API usage and subscription leads to vastly increased efficiency and productivity, therefore it should be legally protected. That's the same argument made when it comes to LLM training and copyright.

Please stop spreading this nonsense. Anthropic is not blocking OpenCode. You can use all their models within OpenCode via the API. Anthropic simply let Dax and team use unlimited plans for the past year or so; I don't even know if it was official. I find this a bit comical and immature. If you want to use the models, just pay for them. Why are people trying to nickel-and-dime the tools that they use day in, day out?

  • You can clearly see this by running the provided gist. Sending “You are OpenCode” in the system prompt fails, but not if you replace the name with another tool name (e.g. “You are Cursor”, “You are Devin”). That's a pretty blatant difference in behavior based on a blacklisted value.
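
    In spirit the probe just varies the second line of the system prompt and checks which requests get rejected; sketched below with placeholders for the auth specifics the gist actually uses (a Claude Code OAuth token and whatever extra headers that path requires):

      import requests

      def probe(second_line: str) -> int:
          # Auth and model below are placeholders; the gist supplies a Claude Code
          # OAuth token and the headers that go with it.
          r = requests.post(
              "https://api.anthropic.com/v1/messages",
              headers={
                  "authorization": "Bearer <claude-code-oauth-token>",
                  "anthropic-version": "2023-06-01",
              },
              json={
                  "model": "claude-sonnet-4-5",
                  "max_tokens": 16,
                  "system": "You are Claude Code, Anthropic's official CLI for Claude.\n\n" + second_line,
                  "messages": [{"role": "user", "content": "ping"}],
              },
          )
          return r.status_code

      print(probe("You are OpenCode."))  # reportedly rejected
      print(probe("You are Cursor."))    # reportedly accepted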

    • This is not how business is conducted in the real world. You can’t just hack something together and expect the other party to let you “get away” with it indefinitely. If your product relies on some other vendor, then do it properly with ACTUAL contracts. People in tech can be so entitled.

I do not understand the stubbornness about wanting to use the auth part. Locally, just call Claude Code from your harness, or better, use the Claude Agent SDK; both have clear auth and are permitted according to Anthropic. But saying that they want to use this auth as a substitute for the API is a different issue altogether.