Anthropic officially bans using subscription auth for third party use

9 hours ago (code.claude.com)

It might be some confirmation bias on my part, but it feels as if companies are becoming more and more hostile to their API users. Recently Spotify basically nuked their API with zero urgency to fix it, Reddit has a whole convoluted npm package you're obliged to use to create a bot, and Facebook requires you to provide registered company and tax details even for development with some permissions. Am I just an old man screaming at clouds about how APIs used to be actually useful and intuitive?

  • They put no limits on the API usage, as long as you pay.

    Here, they put limits on the "under-cover" use of the subscription. If they can offer a subscription that is relatively cheap compared to direct API use, it's because they control the stack end-to-end: the application running on your system (Claude Code, Claude Desktop) and their own systems.

    When you subscribe to these plans, that is the "contract": you can only use them through their tools. If you want full freedom, use the API, with per-token pricing.

    For me, this is fair.

    • > If they can provide a relatively cheap subscription against the direct API use

      Except they can't. Their costs are not magically lower when you use claude code vs when you use a third-party client.

      > For me, this is fair.

      This is, plain and simple, a tie-in sale of claude code. I am particularly amused by people accepting it as "fair" because in Brazil this is an illegal practice.

      3 replies →

    • I think what most people don't realize is that running an agent 24/7 fully automated is burning a huge hole in their profitability. Who even knows how big it is. It could be getting into the 8/9 figures a day for all we know.

      There's this pervasive idea left over from the pre-LLM days that compute is free. If you want to rent your own H200x8 to run your Claude model, that's literally going to cost $24/hour. People are just not thinking like that: "I have my home PC, it does this stuff, I can run it 24/7 for free."
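
      A rough back-of-envelope comparison, with every figure an illustrative assumption rather than a quoted price:

        # Dedicated GPU rental vs. a flat subscription (all numbers assumed).
        hourly_rate = 24.0            # assumed $/hour for an 8x H200 node
        hours_per_month = 24 * 30

        dedicated_cost = hourly_rate * hours_per_month   # ~$17,280/month
        subscription_cost = 200.0                        # top-tier flat plan

        print(f"dedicated node:    ${dedicated_cost:,.0f}/month")
        print(f"flat subscription: ${subscription_cost:,.0f}/month")
        print(f"ratio:             {dedicated_cost / subscription_cost:.0f}x")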

      7 replies →

    • It would be less of an issue if Claude Code were actually the best coding client and somehow reduced the number of tokens used. But it's not. I get more done with fewer tokens via OpenCode. And in the end, I hit 100% usage at the end of the week anyway.

      1 reply →

    • I don't see how it's fair. If I'm paying for usage, and I'm using it, why should Anthropic have a say on which client I use?

      I pay them $100 a month and now for some reason I can't use OpenCode? Fuck that.

      20 replies →

    • Their subscriptions aren't cheap, and it has nothing really to do with them controlling the system.

      It's just price differentiation - they know consumers are price sensitive, and that companies wanting to use their APIs to build products so they can slap AI on their portfolio and get access to AI-related investor money can be milked. On the consumer-facing front, they live off branding and if you're not using claude code, you might not associate the tool with Anthropic, which means losing publicity that drives API sales.

    • It doesn't really make sense to me because the subscriptions have limits too.

      But I agree they can impose whatever user hostile restrictions they want. They are not a monopoly. They compete in a very competitive market. So if they decide to raise prices in whatever shape or form then that's fine.

      Arbitrary restrictions do play a role for my own purchasing decisions though. Flexibility is worth something.

  • Everyone has heard the word "enshittification" at this point, and this falls in line. But if you haven't read the book [0], it's a great deep dive into the topic.

    But the real issue is that these companies, once they have any market leverage, do things in their best interest to protect the little bit of moat they've acquired.

    [0] https://www.mcdbooks.com/books/enshittification

  • Can you sell ads via the API? If the answer is no, then this "feature" will be at the bottom of the list

    • They can sell API access via transparent pricing.

      Instead, many, many websites (especially in the music industry) have some sort of funky API that you can only get access to if you have enough online clout. Very few are transparent about what "enough clout" even means or how much it'd cost you, and there's like an entire industry of third-party API resellers that cost like 10x more than if you went straight to the source. But you can't, because you first have to fulfill some arbitrary criteria that you can't even know about ahead of time.

      It's all very frustrating to deal with.

  • I think that these companies are realizing that as the barrier to entry for building a frontend gets lower and lower, APIs will become the real moat. If you move away from their UI they will lose ad revenue, viewer stats, in short the ability to optimize how to harness your full attention. It would be great to have some stats on hand and see whether and by how much active API usage has increased or decreased in the last two years, as I would not be surprised if it had increased at a much faster pace than in the past.

    • > the barrier to entry to build a frontend gets lower

      My impression is the opposite: frontend/UI/UX is where the moat is growing because that's where users will (1) consume ads (2) orchestrate their agents.

  • APIs leak profit and control vs their counterpart SDK/platforms. Service providers use them to bootstrap traffic/brand, but will always do everything they can to reduce their usage or sunset them entirely if possible.

  • Given the Cambridge Analytica scandal, I don't take too much issue with FB making their APIs a little tougher to use

  • I don't think it's particularly hard to figure out: APIs have been particularly at risk of being exploited for negative purposes due to the explosion of AI-powered bots

  • I'm predicting that there will be a new movement to make everything an MCP. It's now easier for non-technical people to consume an API.

  • Facebook doing that is actually good, to protect consumers from data abuse after incidents like Cambridge Analytica. They are holding businesses that touch your personal data responsible.

    • > Facebook doing that is actually good, to protect consumers from data abuse after incidents like cambridge analytica.

      There is nothing here stopping Cambridge Analytica from doing this again; they will provide whatever details are needed. But a small pre-launch personal project that might use a Facebook publishing application can't be developed or tested without first going through all the bureaucracy.

      Never mind the non-profit "free" application you might want to create on the FB platform, let's say a share Chrome extension, "Post to my FB", for personal use: you can't do this, because you can't create an application without a company and VAT/tax documents. It's hostile imo.

      Before, you could create an app, link your ToS, privacy policy etc., verify your domain via email, and then if users wanted to use your application they would agree; this is how a lot of companies still do it. I'm actually not sure why FB does this specifically.

    • Facebook knew very early and very well about the data harvesting that was going on at Cambridge Analytica through their APIs. They acted so slowly and so leniently that it's IMO hard to believe they did not implicitly support it.

      > to protect consumers

      We are talking about Meta. They have never, and will never, protect customers. All they protect is their wealth and their political power.

    • Is it? I’ve never touched Facebook api, but it sounds ridiculous that you need to provide tax details for DEVELOPMENT. Can’t they implement some kind of a sandbox with dummy data?

      4 replies →

    • They just want people to use facebook. If you can see facebook content without being signed in they have a harder time tracking you and showing you ads.

  • APIs are the best when they let you move data out and build cool stuff on top. A lot of big platforms do not really want that anymore. They want the data to stay inside their silo, so access gets slower, harder, and more locked down. So you are not just yelling at the cloud; this feels pretty intentional.

  • This is sort of true!

    Spotify in particular is just patently the very worst. They released an amazing and delightful app SDK in 2011, allowing for really neat apps inside the desktop app. Then they cancelled it by 2014. It feels like their entire ecosystem has only ever gone downhill. Their car device was cancelled nearly immediately. Every API just gets worse and worse. Remarkable to see a company in such a constant downward slide. The Spotify Graveyard is, imo, a place of significantly less honor than the Google Graveyard. https://web.archive.org/web/20141104154131/https://gigaom.co...

    But also, I feel like this broad repulsive trend is such an untenable position now that AI is here. Trying to make your app an isolated disconnected service is a suicide pact. Some companies will figure out how to defend their moat, but generally people are going to prefer apps that allow them to use the app as they want, increasingly, over time. And they are not going to be stopped even if you do try to control terms!

    Were I a smart, engaged company, I'd be trying to build WebMCP access as soon as possible. Adoption will be slow, this isn't happening fast, but people who can mix human + agent activity on your site are going to be delighted by the experience, and that will spread!

    WebMCP is better IMHO than conventional APIs because it layers into the experience you are already having. It's not a separate channel; it can build and use the session state of your browsing to do the things. That's a huge boon for users.

I'm only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.

Opus has gone downhill continuously in the last week (and before you start flooding me with replies, I've been testing Opus/Codex in parallel for the last week, and I have plenty of examples of Claude going off track, then apologising, then saying "now it's all fixed!" and then only fixing part of it, while Codex nailed it on the first shot).

I can accept specific model limits, not ups and downs in reliability. And don't even get me started on how bad the Claude client has become. Others are finally catching up, and gpt-5.3-codex is definitely better than opus-4.6.

Everyone else (Codex CLI, Copilot CLI, etc.) is going open source; they are going closed. Others (OpenAI, Copilot, etc.) explicitly allow using OpenCode; they explicitly forbid it.

This hostile behaviour is just the final straw.

  • I'm unsure exactly in what way you believe it has gone "downhill", so this isn't aimed at you specifically but more at a general pattern I see.

    That pattern is people complaining that a particular model has degraded in quality of its responses over time or that it has been “nerfed” etc.

    Although the models may evolve, and the tools calling them may change, I suspect a huge amount of this is simply confirmation bias.

  • > Opus has gone downhill continuously in the last week

    Is a week the whole attention timespan of the late 2020s?

  • Opus 4.6 genuinely seems worse than 4.5 was in Q4 2025 for me. I know everyone always says this and anecdote != data but this is the first time I've really felt it with a new model to the point where I still reach for the old one.

    I'll give GPT 5.3 codex a real try I think

    • I asked Codex 5.3 and Opus 4.6 to write me a macOS application with a certain set of requirements.

      Opus 4.6 wrote me a working macOS application.

      Codex wrote me an HTML + CSS mockup of a macOS application that didn't even look like a macOS application at all.

      Opus 4.5 was fine, but I feel that 4.6 is more often on the money on its implementations than 4.5 was. It is just slower.

      4 replies →

  • It's the most overrated model there is. I do Elixir development primarily, and the model sucks balls in comparison to Gemini and GPT-5x. But the Claude fanboys will swear by it and will attack you if you ever say even something remotely negative about their "god-sent" model. It fails miserably even in basic chat and research contexts and constantly goes off track. I wired it up to fire off some tasks. It kept hallucinating and swearing it had done them when it hadn't even attempted to. It was so unreliable I had to revert to Gemini.

    • It might simply be that it was not trained enough in Elixir RL environments compared to Gemini and GPT. I use it for both TS and Python and it's certainly better than Gemini. For Codex, it depends on the task.

  • No offense, but this is the most predictable outcome ever. The software industry at large does this over and over again, and somehow we're surprised. Provide a thing for free or for cheap, then slowly draw back availability once you have dominant market share or find yourself needing money (ahem).

    The providers want to control what AI does to make money or dominate an industry so they don't have to make their money back right away. This was inevitable, I do not understand why we trust these companies, ever.

    • No offense taken here :)

      First, we are not talking about a cheap service here. We are talking about a monthly subscription which costs 100 USD or 200 USD per month, depending on which plan you choose.

      Second, it's like selling me a pizza and then demanding I only eat it while sitting at your table. I want to eat the pizza at home. I'm not getting 2-3 more pizzas; I'm still getting the same pizza others are getting.

  • No developer writes the same prompt twice. How can you be sure something has changed?

    • I regularly run the same prompts twice and through different models. Particularly, when making changes to agent metadata like agent files or skills.

      At least weekly I run a set of prompts to compare Codex/Claude against each other. This is quite easy; the prompt sessions are just text files that are saved.

      The problem is doing it enough for statistical significance and judging the output as better or not.

I really hope someone from any of those companies (if possible all of them) would publish a very clear statement regarding the following question: If I build a commercial app that allows my users to connect using their OAuth token coming from their ChatGPT/Claude etc. account, do they allow me (and their users) to do this or not?

I totally understand that I should not reuse my own account to provide services to others, as direct API usage is the obvious choice here, but this is a different case.

I am currently developing something that would be the perfect fit for this OAuth based flow and I find it quite frustrating that in most cases I cannot find a clear answer to this question. I don't even know who I would be supposed to contact to get an answer or discuss this as an independent dev.

EDIT: Some answers to my comment have pointed out that Anthropic's ToS are clear. I'm not saying they aren't if taken in a vacuum, yet in practice, even after this was published, some confusion remained online, in particular regarding whether OAuth token usage was still OK with the Agent SDK for personal use. If it happens to be, that would lead to other questions I personally cannot find a clear answer to, hence my original statement. Also, I am very interested in the stance of other companies on this subject.

Maybe I am being overly cautious here but I want to be clear that this is just my personal opinion and me trying to understand what exactly is allowed or not. This is not some business or legal advice.

  • I don't see how they can get more clear about this, considering they have repeatedly answered it the exact same way.

    Subscriptions are for first-party products (claude.com, mobile and desktop apps, Claude Code, editor extensions, Cowork).

    Everything else must use API billing.

    • And at that point, you might as well use OpenRouter's PKCE and give users the option to use other models..
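
      For reference, a minimal sketch of what a PKCE-style key exchange looks like from a client app. The verifier/challenge math is standard RFC 7636; the OpenRouter URL shapes and payload fields below are assumptions from memory and should be checked against the current docs before relying on them:

        # PKCE sketch in Python. RFC 7636 verifier/challenge generation is standard;
        # the OpenRouter endpoints and field names are assumptions, verify them
        # against the current documentation.
        import base64, hashlib, json, secrets
        import urllib.parse, urllib.request

        def make_pkce_pair():
            verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
            digest = hashlib.sha256(verifier.encode()).digest()
            challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
            return verifier, challenge

        verifier, challenge = make_pkce_pair()

        # 1) Send the user to the provider's auth page (assumed URL shape).
        auth_url = "https://openrouter.ai/auth?" + urllib.parse.urlencode({
            "callback_url": "https://example.app/callback",   # hypothetical callback
            "code_challenge": challenge,
            "code_challenge_method": "S256",
        })

        # 2) After the redirect, exchange the returned code for a user-scoped key
        #    (endpoint and payload are assumptions).
        def exchange_code(code: str) -> str:
            req = urllib.request.Request(
                "https://openrouter.ai/api/v1/auth/keys",
                data=json.dumps({"code": code, "code_verifier": verifier,
                                 "code_challenge_method": "S256"}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["key"]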

      These kinds of business decisions show how these $200.00 subscriptions for their slot/infinite jest machines basically light that $200.00 on fire, and in general how unsustainable these business models are.

      Can't wait for it all to fail. They'll eventually try to get as many people as possible to pay per token, while somehow getting people to use their verbose agentic tools that inflate revenue through inefficient context/output shenanigans.

      11 replies →

    • You are talking about Anthropic and indeed compared to OpenAI or GitHub Copilot they have seemed to be the ones with what I would personally describe as a more restrictive approach.

      On the other hand OpenAI and GitHub Copilot have, as far as I know, explicitly allowed their users to connect to at least some third party tools and use their quotas from there, notably to OpenCode.

      What is unclear to me is whether they are considering also allowing commercial apps to do that. For instance if I publish a subscription based app and my users pay for the app itself rather than for LLM inference, would that be allowed?

      1 reply →

    • Then why does the SDK support subscription usage? Can I at least use my subscription for my own use of the SDK?

    • Quick question, but what if I use Claude Code itself for the purpose?

      https://news.ycombinator.com/item?id=46912682

      This can make OpenCode work with Claude Code, and the added benefit is that OpenCode has a TypeScript SDK for automation while the backend is still running Claude Code, so technically it should work even with the new ToS?

      So in the case of the OP: maybe OpenCode TS SDK <-> Claude Code (using this tool or any other like it) <-> the OAuth sign-in option of Claude Code users?

      Also, Zed can use the ACP protocol itself to make Claude Code work, iirc. So is using Zed with CC still allowed?

      > I don't see how they can get more clear about this, considering they have repeatedly answered it the exact same way.

      This is quite frankly confusing. There's also the Claude Agent SDK thing which firloop and others talked about too; some say it's allowed, some say it's not. It's all quite confusing.

  • That’s very clearly a no, I don’t understand why so many people think this is unclear.

    You can't use Claude OAuth tokens for anything third-party. Any solution that exists worked because it pretended/spoofed to be Claude Code. Same for Gemini (Gemini CLI, Antigravity).

    Codex is the only one that got official blessing to be used in OpenClaw and OpenCode, and even that was against the ToS before they changed their stance on it.

  • I think you're just trying to see ambiguity where it doesn't exist because the looser interpretation is beneficial to you. It totally makes sense why you'd want that outcome and I'm not faulting you for it. It's just that, from a POV of someone without stake in the game, the answer seems quite clear.

  • It is pretty obviously no. API keys billed by the token: yes. OAuth to the flat-rate plans: no.

    > OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.

    • If you look at this tweet [1] and in particular responses under it, it still seems to me like some parts of it need additional clarification. For instance, I have seen some people interpret the tweet as meaning using the OAuth token is actually ok for personal experimentation with the Agent SDK, which can be seen as a slight contradiction with what you quoted. A parent tweet also mentioned the docs clean up causing some confusion.

      None of this is legal advice, I'm just trying to understand what exactly is allowed or not.

      [1] https://x.com/trq212/status/2024212380142752025?s=10

      4 replies →

  • > OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai.

    I think this is pretty clear - No.

    • So it's forbidden to use the Claude Mac app, then? I would say the ToS, as written, can't be enforced

  • Does https://happy.engineering/ need to use API keys, or can it use OAuth? It's basically a frontend for claude-cli.

    • It doesn't even touch auth, right?

      """ Usage policy

      Acceptable use

      Claude Code usage is subject to the Anthropic Usage Policy. Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK """

      That tool clearly falls under ordinary individual use of Claude code. https://yepanywhere.com/ is another such tool. Perfectly ordinary individual usage.

      https://yepanywhere.com/sdk-auth-clarification.html

      The ToS are confusing because just below that section it talks about authentication/credential use. If an app starts reading API keys / credentials, that starts falling into territory where they want a hard-line no.

  • Usually, it is already stated in their documentation (auth section). If a statement is vague, treat it as a no. It is not worth the risk when they can ban you at any time. For example, ChatGPT allows it, but Claude and Gemini do not.

    https://developers.openai.com/codex/auth

    • Maybe I am missing something from the docs of your link, but I unfortunately don't think it actually states anything regarding allowing users to connect and use their Codex quota in third party apps.

      4 replies →

  • One set of applications you can build with a subscription is to use the claude-go binary directly. The Humanlayer/Codelayer projects on GitHub do this. Granted, those are not ideal for building a subscription-based business that uses OAuth tokens from Claude and OpenAI. But you can build a business by building a development env and gating other features behind a paywall, or just offering enterprise service for certain features like vertical AI offerings (redpanda) for knowledge workers, voice-based interaction (there was a YC startup here the other day doing this, I think), structured outputs, and workflows. There is lots to build on.

The economic tension here is pretty clear: flat-rate subscriptions are loss leaders designed to hook developers into the ecosystem. Once third parties can piggyback on that flat rate, you get arbitrage - someone builds a wrapper that burns through $200/month worth of inference for $20/month of subscription cost, and Anthropic eats the difference.

What is interesting is that OpenAI and GitHub seem to be taking the opposite approach with Copilot/OpenCode, essentially treating third-party tool access as a feature that increases subscription stickiness. Different bets on whether the LTV of a retained subscriber outweighs the marginal inference cost.

Would not be surprised if this converges eventually. Either Anthropic opens up once their margins improve, or OpenAI tightens once they realize the arbitrage is too expensive at scale.

  • These subscriptions have limits... how could someone use $200 worth on $20/month? Isn't that the issue with the limits they set on a $20 plan? And couldn't a Claude Code user use that same $200 worth on $20/month? (And how do I do this?)

    • The limits on the Max subscriptions are more generous, and power users are generating losses.

      I'm rather certain, though I cannot prove it, that buying the same tokens would cost at least 10x more via the API. Anecdotally, my Cursor team usage was getting to around $700/month. After switching to Claude Code Max, I have so far only once hit the 3-hour limit window on the $100 sub.

      What I'm thinking is that Anthropic is taking a loss on users who use it a lot, but there are a lot of users who pay for Max and don't actually use it.

      With the recent improvements and increase of popularity in projects like OpenClaw, the number of users that are generating loss has probably massively increased.

      1 reply →

    • I'd agree with this. I ended up picking up a Claude Pro sub and am rather less than impressed with the volume allowance. I generally get about a dozen queries (including simple follow-ups/refinements/corrections) across a relatively small codebase, with prompts structured to minimize the parts of the code touched, and moving onto fresh contexts fairly rapidly, before getting cut off for their ~5-hour window. Doing that ~twice a day ends up hitting the weekly limit with about a day or two left on it.

      I don't entirely mind, and am just considering it an even better work:life balance, but if this is $200 worth of queries, then all I can say is LOL.

      1 reply →

    • The median subscriber generates about 50% gross margin, but some subscribers use 10x as much inference compute as others (due to using it more...), and it's a positively skewed distribution.

    • The usage limit on your $20/month subscription is not $20 of API tokens (if it were, why subscribe?). It's much, much higher, and you can hit the equivalent of $20 of API usage in a few days.
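
      A toy calculation of how fast metered usage can outrun a flat fee (token prices and daily volumes below are made up for illustration):

        # Flat subscription vs. metered per-token billing (illustrative numbers only).
        input_price_per_m = 15.0      # assumed $/1M input tokens
        output_price_per_m = 75.0     # assumed $/1M output tokens

        daily_input_tokens = 300_000  # an active coding day (assumed)
        daily_output_tokens = 40_000

        daily_api_cost = (daily_input_tokens / 1e6) * input_price_per_m \
                       + (daily_output_tokens / 1e6) * output_price_per_m   # $7.50/day here

        subscription = 20.0
        print(f"metered equivalent: ${daily_api_cost:.2f}/day; "
              f"a ${subscription:.0f} plan is 'used up' in {subscription / daily_api_cost:.1f} days")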

      4 replies →

I think I've made two good decisions in my life. The first was switching entirely to Linux around '05, even though it was a giant pain in the ass that was constantly behind the competition in terms of stability and hardware support. It took a while, but wow, no regrets.

The second appears to be hitching my wagon to Mistral even though it's apparently nowhere as powerful or featureful as the big guys. But do you know how many times they've screwed me over? Not once.

Maybe it's my use cases that make this possible. I definitely modified my behavior to accommodate Linux.

I don't think it's a secret that AI companies are losing a ton of money on subscription plans. Hence the stricter rate limits, new $200+ plans, push towards advertising etc. The real money is in per-token billing via the API (and large companies having enough AI FOMO that they blindly pay the enormous invoices every month).

  • They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.

    Banning third-party tools has nothing to do with rate limits. They're trying to position themselves as the Apple of AI companies - a walled garden. They may soon discover that screwing developers is not a good strategy.

    They are not 10× better than Codex; on the contrary, in my opinion Codex produces much better code. Even Kimi K2.5 is a very capable model that I find on par with Sonnet at least, very close to Opus. Forcing people to use ONLY a broken Claude Code UX with a subscription only ensures they lose the advantage they had.

    • > "just a few dollars per million tokens"

      Google AI Pro is like $15/month for practically unlimited Pro requests, each of which can take a million tokens of context (and then also perform thinking, free Google Search for grounding, and inline image generation if needed). This includes Gemini CLI, Gemini Code Assist (VS Code), the main chatbot, and a bunch of other vibe-coding projects which have their own rate limits or no rate limits at all.

      It's crazy to think this is sustainable. It'll be like Xbox Game Pass - start at £5/month to hook people in and before you know it it's £20/month and has nowhere near as many games.

      2 replies →

    • Inference might be cheap, but I'm 100% sure Anthropic has been losing quite a lot of money with their subscription pricing for power users. I can literally see the comparison between what my colleagues' Claude usage costs with an API key vs with a personal subscription, and the delta is just massive.

    • I’m not familiar with the Claude Code subscription, but with Codex I’m able to use millions of tokens per day on the $200/mo plan. My rough estimate was that if I were API billing, it would cost about $50/day, or $1200/mo. So either the API has a 6x profit margin on inference, the subscription is a loss leader, or they just rely on most people not to go anywhere near the usage caps.

      1 reply →

    • Of course they bundle R&D with inference pricing; how else could you recoup that investment?

      The interesting question is: In what scenario do you see any of the players as being able to stop spending ungodly amounts for R&D and hardware without losing out to the competitors?

      1 reply →

    • Didn't OpenAI spend like 10 billion on inference in 2025? Which is around the same as their total revenue?

      Why do people keep saying inference is cheap if they're losing so much money from it?

      2 replies →

    • > They are not losing money on subscription plans. Inference is very cheap - just a few dollars per million tokens. What they’re trying to do is bundle R&D costs with inference so they can fund the training of the next generation of models.

      You've described every R&D company ever.

      "Synthesizing drugs is cheap - just a few dollars per million pills. They're trying to bundle pharmaceutical research costs... etc."

      There's plenty of legit criticisms of this business model and Anthropic, but pointing out that R&D companies sink money into research and then charge more than the marginal cost for the final product, isn't one of them.

      3 replies →

  • The secret is there is no path to making that back.

    • The path is charging just a bit less than the salaries of the engineers they are replacing.

    • My crude metaphor to explain it to my family is that gasoline has just been invented and we're all being lent Bentleys to get us addicted to driving everywhere. Eventually we won't be given free Bentleys, and someone is going to be holding the bag when the infinite money machine finally has a hiccup. The tech giants are hoping their gasoline is the one we all crave when we're left depending on driving everywhere and the costs go soaring.

      17 replies →

  • Depends on how you do the accounting. Are you counting inference costs or are you amortizing next gen model dev costs. "Inference is profitable" is oft repeated and rarely challenged. Most subscription users are low intensity users after all.

  • I agree; unfortunately, when I've brought up before that they're losing money, I get jumped on with demands to "prove it", and I guess pointing at their balance sheets isn't good enough.

  • The question I have: how much are they _also_ losing on per-token billing?

    • From what I understand, they make money on per-token billing. Not enough to cover how much it costs to train, and not accounting for marketing, subscription services, and research for new models, but the more the API is used, the less money they lose.

      Finance 101 TL;DR: contribution margin = price per token - variable cost per token; this is positive.

      Profit = contribution margin × quantity - fixed costs.
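
      With made-up numbers, the distinction looks like this:

        # Contribution margin vs. profit, with purely illustrative numbers.
        price_per_m_tokens = 10.0     # assumed revenue per million tokens served
        variable_cost_per_m = 4.0     # assumed inference cost per million tokens

        contribution_margin = price_per_m_tokens - variable_cost_per_m   # +$6, positive

        tokens_served_m = 500_000     # millions of tokens served in the period (assumed)
        fixed_costs = 5_000_000_000   # training runs, research, salaries... (assumed)

        profit = contribution_margin * tokens_served_m - fixed_costs
        print(profit)  # positive margin per token, still a large overall loss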

      1 reply →

  • Why do you think they're losing money on subscriptions?

    • Does a GPU doing inference serve enough customers, for long enough, to bring in enough revenue to pay for a replacement GPU in two years (plus the power/running cost of the GPU + infrastructure)? That's the question you need to be asking.

      If the answer is yes, then they are making money on inference. If the answer is no, the market is going to have a bad time.
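
      Roughly, the shape of that question (every number below is a made-up assumption):

        # Does an inference GPU pay for its own replacement? Illustrative numbers only.
        gpu_price = 30_000.0                 # assumed purchase price of one accelerator
        lifetime_years = 2.0
        power_and_overhead_per_hour = 1.50   # assumed electricity + datacenter overhead
        utilization = 0.6                    # fraction of hours actually serving traffic
        revenue_per_busy_hour = 4.0          # assumed inference revenue when busy

        hours = lifetime_years * 365 * 24
        revenue = hours * utilization * revenue_per_busy_hour
        costs = gpu_price + hours * power_and_overhead_per_hour

        print(f"lifetime revenue: ${revenue:,.0f}, lifetime costs: ${costs:,.0f}")
        print("pays for its replacement" if revenue > costs else "does not pay for itself")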

  • But why does it matter which program you use to consume the tokens?

    That sounds like a confession that Claude Code is somewhat wasteful with tokens.

  • Honestly, I think I am already sold on AI. Who is the first company that is going to show us all how much it really costs and start the enshittification? First to market wins, right?

I just cancelled my Pro subscription. Turns out that Ollama Cloud with GLM-5 and qwen-coder-next is very close in quality to Opus, I never hit their rate limits even with two sessions running the whole day, and there is zero advantage for me in using Claude Code compared to OpenCode.

Your core customers are clearly having a blast building their own custom interfaces, so obviously the thing to do is update TOS and put a stop to it! Good job lol.

I know, I know, customer experience, ecosystem, gardens, moats, CC isn't fat, just big boned, I get it. Still a dick move. This policy is souring the relationship, and basically saying that Claude isn't a keeper.

I'll keep my eye-watering sub for now because it's still working out, but this ensures I won't feel bad about leaving when the time comes.

Update: yes yes, API, I know. No, I don't want that. I just want the expensive predictable bill, not metered corporate pricing just to hack on my client.

  • They'll all do this eventually.

    We're in the part of the market cycle where everyone fights for marketshare by selling dollar bills for 50 cents.

    When a winner emerges they'll pull the rug out from under you and try to wall off their garden.

    Anthropic just forgot that we're still in the "functioning market competition" phase of AI and not yet in the "unstoppable monopoly" phase.

    • "Naveen Rao, the Gen AI VP of Databricks, phrased it quite well:

      all closed AI model providers will stop selling APIs in the next 2-3 years. Only open models will be available via APIs (…) Closed model providers are trying to build non-commodity capabilities and they need great UIs to deliver those. It's not just a model anymore, but an app with a UI for a purpose."

      ~ https://vintagedata.org/blog/posts/model-is-the-product A. Doria

      > New Amp Free ($10) access is also closed off as of last night

    • Unstoppable monopoly will be extremely hard to pull off given the number of quality open (weights) alternatives.

      I only use LLMs through OpenRouter and switch somewhat randomly between frontier models; they each have some amount of personality but I wouldn't mind much if half of them disappeared overnight, as long as the other half remained available.

      17 replies →

    • > They'll all do this eventually

      And if the frontier continues favouring centralised solutions, they'll get it. If, on the other hand, scaling asymptotes, the competition will be running locally. Just looking at how much Claude complains about me not paying for SSO-tier subscriptions to data tools when they work perfectly fine in a browser is starting to make running a slower, less-capable model locally competitive with it in some research contexts.

  • Imagine having a finite pool of GPUs worth more than their weight in gold, and an infinite pool of users obsessed with running as many queries against those GPUs in parallel as possible, mostly to review and generate copious amounts of spam content primarily for the purposes of feeling modern, and all in return for which they offer you $20 per month. If you let them, you must incur as much credit liability as OpenAI. If you don't, you get destroyed online.

    It almost makes me feel sorry for Dario despite fundamentally disliking him as a person.

    • Hello old friend, I've been expecting you.

      First of all, custom-harness parallel-agent people are so far from the norm, and certainly not on the $20 plan, which doesn't even make sense because you'd hit the token limit in about 90 seconds.

      Second, token limits. Does Anthropic secretly have over-subscription issues? Don't know, don't care. If I'm paying a blistering monthly fee, I should be able to use up to the limit.

      Now I know you've got a clear view of the typical user, but FWIW, I'm just an aging hacker using CC to build some personal projects (feeling modern ofc) but still driving, no yolo or gas town style. I've reached the point where I have a nice workflow, and CC is pretty decent, but it feels like it's putting on weight and adding things I don't want or need.

      I think LLMs are an exciting new interface to computers, but I don't want to be tied to someone else's idea of a client, especially not one that's changing so rapidly. I'd like to roll my own client to interface with the model, or maybe try out some other alternatives, but that's against the TOS, because: reasons.

      And no, I'm not interested in paying metered corporate rates for API access. I pay for a Max account, it's expensive, but predictable.

      The issue is Anthropic is trying to force users into using their tool, but that's not going to work for something as generic as interfacing with an LLM. Some folks want emacs while others want vim, and there will never be a consensus on the best editor (it's nvim btw), because developers are opinionated and have strong preferences for how they interface with computers. I switched to CC maybe a year ago and haven't looked back, but this is a major disappointment. I don't give a shit about Anthropic's credit liability, I just want the freedom to hack on my own client.

      3 replies →

    • Why do you fundamentally dislike him as a person?

      The only thing I've seen from him that I don't like is the "SWEs will be replaced" line (which is probably true and it's more that I don't like the factuality of it).

      2 replies →

  • Don’t be mad at it, be happy you were able to throw some of that sweet free vc money at your hobbies instead of paying the market rate.

    • Oh I'm not mad, it's more of a sad clown type of thing. I'm still stoked to use it for now. We can always go back to the old ways if things don't work out.

  • They offer an API for people who want to build their own clients. They didn't stop people from being able to use Claude.

  • So basically you are saying Anthropic models are indispensable but you are too cheap to pay for it.

    • Nowhere did I say they're indispensable, and I explicitly said I'm still paying for it. If all AI companies disappear tomorrow that's fine. I'm just calling out what I think is tone-deaf move, by a company I pay a large monthly bill to.

  • Sure they are having a blast: they are paying $20 instead of getting charged hundreds for tokens.

    It's simple, follow the ToS

I've been paying for a Max subscription for a long time. I like their model, but I hate their tools:

- Claude Desktop looks like a demo app. It's slow to use and so far behind the Codex app that it's embarrassing.

- Claude Code is buggy as hell, and I don't think I've ever used a CLI tool that consumes so much memory and CPU. Let's not talk about the feature parity with other agents.

- Claude Agent SDK is poorly documented, half finished, and is just a thin wrapper around a CLI tool…

Oh and none of this is open source, so I can do nothing about it.

My only option to stay with their model is to build my own tool. And now I discover that using my subscription with the Agent SDK is against the terms of use?

I'm not going to pay 500 USD in API credits every month, no way. I have to move to a different provider.

  • I agree that Claude Code is buggy as hell, but:

    > Let's not talk about the feature parity with other agents.

    What do you mean feature parity with other agents? It seems to me that other CLI agents are quite far from Claude Code in this regard.

    • Which other CLI agents are those? Because I've found OpenCode to be A LOT better than Claude Code.

Not according to this guy who works on Claude Code: https://x.com/trq212/status/2024212378402095389?s=20

What a PR nightmare, on top of an already bad week. I’ve seen 20+ people on X complaining about this and the related confusion.

  • No, it is prohibited. They're just updating the docs to be clearer about their position, which hasn't changed. Their docs were unclear about it.

  • woof, does Anthropic not have a comms team and a clear comms policy for employees that aren’t on that comms team?

  • Incorrect, the third-party usage was already blocked (banned) but it wasn't officially communicated or documented. This post is simply identifying that official communication rather than the inference of actual functionality.

The analogy I like to use when people say "I paid" is that you can't pay for a buffet and then take all the food home for free.

This is how you gift-wrap the agentic era for the open-source Chinese LLMs. Devs don't need the best model; they need one without lawyers attached.

Going to keep using the Agent SDK with my Pro subscription until I get banned. It's not OpenClaw, it's my own project. It started by just proxying requests to Claude Code through the command line; the SDK just made it easier. Not sure what difference it makes to them whether I have a cron job send Claude Code requests or an Agent SDK request. Maybe if it's just me and my toy they don't care. We'll see how they clarify tomorrow.

Anthropic is dead. Long live open platforms and open-weight models. Why would I need Claude if I can get Minimax, Kimi, and GLM for a fraction of the price?

  • To get comparable results you need to run those models on at least prosumer hardware, and it seems that two beefed-up Mac Studios are the minimum. Which means that instead of buying this hardware you could purchase Claude, Codex, and many other subscriptions for the next 20 years.

    • Or you purchase a year's worth of almost unlimited MiniMax coding plan for a price you'd pay for 15 days of limited Claude usage.

      And as a bonus, you can choose your harness. You don't have to suffer CC.

      And if something better appears tomorrow, you switch your model, while still using your harness of choice.

      1 reply →

The pressure is to boost revenue by forcing more people to use the API to generate huge numbers of tokens they can charge more for. LLMs are becoming common commodities as open-weight models keep catching up. There are similarities with pirating in the 90s, when users realized they could ctrl+c ctrl+v to copy a file/model and didn't need to buy a CD / use the paid API.

  • And that is how it should be - the knowledge that the LLM trained on should be free, and cannot (and should never be) gatekept behind money.

    It's merely the hardware that should be charged for - which ought to drop in price if/when the demand for it rises. However, this is a bottleneck at the moment, and hard to see how it gets resolved amidst the current US environment on sanctioning anyone who would try.

Not sure what the problem is. I am on Max and use Claude Code, never get usage issues; that's what I pay for, and I want that to always be an option (capped monthly cost). For other uses it makes sense to go through their API service. This is less confusing and provides clarity for users: if you are a first-party user, use Claude's tools to access the models; otherwise, use the API.

AI is the new high-end gym membership. They want you to pay the big fee and then not use what you paid for. We'll see more and more roadblocks to usage as time goes on.

  • This was the analogy I was looking for! It feels like a very creepy way to make money, almost scammy, and the gym membership/overselling comparison hits the nail on the head.

  • This feels more like the gym owner clarifying it doesn't want you using their 24-hour gym as a hotel just because you find their benches comfortable to lie down on, rather than a "roadblock to usage"

And because of this I'll obviously opt not to subscribe to a Claude plan, when I can just use something like Copilot and use the models that way via OpenCode.

OpenClaw, NanoClaw, et al. all use the Agent SDK, which will from now on be forbidden.

They are literally alienating a large percentage of OpenClaw, NanoClaw, and PicoClaw customers, because those customers will surely not be willing to pay API pricing, which is at least 6-10x Max plan pricing (for my usage).

This isn’t too surprising to me since they probably have a direct competitor to openclaw et al in the works right now, but until then I am cancelling my subscription and porting my nanoclaw fork with mem0 integration to work with OpenAI instead.

That's not a “That'll teach 'em” statement, it is just my own cost optimization. I am quite fond of Anthropic's coding models and might still subscribe again at the $20 level, but they just priced me out for personal assistant, research, and 90% of my token use cases.

  • What does Anthropic have to gain from users who use a very high amount of tokens for OpenClaw, NanoClaw etc and pay them only $20?

OK, I hope someone from Anthropic reads this. Your API billing makes it really hard to work with it in India. We've had to switch to OpenRouter because Anthropic keeps rejecting all the cards we have tried. And these are major Indian banks. This has been going on for MONTHS.

  • It’s the same here in Hong Kong. I can’t use any of my cards (personal or corporate) for OpenAI or Anthropic.

    Have to do everything through Azure, which is a mess to even understand.

I would expect it is still only enforced in a semi-strict way.

I think what they want to achieve here is less "kill OpenClaw" or similar and more "keep our losses under control in general". And now they have a clear criterion to refer to when they take action, and a clear dividing line for whom to act on.

In case your usage is high, they would block / take action. Because if you have your Max subscription and aren't really losing them money, why should they push you away (the monopoly incentive sounds wrong with the current market)?

  • Openclaw is unaffected by this as the Claude Code CLI is called directly

    • Many people use the Max subscription OAuth token in OpenClaw. The main chat, heartbeat, etc., functionality does not call the Claude Code CLI. It uses the API authenticated via subscription OAuth tokens, which is precisely what Anthropic has banned.

      There are many other options too: direct API, other model providers, etc. But Opus is particularly good for "agent with a personality" applications, so it's what thousands of OpenClaw users go with, mostly via the OAuth token, because it's much cheaper than the API.

I got banned for violating the terms of use, apparently, but I'm mystified as to what rule I broke, and appealing just vanishes into the ether.

  • Two accounts of mine were banned for some reason and my sub was refunded. Literally from just inane conversations. Conversations also disappear and break randomly, but this happens on ChatGPT too sometimes

In enterprise software, this is an embedded/OEM use case.

And historically, embedded/OEM use cases always have different pricing models for a variety of reasons why.

How is this any different than this long established practice?

  • It's not, but do you really think the people having Claude build wrappers around Claude were ever aware of how services like this are typically offered?

There are a million small-scale AI apps that just aren't worth building because there's no way to do the billing that makes sense. If Anthropic wanted to own that market, they could introduce a bring-your-own-Claude model, where you log in with Claude and token costs get billed to your personal account (after some reasonable monthly freebies from your subscription).

But the big guys don't seem interested in this; maybe some lesser-known model provider will carve out this space.
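
A minimal sketch of the nearest thing that exists today, the generic bring-your-own-API-key pattern, where the end user's own key (and therefore their own billing) is used by the app. Nothing here is an official flow from any provider; the endpoint, payload shape, and env var are hypothetical:

    # Hypothetical bring-your-own-key pattern: the user supplies their own provider
    # API key, so token costs land on their account rather than the app's.
    # Endpoint, payload shape, and env var below are illustrative assumptions.
    import json, os, urllib.request

    class UserBilledClient:
        def __init__(self, user_api_key: str,
                     base_url: str = "https://api.example-provider.com/v1"):
            self.user_api_key = user_api_key   # belongs to the end user
            self.base_url = base_url

        def complete(self, prompt: str) -> str:
            req = urllib.request.Request(
                f"{self.base_url}/chat/completions",
                data=json.dumps({"model": "some-model",
                                 "messages": [{"role": "user", "content": prompt}]}).encode(),
                headers={"Authorization": f"Bearer {self.user_api_key}",
                         "Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["choices"][0]["message"]["content"]

    # Each user configures their own key once; the app never fronts inference costs.
    client = UserBilledClient(os.environ.get("END_USER_PROVIDER_KEY", ""))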

  • This is going to happen. Unfortunately.

    I shudder to think what the industry will look like if software development and delivery become like YouTubing, where the whole stack and monetization are funneled through a single company (or a couple) that gets to decide who gets how much money.

  • I am a bit worried that this is the situation I am in with my (unpublished) commercial app right now: one of the major pain points I have is that while I have no doubt the app provides value in itself, I am worried about how many potential users will actually accept paying inference per token...

    As an independent dev I also unfortunately don't have investors backing me to subsidize inference for my subscription plan.

    • I recommend Kimi. People can haggle with it to get it cheap for the first month and try out your project that way, and the best part is that Kimi intentionally supports API usage on any of their subscription plans. They also recently changed their billing to be more token-usage based, like others, instead of their previous tool-calling limits.

      It's seriously one of the best models, very comparable to Sonnet/Opus, although Kimi isn't the best at coding. I think it's a really great, solid model overall and might just be worth it in your use case?

      Is the use case extremely coding-intensive (where even some minor improvement can justify 10-100x the cost), or more general? Because if not, then I can recommend Kimi.

  • >> there’s a million small scale AI apps that just aren’t worth building because there’s no way to do the billing that makes sense

    Maybe they are not worth building at all then. Like MoviePass wasn’t.

From the legal docs:

> Authentication and credential use

> Claude Code authenticates with Anthropic’s servers using OAuth tokens or API keys. These authentication methods serve different purposes:

> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.

> Developers building products or services that interact with Claude’s capabilities, including those using the Agent SDK, should use API key authentication through Claude Console or a supported cloud provider. Anthropic does not permit third-party developers to offer Claude.ai login or to route requests through Free, Pro, or Max plan credentials on behalf of their users.

> Anthropic reserves the right to take measures to enforce these restrictions and may do so without prior notice.

  • why wouldn't they just make it so the SDK can't use claude subs? like what are they doing here?

    • When your company happens upon a cash cow, you can either become a milk company or a meat company.

Thariq has clarified that there are no changes to how the SDK and Max subscriptions work:

https://x.com/i/status/2024212378402095389

---

On a different note, it's surprising that a company of that size has to clarify something as important as ToS via X

  • > On a different note, it's surprising that a company that size has to clarify something as important as ToS via X

    Countries clarify national policy on X. Seriously, it feels like half of the EU parliament lives on Twitter.

  • What's wrong with using X?

    • In case you are asking in good faith: a) X requires logging in to view most of its content, which means that much of your audience will not see the news, because b) much of your audience is not on X, either due to not having social media or due to having stopped using X because of its degradation, to put it generally.

      6 replies →

    • Not bad per se but how much legal weight does it actually carry?

      I presume zero... but nonetheless it seems like people will take it as valid anyway.

      That can be dangerous I think.

That page is... confusing.

> Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK.

This is literally the last sentence of the paragraph before the "Authentication and credential use" section.

It's a bit unclear to me. I'm building a system around the Claude Agent SDK. Am I allowed to use it or not? Apparently not.

Seems fair enough, really; not that I like it either, but they could easily not offer the plans and only have API pricing. It makes more sense to think of the plans as 'the Claude Code pricing', really.

Their moat is evaporating before our eyes. Anthropic is Microsoft's side piece, but Microsoft is married with kids to OpenAI.

And OpenAI just told Microsoft why they shouldn't be seeing Anthropic anymore: gpt-5.3-codex.

RIP Anthropic.

Is this a direct shot at things like OpenClaw, or am I reading it wrong?

  • They even block Claude Code if you've modified it via tweakcc. When they blocked OpenCode, I ported a feature I wanted to Claude Code so I could continue using that feature. After a couple of days, they started blocking it with the same message that OpenCode gets. I'm going to go down to the $20 plan and shift most of my work to OpenAI/ChatGPT because of this. The harness features matter more to me than model differences in the current generation.

  • I wonder if it has to do with Grok somehow. They had a suspiciously high reputation until they just binarily didn't, after Anthropic said they did something.

  • Opencode as well. Folks have been getting banned for abusing the OAuth login method to get around paying for API tokens or whatever. Anthropic seems to prefer people pay them.

    • It's not that innocent.

      A $200-a-month customer isn't trying to get around paying for tokens; they're trying to use the tooling they prefer. OpenCode is better in a lot of ways.

      Tokens get counted against usage limits anyway; unless they're trying to capture analytics that are CC-exclusive, they should allow paying customers to consume up to the usage limits in whatever way they want to use the models.

      5 replies →

¡Quick reminder! We are in the golden era of big-company programming agents. Enjoy it while you can, because it is likely going to get worse over time. Hopefully, there will be competitive open-source agents and some benevolent nerds will put together a reasonable service. Otherwise I can see companies investing in their own AI infrastructure and developers who build their own systems becoming the top performers.

This is the VC-funded startup playbook. It has been repeated many times, but maybe for the younger crowd it is new. Start a new service that is relatively permissive, then gradually restrict APIs and permissions. Finally, start throwing in ads and/or making it more expensive to use. Part of the reason is that in the beginning they are trying to get as many users as possible and burning VC money. Then once the honeymoon is over, they need to make a profit, so they cut back on services, nerf stuff, increase prices, and start adding ads.

This article is somewhat reassuring to me, someone experimenting with openclaw on a Max subscription. But idk anything about the blog so would love to hear thoughts.

https://thenewstack.io/anthropic-agent-sdk-confusion/

In my opinion (which means nothing): if you are using your own hardware and not profiting directly from Claude's use (as in building a service powered by your subscription), I don't see how this is a problem. I am by no means blowing through my usage (usually <50% weekly with Max x5).

Why does it matter to Anthropic if my $200 plan usage is coming from Claude Code or a third party?

Don't both count towards my usage limits the same?

  • If you buy a 'Season's Pass' for Disneyland, you can't 'sublet' it to another kid to use on the days you don't; it's not really buying a 'daily access rate'.

    Anthropic subs are not 'bulk tokens'.

    It's not an unreasonable policy and it's entirely inevitable that they have to restrict.

    • Disingenuous analogy.

      It's more like buying a season pass for Disneyland, then getting told you can't park for free when entering the park, even though free parking is included with the pass. Still not unreasonable, but it brings to light that the intention is to force the user into an ecosystem.

      1 reply →

  • Any user who is using a third-party client is likely self-selected into being a power user who is less profitable.

  • They don't get as much visibility into your data, just the actual call to/from the api. There's so much more value to them in that, since you're basically running the reinforcement learning training for them.

  • Increasing the friction of switching providers as much as possible is part of their strategy to push users to higher subscription tiers and deny even scraps to their competitors.

  • They're losing money on this $200 plan and they're essentially paying you to make you dependent on Claude Code so they can exploit this (somehow) in the future.

    • When using Claude Code, it's possible to opt out of having one's sessions be used for training. But is that opt out for everything? Or only message content, such that there could remain sufficient metadata to derive useful insight from?

I think that their main problem is that they don't have enough resources to serve too many users, so they resort to this kind of limitations to keep Claude usage under control. Otherwise I wouldn't be able to explain a commercial move that limits their offer so strongly in comparison to competitors.

At this point, where Kimi K2.5 on Bedrock with a simple open source harness like pi is almost as good, the big labs will soon have to compete for users... OpenAI seems to know that already? While Anthropic bans, bans, bans.

  • Do you know by any chance if Bedrock custom model import also works with on-demand use, without any provisioned capacity? I'm still puzzled why they don't offer all qwen3 models on Bedrock by default.

    • I see a lot of Qwen3 in us-west-2, and I have no experience with custom models on Bedrock.

how can they even enforce this? can't you just spoof all your network requests to appear like they're coming from Claude Code?

in any case, Codex is a better SOTA anyway, and they let you do this. And if you aren't interested in the best models, Mistral lets you use both Vibe and their API through your Vibe subscription API key, which is incredible.

  • > how can they even enforce this?

    Many ways, and they’re under no obligation to play fair and tell you which way they’re using at any given time. They’ve said what the rules are, they’ve said they’ll ban you if they catch you.

    So let’s say they enforce it by adding an extra nonstandard challenge-response handshake at the beginning of the exchange, which generates a token which they’ll expect on all requests going forward. You decompile the minified JS code, figure out the protocol, try it from your own code but accidentally mess up a small detail (you didn’t realize the nonce has a special suffix). Detected. Banned.

    You’ll need a new credit card to open a new account and try again. Better get the protocol right on the first try this time, because debugging is going to get expensive.

    Let’s say you get frustrated and post on Twitter about what you know so far. If you share info, they’ll probably see it eventually and change their method. They’ll probably change it once a month anyway and see who they catch that way (and presumably add a minimum Claude Code version needed to reach their servers).

    They’ve got hundreds of super smart coders and one of the most powerful AI models, they can do this all day.
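    To make the hypothetical concrete, a check like that could be as small as an HMAC over a server-issued nonce. A minimal sketch, assuming nothing about how Anthropic actually does it; every secret, suffix and field name below is made up for illustration:

```python
# Hypothetical sketch only: none of these secrets or suffixes are real
# Anthropic details, they just illustrate the kind of check described above.
import hashlib
import hmac
import os

CLIENT_SECRET = b"baked-into-the-official-client"  # assumption: embedded in the shipped client

def answer_challenge(nonce: str) -> str:
    """Client side: derive a per-session token from the server's nonce.

    The made-up "::cc-v2" suffix stands in for the small detail a
    re-implementer could miss."""
    material = (nonce + "::cc-v2").encode()
    return hmac.new(CLIENT_SECRET, material, hashlib.sha256).hexdigest()

def verify(nonce: str, token: str) -> bool:
    """Server side: recompute the expected token and compare in constant time."""
    return hmac.compare_digest(answer_challenge(nonce), token)

if __name__ == "__main__":
    nonce = os.urandom(16).hex()
    token = answer_challenge(nonce)
    print("request accepted:", verify(nonce, token))        # True
    print("request accepted:", verify(nonce, token[:-1]))   # False
```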

    • the internet has hundreds of thousands of super smart coders with the most powerful AI models as well; I think it's a bit harder than you're assuming.

      you just need to inspect the network traffic from Claude Code and mimic it

      6 replies →

    • see my comment here, but I think instead of worrying about decompiling the minified JS code etc., you can essentially run Claude Code in the background and still drive it from OpenCode or its SDK, giving you a sort of API access over the CC subscription: https://news.ycombinator.com/item?id=47069299#47070204

      I am not sure how they can detect this. I could be wrong, I usually am, but I think it's still possible to use CC this way even after this change if you really wanted to.

      But at this point the GP's question, whether it's even worth it, is what I'm really wondering.

      I think not. There are better options out there; they mentioned Mistral and Codex, and I think Kimi, and maybe GLM/z.ai, work as well.

  • Pretty easy to enforce it - rather than making raw queries to the LLM, Claude Code can proxy through Anthropic's servers. The server can then enforce query patterns, system prompts and other stuff that outside apps cannot override.
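    As a rough illustration of what such server-side enforcement could look like; the prompt prefix and tool names below are assumptions for the sketch, not Anthropic's actual checks:

```python
# Illustrative only: a vendor-side proxy could refuse any request whose shape
# doesn't match the first-party client. All constants here are made up.
KNOWN_SYSTEM_PREFIX = "You are Claude Code, Anthropic's official CLI"  # assumed prompt prefix

def accept_request(payload: dict) -> bool:
    """Return True only for requests that look like they came from the official harness."""
    system = payload.get("system", "")
    tool_names = {t.get("name") for t in payload.get("tools", [])}
    return system.startswith(KNOWN_SYSTEM_PREFIX) and "Bash" in tool_names

print(accept_request({
    "system": "You are Claude Code, Anthropic's official CLI for Claude. ...",
    "tools": [{"name": "Bash"}, {"name": "Edit"}],
}))  # True
print(accept_request({"system": "You are a helpful agent.", "tools": []}))  # False
```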

  • And once all the Claude subscribers move over to Codex subscriptions, I'd bet a large sum that OpenAI will make their own ToS update preventing automated/scripted usage.

  • They can't catch everything, but they can make the product you're building on top of it nonviable once it gets popular enough to look for, like they did with OpenCode.

  • > how can they even enforce this?

    I would think that different tools would probably have different templates for their prompts?

  • We don’t enforce speed limits, but it sucks when you get caught.

    OpenAI will adjust; their investors will not allow money to be lost on "being nice" forever, not until they're handsomely paid back at least.

This month was the first month i spent >$100 on it and it didn't feel like it was money well spent. I feel borderline scammed.

I'm just going to accept that my €15 (which with vat becomes €21) is just enough usage to automate some boring tasks.

I wrote an MCP bridge (a rough sketch of the general idea is below) so that I don't have to copy and paste prompts back and forth between the CLI and Claude, ChatGPT, Grok and Gemini

https://github.com/agentify-sh/desktop

Does this mean I have to remove Claude now and go back to copy & pasting prompts for a subscription I am paying for?!

wth happened to fair use?
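For anyone curious what such a bridge boils down to, here is a minimal sketch using the official Python MCP SDK (pip install mcp). This is not the linked project's code, and the CLI it shells out to is a placeholder:

```python
# Minimal MCP "bridge" sketch: expose one tool that forwards a prompt to some
# other locally installed CLI. Not the linked project's code; the command name
# below is a placeholder.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cli-bridge")

@mcp.tool()
def ask_other_cli(prompt: str) -> str:
    """Forward a prompt to another coding CLI and return its reply."""
    result = subprocess.run(
        ["some-other-cli", "--prompt", prompt],  # placeholder command
        capture_output=True, text=True, timeout=120,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP-capable client can call it
```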

In the old days, think Gmail, or before the "unlimited" marketing scam, people genuinely were smart enough to know when they were doing something they were not supposed to be doing. Even pirating software, say Windows or Adobe. I mean, who could afford those when they were young?

Things get banned, but that is OK as long as they give us weeks or days to prepare an alternative solution. Users (not customers) are happy with it. Too bad, the good days are over.

Somewhere along the line, not just in software but even in politics, the whole world turned to entitlement. People somehow believe they deserve this; what they were doing was wrong, but if it was allowed in the first place, they feel they should remain allowed to do it.

Judging from account opening time and comments, we can also tell the age group and which camp they are in.

Honestly, I'm seeing throttling of AI usage across all providers:

- Google reduced AI Studio's free rate limits by 1/10th

- Perplexity is imposing rate limits and requiring a card on file to continue free subscriptions

- Now Anthropic as well

There has been a false narrative that AI will get cheaper and more ubiquitous, but model providers have been stuck in a race for ever more capabilities and performance at higher costs.

OpenAI has endorsed OAuth from 3rd party harnesses, and their limits are way higher. Use better tools (OpenCode, pi) with an arguably better model (xhigh reasoning) for longer …

  • I am looking forward to switching to OpenAI once my claude max account is banned for using pi....

Their model actually doesn't have that much of a moat, if any. Their agent harness also doesn't, at least not for long. Writing an agent harness isn't that difficult. They are desperately trying to stay in power. I don't like being a customer of this company and am investing a lot of my time in moving away from them completely.

  • They are obviously losing money on these plans, just like all of the other companies in the space.

    They are all desperately trying to stay in power, and this policy change (or clarification) is a fart in the wind in the grand scheme of what's going on in this industry.

Not surprised, it's the official stance of Anthropic.

I'm more surprised by people using subscription auth for OpenClaw when it's officially not allowed.

At this point, are there decent alternatives to Anthropic models for coding that allow third-party usage?

I'm a bit lost on this one.

I can get a ridiculous amount of tokens in and out of something like gpt-5.2 via the API for $100.

Is this primarily about Gas Town and friends?

So even simple apps that are just code usage monitors are banned?

  • Always have been, unless you're using the API meant for apps.

    But if you're doing something very basic, you might be able to slop together a tool that does local inferencing based on a small, local model instead, alleviating the need to call Claude entirely.
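    For instance, a basic tool could call a small local model through something like Ollama's HTTP API instead of Claude; the model name and prompt below are just examples, and this assumes Ollama is running locally on its default port.

```python
# Sketch of the "small local model" route: hit a locally running Ollama server
# instead of a hosted API. Assumes Ollama is listening on its default port.
import json
import urllib.request

def local_summarize(text: str, model: str = "llama3.2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": model,
            "prompt": f"Summarize this usage log in two sentences:\n{text}",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(local_summarize("2026-02-01: 1.2M input tokens, 300k output tokens"))
```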

The reason I find this so egregious is because I don't want to use Claude Code! It's complete rubbish, it completely sidelines security, and nobody seems to care. So I'm forced to use their slop if I want to use Claude models without getting a wallet-emptying API bill? Forget it, I will use Codex or Gemini.

Claude Code is not the apex. We’re still collectively figuring out the best way to use models in software, this TOS change kills innovation.

Anthropic is just doing this out of spite. They had a real opportunity to win mindshare and marketshare and they fucked it up instead. They could have done what OpenAI did - hired the OpenClaw/d founder. Instead, they sent him a legal notice for trademark violation. And now they're just pissed he works for their biggest competitor. Throw all the tantrums you want, you're on the wrong side of this one, Anthropic.

  • Agreed! I don't understand how so many people on here seem to think it is completely reasonable for Anthropic to act like this.

    • Apple/OpenAI = god

      Anthropic = good

      Google = evil

      That's pretty much HN crowd logic to be honest

This confirms they're selling those subscriptions at a loss which is simply not sustainable.

  • They probably are but I don’t think that’s what this confirms. Most consumer flat rate priced services restrict usage outside of the first party apps, because 3rd party and scripted users can generate orders of magnitude more usage than a single user using the app can.

    So it makes sense to offer simple flat pricing for first party apps, and usage priced apis for other usage. It’s like the difference between Google Drive and S3.

    • I get your point - they might be counting on users not using the full quota they're officially allowed (and if that's the case, Anthropic is not losing money). But still - IF the user used the whole quota, Anthropic loses... so what's advertised is not actually honest.

      For me, flat rates are simply unfair either way - if I'm not using the product much, I'm overpaying (and they're ok with that); otherwise it magically turns out that it's no longer ok when I actually want to utilize what I paid for :)

      1 reply →

Important: they have clarified that it's OK to use it for personal experimentation if you don't build a business out of it!

Cancelled my Claude and bought GLM coding plan + Codex.

  • This is something I think Anthropic does not get. They want to be the Microsoft of AI and make people dependent on their solution, so they will not move to another provider. Thing is, giving access to a text prompt is not something that you can monopolize easily. Even if you provide some stuff like skills or MCP server integration, that is not a big deal.

You can use the Claude CLI as a relay - yes, it needs to be there - but it's not that different from using the API
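Something along these lines, assuming the locally installed claude binary supports a non-interactive print mode (check claude --help for your version):

```python
# Rough sketch of the "CLI as a relay" idea: shell out to the installed Claude
# Code binary instead of calling the API directly. The -p (print) flag is an
# assumption about the installed version; verify it locally.
import subprocess

def relay(prompt: str) -> str:
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True, text=True, timeout=300,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

if __name__ == "__main__":
    print(relay("Summarize the TODOs in this repo."))
```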

People on here are acting like schoolchildren over this. It's their product that they spent billions to make. Yet here we are, complaining that they won't let you use third-party products specifically made to compete against Anthropic.

You can still simply pay for API.

Sonnet literally just recommended using a subscription token for OpenClaw. Even Anthropic's own AI doesn't understand its own TOS.

So here goes my OpenClaw integration with Anthropic via OAuth… While I see their business risk, I also see the onboarding path for new paying customers. I just upgraded to Max and would even consider the API if the cost were controllable. I hope that Anthropic finds a constructive way to communicate with customers and offers advice for the not-so-skilled OpenClaw homelabbers instead of terminating their accounts… Is anybody here from Anthropic who could pick up that message before a PR nightmare happens?

May we still use the agent sdk for our own private use with the max account? I’m a bit confused.

Oh crap. I just logged into HN to ask if anyone knew of a working alternative to the Claude Code client. It's lost Claude's work multiple times in the last few days, and I'm ready to switch to a different provider. (4.6 is mildly better than 4.5, but the TUI is a deal breaker.)

So, I guess it's time to look into OpenAI Codex. Any other viable options? I have a 128GB iGPU, so maybe a local model would work for some tasks?

  • Local? No, not currently. You need about 1TB of VRAM. There are many harnesses in development at the moment, so keep a good lookout. Just try many of them and look at the system prompts in particular. Consider DeepSeek via the official API. Consider also tweaking system prompts for whatever tool you end up using. And agreed that the TUI is meh; we need a GUI.

  • Zed with CC using ACP?

    Opencode with CC underneath using Gigacode?

    OpenAI Codex is another viable path, for what it's worth.

    I think the best open source model to my liking is Kimi K2.5, so maybe you can run that?

    Qwen is releasing some new models, so keep an eye on those; maybe one of them can fit your use case as well.

I have no issues with this. Anthropic did a great job with Claude Code.

It's a little bit sleazy as a business model to try to wedge one's self between Claude and its users.

OpenAI acquiring OpenClaw gives me bad vibes. How did OpenClaw gain so much traction so quickly? It doesn't seem organic.

I definitely feel much more aligned with Anthropic as a company. What they do seems more focused, meritocratic, organic and genuine.

OpenAI essentially appropriated all their current IP from the people... They basically gutted the non-profit and stole its IP. Then sold a huge chunk to Microsoft... Yes, they literally sold the IP they stole to Microsoft, in broad daylight. Then they used media spin to make it sound like they appropriated it from Elon because Elon donated a few millions... But Elon got his tax deduction! The public footed the bill for those deductions... The IP belonged to the non-profit; to the public, not Elon, nor any of the donors. I mean let's not even mention Suchir Balaji, the OpenAI researcher who supposedly "committed suicide" after trying to warn everyone about the stolen IP.

OpenAI is clearly trying to slander Anthropic, trying to present themselves as the good guys after their OpenClaw acquisition and really rubbing it in all over HN... Over which they have much influence.

Just a friendly reminder also to anyone outside the US that these subscriptions cannot be used for commercial work. Check the consumer ToS when you sign up. It’s quite clear.

  • Yeah for context the TOS outside the US has:

    Non-commercial use only. You agree not to use our Services for any commercial or business purposes and we (and our Providers) have no liability to you for any loss of profit, loss of business, business interruption, or loss of business opportunity.

That's too bad; in a way it was a bit of an unofficial app store for Anthropic. I am sure they've probably looked at that, and hopefully this means there's something on its way.

Not really sure if it's even feasible to enforce it, unless the idea is to discourage the big players from doing it.

The number one thing we need is cheap abundant decentralized clean energy, and these things are laughable.

Unfortunately, neither political party can deliver all of the above.

  • Are you implying that no one would use LLM SaaSes and everyone would self-host if energy costs were negligible?

    That is...not how it works. People self-hosting don't look at their electricity bill.

    • I was stuck on the part where they said neither party could provide cheap abundant decentralized clean energy. Biden / Obama did a great job of providing those things, to the point where dirty coal and natural gas are both more expensive than solar or wind.

      So, which two parties could they be referring to? The Republicans and the Freedom Caucus?

      1 reply →

And I just bought my mac mini this morning... Sorry everyone

  • You know that if you are just using a cloud service and not running local models, you could have just bought a raspberry pi.

    • Yeah. I know it’s dumb but it’s also a very expensive machine to run BlueBubbles, because iMessage requires a real Mac signed into an Apple ID, and I want a persistent macOS automation host with native Messages, AppleScript, and direct access to my local dev environment, not just a headless Linux box calling APIs.

      2 replies →

I think this is shortsighted.

The markets value recurring subscription revenue at something like 10x "one-off" revenue, so Anthropic is leaving a lot of enterprise value on the table with this approach.

In practice this approach forces AI apps to pay Anthropic for tokens, and then bill their customers a subscription. Customers could bring their own API key but it’s sketchy to put that into every app you want to try, and consumers aren’t going to use developer tools. And many categories of free app are simply excluded, which could in aggregate drive a lot more demand for subscriptions.

If Anthropic is worried about quota, seems they could set lower caps for third-party subscription usage? Still better than forcing API keys.

(Maybe this is purely about displacing other IDE products, rather than a broader market play.)

  • I think they are smart to make a distinction between a D2C subscription, where they control the interface and eat the losses, and B2B use, where customers pay for what they use.

    It allows them to optimize their clients and use private APIs for exclusive features etc., and there's really no reason to bootstrap other wannabe AI companies who just stick a facade experience in front of Anthropic's paying customers.

    • > eat the losses

      Look at your token usage of the last 30 days in one of the JSON files generated by Claude Code. Compare that against API costs for Opus. Tell me if they are eating losses or not. I'm not making a point, actually do it and let me know (a rough script for this is sketched after this thread). I was at 1 million. I'm paying 90 EUR/m. That means I'm subsidizing them (paying 3-4 times what it would cost with the API)! And I feel like I'm a pretty heavy user. Although people running it in a loop or using Gas Town will be using much more.

      2 replies →
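      A back-of-the-envelope script for that comparison might look like this. The JSONL location and the per-million-token prices are assumptions (check your own install and Anthropic's current price list), and it ignores cache tokens and date filtering, so treat the number as rough:

```python
# Rough API-cost equivalent of local Claude Code usage. Paths and prices are
# assumptions: adjust the glob to wherever your install keeps session logs and
# the prices to the current Opus rate card. Ignores cache read/write tokens.
import glob
import json
import os

PRICE_PER_MTOK_USD = {"input": 15.0, "output": 75.0}  # assumed Opus-class pricing

def approx_api_cost(pattern: str = "~/.claude/projects/**/*.jsonl") -> float:
    totals = {"input": 0, "output": 0}
    for path in glob.glob(os.path.expanduser(pattern), recursive=True):
        with open(path) as f:
            for line in f:
                try:
                    usage = json.loads(line).get("message", {}).get("usage", {})
                except (json.JSONDecodeError, AttributeError):
                    continue
                if not isinstance(usage, dict):
                    continue
                totals["input"] += usage.get("input_tokens", 0)
                totals["output"] += usage.get("output_tokens", 0)
    return sum(totals[k] / 1e6 * PRICE_PER_MTOK_USD[k] for k in totals)

if __name__ == "__main__":
    print(f"approx. API-equivalent spend: ${approx_api_cost():.2f}")
```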

  • There's no decision to be made here, it's just way too expensive to have 3rd parties soak up the excess tokens, that's not the product being sold.

    Especially as they are subsidized.

  • That's not true, the market loves pay-per-use, see "cloud". It outperforms subscriptions by a lot; it's not "one-off". And your example is not how companies building on top tend to charge: you either bring your own infrastructure (key) or get charged at-cost plus fees and service costs.

    I don’t think Anthropic has any desire to be some B2C platform, they want high paying reliable customers (B2B, Enterprise).

      > the market loves pay-per-use, see "cloud".

      Cloud goes on the books as recurring revenue, not one-off; even though it's in principle elastic, in practice if I pay for a VM today I'll usually pay for one tomorrow.

      (I don't have the numbers but the vast majority of cloud revenue is also going to be pre-committed long-term contracts from enterprises.)

      > I don’t think Anthropic has any desire to be some B2C platform

      This is the best line of argument I can see. But still not clear to me why my OP doesn't apply for enterprise, too.

      Maybe the play is just to force other companies to become MCPs, instead of enabling them to have a direct customer relationship.

      1 reply →