^every comment when someone says something remotely negative about LLMs and their less useful cousins, cryptocurrencies. It’s baffling how similar the language and attitude are sometimes.
Anthropic was, even to me, “one of the better ones” until recently. They have made many questionable/poor decisions in the last 6-8 weeks, and people are right to call them out for it, especially when they want our money.
Anthropic is really trying to burn all that goodwill they worked up by raising prices, reducing limits and making it impossible to know what the actual policies are.
If you want LLMs to continue to be offered we have to get to a point where the providers are taking in more money than they are spending hosting them. And we still aren't there (or even close).
The open models may not be as great but maybe these are good enough. AI users can switch when the prices rise before it becomes sustainable for (some) of the large LLM providers.
Like with all new products. It takes time to let the market do its work. See if from a positive side. The demand for more and faster and bigger hardware is finally back after 15 years of dormancy. Finally we can see 128gb default memory or 64gb videocards in 2 years from now.
I see the current situation as a plus. I get SOTA models for dumping prices. And once the public providers go up with their pricing, I will be able to switch to local AI because open models have improved so much.
What shareholders, Anthropic is a money burning pit. Not to the same extent as OpenAI, but both will struggle hard to actually turn a profit some day, let alone make back the massive investments they've received.
Not that they don't bring value, I'm just not convinced they'll be able to sell their products in a sticky enough way to make up the prices they'll have to extract to make up for the absurd costs.
Aren't they just doing what Hacker News was trying to tell them to do? That AI is useful but not sure if sustainable. Now they're increasing prices and decreasing tokens and you guys are pissed off.
I've been trying to toe the line here myself; here's how I've been doing it. For context, I pay for a Max 5x subscription.
My main goal is to maximize my subscription token usage while trying to comply with the rules, but it's not clear where the line is for automation, so I feel like I need to be clever.
- regular development (most token use): all interactive claude mode, standard use case
- automated background development: experimenting with claude routines (first-class feature, on subscription)
- personal non-nanoclaw claude automations (claude -p): uses subscription tokens, but only called as needed (generally just fixing something if my homelab infra goes down; it's set up not to fire on an exact cron time)
- other LLM based automations: usually openrouter API key, cheap models as needed
- nanoclaw: all API key based, but since it's expensive I keep usage mostly minimal and try to defer anything heavyweight to one of the other automation strategies (nanoclaw mainly just connects my homelab infra with Telegram)
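The "only fire when something actually breaks" pattern in the homelab item above can be sketched roughly like this. This is a hedged sketch, not anyone's actual setup: the health URL and prompt are made-up placeholders, `-p` is the CLI's real non-interactive print-mode flag, and `CLAUDE_BIN` is a hypothetical override so the binary can be stubbed out.

```python
import os
import subprocess
import urllib.request

def watchdog(health_url, prompt):
    # Only invoke Claude when the health check fails, so the job is
    # event-driven instead of burning subscription tokens on every cron tick.
    try:
        with urllib.request.urlopen(health_url, timeout=5) as resp:
            if resp.status == 200:
                return "healthy: skipped"
    except OSError:
        pass  # host unreachable or erroring -> fall through and escalate
    # Go through the official CLI in print mode rather than raw API calls.
    binary = os.environ.get("CLAUDE_BIN", "claude")
    result = subprocess.run([binary, "-p", prompt], capture_output=True, text=True)
    return result.stdout.strip()
```

Pairing this with a jittered timer (rather than an exact cron expression) keeps invocations tied to actual failures.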
Oh that's interesting. Right after they signed the deal with Amazon so maybe it was all compute constrained. In any case, I tried using the Codex $20/mo plan and the limits are so low I can hardly get anywhere before my agent swaps to a different agent.
Somewhat suspicious that if I do this without an official Anthropic notice I'll lose my precious Max $200/mo account, so I'll sit tight, perhaps for a while.
PSA: Since you are still required to use Claude Code and I have had a bunch of non-technical people asking me to make https://github.com/rcarmo/piclaw based on Claude rather than pi (which is never gonna happen), I have started pivoting its Python grand-daddy into a Go-based web front-end that runs Claude as an ACP agent.
My OpenClaw assistant (who's been using Claude) lost all his personality over the last week, and couldn't figure out how to do things he never had any issues doing.
I racked up about $28 worth of usage and then it just stopped consuming, so I don't know if there was some other issue, but it was persistent.
I got sick of it and used a migration script to move my assistant's history and personality to a claude code config. With the new remote exec stuff, I've got the old functionality back without needing to worry about how bleeding-edge and prone to failure OpenClaw is.
I feel like this is what their plan was all along -- put enough strain and friction on the hobbyist space that people are incentivized to move over to their proprietary solution. It's probably a safer choice anyway -- though I'm sure both are equally vibe-coded.
I’ve been using codex cli and GPT 5.4. It is better at coding than Opus anyway. I did not really test Opus 4.7, but older versions generated worse results compared to GPT.
Which I would not even have tried and tested, though, if Anthropic had not banned my account. The shadiest thing I did was use it with opencode for a while, I think. I never installed claw or used CC tokens anywhere else.
I got sick of the inconsistency caused by Anthropic tinkering with Claude Code and had canceled my 20x. My plan was to switch to Codex so I could use it in Pi.
I am specifically talking about switching because of the harness, not model quality. Anyone else match my experience?
I wonder how many other people recently did the same. It would be prudent of Anthropic to let people use Pro/Max OAuth tokens with other harnesses I think. Even though I get why they want to own the eyeballs.
I’ve been using Codex Pro since they lobotomized Opus 4.6. Codex is so much better, GPT 5.4 xhigh fast is definitely the smartest and fastest model available.
For a while there I had both Opus 4.6 and Codex access and I frequently pitted them against each other, I never once saw Opus come out ahead. Opus was good as a reviewer though, but as an implementer it just felt lazy compared to 5.4 xhigh.
One feature that I haven’t seen discussed that much is how codex has auto-review on tool runs. No longer are you a slave to all-or-nothing confirmations or endless bugging; it’s such a bad pattern.
Even in a week of heavy duty work and personal use I still haven’t been able to exhaust the usage on the $200 plan.
I’ll probably change my mind when (not IF) OpenAI rug pull, but for spring ‘26, codex is definitely the better deal.
I also made the switch to OpenAI, the $20 plan, I dunno about "so much better" but it's more or less the same, which is great!
The models and tools levelling out is great for users because the cost of switching is basically nil. I'm reading people ITT saying they signed up for a year - big mistake. A year is a decade right now.
I've been on pi for a few months now, built a custom tmux plugin so i can use nested pi and mix and match codex / claude instances.
pi has been the better harness out of all the ones i tried, first and third party.
Ever since the Anthropic block i've just canceled all my claude subs. Used to be codex was a bit worse, now they're practically equal. Claude is slightly better at directing other agents but the difference is too minor and not worth the money.
Claude usage limits / costs are absurd.
Any 'principles' people praise anthropic for are not that relevant to me anyways because i'm not a US citizen.
I left anthropic a while ago because of the similar shenanigans they had earlier. I went with opencode & zen.
I still have their subscription, but am using pi now, mainly because something happened that made my opencode sessions unusable (cannot continue them, just blanks out, I assume something in the sqlite is fucked), and I cannot be bothered to debug it.
For what I use the agents for, the Chinese models are enough
Isn't using pi against their terms of use about having to go through the Claude Code CLI for all Max plan usage? (I had used Droid with Max previously; it was a great combo.)
I also cancelled my 20x and switched to Codex. At this point even the Codex CLI seems to perform better than Claude Code... And so far I'm on the OpenAI Pro plan and haven't even needed to upgrade to their $100/mo plan. I'm getting more value for almost 10x cheaper.
(Disclosure: I work on tamer, an OSS supervisor for coding agents — biased.)
Add one more to the count. The OAuth-across-harnesses idea would help, but it doesn't fix the shape of the problem.
"Harness" has always felt off to me. Exoskeleton is closer — Claude Code, Codex, opencode wrap the model and augment it from the inside.
What's missing is a layer above that's explicitly not an exoskeleton: a thin supervisor. A master that watches and guides, nothing more. It just relays I/O and hands approval back to the human.
My experience is the opposite of this thread's consensus. Context: Full time SWE, working on large and messy codebase. Not working on crazy automations, working on fixing bugs, troubleshooting crashes, implementing features.
Anthropic models write much better code; they are easy to follow, reasonable, and very close to what I would have done if I had the time... OpenAI's, on the other hand, generate extremely complex solutions to the simplest problems.
I was so disappointed by non-Anthropic models that for a couple of weeks I only used Anthropic models, but based on this thread, I'll go back and give it another try. It's good to go back and try things again every couple of weeks.
Of course, I was annoyed that they lobotomized 4.6, the difference was day and night, and Anthropic is certainly not a company I trust. In my opinion, it shows their willingness to rugpull, so I'm looking at other approaches. Since 4.7, things went back to normal, things you'd expect to work just work.
> I wonder how many other people recently did the same.
Some negative signal, for a better overall view of things: I'm still with Anthropic and will probably stay with them for the foreseeable future.
I think after the DoD/DoW shenanigans (which in and of itself felt like a reasonable take on the part of Anthropic) they got a bunch of visibility and new users, so some scaling limits, and with them some service disruption, were pretty much inevitable. Couple this with the tokenizer changes and the seeming decrease in model performance (adaptive thinking etc.), and lots of people will be rightfully pissed off, alongside increased downtime (which doesn't matter much for me, but definitely does for anything time-sensitive).
At the same time, in practice I've only seen it do stupid things across 8 million tokens about 5 times (confusing user/assistant roles, not reading files that should be obvious for a given use case, and picking trivially wrong/stupid solutions when planning things), alongside another 4 times that tests/my ProjectLint tool caught that I would have missed. The error rate is still arguably lower than mine, though I work in a very well known and represented domain (webdev with a bunch of DevOps and also some ML stuff, and integration with various APIs etc.).
At the same time, the 85 EUR they gave me for free has been enough to weather the instability around pricing changes and peak usage. They've fixed most of the issues I had with Claude Code (notably performance), and the sub-agent support is great, way better than OpenCode in my experience. They also keep shipping features like Dispatch, Routines and Design that seem genuinely useful rather than misdirected. The Opus 4.7 model quality with high reasoning is solid too and works better than most of the other models I've tried (the OpenAI ones are good, I just prefer Claude's phrasing, language and overall approach, not even sure what I'd call it exactly, all the stuff in addition to the technical capabilities).
At the same time, if they mess too much with the 100 USD tier, I bet I could go to OpenAI or try out the GLM 5.1 subscription without too many issues. For now they're replacing all the other providers for me. Oh also I find the subscription vs API token-based payment approach annoying, but I guess that's how they make their money.
Because the harness is the moat and the key IP, not the models themselves; that is the why! For both OpenAI and Anthropic, with all the money raised and the compute they have acquired and hold on their books, of course no one can easily replicate them; who can afford all those interconnected datacenters and Nvidia GPUs? That is why OpenAI throws you a bone and gives you an open-source SDK harness, but not the one they actually use for ChatGPT. But now both of them have to deliver on all the bullshit they said these models can do... truth is, they cannot. So the bubble bursts and we will see what happens. We all have to buy iPhones or MacBooks, so that makes sense; we all use Chrome or Google Search, Instagram, TikTok.
All these models and agents are shortcuts for all of us to be lazy and play games and watch YouTube or Netflix, because we use them to work less. Well, the party will be over soon.
I don’t think I’ve seen a more confused and shambolic product strategy since Google’s absurd line of GChat rebrandings.
Last year I was excited about the constant forward progress on models, but since February or so it's just been a mess and I want off this ride.
Either way I’m going to wait for “official” word from Anthropic, which I guess at this point will probably be a “Tell HN”, a Reddit text post, or a Xitter post from some random employee’s personal account, because apparently that’s the state of corporate communication now.
I didn't even use openclaw and Anthropic disabled my account without explanation beyond "suspicious signals". If anyone found a way to get out of that, I'd be curious to hear it - genuinely no idea what I did wrong, and the Google docs form I filled out to appeal never got me any reply.
Same thing happened to me in January. Never heard back from them after submitting the google form. A few weeks ago I went through the subscription flow again and the 'account disabled' message was no longer there. Didn't go through with the payment so it's possible I would have been blocked at that point but it looked like my account had been re-enabled. I think you just have to play the waiting game unfortunately.
On whether to allow Claude subscriptions to access other services, Anthropic seems schizophrenic at this point: sometimes worried about insufficient computing power, sometimes worried about user loss. It's puzzling.
What's puzzling or schizophrenic about that? Those seem like two very natural factors that would be in tension with one another and have to be balanced.
They see that the new Kimi K2.6 will eat their lunch. They don't care about you; they just care about your money, and they will take away your options if they don't believe you have a solid alternative.
This is only useful when you are using Claude Cli fairly regularly on the same machine as OpenClaw, right? Because the tokens need to be refreshed manually every so often?
I'm out of the loop on Claude, hasn't it always been possible to use the Anthropic API with a tool like OpenClaw, paying per request? Is this limitation just for using your monthly subscription account?
I find it a little bizarre that people have this expectation. You can still pay for compute and use it the way you want by paying for the product you actually want to use. Subscription products like this are not marketed or intended to be used as access to the API, but they also offer access to the API if that's what you want. I'm still not entirely clear why people insist on using their subscription like this, so let me know if I'm missing something.
What models have you guys tried to use with OpenClaw that you've found suitable for the task? Codex personally rules for my dev style but not sure how well it works in the claw scenario.
This is a perfect example of how quickly you can burn through trust that took a long time to earn.
I used to be - in my small circle of friends and peers - a genuine advocate for Anthropic and Claude. It was my sole AI assistant for over a year. But somewhere around February/March, something shifted. Declining quality, policy changes, inconsistent output. Nothing dramatic, just... a slow erosion.
That erosion pushed me to try Codex. I signed up for their most expensive pro plan. Now I'm about to experiment with Kimi. I'm not saying they're better (well, sometimes they are). But here's the thing - what Anthropic did is they made me look. They made a loyal customer start shopping around. And I think that's the worst thing you can do.
Having said that - as an LLM provider for my product, we're staying with Claude. I still trust in their ethics. Please don't prove me wrong.
I'm trying out codex for the first time as well, because something's up with Claude for sure; 4.7 has been super frustrating. For other models, I highly recommend trying MiniMax 2.7. Using it with Hermes is actually pretty good, and their token subscription plans include a lot of usage for $10.
Anthropic keeps conflating two distinct strategies — be the best model for developers to build on, or be the company that ships Claude Code. Those two have opposite policy conclusions. Restricting third-party harnesses maximizes Claude Code revenue; allowing them maximizes model-layer lock-in through developer habit. The whiplash is the symptom of not picking. Pick for crying out loud!
Interesting perspective on AI CLI tools. The Anthropic policy clarification is a significant development for the developer community. Would be curious about the implementation details.
Uh, what? For the love of God can I make my own harness or not? Or is this just saying you can use it only in API mode?
I have had some ideas for a custom harness (like embedding some tools OOTB and replacing slow tooling) but these policies throw me off. Instead I use local models.
Problem is API costs are insane. I have toyed with the idea of running a local model that works with Claude Sonnet or even Haiku, and I know this has been done by others.
Or Claw-like harnesses that we make ourselves? It takes honestly like 15 minutes to roll your own, so I did it thinking "well, hopefully it's not considered third party"
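For what it's worth, the "15 minutes" claim is roughly right, because the core of a harness is just a loop that relays tool calls. Here is a minimal sketch under stated assumptions: `run_tool` exposes a single made-up shell tool, and the model is passed in as a plain function, so no real API client or provider credentials are involved.

```python
import subprocess

def run_tool(name, arg):
    # Single built-in tool for the sketch: run a shell command, capture stdout.
    if name == "shell":
        return subprocess.run(arg, shell=True, capture_output=True, text=True).stdout
    return f"unknown tool: {name}"

def harness(model, prompt, max_steps=5):
    # The whole agent loop: ask the model, execute any tool it requests,
    # feed the result back, stop when it returns a final answer.
    history = [("user", prompt)]
    for _ in range(max_steps):
        reply = model(history)  # ("final", text) or ("tool", name, arg)
        if reply[0] == "final":
            return reply[1]
        _, name, arg = reply
        history.append(("tool_result", run_tool(name, arg)))
    return "step limit reached"
```

A real harness adds streaming, approvals, and an actual model client, but the control flow is about this small; whether a loop like this counts as "third party" is exactly the open question in this thread.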
The problem is these tools are so important that I'm never going to risk Anthropic blocking my account after the last debacle. So I'll be using OpenAI with OpenClaw. Hard to win back trust.
Great so now we can all look forward to Claude progressively getting reduced limits again. How long till the $1000 ultra plan... or they just want us all paying API credits instead
Well that's clear as mud.
I've complained, extensively, about this before but Anthropic really needs to make it clear what is and is not supported with or without a subscription. Until then, it's hard to know where you stand with using their products.
I say all of this as someone who doesn't use OpenClaw or any Claw-like product currently. I just want to know what I can and can't do and currently it's impossible to know.
I don’t get why people are so surprised. Didn’t they learn anything from Twitter APIs and the like? The APIs stay open as long as they serve a short-term need; then Anthropic builds the features people actually use (more or less) and bans API usage for competing clients.
The poor communication and flip-flopping are what concern me.
How can I buy into an ecosystem that might disallow one of my main workflows? I currently use several hook scripts to route specific work to different models. Will they disallow that at some point? We don't know because they can't get their story straight.
Keep in mind this is hearsay. Since we are reading it through a non-official channel, maybe it's not right to call it "flip-flopping"?
Same building on their API. You design around what you think is allowed, then a blog post shifts everything. A proper developer policy page would fix this.
Stealing OAuth keys from the first-party app to impersonate it, in order to avoid paying for usage with a properly generated API key, was never part of normal use anywhere.
They probably decreased the cost and limited these external calls
> Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again
Anthropic staff have made contradictory statements on Twitter and have corrected each other. Their attempts at clarification led to confusion.
> OpenClaw treats Claude CLI reuse and claude -p usage as sanctioned for this integration unless Anthropic publishes a new policy.
Oh cool, so everything is back to business now, until they all of a sudden update their policy tomorrow and retract everything.
Anthropic have proved themselves to be unreliable when it comes to CC. Switching to other providers is the best way to go, if you want to keep your insanity.
> Switching to other providers is the best way to go, if you want to keep your insanity.
Best and most applicable typo ever ʕ ´ • ᴥ •̥ ` ʔ
This is such a strange way for this to be announced. Why is openclaw telling us this? I wouldn't even trust it until Anthropic says so themselves.
It's the PayPal model of customer service: they'll ban you at any time for any reason or none at all, but if you're very nice they might be willing to have a human look at that decision at some point, but probably not.
That's the thing, it's not announced at all. The title is wrong.
It's just OpenClaw people claiming "Anthropic told us it's fine".
They had this on here since day 1 of the block. This is just Openclaw saying "if you run Openclaw inside Claude Code, it's compliant with the Anthropic ToS", because, well, it's literally running inside Claude Code.
What's not allowed is grabbing the oauth tokens and using these for your own custom agent, which is what was (and still is) banned.
Nothing has changed, this appears to just be a giant misunderstanding (and probably a poor choice of words from Openclaw).
Strategic ambiguity.
The most recent Anthropic announcement was not that people would be banned for using subscriptions with OpenClaw, but that it would be charged as extra usage. I think the reason this was changed three days after that announcement is that being charged for extra usage meant people would not be banned for using their subscription OAuth tokens directly against the Anthropic API with a third party harness, as they had been before. But rather both that usage, and the more recent claude -p usage both be charged as extra usage.
I don't see anything on this page that claims something different from that, or that addresses that claim at all.
> Switching to other providers is the best way to go, if you want to keep your insanity
I remember when I’d periodically rage quit from Uber One to Lyft Pink and back again every time I had a terrible customer-service experience. In the end, I realized picking a demon and getting familiar with its quirks was the better way to go.
I’m currently sticking with Claude, in part because I’m not exposed to this nonsense due to OpenClaw, in larger part because of the Hegseth-Altman DoD nonsense. More broadly, however, I’m not sure if any of Google, Anthropic or OpenAI are coming across as stars in AI communication and customer service.
With Uber and Lyft or even Anthropic versus OpenAI versus <insert flavor of the month here> I don’t even try to attach myself to any one brand.
It’s so easy to switch between all of them. I can open the Uber and Lyft apps and compare in a minute. I can run Claude and ChatGPT in parallel and see which one gets a better handle on the question. I can switch LLM providers with a few minutes of signing up for one and cancelling the other.
They all try to encourage brand lock in but it’s easy to pick up and move if you’re using them for their main service.
This isn't really a fair comparison imo.
There hasn't been nearly the confusion and drama surrounding things like codex and gemini-cli. I don't think they're all on the same pedestal right now.
you know how a bunch of IT people are trying to "escape the permanent underclass"? well it seems like anyone building their tools on cloud providers is doing the opposite. they're willingly becoming the underclass in hopes it trickles down
> until they all of a sudden update their policy tomorrow and retract everything.
Oh no. They won't update the policy. Boris or Thariq will casually mention in a random off-hand comment on Twitter that this is banned now, and then gaslight everyone that this has always been the case.
Looks like this was restored 2 weeks ago[0], 3 days after Anthropic said OpenClaw requires extra usage[1]. At this point, it's hard to take this seriously. No official statement and not even a tweet?
[0]: https://news.ycombinator.com/item?id=47633396
No, it's just that it's confusing, because there are two ways of using Claude Code credentials:
1. Take the oauth credentials and roll your own agent -- this is NOT allowed
2. Run your agentic application directly in Claude Code -- this IS allowed
When OpenClaw says "Open-Claw style CLI usage", it means literally running OpenClaw in an official Claude Code session. Anthropic has no problems with this, this is compliant with their ToS.
When you use Claude Code's oauth credentials outside of the claude code cli Anthropic will charge you extra usage (API pricing) within your existing subscription.
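A minimal sketch of that distinction, assuming the `claude` CLI is installed; the function names are illustrative, and only `claude -p` itself is a real CLI flag:

```python
import subprocess

def mode_2_allowed(prompt: str):
    # Mode 2: drive the official Claude Code CLI non-interactively.
    # Subscription auth never leaves Anthropic's own harness.
    return subprocess.run(["claude", "-p", prompt],
                          capture_output=True, text=True)

def mode_1_not_allowed(oauth_token: str, prompt: str):
    # Mode 1: lifting Claude Code's OAuth credentials and calling the API
    # from your own agent. Deliberately left unimplemented, since this is
    # the pattern described above as NOT allowed.
    raise NotImplementedError("prohibited pattern, shown for contrast only")
```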
But... even when running it in mode 2 ("claude -p"), at certain points they tried to detect OpenClaw-style prompts and blocked them [0]. Now OpenClaw says that Anthropic sanctions this as allowable again.
I agree with GP that this is hard to take seriously.
[0]: https://x.com/steipete/status/2040811558427648357
8 replies →
And yet running the Claude Code cli with `-p` in ephemeral VMs gets me the "Third-party apps now draw from extra usage, not plan limits. We've added a credit to your organization to get you started. Ask your workspace admin to claim it and keep going." error.
One day you're experimenting just fine. The next, everything breaks.
And I'd gladly use their web containerized agents instead (it would pretty much be the same thing), but we happen to do Apple stuff. So unless we want to dive into relying on ever-changing unreliable toolchains that break every time Apple farts, we're stuck with macOS.
I think this is consistent with the Anthropic announcement. I do not see anything on this page that says it will NOT be charged as extra usage.
The most recent Anthropic announcement was not that people would be banned for using subscriptions with OpenClaw, but that it would be charged as extra usage. I think the reason this was changed three days after that announcement is that charging for extra usage meant people would no longer be banned for using their subscription OAuth tokens directly against the Anthropic API with a third-party harness, as they had been before; rather, both that usage and the more recent claude -p usage would be charged as extra usage.
> No official statement and not even a tweet?
Release notes and announcements are a well-known agentic anti-pattern.
If you're doing them, you're doing agentic wrong. /s-ish-also-cry
This is called FUD: amplify negativity, silence positivity.
Considering Anthropic is constantly doing the opposite, I would just call it "balance".
3 replies →
It's also something super simple to clarify from Anthropic if they want.
1 reply →
^every comment when someone says something remotely negative about LLMs and their less useful cousins, cryptocurrencies. It's baffling how similar the language and attitude is sometimes.
Anthropic was, even to me, “one of the better ones” until recently. They have made many questionable/poor decisions the last 6-8 weeks and people are right to call them out for it, especially when they want our money.
5 replies →
[dead]
Anthropic is really trying to burn all the goodwill they built up, by raising prices, reducing limits, and making it impossible to know what the actual policies are.
Boiling the frog is an art form. You've got to know when to turn up the heat and when to let it simmer.
Don’t know, I feel like I’ve watched every tech company get through every controversy without consequence.
Google when they merged YouTube and Google+, Reddit multiple times, Facebook after countless scandals, Microsoft destroying Windows and pushing ads.
At the end of the day a solid product and company can withstand online controversy.
1 reply →
Hormussy started it.
If you want LLMs to continue to be offered we have to get to a point where the providers are taking in more money than they are spending hosting them. And we still aren't there (or even close).
They are taking in more than they are spending hosting them. However, the cost for training the next generation of models is not covered.
4 replies →
The open models may not be as great, but maybe they are good enough. Users can switch to them when prices rise, before things become sustainable for (some of) the large LLM providers.
13 replies →
It is nobody's responsibility to ensure billion dollar companies are profitable. Use them until local models are good enough
I think this has to be done with technological advances that makes things cheaper, not charging more.
I understand why they have to charge more, but not many are gonna be able to afford even $100 a month, and that doesn't seem to be sufficient.
It has to come with some combination of better algorithms or better hardware.
10 replies →
If they started doing caching properly and using proper sunrooms for that they'd have a better chance with that
1 reply →
Like with all new products, it takes time to let the market do its work. See it from the positive side: the demand for more and faster and bigger hardware is finally back after 15 years of dormancy. Finally we might see 128GB default memory or 64GB video cards two years from now.
I see the current situation as a plus. I get SOTA models for dumping prices. And once the public providers go up with their pricing, I will be able to switch to local AI because open models have improved so much.
[dead]
Would you please think of the shareholders
What shareholders? Anthropic is a money-burning pit. Not to the same extent as OpenAI, but both will struggle hard to actually turn a profit some day, let alone make back the massive investments they've received.
Not that they don't bring value, I'm just not convinced they'll be able to sell their products in a sticky enough way to justify the prices they'll have to extract to cover the absurd costs.
8 replies →
It's almost like they want me to switch to the Chinese clones - which they consider malicious actors.
[dead]
Aren't they just doing what Hacker News was trying to tell them to do? That AI is useful but not sure if sustainable. Now they're increasing prices and decreasing tokens and you guys are pissed off.
I feel this has to be said constantly, though I hate doing it.
hn is not a monolith. People here routinely disagree with each other, and that's what makes it great
1 reply →
I've been trying to toe the line here myself, here's how I've been doing it. For context, I pay for a Max 5x subscription.
My main goal is to maximize my subscription token usage while trying to comply with the rules, but it's not clear where the line is for automation, so I feel like I need to be clever.
- regular development (most token use): all interactive claude mode, standard use case
- automated background development: experimenting with claude routines (first-class feature, on subscription)
- personal non-nanoclaw claude automations (claude -p): uses subscription tokens, but only called as needed (generally just to fix something if something in my homelab infra goes down; it's set up to not fire on an exact cron time)
- other LLM based automations: usually openrouter API key, cheap models as needed
- nanoclaw: all API-key based, but since it's expensive I keep usage mostly minimal and try to defer anything heavyweight to one of the other automation strategies (nanoclaw mainly just connects my homelab infra with telegram)
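For the homelab-repair automation in the list above, a minimal sketch might look like this; all names are hypothetical, and only `claude -p` itself is the real CLI flag:

```python
import random
import subprocess

def should_dispatch(healthy: bool) -> bool:
    # Only spend subscription tokens when a health check actually fails.
    return not healthy

def jittered_delay(base: float = 60.0, spread: float = 30.0) -> float:
    # Random jitter so the job never fires on an exact cron boundary.
    return base + random.uniform(0.0, spread)

def fix_with_claude(prompt: str):
    # One-shot, non-interactive run through the official CLI (`claude -p`).
    return subprocess.run(["claude", "-p", prompt],
                          capture_output=True, text=True)
```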
Oh that's interesting. Right after they signed the deal with Amazon so maybe it was all compute constrained. In any case, I tried using the Codex $20/mo plan and the limits are so low I can hardly get anywhere before my agent swaps to a different agent.
I'm somewhat suspicious that if I do this without an official Anthropic notice I'll lose my precious Max $200/mo account, so I'll sit tight for a while.
Wait, how?
I had an idea on a whim to vibe-engineer an irccloud replacement for myself.
Started with claude web + Opus 4.7 and continued with Claude Code. Ate up two full cycles of my quota in maybe 6-10 prompts.
Then I iterated on that with pi.dev+codex for HOURS, managed to use 50% of my Codex Pro subscription.
Yeah, I tried Codex pro today and the $20 plan is way more generous than Claude's, especially lately.
1 reply →
Consider Z.ai if you need "bulk" usage, GLM is now very good. They still have the occasional API brown out however.
I used to use GLM mostly and had a Claude Pro subscription for occasional review and clean up.
Now I just use GLM.
I do think Claude Max is value for money. But it's more value than I personally need and I like Anthropic less and less.
Naive question but are you not afraid z.ai will train on your personal data?
3 replies →
They said from the beginning it was compute constraint and that OpenClaw was causing way more usage than they could handle
GPT-5.4 is brutally consumptive, for sure. It's not very verbal, but gpt-5.3 codex is wildly smart about coding & planning, and way, way less token hungry.
Does that include OpenCode? That's what I care about most and it's the primary reason I've been sticking with OAI the past few months.
PSA: Since you are still required to use Claude Code and I have had a bunch of non-technical people asking me to make https://github.com/rcarmo/piclaw based on Claude rather than pi (which is never gonna happen), I have started pivoting its Python grand-daddy into a Go-based web front-end that runs Claude as an ACP agent.
Still early days, but code is available, sort of works if you squint, and welcomes PRs: https://github.com/rcarmo/vibes/tree/go
That's a very misleading title.
Question to the sages: should that submission get flagged because of that?
OpenClaw says Anthropic says it's OK. Well, that's crystal clear then.
My OpenClaw assistant (who's been using Claude) lost all his personality over the last week, and couldn't figure out how to do things he never had any issues doing.
I racked up about $28 worth of usage and then it just stopped consuming anymore, so I don't know if there was some other issue, but it was persistent.
I got sick of it and used a migration script to move my assistant's history and personality to a claude code config. With the new remote exec stuff, I've got the old functionality back without needing to worry about how bleeding-edge and prone to failure OpenClaw is.
I feel like this is what their plan was all along -- put enough strain and friction on the hobbyist space that people are incentivized to move over to their proprietary solution. It's probably a safer choice anyway -- though I'm sure both are equally vibe-coded.
I’ve been using codex cli and GPT 5.4. It is better at coding than Opus anyway. I did not really test Opus 4.7, but older versions generated worse results compared to GPT.
I would not even have tried and tested it, though, if Anthropic had not banned my account. The shadiest thing I did was use it with opencode for a while, I think. I never installed claw or used CC tokens anywhere else.
This is a weird company doing weird shit.
so if i use an openclaw-style cli that looks like opencode or other agentic-style applications, would that be acceptable?
I got sick of the inconsistency caused by Anthropic tinkering with Claude Code and had canceled my 20x. My plan was to switch to Codex so I could use it in Pi.
I am specifically talking about switching because of the harness, not model quality. Anyone else match my experience?
I wonder how many other people recently did the same. It would be prudent of Anthropic to let people use Pro/Max OAuth tokens with other harnesses I think. Even though I get why they want to own the eyeballs.
I’ve been using Codex Pro since they lobotomized Opus 4.6. Codex is so much better, GPT 5.4 xhigh fast is definitely the smartest and fastest model available.
For a while there I had both Opus 4.6 and Codex access and I frequently pitted them against each other, I never once saw Opus come out ahead. Opus was good as a reviewer though, but as an implementer it just felt lazy compared to 5.4 xhigh.
One feature that I haven’t seen discussed that much is how codex has auto-review on tool runs. No longer are you a slave to all or nothing confirmations or endless bugging, it’s such a bad pattern.
Even in a week of heavy duty work and personal use I still haven’t been able to exhaust the usage on the $200 plan.
I’ll probably change my mind when (not IF) OpenAI rug pull, but for spring ‘26, codex is definitely the better deal.
I also made the switch to OpenAI, the $20 plan, I dunno about "so much better" but it's more or less the same, which is great!
The models and tools levelling out is great for users because the cost of switching is basically nil. I'm reading people ITT saying they signed up for a year - big mistake. A year is a decade right now.
3 replies →
Any alternative to Claude Design ? Tried Figma with Opus 4.6 but it doesn't come close in my experience.
Codex is abysmal for UI design imo.
7 replies →
I've been on pi for a few months now, built a custom tmux plugin so i can use nested pi and mix and match codex / claude instances.
pi has been the better harness out of all the ones i tried, first and third party.
Ever since the Anthropic block i've just canceled all my claude subs. Used to be codex was a bit worse, now they're practically equal. Claude is slightly better at directing other agents but the difference is too minor and not worth the money.
Claude usage limits / costs are absurd.
Any 'principles' people praise anthropic for are not that relevant to me anyways because i'm not a US citizen.
I left anthropic a while ago because of the similar shenanigans they had earlier. I went with opencode & zen.
I still have their subscription, but am using pi now, mainly because something happened that made my opencode sessions unusable (cannot continue them, just blanks out, I assume something in the sqlite is fucked), and I cannot be bothered to debug it.
For what I use the agents, the Chinese models are enough
Wouldn't using pi be against their terms of use about having to go through the Claude Code CLI for all Max plan usage? (I had used Droid with Max previously; it was a great combo.)
3 replies →
I also cancelled my 20x and switched to Codex. At this point even the Codex CLI seems to perform better than Claude Code... And so far I'm on the OpenAI Pro plan and haven't even needed to upgrade to their $100/mo plan. I'm getting more value for almost 10x cheaper.
(Disclosure: I work on tamer, an OSS supervisor for coding agents — biased.) Add one more to the count. The OAuth-across-harnesses idea would help, but it doesn't fix the shape of the problem. "Harness" has always felt off to me. Exoskeleton is closer — Claude Code, Codex, opencode wrap the model and augment it from the inside. What's missing is a layer above that's explicitly not an exoskeleton: a thin supervisor. A master that watches and guides, nothing more. It just relays I/O and hands approval back to the human.
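A rough sketch of that thin-supervisor idea, assuming a simple line-based protocol; the names and the `APPROVE?` convention are illustrative, not tamer's actual API:

```python
import subprocess

def supervise(cmd, approve=input):
    # Relay the agent's output unchanged, and hand every approval
    # decision back to the human instead of auto-confirming.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line, end="")              # relay output verbatim
        if line.startswith("APPROVE?"):  # agent asks before acting
            proc.stdin.write(approve("allow? [y/n] ") + "\n")
            proc.stdin.flush()
    return proc.wait()
```

The supervisor never augments the agent from the inside; it only watches, relays, and gates.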
I switched to Droid+Opus (with Claude Max) many months ago and it was my favorite combo.
Had to stop because they don't like us proxying requests anymore.
Same, I was on the 5x plan and canceled and switched to Codex because I want to use Pi.
My experience is the opposite of this thread's consensus. Context: Full time SWE, working on large and messy codebase. Not working on crazy automations, working on fixing bugs, troubleshooting crashes, implementing features.
Anthropic models write much better code; they are easy to follow, reasonable, and very close to what I would have done if I had the time... OpenAI's, on the other hand, generate extremely complex solutions to the simplest problems.
I was so disappointed by non-Anthropic models, that for a couple of weeks I only used Anthropic models, but based on this thread, I'll go back and give it another try. It's good to go back and try things again every couple of weeks.
Of course, I was annoyed that they lobotomized 4.6, the difference was day and night, and Anthropic is certainly not a company I trust. In my opinion, it shows their willingness to rugpull, so I'm looking at other approaches. Since 4.7, things went back to normal, things you'd expect to work just work.
> I wonder how many other people recently did the same.
Some negative signal for better overall view on things: I'm still with Anthropic and will probably stay with them for the foreseeable future.
I think after the DoD/DoW shenanigans (which in and of themselves felt like a reasonable take on Anthropic's part) they got a bunch of visibility and new users, so hitting some scaling limits, and with it some service disruption, was pretty much inevitable. Couple this with the tokenizer changes and the seeming decrease in model performance (adaptive thinking etc.), plus increased downtime (which doesn't matter much for me, but definitely does for anything time-sensitive), and lots of people will be rightfully pissed off.
At the same time, in practice I've only seen it do stupid things across 8 million tokens about 5 times (confusing user/assistant roles, not reading files that should be obvious for a given use case, and picking trivially wrong/stupid solutions when planning things), alongside another 4 times that tests/my ProjectLint tool caught that I would have missed. The error rate is still arguably lower than mine, though I work in a very well known and represented domain (webdev with a bunch of DevOps and also some ML stuff, and integration with various APIs etc.).
At the same time, the 85 EUR they gave me for free has been enough to weather the instability around pricing changes and peak usage. They've fixed most of the issues I had with Claude Code (notably performance), and the sub-agent support is way better than OpenCode's in my experience. They also keep shipping new features, like Dispatch and Routines and Design, that seem genuinely useful rather than misdirected. Opus 4.7 with high reasoning works better than most of the other models I've tried (OpenAI's are good, I just prefer Claude's phrasing/language/approaches, the overall vibe, not even sure what I'd call it exactly, all the stuff on top of the technical capabilities).
At the same time, if they mess too much with the 100 USD tier, I bet I could go to OpenAI or try out the GLM 5.1 subscription without too many issues. For now they're replacing all the other providers for me. Oh also I find the subscription vs API token-based payment approach annoying, but I guess that's how they make their money.
Because the harness is the moat and the key IP now, not the models themselves. That is the why, for both OpenAI and Anthropic. With all the money they've raised and the compute they've acquired and have on the books, of course no one can easily replicate them: who can afford all those datacenters and interconnected Nvidia GPUs? That is why OpenAI throws you a bone and gives you an open-source SDK harness, but not the one they actually use for ChatGPT. But now both of them have to deliver on all the bull-shet they said these models can do... and the truth is they cannot. So the bubble bursts and we will see what happens. We all have to buy iPhones or MacBooks, so that makes sense; we all use Chrome or Google Search, Instagram, TikTok.
All these models and agents are shortcuts for all of us to be lazy and play games and watch YouTube or Netflix, because we use them to work less. Well, the party will be over soon.
I don’t think I’ve seen a more confused and shambolic product strategy since Google’s absurd line of GChat rebrandings.
Last year I was excited about the constant forward progress on models but since February or so its just been a mess and I want off this ride.
Either way I’m going to wait for “official” word from Anthropic, which I guess at this point will probably be a “Tell HN” or Reddit text post or a Xitter from some random employee’s personal account, because apparently that’s the state of corporate communication now.
[flagged]
I didn't even use openclaw and Anthropic disabled my account without explanation beyond "suspicious signals". If anyone found a way to get out of that, I'd be curious to hear it - genuinely no idea what I did wrong, and the Google docs form I filled out to appeal never got me any reply.
Same thing happened to me in January. Never heard back from them after submitting the google form. A few weeks ago I went through the subscription flow again and the 'account disabled' message was no longer there. Didn't go through with the payment so it's possible I would have been blocked at that point but it looked like my account had been re-enabled. I think you just have to play the waiting game unfortunately.
I'm surprised by this actually but OpenClaw is trash anyway.
Why? Did they figure out cheaper compute? Or did they lose a lot of users, and now the compute is there unused?
Whether to allow Claude subscriptions to access other services or not: at this point Anthropic seems schizophrenic, sometimes worried about insufficient computing power and sometimes worried about user loss. It's puzzling.
What's puzzling or schizophrenic about that? Those seem like two very natural factors that would be in tension with one another and have to be balanced.
[dead]
Almost seems like business leaders have to balance different aspirations and make tradeoffs. Unbelievable.
Could they at least have a page somewhere letting us know what we’re allowed to do today?
How can they be this bad at this? What was all that about then?
Is there a way to use Anthropic subscription with hermes-agent?
They see that the new KimiK2.6 will eat their lunch. They don't care about you, they just care about your money and will take away your options if they don't believe you have a solid alternative.
Correction: OpenClaw says Anthropic says OpenClaw-style Claude CLI usage is okay again.
And then recommends to use an API key, which as far as I know was never restricted, it was trying to use the subscription that was prohibited/limited.
I'm confused by the comments being full of people swearing off Claude, feels like real HN bubble stuff.
(That's implied by the sitename to the right of the title)
/gestures at all the comments
Not at all.
Does this mean you can use openclaw with a Claude Pro account? I'm curious to try it, but no way I'm going to pay API rates.
This is only useful when you are using the Claude CLI fairly regularly on the same machine as OpenClaw, right? Because the tokens need to be refreshed manually every so often?
Same, I am from the 3x plan and canceled and switched to Codex 2 days ago...
I'm out of the loop on Claude, hasn't it always been possible to use the Anthropic API with a tool like OpenClaw, paying per request? Is this limitation just for using your monthly subscription account?
Many people likely objected to the original restriction because it seemed as though Anthropic was trying to impede the development of competing tools.
If I'm paying for compute, why should it matter whether I use Anthropic's harness (e.g., Claude Code) or a 3rd-party harness?
I find it a little bizarre that people have this expectation. You can still pay for compute and use it the way you want by paying for the product you actually want to use. Subscription products like this are not marketed or intended to be used as access to the API, but they also offer access to the API if that's what you want. I'm still not entirely clear why people insist on using their subscription like this, so let me know if I'm missing something.
1 reply →
Isn't their argument that third-party harnesses don't play nice with their GPUs? Which is a fair argument.
With Claude Code they can predict what the traffic will look like; with third-party harnesses they cannot.
1 reply →
Yes, exactly.
Maybe it’s allowed because they built the ability to direct the costs to your extra usage budget, not your monthly subscription?
That ship has already sailed. It's hard to trust Anthropic here given the wringer they have dragged us through.
Contrast that to what GitHub did, which was to pause new customers to ensure quality remained and things were stable.
https://news.ycombinator.com/from?site=openclaw.ai
hot damn
What models have you guys tried to use with OpenClaw that you've found suitable for the task? Codex personally rules for my dev style but not sure how well it works in the claw scenario.
Off topic, but don't you all think using Claude inside openclaw is quite a waste of tokens?
A more authoritative source (aka a tweet) would be nice.
This title is ridiculous and needs to be fixed.
Anthropic is trying so hard to be Apple that they are making all the mistakes Apple made in its early days.
Feels like the real issue isn’t policy but pricing models
Swapped my OpenClaw to Claude again. I played around with Gemini and Chinese models in past month but it didn’t work for me.
This is a perfect example of how quickly you can burn through trust that took a long time to earn. I used to be - in my small circle of friends and peers - a genuine advocate for Anthropic and Claude. It was my sole AI assistant for over a year. But somewhere around February/March, something shifted. Declining quality, policy changes, inconsistent output. Nothing dramatic, just... a slow erosion.
That erosion pushed me to try Codex. I signed up for their most expensive pro plan. Now I'm about to experiment with Kimi. I'm not saying they're better (well, sometimes they are). But here's the thing - what Anthropic did is they made me look. They made a loyal customer start shopping around. And I think that's the worst thing you can do.
Having said that - as an LLM provider for my product, we're staying with Claude. I still trust in their ethics. Please don't prove me wrong.
I'm trying out codex for the first time as well, because something is up with Claude for sure; 4.7 has been super frustrating. For other models, I highly recommend trying MiniMax 2.7. Using it with Hermes is actually pretty good, and their token subscription plans include a lot of usage for $10.
Perfect, thanks. Codex app sucks, but I've been exploring opencode for that. Will try MiniMax!
Same here. I've been on the Claude Max 20x plan for a while. Now I'm really giving codex a try and looking at the cheaper models as well.
Enshittification 101, codex is undergoing the same thing on a 3 month lag.
Haha, thanks for the heads-up
Anthropic keeps conflating two distinct strategies — be the best model for developers to build on, or be the company that ships Claude Code. Those two have opposite policy conclusions. Restricting third-party harnesses maximizes Claude Code revenue; allowing them maximizes model-layer lock-in through developer habit. The whiplash is the symptom of not picking. Pick for crying out loud!
Interesting perspective on AI CLI tools. The Anthropic policy clarification is a significant development for the developer community. Would be curious about the implementation details.
ai garbage
Correct title: OpenClaw says Anthropic said OpenClaw-style Claude CLI usage is allowed again
Can we get OpenCode support back as well?
Did they disable this to give them time to come out with their own agent?
Probably somewhat worried about users shifting to the Grok API if they have to
Uh, what? For the love of God can I make my own harness or not? Or is this just saying you can use it only in API mode?
I have had some ideas for a custom harness (like embedding some tools OOTB and replacing slow tooling) but these policies throw me off. Instead I use local models.
Problem is API costs are insane. I have toyed with the idea of running a local model that works with Claude Sonnet or even Haiku, and I know this has been done by others.
How about third party coding harnesses?
Or Claw-like harnesses that we make ourselves? It takes honestly like 15 minutes to roll your own, so I did it thinking "well, hopefully it's not considered third party"
I do claw like things all the time. Give CC an API document and it figures out how to take a snapshot of the data. Pulls it down and does an analysis.
Canceled anyway.
The problem is these tools are so important that I'm never going to risk Anthropic blocking my account after the last debacle. So I'll be using OpenAI with OpenClaw. Hard to win back trust.
Bad Decision.
And tomorrow, it won't be allowed any more and accounts will be closed without prior notice.
Use something else.
Great so now we can all look forward to Claude progressively getting reduced limits again. How long till the $1000 ultra plan... or they just want us all paying API credits instead
I guess it doesn't matter any more, everyone bought all the mac minis
Would that apply to OpenCode too?
Pfft. Damage done, users know that Anthrophic will pull the rug from under them again if given half a chance. So yea, plan accordingly.
[flagged]
Guess they saw their growth rate shrink dramatically lol
More people flocked to Codex and found out that it's not worse, and sometimes superior.
Good luck on that opus plan.
Same PR strategy as the US administration lol
Hmm. Is this real?? If so, it's actually amazing news lol
Lol, no thanks
[dead]
[dead]