Comment by omer_balyali
21 hours ago
A similar thing happened to me back on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts to GitHub), while beta testing Claude Code Web.
Banned, and the appeal was declined without any real explanation of what happened, other than saying "violation of ToS", which could mean basically anything. Except there was really nothing to trigger it, other than using most of the free credits they gave out to test CC Web in less than a week. (No third-party tools, no VPN, nothing really.) Many people reported similar issues at the same time on Reddit, so it wasn't an isolated case.
Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.
As their ads say: "Keep thinking. There has never been a better time to have a problem."
I've been thinking since then about what the problem was. But I guess I will "Keep thinking".
Honestly, it's kind of horrifying: if "frontier" LLM usage were to become as required as some people think just to operate as a knowledge worker, someone could basically be cast out of the workforce entirely by being access-banned by a very small group of companies.
Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).
There is also a big risk that an employer's whole organisation could be completely blocked from using Anthropic services if one of its employees has a suspended/banned personal account:
From their Usage Policy: https://www.anthropic.com/legal/aup "Circumvent a ban through the use of a different account, such as the creation of a new account, use of an existing account, or providing access to a person or entity that was previously banned"
If an organisation is large enough and has the means, it MIGHT get help. But if the organisation is small, and especially if the organisation is owned by the person whose personal account was suspended... then there is no way to get it fixed, if this is how they approach it.
I understand that if someone has malicious intentions or takes malicious actions while using their service, they have every right to enforce this rule. But what if it was an unfair suspension, where the user/employee didn't actually violate any policies? What is the course of action then? What if the employer's own service/product relies on the Anthropic API?
Anthropic has to step up. Talking publicly about the risks of AI is nice and all, but as an organisation they should practice what they preach. Their service is "human-like" until it's not, and then you are left alone and locked out.
A new phobia freshly born.