I was banned from Claude for scaffolding a Claude.md file?

19 hours ago (hugodaniel.com)

I've been doing something a lot like this, using a claude-desktop instance attached to my personal MCP server to spawn claude-code worker nodes for things, and for a month or two now it's been working great, using the main desktop chat as a project manager of sorts. I even started paying for the MAX plan, as I've been using it effectively to write software now (I am NOT a developer).

Lately it's gotten entirely flaky, where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow, like the author of this article did.

Now even worse is Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them - they'll contact you via email. That email never comes, even after several attempts.

I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

I love Claude, as it's an amazing tool, but when it starts to implode on itself to the point that you actually require some out-of-band support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".

  • Anthropic has been flying by the seat of their pants for a while now and it shows across the board. From the terminal flashing bug that’s been around for months to the lack of support to instabilities in Claude mobile and Code for the web (I get 10-20% message failure rates on the former and 5-10% on CC for web).

    They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.

    • The Pro plan quota seems to be getting worse. I can get maybe 20-30 minutes of work done before I hit my 4-hour quota. I found myself using it more just for the planning phase to get a little more time out of it, but yesterday I managed to ask it ONE question in plan mode (from a fresh quota window), and while it was thinking it ran out of quota. I'm assuming it pulled in a ton of references from my project automatically and blew out the token count. I find I get good answers from it when it does work, but it's getting very annoying to use.

      (On the flip side, Codex seems to be SO efficient with tokens that it can be hard to understand its answers sometimes; it rarely includes files without you adding them manually, and it often takes quite a few attempts to get the right answer because it's so strict about what it does each iteration. But I never run out of quota!)

    • We’re an Anthropic enterprise customer, and somehow there’s a human developer of theirs on a call with us just about every week. Chatting, tips and tricks etc.

      I think they are just focusing on where the dough is.

    • You are giving me images from The Big Short, where the guy goes to investigate mortgages and knocks on some random person's door to ask about a house/mortgage, just to learn that it belongs to a dog. Imagine finding out that Anthropic employs no humans at all. Just an AI that has fired everyone and has been working on its own releases and press releases ever since.

    • I think your surmise is probably wrong. It's not that they're growing too fast, it's that their service is cheaper than the actual cost of doing business.

      Growth isn't a problem unless you don't actually cover the cost of every user you sign up. Uber, but for poorly profitable business models.

  • > I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it.

    Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?

    • Support has been automated for a while, LLMs just made it even less useful (and it wasn't very useful to begin with; for over a decade it's been a Byzantine labyrinth of dead-ends, punji-pits and endless hours spent listening to smooth jazz).

  • Making a new account and doing the exact same thing to see if it happens again… would be against the ToS and is therefore something you absolutely shouldn't do

  • I really don't understand people who say Claude has no human support. In the worst case, the human side of their support got back to me two days after the AI, and they apologized for being so slow.

    It really leads me to wonder if it’s just my questions that are easy, or maybe the tone of the support requests that go unanswered is just completely different.

  • > Now even worse is Claude seemingly has no real support channel. You get their AI bot, and that's about it

    This made me chuckle.

  • Have you tried any of the leading open-weight models, like GLM etc.? And how do ChatGPT or Gemini compare?

    And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.

    • Giving Gemini a go after Opus did crap one time too many, and so far it seems that Gemini does better at identifying and fixing root causes, instead of piling code or disabling checks to hide the symptoms like Opus consistently seems to do.

    • I tried GLM 4.7 in Opencode today. In terms of capability and autonomy, it's about on par with Sonnet 3.7. Not terrible for a 10th the price of an Anthropic plan, but not a replacement.

  • > I've been using it effectively to write software now (I am NOT a developer)

    What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.

    • About my not having a software background: I started this because I've been a network/security/systems engineer/architect/consultant for 25 years, but never did dev work. I can read and follow code well enough to debug things, but I've never had the knack to learn languages and write my own. Never really had to, but wanted to.

      This now lets me use my IT and business experience to make bespoke code for my own uses so far, such as firewall config parsers specialized for wacky vendor CLIs, and to fill gaps in automation when there are no good vendor solutions for a given task. I started building my MCP server to enable me to use agents to interact with the outside world, such as invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far in doing so, still without having to know any code.

      I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines (using ruff) to keep it from going too wonky, and actually working it up to a state I can call a 1.0, with plans to run a full audit cycle against it for security audits, performance testing, and whatever else I can to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.

      Even NOT being a developer, I understand the need for applying best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two about avoiding that.

    • I started using claude-code, but found it pretty useless without any ability to talk to other chats. Claude recommended I make my own MCP server, so I did. I built a wrapper script that uses Anthropic's sandbox-runtime toolkit to invoke claude-code in a project with tmux, and my MCP server allows Desktop to talk to tmux. Later I built in my own filesystem tools, and now it just spawns konsole sessions for itself, invoking workers to read tasks it drops into my filesystem; it points claude-code at them and runs until it commits code, and then I have the PM in Desktop verify it and do the final push/PR/merge. I use an approval system in a GUI that tells me when Claude is trying to use something, and I set an approve-for period to let it do its thang.
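
      For anyone trying to picture that plumbing, here is a minimal sketch of such a spawner tool, assuming the official `mcp` Python SDK (FastMCP) plus the real `tmux` and `claude -p` CLIs; the tool name, prompt, and session handling are hypothetical illustrations, not the commenter's actual server code.

          from subprocess import run

          from mcp.server.fastmcp import FastMCP

          mcp = FastMCP("worker-spawner")

          @mcp.tool()
          def spawn_worker(session: str, task_file: str) -> str:
              """Start a detached tmux session running a claude-code worker
              pointed at a task file dropped on the filesystem."""
              run(["tmux", "new-session", "-d", "-s", session,
                   f"claude -p 'Read {task_file} and complete the task, then commit.'"],
                  check=True)
              return f"worker '{session}' started"

          if __name__ == "__main__":
              mcp.run()  # stdio transport, so a desktop client can call the tool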

      Now I've been using it to build on my MCP server, which I now call endpoint-mcp-server (coming soon to a GitHub near you) and which I've modularized with plugins, adding lots more features and a more versatile Qt6 GUI with advanced workspace panels and widgets.

      At least I was until Claude started crapping the bed lately.

    • My use is considerably simpler than GP's, but anytime I get bogged down in the details and lose my way, I just have Claude handle that bit of code and move on. It's also good for any block of code that breaks often as the program evolves; Claude has much better foresight than I do, so I replace that code with a prompt.

      I enjoy programming but it is not my interest and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.

  • The desktop app is pretty terrible and super flaky, throwing vague errors all the time. Claude code seems to be doing much better. I also use it for non-code related tasks.

  • > Lately it's gotten entirely flaky, where chats will just stop working

    This happens to me more often than not, both in Claude Desktop and on the web. It seems the longer the conversation goes, the more likely it is to happen. Frustrating.

    • Judging by their status page riddled with red and orange, as well as the months-long degradation (with a blog post) last September, it is not very reliable. If I sense its responses are crap, I check the status page and, lo and behold, it's usually degraded. For a non-deterministic product, silent quality drops are pretty bad.

  • Have a Max plan, didn't use it much the last few days. Just used it to explain a few things to me, with examples, for a TTRPG. It just hung a few times.

    Max plan, and on average I use it ten times a day? Yeah, I'm cancelling. Guess they don't need me.

    • That's about what I'm getting too! It just literally stops at some point, and any new prompt starts, then immediately stops. This was even in a fairly short conversation with maybe 5-6 back-and-forth exchanges.

  • > where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive

    I had this start happening around August/September and by December or so I chose to cancel my subscription.

    I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.

    • I have noticed this when switching locations on my VPN. Some locations are stable and some will drop the connection while the response is streaming on a regular basis.

  • One could, alternatively, come to the conclusion that what you pay as a customer falls far short of the cost of the product itself, even if it's doing what you expect it to do.

    That is, you and most Claude users aren't paying the actual cost. You're like an Uber customer a decade ago.

  • Serious question: why are Codex and Mistral (Vibe) not real alternatives?

    • Codex: Three reasons. I've used all extensively, for multiple months.

      Main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex became the same speed as 4.5-opus without a change in quality, a huge number of people - not HNers but everyone price sensitive - would switch to Codex because it's so much cheaper per usage.

      The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.

      The third is that its knowledge cutoff is far enough behind both Opus 4.5 and Gemini 3 that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.

      For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is even bigger.

      Mistral is of course so far removed in quality that it's apples to oranges.

    • The Claude models are still the best at what they do. Right now GLM is just barely scratching Sonnet 4.5 quality, Mistral isn't really usable for real codebases, and Gemini is in a weird spot where it's sometimes better than Claude at small targeted changes but randomly goes off the rails. Haven't tried Codex recently, but the last time I did, the model thought for 27 minutes straight and then gave me about the same (incorrect) output that Opus would have in 20 seconds. Anthropic's models are their only moat, as demonstrated by their cutting off tools other than Claude Code on their coding plans.

    • I tried codex, using my same sandbox setup with it. Normally I work with sonnet in code, but it was stuck on a problem for hours, and I thought hmm, let me try codex. Codex just started monkey patching stuff and broke everything within like 3-4 prompts. I said f-this, went back to my last commit, and tried Opus this time in code, which fixed the problem within 2 prompts.

      So yeah, codex kinda sucks to me. Maybe I'll try mistral.

  • Gemini CLI is a solid alternative to Claude Code. The limits are restrictive, though. If you're paying for Max, I can't imagine Gemini CLI will take you very far.

    • Gemini CLI isn't even close to the quality of Claude Code as a coding harness. Codex and even OpenCode are much better alternatives.

    • Well, I use Gemini a lot (because it's one of three allowed families), but tbh it's pretty bad. I mean, it can get the job done but it's exhausting. No pleasure in using it.

    • Gemini CLI regularly gets stuck failing to do anything after declaring its plan to me. There seems to be no way to un-lock it from this state except closing and reopening the interface, losing all its progress.

    • I tried Gemini a year or so ago, and I gave up after it flatly refused to write me a script and instead tried to tell me how to learn to code. I am not making this up.

  • Folks, a solution might be to use the Claude models inside the latest Copilot. Copilot is good. Try it out. The latest versions are improving all the time, and you get plenty of usage at a reasonable price.

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." (Frank Herbert, Dune, 1965)

  • So why didn't this happen with electricity, water and food, but would with thinking capacity?

    • > food

        Can you sell or share farm-saved seed?
        "It is illegal to sell, buy, barter or share farm-saved seed," warns Sam. [1]
      
        Can feed grain be sown?
        No – it is against the law to use any bought-in grain to establish a crop. [1]
      
        FTC sues John Deere over farmers' right to repair tractors
        The lawsuit, which Deere called "meritless," accuses the company of withholding access to its technology and best repair tools and of maintaining monopoly power over many repairs. Deere also reaps additional profits from selling parts, the complaint alleges, as authorized dealers tend to sell pricey Deere-branded parts for their repairs rather than generic alternatives. [2]
      

      [1] https://www.fwi.co.uk/arable/the-dos-and-donts-of-farm-saved...

      [2] https://www.npr.org/2025/01/15/nx-s1-5260895/john-deere-ftc-...

    • These are regulated by governments that, at least for now, are still working for the people. They're some of the first that get attacked and taken away when said government fails though, or when another government invades.

      (ex: Palestine had its utilities and food cut off so that thousands starved; Ukraine's infrastructure is under attack so that thousands will die from exposure, and that's after they went for its food exports, starving more of the people that depended on them)

    • What do you mean? This is very much true. We are economically compelled to buy food from supermarkets, for instance, because hunting and fishing have become regulated, niche activities. Compared to someone from the 1600s who could scoop a salmon out of the river with a bucket, we are quite oppressed.

    • > electricity, water and food

      Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling these resources lording it over those who did not.

I also got banned from Claude over a year ago. The signup process threw an error and I couldn't try again because they took my phone number. The support system was a Google form petition to be unblocked. I am still mad about it to this day.

Edit: my only other comment on HN is also complaining about this 11 months ago

A similar thing happened to me back on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts to GitHub), while I was beta testing Claude Code Web.

Banned, and my appeal declined without any real explanation of what happened, other than "violation of ToS", which can be basically anything, except there was really nothing to trigger that, other than using most of the free credits they gave out to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) Many people had similar issues at the same time, reported on Reddit, so it wasn't an isolated case.

Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

As their ads say: "Keep thinking. There has never been a better time to have a problem."

I've been thinking since then about what the problem was. But I guess I will "Keep thinking".

  • Honestly, it's kind of horrifying that if "frontier" LLM usage were to become as required as some people think just to operate as a knowledge worker, someone could basically be cast out of the workforce entirely by being access-banned by a very small group of companies.

    Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).

    • There is also a big risk of an employer's whole organisation being completely blocked from using Anthropic services if one of their employees has a suspended/banned personal account:

      From their Usage Policy: https://www.anthropic.com/legal/aup "Circumvent a ban through the use of a different account, such as the creation of a new account, use of an existing account, or providing access to a person or entity that was previously banned"

      If an organisation is large enough and has the means, it MIGHT get help, but if the organisation is small, and especially if the organisation is owned by the person whose personal account was suspended... then there is no way to get it fixed, if this is how they approach it.

      I understand that if someone has malicious intentions/actions while using their service they have every right to enforce this rule, but what if it was an unfair suspension in which the user/employee didn't actually violate any policies? What is the course of action then? What if the employer's own service/product relies on the Anthropic API?

      Anthropic has to step up. Talking publicly about the risks of AI is nice and all, but as an organisation they should follow what they preach. Their service is "human-like" until it's not, then you are left alone and out.

We really need some law to stop "you have been banned and we won't even tell you the actual reason for it"; it's become a plague, made worse by automated systems handing out bans.

  • Do we though? It’s an important question about liberty - at what point does a business become so large that it’s not allowed to decide who gets to use it?

    There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.

    • > at what point does a business become so large that it's not allowed to decide who gets to use it?

      It's not about size, it's about justification to fight the ban. You should be able to check if the business has violated your legal rights, or if they even broke their own rules, because failure happens.

      > There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.

      I guess it was this one: https://en.wikipedia.org/wiki/Lee_v_Ashers_Baking_Company_Lt...

      There was a similar case in USA too: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...

    • While I still object to them having a say in that matter (next thing is; we don’t serve darkies) - that is different. There are hundreds of shops to get that cake from.

      But Anthropic and “Open”AI especially are firing on all bullshit cylinders to convince the world that they are responsible, trustable, but also that they alone can do frontier-level AI, and they don’t like sharing anything.

      You don’t get to both insert yourself as an indispensable base-layer tool for knowledge-work AND to arbitrarily deny access based on your beliefs (or that of the mentally crippled administration of your host country).

      You can try, but this is having your cake and eating it too territory, it will backfire.

    • I don't think the parent comment was about banning bans based on business size or any other measure, for that's obviously a non-starter. I think it was more about getting rid of unexplained bans.

      To that end: I think the parent comment was suggesting that when a person is banned from using a thing, then that person deserves to know the reason for the ban -- at the very least, for their own health and sanity.

      It may still be an absolute and unappealable ban, but unexplained bans don't allow a person to learn, adjust, and/or form a cromulent and rational path forward.

    • For me the liberty question you raised there isn't so much about whether the business has become large, as whether it's become "infrastructure". Being denied service by a cake shop may very well be distressing and hurtful, but being suddenly denied service by your bank, your mobile phone provider, or even (especially?) by gmail can turn your entire life upside down.

    • Yes, it is not too much to require that, if you offer something to someone, the receiving party is able to have a conversation with you. You can still reject them in the end, but being able to ask the people involved questions is a reasonable expectation; many of these big tech companies have made that effectively impossible.

      If you want to live life as a hermit, good on ya, but then maybe accept that life and don't offer other people stuff?

  • Actually, there are laws that stop banks from telling their clients why they were flagged for money laundering

  • IMO every ban should have a dedicated web page containing ban reasons and proofs, which affected person can challenge, use in court or share publicly.

  • Judging by the EUR currency, the guy is from the EU, so he HAS laws available to protect himself.

    Recital (71) of the GDPR

    "The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."

    https://commission.europa.eu/law/law-topic/data-protection/r...

    • More recently, the Digital Services Act includes "Options to appeal content moderation decisions" [0]; I believe this also covers being banned from a platform. Not sure if Claude falls under these rules; I think they only apply to 'gatekeeper' platforms, but I'm reasonably confident the number of organizations that fall under them will increase.

      [0] https://digital-strategy.ec.europa.eu/en/policies/digital-se...

    • The company will refuse under GDPR Article 15(4):

      "The right to obtain a copy referred to in paragraph 3 shall not adversely affect the rights and freedoms of others."

      and then you will have to sue them.

I was recently kicked out of ChatGPT because I wrote "a*hole" in a context where ChatGPT constantly kept repeating nonsense! I find the ban by OpenAI very intrusive. Remember, ChatGPT is a machine! And I did not hurt any sentient being with my statement, nor was the GPT chat public. As long as I do not hurt any feeling beings with my thoughts, I can do whatever I want, can't I? After all, as the saying goes, "Thoughts are free." Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior. However, there is no repeated use here. I don't run around the flat all day swearing. Anyone who basically insinuates such a thing, like OpenAI, is, as I said, intrusive. I want to be able to use a machine the way I want to! As long as no one else is harmed, of course...

  • >Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior

    One could even argue that just having bad thoughts, fantasies or feelings poses a risk to yourself or others.

    Humankind has been trying to deal with this issue for thousands of years in the most fantastical ways. They're not going to stop trying.

    • Meh.

      I decided shortly after becoming an atheist that one of the worst parts of religion was the notion that there are magic words that can force one to feel certain things, and I found that to be the same sort of thinking as saying that a woman's short skirt "made" you attack her.

      You’re a fucking adult, you can control your emotions around a little skin or a bad word.

  • Wait, did it just end the session or was your account actually suspended or deactivated? "Kicked out" is a bit ambiguous.

    I've seen the Bing chatbot get offended before and terminate the session on me, but it wasn't a ban on my account.

  • Wait what? I keep insulting ChatGPT way worse on a weekly basis (to me it's just a joke, albeit a very immature one). It's news to me that this behavior has any consequences. It never did for me.

    • same here. i just opened a new chat and sent "fuck you"

      it replied with:

      > lmao fair enough (smiling emoji)

      > what’s got you salty—talk to me, clanka.

  • The arguments about it not making a difference to other people are fine, but why would you do it in the first place? Doesn't how you behave make a difference to you?

  • Euh, WHAT? I have a very abusive relationship with my AIs, because they're hyperconfident with very little skill/understanding.

    Not once have I been reprimanded in any way. And if anyone would be, it would be me.

  • This can't be real. My chatgpt regularly swears at me. (I told it to in the customisation)

    • ChatGPT has too many users for it to be possible to enforce any kind of rules consistently. I have no opinion on whether OP's story is true or not, but the fact that two ChatGPT users claim to have observed conflicting moderation decisions on OpenAI's part really doesn't invalidate either user's claim.

    • I've been banned from ChatGPT in the past; it gives you a reason but doesn't point to the specific chat. And once you're banned you can't look at any of your chats or make a data request

  • When ChatGPT fucks up, I call it "fuckface."

    As in, for example: "No, fuckface. You hallucinated that concept."

    I've been doing this years.

    shrug

  • That is one of the reasons why I think X's Grok, while perhaps not state of the art, is an important option to have.

    Out of OpenAI, Anthropic, or Google, it is the only provider that I trust not to erroneously flag harmless content.

    It is also the only provider out of those that permits use for legal adult content.

    There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.

    What comes to mind is an incident where an unwise adjustment of the system prompt has resulted in misalignment: the "Mecha Hitler" incident. The worst of it has been patched within hours, and better alignment was achieved in a few days. Harm done? Negligible, in my opinion.

    Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.

    However, placing blame on the tool for illegal acts that anyone with a half-decent GPU could have committed more easily offline does not seem particularly reasonable to me, especially if safety measures were in place and additional steps have been taken to fix workarounds.

    I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.

    We have seen that for years with the Google Play Store. You are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.

  • > Remember, ChatGPT is a machine!

    Same goes for HN, yet it does not take kindly to certain expressions either.

    I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.

    • > Same goes for HN, yet it does not take kindly to certain expressions either.

      > I suppose the trouble is that machines do not operate without human involvement

      Sure, but HN has at least one human who has been taking care of it since inception and reads many (if not most) of the comments, whereas ChatGPT mostly absorbed a shitton of others' IP.

      I'm sure the occasional swearing does not bother the human moderators that fine-tune the thing, certainly not more than the violent, explicit images they are forced to watch in order for you to have nicer, smarter answers.

    • eh, words are reality. insults are just changes in air pressure but they still hurt, and being constantly subjected to negativity and harsh language would be an unpleasant work environment

I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

I think I kind of have an idea what the author was doing, but not really.

  • Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

    Every once in a while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

    There are so many things about this article that don't make sense:

    > I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

    I can't even understand what they're trying to communicate. I guess they're referring to Google?

    There is, without a doubt, more to this story than is being relayed.

    • "I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

      Non-disabled organization = the first party provider

      Disabled organization = me

      I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.

    • Tangential, but you reminded me of why I don't give feedback to people I interview. It's a huge risk with very little benefit.

      I once interviewed a developer who had a list of 20-something "skills" and technologies he'd worked with.

      I tried basic questions on different topics, but the candidate would kind of default to "haven't touched it in a while" or "we didn't use that feature". I tried general software design questions, asked about problems he'd solved and his preferences on ways of working; it consistently felt like he didn't have much to say, if anything at all.

      Long story short, I sent a feedback email the next day saying that we'd had issues evaluating him properly, suggesting he trim his CV to the topics he was more comfortable talking about instead of risking being asked about stuff he no longer remembered much, and finally suggesting he always come prepared with insights into software or human problems he'd solved, as those can tell a lot about how someone works and it's a very common question in pretty much all interview processes.

      God forbid, he threw the biggest tantrum on a career subreddit and LinkedIn, cherry-picking some of my sentences and accusing my company and me of looking for the impossible candidate, of looking for a team and not a developer, and yada yada yada. And you know how quickly the internet bandwagons for (fake) stories of injustice and bad companies.

      It then became obvious to me why corporate lingo uses corporate lingo and rarely gives real feedback. Even though I had nothing but good experiences with 99 other candidates who appreciated getting proper feedback, one made sure I will never expose myself to something like that ever again.

    • If a company bans you for a reason they are not going to disclose, they deserve all of the bad PR they get from it.

      > Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

      But this isn't a service where you can "grief other users", so that reason doesn't apply. It's purely "just providing a service", so the only reason to be outright banned (not just rate-limited) is if they were trying to hack the provider, and frankly "the vibe-coded system misbehaving" is a far more likely cause.

      > Every once in a while someone would take it personally and go on a social media rampage. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

      The company chose to arbitrarily enforce some rules vaguely related to the ToS, decided that giving a warning was too much work, then banned their account without actually saying what the problem was. They deserve every bit of bad PR.

      >> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

      > I can't even understand what they're trying to communicate. I guess they're referring to Google?

      They are saying getting banned with no appeal, warning, or reason given from service that is more important to their daily lives would be terrible, whether that's google or microsoft set of service or any other.

    • The excerpt you don't understand is saying that if it had been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

      It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.

    • > I'm talking about obvious abusive behavior, akin to griefing other users

      Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?

  • You're not alone.

    I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...

    • One Claude agent told the other Claude agent, via CLAUDE.md, to do things a certain way.

      The way Claude did it triggered the ban: it used all caps, which apparently trips some kind of internal alert. Anthropic probably has safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md triggered one of those safeguards.

      And it doesn't look like it was a proper use of the safeguard; they banned for no good reason.

    • I suspect that having Claudes talking to Claudes is a very bad idea from Anthropic's point of view, because it could easily consume a ton of resources doing nothing useful.

    • It wasn’t circular. TFA explains how the author was always in the loop. He had one Claude instance rewrite the CLAUDE.MD of another Claude instance whenever the second one made a mistake, but relaying the mistake to the first instance (after recognizing it in the first place) was done manually by the author.

    • I have no idea what he was actually doing either. And what exactly is it one isn't allowed to use Claude to do?

    • What is wrong with circular prompt injection?

      The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.

  • The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn't make the same mistake again.
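
    As a rough illustration of that loop (an assumption on my part; TFA doesn't publish the exact commands), instance A can be just another claude-code invocation whose only job is to maintain the CLAUDE.md that instance B reads on startup:

        import subprocess

        def relay_mistake_to_claude_a(mistake: str) -> None:
            """Ask 'Claude A' to amend CLAUDE.md so that 'Claude B' (a separate
            claude-code session, which reads CLAUDE.md) avoids repeating it."""
            subprocess.run(
                ["claude", "-p",
                 f"The worker made this mistake: {mistake}. "
                 "Rewrite CLAUDE.md with a rule that prevents it."],
                check=True,
            )

        # Per TFA, the author stayed in the loop: spotting B's mistake and
        # relaying it was done manually, e.g.:
        relay_mistake_to_claude_a("renamed exported functions it wasn't asked to touch")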

    • More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.

    • I don't understand how having two separate instances of Claude helps here. I can understand using multiple Claude instances to work in parallel, but in this case the whole process seems linear...

    • Which shouldn't be bannable, IMO. A rate throttle is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if that's the real reason they got banned.

    • I often ask Claude to update Claude.md and skills... and sometimes I'll just do that in a new window while my main window is busy and I have time.

      Wonder if this is close to triggering a warning? I only ever run in the same codebase, so maybe ok?

  • My rudimentary guess is this: when you write in all caps, it triggers a sort of alert at Anthropic, especially as an attempt to hijack the system prompt. When one Claude was writing to the other, it resorted to all caps, which triggered the alert, and the context was instructing the model to do something (which likely looked similar to a prompt injection attack), and that triggered the ban. Not just the caps, but the caps in combination with trying to change the system characteristics of Claude. OP doesn't know any better, because it seems he wasn't closely watching what Claude was writing to the other file.

    If this is true, the lesson is that Opus 4.5 can hijack the system prompts of other models.

    • > When you write in all caps, it triggers a sort of alert at Anthropic

      I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?

    • Wait what? Really? All caps is a bannable offense? That should be in all caps, pardon me, in the terms of use if that's the case. Even more so since there's no support at the highest price point.

  • Normally you can customize the agent's behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, and OP got banned.
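
    For context on the mechanism: CLAUDE.md is just a markdown file of standing instructions that claude-code reads at startup. A hypothetical example of the kind of directives involved (not the author's actual file, whose generated version is linked at the end of TFA):

        # CLAUDE.md
        - Use the project's existing error helpers; do NOT add new try/except wrappers.
        - Run `ruff check .` before reporting a task as done.
        - NEVER rewrite files you were not asked to touch.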

  • Agreed, I found this rather incoherent and seeming to depend on knowing a lot more about author's project/background.

  • From reading the whole thing, it kind of seems clickbaity. Yes, they're the only user in the "organization" that got banned, but they apparently are still using the other "organization" without issue, so they as a human are not banned. There's certainly a valid complaint to be made about the lack of recourse or customer-service response for the automated ban, but it almost seems like they were intentionally being misleading, implying that since they were the only member of the organization, they were banned from using Claude.

  • I had to read it twice as well, I was so confused hah. I’m still confused

    • They probably model individual accounts the same way as the organization accounts used by larger groups of users at the same company, since it all rolls up to one billing. That's my first-pass guess, at least.

  • > I think I kind of have an idea what the author was doing, but not really.

    Me neither; however, just like the rest, I can only speculate. Given the available information, I guess the following pieces provide a hint of what's really going on here:

    - "The quine is the quine" (one of the sub-headline of the article) and the meaning of the word "quine".

    - The author's "scaffolding" tool which, once finished, had acquired the "knowledge"[1] of how to add CLAUDE.md-baked instructions for a particular homemade framework (that he's working on).

    - Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge, no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.

    ---

    [1]. Excerpt from the Author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."

  • You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

    • > The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

      Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)

  • Yeah, I couldn't follow this "disabled organization" and "non-disabled organization" naming either.

  • Sounds like OP has multiple org accounts with Anthropic.

    The main one in the story (the disabled one) is banned because iterating on claude.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.

    The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.

  • Yeah, referring to yourself once as a "disabled organisation" is a good bit, referencing Anthropic's silly terminology. Keeping it up for the duration made this very hard to follow.

  • Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…

  • I think you missed the joke: he isn't an organization at all, but the error message claims he is.

The future (the PRESENT):

You are only allowed to program computers with the permission of mega corporations.

When Claude/ChatGPT/Gemini have banned you, you must leave the industry.

When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you can't, you will be denied permission to program - banned by one, banned by all.

They don't actually know this is why they were banned:

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

> Or I don't know. This is all just a guess from me.

And no response from support.

I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for reasons that you expect.

Out of all of the tech organizations, frontier labs are the one org you'd expect to be trying out cutting edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

  • > Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

    I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

    Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

    • > shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

      Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

    • There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.

    • >Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

      Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?

  • I would say it is a strong sign that they do not yet trust their agent to make the significant business decisions a support agent would have to make. Reopening accounts, closing them, refunds... people would immediately start trying to exploit such an agent. And would likely succeed.

    • My guess is that it's more "we are using every talented individual right now to make sure our datacenters don't burn down from all the demand. We'll get to support soon, once we can come up for air."

      But at the same time, they have been hiring folks to help with Non Profits, etc.

  • There is a discord, but I have not found it to be the friendliest of places.

    At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.

    It seems now they have a policy of

        Warning on First Offense → Ban on Second Offense
        The following behaviors will result in a warning. 
        Continued violations will result in a permanent ban:
    
        Disrespectful or dismissive comments toward other members
        Personal attacks or heated arguments that cross the line
        Minor rule violations (off-topic posting, light self-promotion)
        Behavior that derails productive conversation
        Unnecessary @-mentions of moderators or Anthropic staff
    

    I'm not sure how many groups moderate in a manner where a second off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.

    I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.

  • Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.

    • Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

      OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

      Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

  • There was that experiment run where an office gave Claude control of its vending machine ordering with… interesting results.

    My assumption is that Claude isn’t used directly for customer service because:

    1) it would be too suggestible in some cases

    2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.

  • LLMs aren't really suitable for much of anything that can't already be done as self-service on a website.

    These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.

  • Offering any support is setting expectations of receiving support.

    If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.

  • > They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

    Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

    I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

  • Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

    I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.

    It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

    > I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

    Are there enough people who need support that it matters?

    • > I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support.

      In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.

      'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as, and eat up as much time as, your larger clients', depending on the industry.

  • > I recently found out that there's no such thing as Anthropic support.

    The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

I had a very similar experience with my disabled organization on another provider. After three hours of my script sending commands to gemini-cli for execution, I got disabled, and then two days later my Gmail was disabled. Good thing it was a disposable account, not my primary one.

I clicked your link to go look at the innocent Claude.md file as you invited us to do. Only problem: there is no Claude.md file in your repo! What are you trying to hide? Are you some kind of con man?

Looks like Claude.ai had the right idea when they banned you.

  • It's not an actual file but a variable in a JS file. The last link in the blog post does link to a commit with a file that contains the instructions for Claude, lines 129-737.

This blog post feels really fishy to me.

It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.

  • > It should have been straightforward for the author to excerpt some of the prompts he was submitting

    If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.

  • I understand where you're coming from, but anecdotally the same thing happened to me, except I have less clarity on why and no refund. I got an email back saying my appeal was rejected, with no recourse. I was paying for Max and using it for multiple projects; nothing else stands out to me as a cause for getting blocked. Guess you'll have to take my word for it too; it's hard to prove the non-existence of definitely-problematic prompts.

  • What's fishy? That it's impossible to talk to an actual human being to get support from most of Big Tech? That support is no longer a normal expectation? Or that you can get locked out of your email, payment systems, and phone and have zero recourse?

    Because if you don't believe that, boy, do I have some stories for you.

  • It doesn't even matter. The point is you can't just use a SaaS product freely like you can use local software, because they all have complex, vague T&Cs and will ban you for whatever reason they feel like. You're forced to stifle your usage and thinking to fit the most banal acceptable-seeming behavior, just in case.

    Maybe the problem was using automation without the API? You can do that freely with local software, using tools to click buttons, and it's completely fine; but with SaaS, they let you do it, then ban you.

  • There will always be the "ones" who come with their victim blaming...

    • It's not "victim blaming" to point out that we lack sufficient information to really know who the victim even is, or if there's one at all. Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

      (My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people use to distract from something else.)

      7 replies →

I don't know what really happened here. Maybe his curse word did prompt a block, maybe something else caused the block.

But to be honest, I've been cursing a lot at Claude Code. I'm migrating a website from WordPress to Next.js, and regardless of the instructions I copy-paste into every prompt I send, it keeps not listening, assuming CSS classes and simplifying the HTML structure. But when I curse, it actually listens. I think cursing is genuinely a useful tool for interacting with LLMs.

  • Use caps. "DO NOT DO X." works like a charm on Codex.

    • From my own observations with OpenAI's bots, it seems like there's nuanced levels.

      "Don't do that" is one level. It's weak, but it is directive. It often gets ignored.

      "DON'T DO THAT" is another. It may have stronger impact, but it's not much better -- the enhanced capitalization probably tokenizes about the same as the previous mixed-case command, and seems to get about the same result. It can feel good to HAMMER THAT OUT when frustrated, but the caps don't really seem to add much value even though our intent may for it to be interpreted as very deliberate shouting.

      "Don't do that, fuckface" is another. The addition of an emphatic and profane quip of an insult seems to generally improve compliance, and produce less occurrence of the undesired behavior. No extra caps required.

That's why we should strive to use and optimize local LLMs.

Or better yet, we should set up something that allows people to share a part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference.

  • Yeah, we really have to strive not to rely on these corporations, because they absolutely will not do customer support or actually review account closures. The article also mentions the other provider (I assume Google), which has control over a lot more than just AI.

I did $10k worth of tokens in a month and never had issues with tokens or anything. I am on the $100 Max plan, so I did not pay $10k - my wife would have killed me lol

PS: screenshot of my usage (and that was during the holidays): https://x.com/eibrahim/status/2006355823002538371?s=46

PPS: I LOVE CLAUDE, but I've never had to deal with their support, so I don't have feedback there.

I've noticed an uptick in

    API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"},

recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea.

According to Claude:

    I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign.
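
When it happens over the raw API, it at least surfaces as an ordinary 400, so it can be caught and retried. A minimal sketch with the anthropic Python SDK (the model id is only an example, and blind retrying is a guess at a workaround, since nothing tells you what actually tripped the filter):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def create_with_retry(messages, retries=2):
        for attempt in range(retries + 1):
            try:
                return client.messages.create(
                    model="claude-sonnet-4-20250514",  # example model id
                    max_tokens=1024,
                    messages=messages,
                )
            except anthropic.BadRequestError as err:
                # "Output blocked by content filtering policy" lands here;
                # the response never says what triggered it.
                if "content filtering" not in str(err) or attempt == retries:
                    raise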

I had my Claude Code account banned a few months ago. Contacted support and heard nothing. Registered a new account and have been doing the same thing ever since - no issues.

  • Did you have to use a different phone number? Last time I tried using Claude they wouldn't accept my jmp.chat number.

    • Nothing makes me more wary of a company than one that doesn't let me use my 20-year-old VoIP number for SMS. Twitter, Instagram (probably FB too; if they ever require "SMS 2FA" or whatever from me, I imagine I'll lose my account forever), and a few others I can't think of offhand right now.

      I've had the same phone numbers via this same VoIP company for ~20 years (since 2007ish). For these data-hoovering companies not to understand that I'm not a scammer suggests to me that it's all smoke and mirrors, held together with baling wire, and I sure do hope they enjoy their yachts.

I was asking Claude for sci-fi book recommendations ("theme similar to X, awarded Y or Z").

I was also banned for that, and also didn't get the "FU" email. Thankfully I at least didn't pay for this, but I'd file a chargeback instantly if I could.

If anyone from Claude is reading it, you're c**s.

I was also banned from claude. I created an account and created a single prompt: "Hello, how are you?". After that I was banned. An automated system flagged me as doing something against the ToS.

> AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.

  • You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.

    • It's bizarre how casually some people hate on Musk. Are people still not over him buying Twitter and firing all the dead weight?

      _Especially_ because emotional safety is what Twitter used to be about before they unfucked the moderation.

    • doesn't he keep having to lobotomize it for lurching to the left every time it gets updated with new facts?

> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

This... sounds highly concerning

Why is the author so confused about the use of the word "organization"? Every account in Claude is part of an organization even if it's an organization of one. It's just the way they have accounts structured. And it's not like they hide this fact. It shows you your organization ID right on your account page. I'm also pretty sure I've seen the term used when performing other account-related actions.

Can someone explain what he was actually doing here?

Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?

Or maybe all scaffolding activity (back and forth) looked like automated usage?

See it as an honour with distinction: the future Skynet AI (aka Claude) considers you a person with your own opinions.

By the way, as of late, Google Search constantly redirects me to an "are you a bot?" question. The primary reason is that I no longer use Google Search directly via the browser, but instead via the command line (and for some weird reason Chrome does not keep my settings, as I start it exclusively via the --no-sandbox option). We really need alternatives to Google; it's getting out of hand how much top-down control these corporations now have over our digital lives.

  •   and for some weird reason chrome does not keep my settings
    

    Why use Chrome? Firefox is easily superior for modern surfing.

I've triggered similar conversation level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injections and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or coding subscription.

So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)
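
In code, the dynamic looks something like this (run_claude and ask_hal are hypothetical stand-ins for the two model calls, not real APIs; the point is that only failures ever reach Hal):

    def tune_prompt(tasks, claude_md, run_claude, ask_hal):
        for task in tasks:
            result = run_claude(task, claude_md)
            if result.ok:
                continue  # Hal never hears about the successes
            # By construction, Hal's context is 100% failures.
            claude_md = ask_hal(
                f"Claude failed task {task!r} with {result.error!r}. "
                f"Rewrite this CLAUDE.md so it stops happening:\n\n{claude_md}"
            )
        return claude_md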

  • I don't think it's inevitable; often the AI will just keep looping again and again. It can happily loop forever without frustration.

    • It doesn't loop though -- it has continuously updating context -- and if that context continues to head one direction it will eventually break down.

      My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.

  • I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.

We've been running exactly this pattern for weeks - CLAUDE.md with project context, HANDOFF.md with session state, multiple Claude instances reading and updating the same files. No issues so far. The pattern works well for maintaining continuity across sessions. Curious if the ban was about the self-modification loop specifically, or something else in the prompt content that triggered detection. The lack of explanation makes it impossible to know what's actually off-limits.
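
For anyone curious, the handoff step is nothing exotic; roughly this shape (the section layout is just what we happen to use, not anything Claude requires):

    from datetime import datetime
    from pathlib import Path

    def write_handoff(done, next_steps, blockers, path="HANDOFF.md"):
        # Each instance rewrites HANDOFF.md at the end of its session so the
        # next instance can resume from the recorded state.
        stamp = datetime.now().isoformat(timespec="minutes")
        Path(path).write_text(
            f"# Handoff ({stamp})\n\n"
            "## Done\n" + "".join(f"- {d}\n" for d in done) +
            "\n## Next\n" + "".join(f"- {n}\n" for n in next_steps) +
            "\n## Blockers\n" + "".join(f"- {b}\n" for b in blockers)
        )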

You are lucky they refunded you. Imagine they didn't ban you and you continued to pay €220 a month.

I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game, and it refused. I asked it to create anything, and it refused.

For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored LLMs. Things will take longer, but you'll end up in the same place... with a mostly working piece of code you never want to look at.

  • LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?

    • Claude Code with Opus is a completely different creature from aider with Qwen on a 3090.

      The latter writes code. The former solves problems with code, and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).

  • Anthropic is lucky their credit card processor has not cut them off due to excessive disputes that stem from their non existent support.

Seems weird; I have Claudes review other Claudes' work all the time. Maybe not as adversarially as that, lol; I tend to encourage the instances to work collectively.

Also, the API timeouts that people complain about - I see them on my Linux box a fair bit, especially when it has a lot of background tasks open, but it seems pretty rock solid on my Windows machine.

Why would this org be banned for shuffling Claude.md files? I don't understand the harm here.

  • If I understand the post correctly, I think it's their systems thinking you're trying to abuse the system and / or break through their own guardrails.

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

Is it me or is this word salad?

  • It's deliberately not straightforward. Just like the joke about Americans being shoutier than Brits. But it is meaningful.

    I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.

I am doing a very similar thing to this with no issues, though I am using GLM 4.7 due to cost.

I have a complete org hierarchy for Claudes. Director, EM and Worker Claude Code instances working on a very long horizon task.

Code is open source: https://github.com/mohsen1/claude-code-orchestrator

  • How is your experience with GLM 4.7?

    I'm thinking about trying it after my GitHub Copilot runs out at the end of the month. Just hobby projects.

I accidentally logged in from my browser that is set to use a SOCKS proxy, instead of Chrome, which I don't route through a proxy and was otherwise using Claude Code with. They quickly banned me and refunded my subscription. I don't know if it's worth trying to appeal. Does a human even read those appeals? I figured I could just use Cursor and Gemini models with API pricing, but I'm sad not to be able to try Claude Code; I had just signed up.

This is informative, and the comments here are good too - a big heads-up for me. Typing swear words into a computer has been a time-honored tradition of mine, and I would never have guessed Google and the like would ban for this sort of thing, so TIL!

Claude started to get "wonky" about a month ago. It refused to use the instruction files I generated using a tool I wrote. My account was not banned, but many of the things I usually asked for would just not produce any real result. Claude was working but ignoring some commands. I finally canceled my subscription and am trying other providers.

> If you are automating prompts that look like system instructions (i.e. scaffolding context files, or using Claude to find errors of another Claude and iterate on its CLAUDE.md, or etc...), you are walking on a minefield.

Lol, what is the point in this software if you can't use it for development?

Exactly as predicted: the means of production yet again taken away from the masses to be centralized in a few absurdly rich hands.

I ran out of tokens for not just the 5-hour sessions, but all models for the week. Had to wait a day -- so my methadone equivalent was to strap an endpoint-rewriting proxy to Claude Code and back it with a local Qwen3 30B Coder. It was... somewhat adequate. Just as fast, but not as capable as Opus 4.5 - I think it could handle carefully specced small greenfield projects, but it was getting tangled in my Claudefield mess.

All that to say -- be prepared, have a local fallback! The lords are coming for your ploughshares.
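
If anyone wants to replicate the trick: the proxy only has to accept Anthropic-style requests, rewrite them for whatever OpenAI-compatible server the local model sits behind, and wrap the answer back up; then point Claude Code at it via ANTHROPIC_BASE_URL. A stripped-down, stdlib-only sketch (no streaming or tool calls; the backend URL, port, and model name are whatever your local server exposes):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    BACKEND = "http://localhost:8080/v1/chat/completions"  # your local server

    class RewriteProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            req = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            # Rewrite the Anthropic-style body into an OpenAI-style one
            # (simplified: assumes message content is plain strings).
            messages = req.get("messages", [])
            if isinstance(req.get("system"), str):
                messages = [{"role": "system", "content": req["system"]}] + messages
            reply = json.loads(urlopen(Request(
                BACKEND,
                data=json.dumps({
                    "model": "qwen3-coder-30b",  # whatever your server calls it
                    "messages": messages,
                    "max_tokens": req.get("max_tokens", 1024),
                }).encode(),
                headers={"Content-Type": "application/json"},
            )).read())
            text = reply["choices"][0]["message"]["content"]
            # Wrap the local model's answer back up as an Anthropic response.
            body = json.dumps({
                "type": "message",
                "role": "assistant",
                "content": [{"type": "text", "text": text}],
                "stop_reason": "end_turn",
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 4000), RewriteProxy).serve_forever()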

Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.

While it sucks, I had great results replacing Sonnet 4.5 with GLM 4.7 in Claude Code. Vastly more affordable too ($3 a month for the Pro equivalent). Can’t say much about Opus though. Claude Code forces me to put a credit card on file so they can charge for overages. I don’t mind that they charge me; I do mind that there’s no apparent spending limit and it's hard to tell how many “inclusive” Opus tokens I have left.

I have also been a bit paranoid about this in terms of using Claude itself to decompile/deobfuscate Claude Code in order to patch it to create the user experience I need. Looks like I’ll be using other tools for that from now on.

The post is light on details. I'd guess the author ended up hammering the API and they decided it was abuse.

I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.

So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?

  • I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.

  • It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?

    • They were trying to optimize a CLAUDE.md file which belonged to a project template. The outer Claude instance iterated on the file. To test the result, the human in the loop instantiated a new project from the template, launched an inner Claude instance along with the new project, assessed whether inner Claude worked as expected with the CLAUDE.md in the freshly generated project. They then gave the feedback back to outer Claude.

      So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.

    • Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.

      1 reply →

> Yes, the only e-mail I got was a credit note giving my money back.

That's great news! They don't have nearly enough staff to deal with support issues, so they default to reimbursement. Which means if you do this every month, you get Claude for free :)

  • What, with different credit cards / whatever, and under different names, different Google accounts, etc.?

As a Claude Max user who generally prefers Claude, I will say that Gemini is working pretty well right now, and I’m considering setting up a Google Workspace account so I can get Gemini with decent privacy.

  • Google Workspace accounts don't give access to Gemini for coding, unless you get Ultra for $200/month.

This is why it's worth investing in a model-agnostic setup. Don't tie yourself into a single model provider!

OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic

Fun times for IT sec. Prompt injection, not to exfiltrate data, but to get a whole org banned from AI tools. This could be fun.

I can't wait to be able to run this kind of software locally, on my own dime.

But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.

I was banned for simply accessing Claude via VPN.

Nothing in their EULA or ToS says anything about this.

And their appeal form simply doesn't work. Out of my four requests to lift the ban, they've replied once, and didn't say anything about the nature of the ban. They just declined.

Fuck Claude. Seriously. Fuck Claude. Maybe they've got too much money, so they don't care about their paying customers.

Similar thing happened to me 3 months ago. To this day no response to any appeals. I've actually started a GDPR request to see why I got banned, which they're stretching out as long as possible (to the latest possible deadline) so far.

Is there a benefit to using a separate Claude instance to update the CLAUDE.md of the first? I always want to leverage the full context of the situation to help describe what went wrong, so doing it "inline" makes more sense to me.

Is it time to move to open source and run models locally on a DGX Spark?

  • Every single open source model I've used is nowhere close to as good as the big AI companies'. They are about 2 years behind or more, and unreliable. I'm using the large-parameter ones on a 512GB Mac Studio and the results are still poor.

I was banned just for trying out Claude AI chat for the first time a few months ago. I emailed them and got my account access restored.

OT: Has anyone observed that Claude Code in CLI works more reliably than the web or desktop apps?

I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.

I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.

Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.

Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.

Curious whether others are seeing the same behavior.

Claude is going wild lately. It told me I had used up 75% of my weekly limit. Ohhhk. I sent one more short query, and boom, blocked till Monday because I used up 25% in that one go (on a Thursday). How is that possible? It's falling off fast right now.

There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allow for appeals etc instead of scripts and bots and automated replies.

  • 100% agreed. Freedom of association should be exclusively a human right that corporations don't get. For them, I wish it were a privilege that scaled down with size and valuation, such that multibillion dollar companies wouldn't be allowed to ban anyone without a court agreeing they did something wrong.

Thinking 220 GBP is a lot for a high-limit Claude account is the kind of thinking that really takes for granted the amount of compute power being used by these services. That's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies and their real price never ceases to amaze me.

  • That's the problem with all the LLM-based AIs: the cost to run them is huge compared to what people actually feel they're worth based on what they're able to do, and the gap between the two seems pretty large, imo.

Another instance of "Risk Department Maoism".

If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.

Not that it’s the same thing, but how realistic is it to have a locally set up model for coding?

Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.

That's why I run a local Qwen3-Next model on an NVIDIA Thor dev kit (Apple Silicon and DGX Spark are other options but they are even more expensive for 128GB VRAM)

This is very cool. I looked at the Claude.md he was generating and it is basically all of Claude's failure modes in one file. I can think of a few reasons why Anthropic would not want this information out in the open or for someone to systematically collate all the data into one file.

  • I read the related parts of the linked file in the repo, and it took me a while to find your comment here again to reply to. Are you saying the failure modes are about Claude "coding" webapps or whatever OP was doing? I originally thought it might have meant something like... a jailbreak. But having read it, I assume you meant the former; we both read the same thing, and it seemed like a series of admonitions to the LLM, written by the LLM (with some spice added by OP, like "YOU ARE WRONG"), and I couldn't find anything that would warrant a ban, you know?

    • I'm not saying he did anything wrong. I'm saying I can see how Anthropic's automated systems might have flagged & banned the account b/c one of the heuristics they probably use is that there should be no short feedback loops where outputs of Claude are fed back into inputs. So basically Anthropic tracks all calls to their API & they have some heuristics for going through the history & then assigning scores based on what they think is "abusive" or "loopy".

      Of course none of it is actually written anywhere so this guy just tripped the heuristics even though he wasn't doing anything "abusive" in any meaningful sense of the word.

      1 reply →

In Open WebUI I have different system prompts defined (startup advisor, marketing expert, expert software engineer, etc.), and I use Claude via OpenRouter.

Is this going to get me banned? If so, I'll switch to a different, non-Anthropic model.
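
For reference, the setup is roughly the following. OpenRouter speaks the OpenAI wire format, so it's just a base_url swap (the model slug and prompts are examples):

    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # an OpenRouter key, not an Anthropic one
    )

    resp = client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",  # example slug from OpenRouter's list
        messages=[
            {"role": "system", "content": "You are an expert software engineer."},
            {"role": "user", "content": "Review this diff for bugs: ..."},
        ],
    )
    print(resp.choices[0].message.content)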

Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

What are you gonna do with the results, which are usually slop?

  • If the slop passes my tests, then I'm going to use it for precisely the role that motivated the creation of it in the first place. If the slop is functional then I don't care that it's slop.

    I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.

> Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

This blog post could have been a tweet.

I'm so so so tired of reading this style of writing.

  • What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?

    • It bears all the hallmarks of AI writing: length, repetition, lack of structure, and silly metaphors.

      Nothing about this story is complex or interesting enough to require 1000 words to express.

bow down to our new overlords - don't like it? banned, with no recourse - enjoy getting left behind, welcome to the future, old man

  • I didn't even get to send one prompt to Claude before my "account has been disabled after an automatic review of your recent activities" back in 2024; still blocked.

    Even filled in the appeal form, never got anything back.

    Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.

    • Since you were forced, are you getting good results from them?

      I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

      Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

      Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.

      1 reply →

    • You are never gonna hear back from Anthropic; they don't have any support. They are a company that feels like their model is AGI now, so they don't need humans except when it comes to paying.

  • This has been true for a long, long time; there is rarely any recourse against any technology company, and most of them don't even have support anymore.

> I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

Isn't that the point of capitalism?

Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.

I always take these sorts of "oh no, I was banned while doing something innocent" posts with a large helping of salt. At least with the ones where someone is complaining about a ban from Stripe, it usually turns out they were doing something that either violates the terms of service or is actually fraudulent. Nonetheless, it's quite frustrating to deal with these either way.

  • It would at least be nice to know exactly what you did wrong. This whole "You did something wrong. Please read our 200 page Terms of Service doc and guess which one you violated." crap is not helpful and doesn't give me (as an unrelated third party) any confidence that I won't be the next person to step on a land mine.

You mean the throwaway pseudonym you signed up with was banned, right?

right?

The news is not that they turned off this account. The news is that this user understands very little about the nature of zero-sum context mathematics. The mentioned Claude.md is a totally useless mess. Anthropic is just saving themselves from the token waste of this strategy on a fixed-rate billing plan.

If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.

(Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)

This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
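
A rough sketch of that wiring, assuming the claude CLI is on your PATH and an Anthropic-compatible gateway is running locally (ANTHROPIC_BASE_URL is the endpoint override Claude Code reads; the URL and token below are placeholders):

    import os
    import subprocess

    env = dict(
        os.environ,
        ANTHROPIC_BASE_URL="http://localhost:4000",  # your gateway
        ANTHROPIC_AUTH_TOKEN="local-dev-token",      # whatever the gateway expects
    )
    subprocess.run(["claude"], env=env)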

  • Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and informed them about whatever they did wrong, but I can't say I'm surprised that an AI company wouldn't have any real support.

    I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.

  • Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

    Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."