Comment by trq_

18 hours ago

Hey everyone, Thariq from the Claude Code team.

We've been on this since the bug surfaced. Everyone affected is getting a full refund and an extra grant of usage credits equal to their monthly subscription as our apology. You can see my original post here: https://x.com/trq212/status/2048495545375990245. We’re still working on sending emails to everyone affected.

Our support flow wasn't set up to route a complex bug like this to engineering. We're working on making this better, but it will take some time. Sorry to everyone caught up in it.

You also seem to have a bug where people get randomly invoiced: https://news.ycombinator.com/item?id=47693679

I got a random invoice for $45.08 back in March, despite not having auto top-up enabled. Trying to reach support, I hit a brick wall. Based on the post I linked to, I'm not the only one facing this problem.

  • They also have a bug where people get randomly suspended: https://www.reddit.com/r/ClaudeAI/comments/1b82cpu/where_you...

    It happened this year to my one and only personal account. The account was one week old. Unique e-mail address. $5 balance for API credits. No usage yet. Suspended and refunded. Appeal denied without explanation.

    I did create the account on a VPN because I was using public WiFi at a tech conference. That's probably what tripped their automation.

    • Using certain types of cards will get you automatically banned; I found that out after getting three accounts suspended. I made them all using the same VPN and email domain. I've been using the fourth account with no issues, with a debit card from a reputable bank.

  • I also got randomly invoiced $5.00 for absolutely no reason on the 28th. I don't have auto-reload enabled, nor did I explicitly buy extra usage.

But why did you say that

> I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing.

What prevents you from issuing compensations?

  • Perhaps this is a matter of who is being referred to by 'we'.

    Obviously someone can do it because it got done.

    If the 'we' is referring to some team handling issues, it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help."

    • Does AI using first-person pronouns gross anyone else out? If there's one AI regulation I could get behind, it would be banning the use of computer systems to impersonate a human.


  • Well, they hoped this person would walk away and forget about it, die, or something else. That's why. It's how health insurance works in the US.

  • That's a very categorical statement from support. I get that Anthropic is going to throw out their usual support rules in this case since it has garnered so much negative attention, but I'm very curious how many other people have been over-billed and refused a refund through no fault of their own.

  • To be fair, that looks like an LLM response.

    • LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.

"Our support flow wasn't set up"

Would be more accurate. It still isn't set up. Talking to a support bot that only tells you to talk to the bot for support is not actually support at all. It looks like support, but there's no way to ACTUALLY GET support.

Thanks for the follow up here and the transparency.

For those of us not on X, what are the best communication channels for us to follow this sort of communication?

  • I'd recommend a good credit card like Amex, and a lawyer.

    These fucks only respond when they get bad publicity.

    • Amex, like basically all other card issuers, has essentially stopped giving customers preference in chargebacks since 2020 or so. What used to be solid advice now rings hollow: you're more likely to be asked for information that isn't available to you than to have your chargeback go through.


I try to avoid jumping on the bandwagon when a topic is already well covered, but billing bugs being treated like any other software issue, and the major comms channel being X (which I can't get to load half the time), is ridiculous.

Could really use a post-mortem to set the story straight. The apparently hallucinated support response copy-pasted by the submitter into the GitHub issue thread is very misleading without scrutiny.

A side aspect of this drama is the root feature which enabled this bug:

> ugh sorry this was a bug with the 3rd party harness detection and how we pull git status into the system prompt

Claude wants to exercise control over how I use the "included volume" that I purchased with my monthly subscription. This harms competition (someone else could write a more efficient or safer coding agent) and is generally not in society's best interest. Why do we allow this?

This specific case is interesting, because it is so clear cut. There is no cross-financing via ads; they already have the infrastructure to measure usage and even the infrastructure to bill extra usage. I also don't see how you can plausibly make the argument that restricting usage to their blessed client is necessary for fair use or for the basic structure of their business model (this would be the standard argument for e.g. YouTube: purposefully degrading the experience of their free client to not support background playback enables the subscription model).

Hey Thariq, I love Claude! I use Claude every single day and it has changed my life, which is why I did what I'm about to describe.

Happy to talk privately, but as I detailed here (https://news.ycombinator.com/item?id=47954005), I've been billed $200 for a Max gift card sent to a 27-character alphanumeric iCloud address that bounces.

I was looking through the system, and there are several UI/UX and process gaps in the gift card and billing order flow that expose Anthropic to significant liability. I'm genuinely not trying to concern troll or make some kind of overwrought threat here. Genuinely trying to be constructive. Let me give you an example.

I sent an email to Anthropic Support outlining the disputed / possibly malicious charge. The AI Agent / Claude instance agreed and replied with,

    Thank you for confirming.
    
    I've documented all the details about this unauthorized [specific amount + tax] charge for the Gift Max 20X subscription (invoice [lalala]) sent to [insert the random alphanumeric]@icloud.com.
    
    An error occurred while evaluating the refund eligibility for your account. Your request has been fully documented and our team will follow up with you shortly to investigate this unauthorized transaction and assist with the refund and cancellation.
    
    Best regards,

And then no one followed up, the conversation was closed without recourse and I wasn't allowed to reply.

I'm not sure how familiar you are with international trading practices, but in multiple jurisdictions, the AI agent assumed legal liability for Anthropic. It accepted that the charge was unauthorized / fraudulent and stated that redress was needed, but then failed to offer the means to redress it / didn't allow the refund to continue.

I am not a lawyer, but based on my understanding of prior cases (I read this kind of stuff for fun, don't ask): in the EU, the US, and Canada, users can approach courts and invoke the doctrine of promissory estoppel (again, don't quote me on this, IANAL, I just like reading case law). And if enough users are affected / do so, it becomes a deceptive practices issue.

I've been thinking about how to solve this problem, and as strange as it sounds, I think Anthropic already has the tools to make the best customer support service in human history. No exaggeration. I think that this crisis could be an opportunity.

  • Apparently we are now expected to know by some telepathic mechanism that important customer service announcements are made only on Twitter.

Have a look at https://github.com/anthropics/claude-code/issues/54497

I can’t use Claude Code online at all

  • I have the same issue when I try to run /ultraplan

    • I tried /debug as the only input, hoping CC wouldn’t shit the bed and give me some data.

      Heck, just saying “hello” causes Claude Code to fail.

      I'm thinking of doing a chargeback and creating a new account. Others don't seem to have this issue.

Is it complex? I was somewhat taken aback by how simple it was. Still very confused as to how it could happen.

  • Only the weights and the RNG used to select tokens can answer that. You will understand much if you read up on the quality of code in the CC source leak, it's completely vibe coded and the printf fn is genuinely impossible for a human to comprehend.

> Our support flow wasn't set up to route a complex bug like this to engineering.

What does that even mean? Does it mean "our support flow is just an LLM that fobs off customers and puts their issues in the bin"? Or is there some genuine "routing" of simple bugs to engineering that accidentally drops "complex" bugs? Could you describe that process? It sounds fascinating.

Also, how is changing a customer's billing based on detecting a certain string in a certain place a "complex" bug? Grep the string, remove the if statement, done. I'd love a post-mortem about why this was a complex bug.

More questions than answers here Thariq.

Please do explain why someone at Anthropic decided, on purpose, to write code that says something along the lines of: "if ( git_history_str contains "HERMES.md" ... )" then { bill more money }

Somebody (or something) wrote this code. This bug wouldn't be happening for any other reason. It's not a glitch, an oversight, a feature gap, or a temporary outage. It is a piece of written code in your system.
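Purely as illustration of why this is hard to excuse: the kind of check being alleged is a few lines and would stand out in any review. This is a hypothetical sketch; the function and behavior here are my invention, not anything from Anthropic's actual (closed) codebase:

```python
# Hypothetical sketch of the alleged check. None of these names or
# behaviors come from Anthropic's real code; this only illustrates
# how small and conspicuous such a branch would be.
def classify_billing(git_status: str) -> str:
    # Detect a third-party harness by a marker file appearing in the
    # git status pulled into the system prompt, and route the session
    # to metered "extra usage" billing instead of the subscription.
    if "HERMES.md" in git_status:
        return "extra_usage"
    return "subscription"
```

A branch this specific is exactly the sort of thing a cursory code review should catch, which is what makes its existence the interesting question.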

Everyone here is upset about the $200, which is probably much less money than the time that engineer spent ranting about the overcharge on GitHub.

The real problem in my mind is that that bit of code existed in the first place.

Why?

Are you vibe coding your billing!?

Without review!?!?

Or worse, a human being decided to add this to your code base? And nobody noticed or flagged it during code review?

Or much, much worse, Anthropic is purposefully ripping off customers?

This deserves a thorough post-mortem.

  • Would imagine it's the simplest answer: they're flying by the seat of their pants, there's 1000 things happening every day that demand attention and there's not enough of it to go around. They toss their LLM at it, give it a cursory glance, and ship it. A quick glance at the Claude Code source code bears the result of this process out. The fundamental question is, if their model is so powerful, why do they keep fucking up such simple things? We're led to believe this is a serious company with a model so powerful they can't release it to the general public.

    • Hermes is one of these OpenClaw clones, so this was certainly intentional, not a model hallucinating something.

      I think the problem is clear. Anthropic saw their usage go up much more than their capacity could handle. There are a few tried and true solutions to this, like "increase the price" or "restrict signups so you can guarantee service to what you have already sold".

      Then there is the "large scale fraud" option, where you materially change and degrade the service you have already sold. Just because you have obfuscated and misled in how you describe the product you are selling doesn't mean you get to capture the cash flow of one-year subscriptions and then not honor that contract for the full duration.


    • I doubt an AI would be stupid enough to write code like that without being explicitly prompted to do so. It's so... specific.

      That specific nature would mean it would get caught by even the most cursory of code reviews.

      Even if I was just "scanning my eyeballs over the code" without properly reading it, this would jump out as very odd and make me pause.

  • Vibes were strong dude. Don't blame the dev blame the bots brah. They forgot to use mythos obviously otherwise this wouldn't happen simple mistake.

hey guys, can you please fix Claude design? I've been trying to test it tonight, have already used up 20% of my usage, and all I get is continuous [unknown] missing EndStreamResponse errors (and this is after your status page reflected everything as OK).

Is there no constraint preventing extra usage billing from being used before regular usage billing has been exhausted?
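The invariant being asked about (never bill metered "extra usage" until the subscription's included quota is exhausted) is small enough to express directly. A hypothetical sketch under that assumption, not Anthropic's actual billing code:

```python
# Hypothetical invariant: metered extra usage may only accrue once the
# subscription's included quota is fully consumed.
def bill_request(tokens: int, included_remaining: int) -> tuple[int, int]:
    """Split a request's tokens into (billed to subscription, billed as extra)."""
    from_included = min(tokens, included_remaining)
    extra = tokens - from_included
    return from_included, extra
```

With a guard like this, a misfired client-detection flag could at worst reclassify usage, never produce an extra-usage charge while included quota remains.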

I've had similarly terrible experiences with the Claude support bot when my usage limit was disappearing after a few minutes of using Sonnet. I asked for help, asked for escalation, asked for a human, anything. All I got were non-answers from an AI. I won't spend real money on Claude now. I'm OK with losing $20 if there's a rug pull one way or another, but not $200.

Please, please, please hire more humans with the actual ability to do the right thing for support if your AI agents can’t do the job.

[flagged]