Comment by pshirshov

19 hours ago

But why did you say that

> I need to let you know that we are unable to issue compensation for degraded service or technical errors that result in incorrect billing routing.

What prevents you from issuing compensation?

As a large language model, their support is not allowed to issue compensation

Perhaps this is a matter of who is being referred to by 'we'.

Obviously someone can do it because it got done.

If the 'we' is referring to some team handling issues, it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help".

  • Does AI using first-person pronouns gross anyone else out? If there's one AI regulation I could get behind, it would be banning the use of computer systems to impersonate a human.

    • I don't perceive an AI as impersonating a human if it uses first-person pronouns. Emulating is not impersonating: one is behaving similarly; the other is asserting that the similarity implies equivalence.

      I have not personally encountered an AI who claimed to be human (as far as I could detect).

    • I have been trying to convince Claude to use "Claude" instead of first-person pronouns, and only recently have gotten it to say stuff like "Claude'll go ahead and take care of that now", but it's very inconsistent (shocking).

Well, they hoped this person would walk away and forget about it, die, or something else. That's why. It's how health insurance works in the US.

That's a very categorical statement from support. I get that Anthropic is going to throw out their usual support rules in this case since it has garnered so much negative attention, but I'm very curious how many other people have been over-billed and refused a refund through no fault of their own.

To be fair, that looks like an LLM response.

  • LLM or not, that seems to be an official response to a support request, where they clearly say "yes, we fucked up but now you fuck off", and it looks like the model was conditioned to produce these particular responses.

  • That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".

    • It's not hard to imagine how this happens. I assume most people here have used these models extensively.

      The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".

      The system prompt probably also includes statements about how it doesn't have tools for managing funds.

      A little bit of A and a little bit of B, and you get a message from Haiku telling you that you can't get your money back, phrased as though this isn't a trivial customer service thing to do.
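
      A minimal sketch of that failure mode, using the Anthropic Python SDK. The real help bot's prompt and model aren't public, so the system prompt below is purely hypothetical:

      ```python
      # Hypothetical support-bot setup; the actual prompt Anthropic's help
      # bot runs on is not public. This only illustrates the failure mode.
      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      SYSTEM_PROMPT = (
          "You are a customer support assistant. Always speak as 'we', "
          "on behalf of the company. You have no tools for managing funds "
          "or issuing refunds."
      )

      reply = client.messages.create(
          model="claude-3-haiku-20240307",
          max_tokens=512,
          system=SYSTEM_PROMPT,
          messages=[{"role": "user", "content": "I was overbilled. Please issue a refund."}],
      )
      print(reply.content[0].text)
      # Plausible output: "We are unable to issue compensation..." --
      # the model restates its own lack of tools as company-wide policy.
      ```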
