Comment by ImPostingOnHN
18 hours ago
That may be true (and likely is), but it doesn't explain why that initial answer from Anthropic was "we can't" instead of the truth, which is "we can".
It's not hard to imagine how this happens. I assume most people here have used these models extensively.
The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".
The system prompt includes statements about how it doesn't have tools for managing funds.
A little of A plus a little of B, and you get a message from Haiku telling you that you can't get your money back, delivered as though this weren't a trivial customer-service request to fulfill.
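Purely as a hypothetical sketch (this is made up for illustration, not Anthropic's actual prompt), the two ingredients might look something like:

    You are the support assistant for Anthropic. Always speak on
    behalf of the company, using "we" and "our".
    You do not have tools to issue refunds or otherwise manage
    customer funds.

A model following both lines literally turns "this bot has no refund tool" into "we can't refund you", which is exactly the failure in TFA.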
> The help bot system prompt probably includes some statement about how Claude should phrase everything as "we".
Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?
> The system prompt includes statements about how it doesn't have tools for managing funds.
Yes, why did Anthropic do that when everyone knew it could result in this situation we're discussing?
What you've been describing are all effects of the cause, which is a series of poor management decisions to offer poor support and poor customer service. Clearly those decisions produced poor support-bot system prompts, too.
To wit: this would likely not have happened if the prompt had included something like "in a scenario like this, or any scenario where the customer asks, simply transfer them to a human", and if Anthropic had not decided to have dysfunctional support and customer service.
The feedback from folks here is not that poor decisions can have poor effects. It's 'for the love of god, please stop making poor decisions that repeatedly, invariably lead to unforced errors like the one in TFA'.