Comment by root_axis
18 hours ago
More likely it's just an LLM hallucination, not a real policy that Anthropic has. Unfortunately for them, it's a bad look to showcase one of the main failure modes of their product in their own business process.
If they've let their AI write the policy, and then they repeat that as policy, how exactly is this an "LLM hallucination" and not a real policy?
It's both, isn't it? If the AI writes the policy and is also responsible for enforcing it (by handling tickets and acting as a gatekeeper for which issues are escalated to humans who can do something about them), then the hallucination becomes real.
It's the same thing. Whether it was hallucinated upstream or in situ, the point is that it's not a real policy that the business adheres to, just something the LLM spat out.
Sure, it’s a real policy. It came from their website, from the official means of support.
These hallucinations keep killing my vibes brah