Comment by arb_

1 year ago

Empirically, they have reduced hallucinations. Where do OpenAI / Anthropic claim that their models won't hallucinate?

One example, where hallucinations are treated as an expected failure mode to detect and mitigate rather than something that never happens:

https://www.theverge.com/2024/3/28/24114664/microsoft-safety...

> Three features: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities, are now available in preview on Azure AI.
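
For concreteness, this is roughly what the Groundedness Detection feature looks like from the caller's side: you hand it the model's output plus the source documents it was supposed to stay grounded in, and it flags unsupported spans. A minimal sketch, assuming the preview-era REST shape; the endpoint path, api-version, payload field names, and response fields are my assumptions from the preview docs (and the query/text/source strings are made up), so verify against the current Azure documentation before relying on any of it:

```python
import json
import os
import urllib.request

# Assumed: endpoint and key come from an Azure AI Content Safety resource.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When was Contoso founded?"},         # hypothetical question
    "text": "Contoso was founded in 1985.",                # model output to check
    "groundingSources": ["Contoso was founded in 1991."],  # hypothetical source doc
    "reasoning": False,
}

# Assumed preview route and api-version; check the docs for the current one.
req = urllib.request.Request(
    f"{endpoint}/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Assumed response shape: a boolean flag plus the ungrounded spans.
print(result.get("ungroundedDetected"), result.get("ungroundedDetails"))
```

The design is telling in itself: the product assumes hallucinations will occur and builds a detection step around the model, rather than claiming the model won't produce them.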