Comment by lolinder
16 hours ago
This is the wrong response. It doesn't matter whether you've asked it to summarize or to produce new information: hallucinations are always a question of when, not if. LLMs don't have a "summarize mode"; their mode of operation is always the same.
A better response would have been "we run all responses through a second agent that validates that no content was added that wasn't in the original source". To say that you simply don't believe hallucinations apply to you tells me that you haven't spent enough time with this technology to be selling something to safety-critical industries.
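For concreteness, here is a minimal sketch of the second-pass validation the comment proposes. The `call_llm` helper, the prompt wording, and the JSON reply contract are all hypothetical placeholders, not any vendor's actual API; the point is only the shape of the check, and the verifier is itself an LLM, so this reduces the risk rather than eliminating it.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to whatever completion API you use and return its text reply."""
    raise NotImplementedError

def validate_summary(source: str, summary: str) -> dict:
    """Ask a second model whether every claim in the summary is supported by the source."""
    prompt = (
        "You are a verification agent. Compare the SUMMARY against the SOURCE.\n"
        "List any claim in the SUMMARY that is not supported by the SOURCE.\n"
        'Reply as JSON: {"unsupported_claims": [...], "grounded": true or false}\n\n'
        f"SOURCE:\n{source}\n\nSUMMARY:\n{summary}\n"
    )
    return json.loads(call_llm(prompt))

def summarize_with_check(source: str, draft_summary: str) -> str:
    """Reject (or escalate to a human) any summary the verifier flags as ungrounded."""
    verdict = validate_summary(source, draft_summary)
    if not verdict.get("grounded", False):
        raise ValueError(f"Unsupported claims found: {verdict.get('unsupported_claims')}")
    return draft_summary
```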