Comment by grey-area
6 months ago
I really don’t think that’s doable, because why do you think the majority output is correct? It’s just as likely to be a hallucination.
The problem is the system has no concept of correctness or world model.
Assuming that hallucinations are relatively random, that's true. I do believe they happen less often when you feed the model decent context, though.
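For what it's worth, a minimal sketch of what majority voting over sampled outputs could look like, under the assumption that hallucinations are roughly independent across samples. generate_answer is a hypothetical stand-in for whatever model call you actually use; here it just simulates one.

    import random
    from collections import Counter

    def generate_answer(prompt: str) -> str:
        # Hypothetical stand-in for a model call.
        # Simulates a model that answers correctly 70% of the time
        # and hallucinates a random wrong answer otherwise.
        return "42" if random.random() < 0.7 else random.choice(["41", "43", "44"])

    def majority_vote(prompt: str, n_samples: int = 9) -> str:
        # Sample several independent answers and keep the most common one.
        # If hallucinations are roughly random, they scatter across many
        # different wrong answers while the correct one keeps recurring.
        answers = [generate_answer(prompt) for _ in range(n_samples)]
        return Counter(answers).most_common(1)[0][0]

    print(majority_vote("What is 6 * 7?"))

Of course, if the errors are correlated (the model is systematically wrong about something), voting just converges on the same wrong answer, which is the point the parent comment is making.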