Comment by kvetching

13 hours ago

https://artificialanalysis.ai/evaluations/omniscience?omnisc...

AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).
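The definition above is a simple ratio; here is a minimal sketch of the arithmetic (the function name and the example counts are illustrative, not from the benchmark):

```python
def hallucination_rate(incorrect: int, partial: int, not_attempted: int) -> float:
    """Share of non-correct responses that were answered incorrectly
    (lower is better): incorrect / (incorrect + partial + not_attempted)."""
    non_correct = incorrect + partial + not_attempted
    return incorrect / non_correct

# Hypothetical tallies: 30 incorrect, 10 partial, 60 not attempted.
print(hallucination_rate(30, 10, 60))  # → 0.3
```

Note that correct answers don't appear in the denominator at all, so the metric isolates how a model behaves specifically when it fails to give a fully correct answer: does it hallucinate, or does it hedge or refuse?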

Grok 4.2, which was just released in the API, benched best on this benchmark.