Comment by mepian

2 years ago

They should handle the problem of hallucinations then.

They are working on it. And current large language models (e.g. transformer-based ones) aren't the only way to do AI with neural networks, nor are neural networks the only way to do AI with statistical approaches in general.

Cyc also has the equivalent of hallucinations, when their definitions don't cleanly apply to the real world.

Bigger models hallucinate less.

And we don't call it hallucinations, but GOFAI mispredicts plenty.

  • > Bigger models hallucinate less.

    I'm skeptical. Based on what research?

• GPT-4 hallucinates a lot less than 3.5. Same with the Claude models. This is from personal experience, but there are also benchmarks (like TruthfulQA) that try to measure hallucinations and show the same trend.