Comment by tormeh
8 months ago
To me, the issue seems to be that we're training transformers to predict text, which forces the model to embed only a limited amount of logic. We'd have to train models on something different for them to stop hallucinating.
Modern neuroscience suggests that everything the human brain does may basically be a kind of predictive processing, i.e. hallucination based on inductive biases. I don't think this is the main bottleneck.