Comment by Animats

15 hours ago

This is encouraging. The title is a bit much. "Potential points of attack for understanding what deep learning is really doing" would be more accurate but less attention-grabbing.

It might lead to understanding how to measure when a deep learning system is making stuff up or hallucinating. That would have a huge payoff. Until we get that, deep learning systems are limited to tasks where the consequences of outputting bullshit are low.

The field is massively hampered by wishful mnemonics and the anthropomorphization of LLMs. Even the notion of hallucination, for example, arbitrarily assigns human semantics to LLM outputs. By the mathematical principles by which LLMs actually work, a hallucination is just another output, with no principled distinction between it and any other output.

> measure when a deep learning system is making stuff up or hallucinating

That's a great problem to solve! (I may be biased, since this is my primary research direction.) One popular approach is OOD detection, but that has always seemed ill-posed to me. My colleagues and I have been approaching the problem from a more fundamental direction, using measures of model misspecification, but this is admittedly niche because it is very computationally expensive. It could still be a while before a breakthrough comes from any direction.
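For readers unfamiliar with OOD detection: a common, simple baseline is to threshold the model's maximum softmax probability, flagging low-confidence inputs as possibly out-of-distribution. The sketch below is purely illustrative, with toy logits and an arbitrary threshold; it is not the misspecification approach mentioned above, and no specific system is implied.

```python
# Minimal sketch of the maximum-softmax-probability (MSP) baseline
# for OOD detection. Model, logits, and threshold are hypothetical.
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # High max softmax probability = the model looks "confident";
    # low values are treated as possibly out-of-distribution.
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.5):
    # Flag inputs whose confidence falls below the threshold.
    return msp_score(logits) < threshold

# Toy logits: first row is sharply peaked (in-distribution-looking),
# second row is nearly flat (OOD-looking).
logits = np.array([[8.0, 0.5, 0.1],
                   [1.0, 1.1, 0.9]])
print(flag_ood(logits))  # -> [False  True]
```

The weakness alluded to above is visible even here: the score only reflects the model's own confidence, which can be arbitrarily miscalibrated on inputs unlike its training data.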

  • > Could still be a while before a breakthrough comes from any direction.

    It would be valuable enough that getting significant funding to work on it is probably possible, especially with all the money being thrown at AI.

  • Could you elaborate on what you mean by OOD detection seeming ill-posed?