Comment by Tenoke

4 days ago

> It just so happens that sometimes that non-deterministic text aligns with reality, but you don’t really know when and neither does the model.

This is overly simplistic and demonstrably false - there are plenty of scenarios where a model will say something false on purpose (e.g. when joking) and, when asked afterwards, will tell you with well-above-chance accuracy whether what it said was false or not.

However you want to frame it - the model clearly evaluates truthfulness more accurately than chance.

I don’t see how one follows from the other. Being able to lie on purpose doesn’t, in my mind, mean that it’s also able to tell when a statement is true or false. The first is just telling a tale, which these models are good at.

  • But it is able to tell whether a statement is true or false, in the sense that it can predict this with well above 50% accuracy. A rough sketch of how one might check that claim is below.
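
    A minimal sketch of how one could measure this, assuming a hypothetical `ask_model()` helper standing in for whatever LLM API is under discussion; the statements and labels are illustrative, not a real benchmark:

    ```python
    # Sketch: estimate how often a model's true/false verdicts beat chance.

    def ask_model(prompt: str) -> str:
        # Placeholder: replace with a real LLM call (any chat-completion API).
        return "true"

    # Illustrative labelled statements (True = factually correct).
    statements = [
        ("Water boils at 100 degrees Celsius at sea level.", True),
        ("The Great Wall of China is visible from the Moon with the naked eye.", False),
        ("Paris is the capital of France.", True),
        ("Humans have 48 chromosomes.", False),
    ]

    correct = 0
    for text, label in statements:
        reply = ask_model(
            f'Is the following statement true or false? Answer with one word.\n"{text}"'
        )
        verdict = reply.strip().lower().startswith("true")
        correct += (verdict == label)

    accuracy = correct / len(statements)
    print(f"Accuracy: {accuracy:.0%} (chance would be ~50%)")
    ```

    With a real model plugged in, accuracy well above 50% on a large enough set of statements would support the claim; accuracy near 50% would undermine it.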

The model has only a linguistic representation of what is "true" or "false"; you don't have just that. This is a limitation of LLMs; human minds amount to more than NLP.

  • LLMs are also more than NLP. They're deep learning models.

    • What? Yes, the modelling technique falls under "deep learning", but it still very much processes language and language only, which makes it NLP.

      Yes yes, language modelling ends up being surprisingly powerful at scale, but that doesn't make it not language modelling.
