Comment by themafia

2 months ago

The difference is that you can ask a human to prove it and they'll actually discover the illusion in the process. People have asked the model to prove it, and it has just doubled down on the nonsense or invented a new spelling of the word. These are not even remotely comparable.

Indeed, we are able to ask counterfactual questions in order to identify it as an illusion, even in novel cases. LLMs are a superb imitation of our combined knowledge, further curated by experts. That makes them a very useful tool, but it isn't thinking or reasoning in the sense that humans think or reason.