Comment by Frieren

5 days ago

> Models are still very bad at determining truth from opinion

> Models are not bad at it. Models are not even trying. As you point out, it is just about predicting the most common text. It has nothing to do with logic.

On that I disagree. LLMs are not simple Markov chains.
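
To make the distinction concrete, here's a minimal sketch of what a pure "most common text" predictor actually looks like: a bigram Markov chain (illustrative only; the class name and toy corpus are made up). It conditions on exactly one previous token, whereas a transformer conditions on the whole context, which is precisely the gap being argued over:

```python
import random
from collections import defaultdict, Counter

class BigramChain:
    """Toy Markov chain: the next token depends only on the previous token."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        # Count how often each token follows each other token.
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def sample(self, prev):
        # Emit the next token in proportion to raw corpus frequency.
        # This is "most common text" in its purest form: no context
        # beyond a single token, and no reasoning of any kind.
        nexts = self.counts[prev]
        return random.choices(list(nexts), weights=list(nexts.values()))[0]

chain = BigramChain()
chain.train("the cat sat on the mat and the cat slept".split())
print(chain.sample("the"))  # "cat" is twice as likely as "mat"
```

A model like this can never solve a task whose answer isn't literally a frequent continuation in its training data; that LLMs sometimes do is the point of contention here.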

They may fail at a lot of logical tasks, but I don't think that is the same as exhibiting no logic.

Getting even slightly respectable performance on the ARC-AGI test set shows, I think, that at least some logical processing is going on. General intelligence is another issue entirely, but there's definitely more than nothing.