Comment by Lerc
5 days ago
On that I disagree. LLMs are not simple Markov chains.
They may fail at a lot of logical tasks, but I don't think that is the same as exhibiting no logic.
Getting even slightly respectable performance on the ARC-AGI test set shows, I think, that there is at least some logical processing going on. General intelligence is another issue entirely, but there's definitely more than nothing.
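To make concrete what "simple Markov chain" means here, a rough order-1 sketch in Python (the toy corpus and names are purely illustrative, not anything from an actual LLM):

    import random
    from collections import defaultdict

    # Order-1 Markov chain: the next token depends ONLY on the current
    # token. There is no access to the rest of the context.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count bigram transitions: token -> list of observed successors.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start, length=8):
        """Sample by repeatedly drawing a successor of the last token."""
        out = [start]
        for _ in range(length - 1):
            successors = transitions.get(out[-1])
            if not successors:
                break  # dead end: this token was never followed by anything
            out.append(random.choice(successors))
        return " ".join(out)

    print(generate("the"))

A transformer, by contrast, conditions each prediction on the entire preceding context rather than a fixed-size state, which is where the "just a Markov chain" comparison breaks down.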