Comment by trick-or-treat
6 days ago
> all LLM output is based on likelihood of one word coming after the next word based on the prompt.
Right, but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it.
No, it does not reason at all. LLM "reasoning" is just an illusion.
When an LLM is "reasoning" it's just feeding its own output back into itself and giving it another go.
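Concretely, the loop being described looks something like this; a minimal sketch using Hugging Face transformers, with gpt2 as a stand-in model and greedy decoding as a simplification:

```python
# Minimal sketch of autoregressive generation: each "reasoning" step is
# just another sampled token appended to the context and fed back in.
# (gpt2 and the prompt are placeholders, not a claim about any specific model.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Let's think step by step:", return_tensors="pt").input_ids
for _ in range(50):
    logits = model(ids).logits[:, -1, :]              # scores for the next token
    next_id = torch.argmax(logits, dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)           # feed the output back in

print(tok.decode(ids[0]))
```

Chain-of-thought or "thinking" modes just make that same loop longer before the final answer is emitted.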
This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction about words ("think", "reason", etc.) that have no firm definitions.
This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding and you continue to complain about it anyway, you're just being biased and unreasonable.
And by the way, I don't think it's surprising that so many people are being unreasonable on this issue: there is a lot at stake, and its implications are transformative.
Chess engines are not a comparable thing. Chess is a solved game. There is always a mathematically perfect move.
Is that so different from brains?
Even if it is, this sounds like "this submarine doesn't actually swim" reasoning.