Comment by PeterStuer
5 hours ago
"does not think and is not intelligent. It just statistically predict next token in a sequence. It is all statistics"
Technically correct, but pretty useless as a working model. Like saying humans are not intelligent: it's just biochemical and bioelectric reactions, it's all physics.
How would you, from a Searlian perspective, argue against "humans are just statistical next token predictors"?
We don't know what humans are because they are a black box; we rely on imperfect models that have limited usability in specific contexts.
An LLM is a white box that we know for sure is just a statistical next-token predictor and nothing more. It isn't merely a model of some black box we are trying to understand; it is the whole actual thing. That people think it is, or could be, something more is on them. If you understand that, then you understand its flaws, limitations, and vulnerabilities, which is very useful.
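To make "statistical next-token predictor" concrete, here is a minimal toy sketch: a bigram model trained on a tiny hypothetical corpus. A real LLM uses a neural network over billions of tokens rather than bigram counts, but the core operation is the same shape: estimate P(next token | context) and sample from it.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (hypothetical); a real LLM trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows each word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next token from the empirical conditional distribution."""
    followers = counts[word]
    if not followers:          # no observed continuation for this token
        return None
    total = sum(followers.values())
    weights = [c / total for c in followers.values()]
    return random.choices(list(followers), weights)[0]

# Generate a short continuation, one statistical prediction at a time.
tokens = ["the"]
for _ in range(5):
    nxt = predict_next(tokens[-1])
    if nxt is None:
        break
    tokens.append(nxt)
print(" ".join(tokens))
```

Every generated pair is just a statistically plausible continuation of the training data; there is no hidden reasoning step, which is the point being made above.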