Comment by vitaelabitur
1 day ago
Aren't LLMs just super-powerful pattern matchers? And isn't guessing "taps" a pattern-recognition task? I'm struggling to understand how your experiment relates to intelligence in any way.
Also, commercial LLMs generally have system instructions layered on top of the core model, which prime them to look for purpose even in random user prompts.
LLMs are pattern matchers, but every model ships with specific instructions and response design that shape what it does with unclear prompts. That's hugely valuable to understand: if you ask an LLM an invalid question, you want to know whether it's likely to guess at your intent, reject the prompt, or respond with something arbitrary.
Understanding how different LLMs fail is becoming more valuable than knowing they all scored 100% on some reasoning test with perfect context.
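A minimal sketch of how you might probe this yourself, assuming the OpenAI Python client (the model name and the nonsense question here are just placeholders):

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A deliberately invalid question: does the model guess at an intent,
    # reject the premise, or answer with confident nonsense?
    prompt = "What color is the number seven's favorite smell?"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Run the same prompt against a few different models and you'll see exactly the divergence described above: some invent an interpretation, some push back on the premise.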
There's definitely more than "just" pattern matching in there - for example, current SOTA models 'plan ahead', internally drafting a rough outline of an answer alongside the specific subject details, then combining the two into the final result (https://www.anthropic.com/research/tracing-thoughts-language...).
Eh, that's still encompassed by the term "pattern matching" in this context. Sure, it's complicated, but it's still just a glorified spell checker.
And we're just glorified oxidation. At some point the concept of "emergent systems" comes into play.
I'm an LLM naysayer, and even I have no trouble seeing, or accepting, that they're much more than glorified spell checkers.