Comment by GMoromisato
9 days ago
I remember reading Douglas Hofstadter's Fluid Concepts and Creative Analogies [https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...].
He wrote about Copycat, a program for understanding analogies ("abc is to 123 as cba is to ???"). The program worked at the symbolic level, in the sense that it hard-coded a network of relationships between words and characters. I wonder how close he was to "inventing" an LLM? The insight he needed was that instead of hard-coding patterns, he should have just trained on a vast set of patterns.
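For a sense of what "hard-coding a network of relationships" could mean in practice, here is a minimal toy sketch in that spirit. It is nothing like Copycat's actual architecture; the rule table and the letter-to-digit mapping are made up for illustration:

    # Toy analogy solver in the "hard-coded relationships" spirit.
    # The rules and the letter->digit mapping are illustrative, not Copycat's.
    LETTER_TO_DIGIT = {c: str(i + 1) for i, c in enumerate("abcdefghi")}

    RULES = {
        "reverse":   lambda s: s[::-1],
        "to_digits": lambda s: "".join(LETTER_TO_DIGIT[c] for c in s),
    }

    def solve(a, b, c):
        """Find a hard-coded rule with rule(a) == b, then apply it to c."""
        for name, rule in RULES.items():
            if rule(a) == b:
                return name, rule(c)
        return None

    print(solve("abc", "123", "cba"))  # -> ('to_digits', '321')

The LLM move, as suggested above, is to throw away the RULES table and learn the transformations from a vast set of examples instead.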
Hofstadter focused on Copycat because he saw pattern-matching as the core ability of intelligence. Unlocking that, in his view, would unlock AI. And, of course, pattern-matching is exactly what LLMs are good for.
I think he's right. Intelligence isn't about logic. In the early days of AI, people thought that a chess-playing computer would necessarily be intelligent, but that was clearly a dead end. Logic is not the hard part. The hard part is pattern-matching.
In fact, pattern-matching is all there is: That's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.
I honestly can't come up with a situation that calls for intelligence that can't be solved by pattern-matching.
In my opinion, LeCun is moving the goalposts. He's saying that LLMs make mistakes and therefore aren't intelligent or useful. Obviously that's wrong: humans make mistakes and are usually considered both intelligent and useful.
I wonder if there is a necessary relationship between intelligence and mistakes. If you can solve a problem algorithmically (e.g., long-division) then there won't be mistakes, but you don't need intelligence (you just follow the algorithm). But if you need intelligence (because no algorithm exists) then there will always be mistakes.
I've been thinking about something similar for a long time now. I think the abstraction of patterns is the core requirement of intelligence.
But what's critical, and I think what's missing, is a knowledge representation of events in space-time. We need something more fundamental than text or pixels, something that captures space and transformations in space itself.
> In fact, pattern-matching is all there is: That's a bear, run away; I'm in a restaurant, I need to order; this is like a binary tree, I can solve it recursively.
This is not correct: it does not explain creativity at all. Creativity cannot be based solely on pattern matching. I'm not saying no AI is creative, but this logic does not explain creativity.
Is creativity not just the application of a pattern in an adjacent space?
No, lol
I wouldn't call pattern matching intelligence; I would call it something closer to "trainability" or "educability", but not intelligence. You can train a person to do a task without them understanding why they have to do it that way, but when confronted with a new, never-before-seen situation they have to understand the physical laws of the universe to find a solution.
Ask ChatGPT to answer something that no one on the internet has done before and it will struggle to come up with a solution.
Pattern matching leads to compression: once you've identified a pattern, you can compress the original information by some amount by replacing it with the identified pattern. Patterns are symbols of the information that was there originally, so manipulating patterns is the same as manipulating symbols. Compressing information by finding hidden connections, then operating on abstract representations of the original information, reorganising this information according to other patterns... this sounds a lot like intelligence.
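As a concrete (if simplistic) illustration of pattern-as-symbol compression, here is one round of byte-pair encoding, the same idea LLM tokenizers are built on; the input string and symbol names are made up:

    # One step of byte-pair encoding: the most frequent adjacent pair is
    # the "pattern"; replacing it with a fresh symbol shortens the text,
    # and the new symbol can itself become part of a larger pattern.
    from collections import Counter

    def bpe_step(tokens, next_symbol):
        """Replace the most common adjacent pair with a single new symbol."""
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            return tokens, None
        (a, b), _ = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(next_symbol)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        return merged, (a, b)

    tokens = list("abcabcabc")
    tokens, pair = bpe_step(tokens, "X")
    print(pair, tokens)  # ('a', 'b') ['X', 'c', 'X', 'c', 'X', 'c']
    tokens, pair = bpe_step(tokens, "Y")
    print(pair, tokens)  # ('X', 'c') ['Y', 'Y', 'Y']

Note how Y is defined in terms of X: a compressed pattern becoming a piece of a larger pattern.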
Exactly! And once you compress a pattern, it can become a piece of a larger pattern.
What precludes pattern matching from understanding the physical laws? You see a ball hit a wall, and it bounces back. Congratulations, you learned the abstract pattern:
x->|
x|
x<-|
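To make that concrete, here is a hedged sketch of the abstract pattern the diagram encodes, once it has been extracted from raw observations; the wall position and velocity are made up:

    # The learned pattern: constant velocity, reversed at the wall.
    WALL = 5

    def step(x, v):
        """One tick: advance, reflect at the wall."""
        x += v
        if x >= WALL:  # hit the wall: the pattern says velocity flips
            x, v = 2 * WALL - x, -v
        return x, v

    x, v = 0, 1  # observed: ball moving right at 1 unit/tick from x=0
    trace = []
    for _ in range(10):
        trace.append(x)
        x, v = step(x, v)
    print(trace)  # [0, 1, 2, 3, 4, 5, 4, 3, 2, 1] -> approach, bounce, recede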
What is Dark Matter? How to eradicate cancer? How to have world peace? I don't quite see how pattern-matching, alone, can solve questions like these.
Cancer eradication seems like a clear example of where highly effective pattern matching could be a game changer. That's where cancer research starts: pattern matching to sift through the incredibly large space of potential drugs and find the ones worth starting clinical trials for. If you could get an LLM to pattern-match whether a new compound is likely to work as a BTK inhibitor (https://en.wikipedia.org/wiki/Bruton%27s_tyrosine_kinase), or screen them for likely side effects before even starting synthesis, that would be a big deal.
So, how do we solve questions like these? How about collecting a lot of data and looking for patterns in that data? In the process, scientists typically produce some hypotheses, test them by collecting more data and finding more patterns, and try to correlate these patterns with some patterns in existing knowledge. Do you agree?
If yes, it seems to me that LLMs should be much better at this than humans, and I believe frontier models like o3 might already be; we are just starting to use them for these tasks. Give it a couple more years before drawing any conclusions.
Pattern-matching can produce useful answers within the confines of a well-defined system. However, the hypothetical all-encompassing system for such a solver to produce hypothetical objective ground truth about an arbitrary question is not something we have—such a system would be one which we ourselves are part of and hence unavailable to us (cf. the incompleteness conundrum, map vs. territory, and so forth).
Your unsolved problems would likely involve the extremes of maps that you currently think in terms of. Maps become less useful as you get closer to undefined extreme conditions within them (a famous one is us humans ourselves, and why so many unsolved challenges to various degrees of obviousness concern our psyche and physiology—world peace, cancer, and so on), and I assume useful pattern matching is similarly less effective. Data to pattern-match against is collected and classified according to a preexisting model; if the model is wrong (which it is), the data may lead to spurious matches with wrong or nonsensical answers. Furthermore, if the answer has to be in terms of a new system, another fallible map hitherto unfamiliar to human mind, pattern-matching based on preexisting products of that very mind is unlikely to produce one.
My premise is that pattern-matching unlocks human-level artificial intelligence. Just because LLMs haven't cured cancer yet doesn't mean that LLMs will never be as intelligent as humans. After all, humans haven't cured cancer yet either.
What is intelligence?
Is it reacting to the environment? No, a thermostat can do that.
Is it being logical? No, the simplest program can do that.
Is it creating something never seen before? No, a random number generator can do that.
We can even combine all of the above into a program and it still wouldn't be intelligent or creative. So what's the missing piece? The missing piece is pattern-matching.
Pattern-matching is taking a concrete input (a series of numbers or a video stream) and extracting abstract concepts and relationships. We can even nest patterns: we can match a pattern of concepts, each of which is composed of sub-patterns, and so on.
Creativity is just pattern matching the output of a pseudo-random generator against a critique pattern (is this output good?). When an artist creates something, they are constantly pattern matching against their own internal critic and the existing art out there. They are trying to find something that matches the beauty/impact of the art they've seen, while matching their own aesthetic, and not reproducing an existing pattern. It's pattern-matching all the way down!
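Here's a toy sketch of that generate-and-critique loop; the "aesthetic" and the set of existing art are made up:

    # Creativity as generate-and-critique: random generation filtered by
    # a critic pattern. The "artwork" here is just a random string.
    import random
    import string

    SEEN = {"abcabc"}  # "existing art": patterns not to reproduce

    def critic(piece):
        """Score against an internal aesthetic (here: self-similarity),
        rejecting anything that copies existing work."""
        if piece in SEEN:
            return 0.0
        half = len(piece) // 2
        return sum(a == b for a, b in zip(piece, piece[half:])) / half

    def create(attempts=10_000):
        best, best_score = None, -1.0
        for _ in range(attempts):
            piece = "".join(random.choices(string.ascii_lowercase[:3], k=6))
            score = critic(piece)  # pattern-match against the critic
            if score > best_score:
                best, best_score = piece, score
        return best, best_score

    print(create())  # e.g. ('cabcab', 1.0): structured, but not in SEEN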
Science is just a special form of creativity. You are trying to create a model that reproduces experimental outcomes. How do you do that? You absorb the existing models and experiments (which involves pattern-matching to compress into abstract concepts), and then you generate new models that fit the data.
Pattern-matching unlocks AI, which is why LLMs have been so successful. Obviously, you still need logic, inference, etc., but that's the easy part. Pattern-matching was the last missing piece!