Comment by dernett

21 hours ago

This is crazy. It's clear that these models don't have human intelligence, but it's undeniable at this point that they have _some_ form of intelligence.

If LLMs weren't created by us but were instead something discovered in another species' behaviour, it would be 100% labelled intelligence.

  • Yes, the same would be true if the technology had been found embodied in machinery aboard a crashed UFO.

My take is that a huge part of human intelligence is pattern matching. We just didn't understand how much multidimensional geometry influenced our matches.

  • Yes, it could be that intelligence is essentially a sophisticated form of recursive, brute-force pattern matching.

    I'm beginning to think the Bitter Lesson applies to organic intelligence as well, because basic pattern matching can be implemented relatively simply using very basic mathematical operations like multiply and accumulate, and so it can scale with massive parallelization of relatively simple building blocks.
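
    To make "very basic mathematical operations" concrete, here's a toy sketch (purely illustrative, nothing like a real model's actual wiring): pattern matching as nothing but multiply-and-accumulate, repeated massively in parallel.

      # One "unit" of pattern matching: multiply-accumulate over inputs.
      def mac(weights, inputs):
          acc = 0.0
          for w, x in zip(weights, inputs):
              acc += w * x  # one multiply, one accumulate
          return acc

      # A layer is just many independent MACs, which is why the whole
      # thing scales with massive parallelization of simple building blocks.
      def layer(weight_rows, inputs):
          return [mac(row, inputs) for row in weight_rows]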

    • Intelligence is almost certainly a fundamentally recursive process.

      The ability to think about your own thinking over and over as deeply as needed is where all the magic happens. Counterfactual reasoning occurs every time you pop a mental stack frame. By augmenting our stack with external tools (paper, computers, etc.), we can extend this process as far as it needs to go.

      LLMs start to look a lot more capable when you put them into recursive loops with feedback from the environment. A trillion tokens' worth of "what if..." can be expended without touching a single token in the caller's context. This can happen at every level, as many times as needed, if we're using proper recursive machinery. The theoretical scaling around this is extremely favorable.
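
      Roughly the shape I mean, as a toy sketch (llm, is_answer, and summarize are hypothetical stand-ins, not any real API):

        def explore(question, depth=0, max_depth=5):
            # Fresh context per "what if...": nothing in this frame
            # touches a single token of the caller's context.
            context = [question]
            while True:
                thought = llm(context)            # hypothetical model call
                if is_answer(thought) or depth >= max_depth:
                    return summarize(thought)     # only the result pops up
                # Push a mental stack frame for the counterfactual.
                context.append(explore(thought, depth + 1, max_depth))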

  • I don't think it's accurate to describe LLMs as pattern matching. Prediction is the mechanism they use to ingest and output information, and they end up with a (relatively) deep model of the world under the hood.

    • The "pattern matching" perspective is true if you zoom in close enough, just like "protein reactions in water" is true for brains. But if you zoom out you see both humans and LLMs interact with external environments which provide opportunity for novel exploration. The true source of originality is not inside but in the environment. Making it be all about the model inside is a mistake, what matters more than the model is the data loop and solution space being explored.

    • > I don't think it's accurate to describe LLMs as pattern matching

      I'm talking about the inference step, which uses tensor arithmetic over embedding geometry to find patterns in text. We don't understand what those patterns are, but it's clear they're doing some heavy lifting, since LLM inference expresses logic and reasoning under the guise of our reductive "next token prediction".

    • "Pattern matching" is not sufficiently specified here for us to say if LLMs do pattern matching or not. E.g. we can say that an LLM predicts the next token because that token (or rather, its embedding) is the best "match" to the previous tokens, which form a path ("pattern") in embedding space. In this sense LLMs are most definitely pattern matching. Under other formulations of the term, they may not be (e.g. when pattern matching refers to abstraction or abstracting to actual logical patterns, rather than strictly semantic patterns).

    • Yes, the world-model building is achieved via pattern matching during ingestion and training, but that too is part of the intelligence.

  • Intelligence is hallucination that happens to produce useful results in the real world.

I don't think they will ever have human intelligence. Theirs will always be an alien intelligence.

But I think the trend line unmistakably points to a future where they can be MORE intelligent than a human, in exactly the colloquial way we define "more intelligent".

The fact that one of the greatest mathematicians alive maintains a page on this and is seriously benchmarking it shows how strongly he believes it can happen.

Well, AlphaGo and Stockfish can beat you at their games. Why shouldn't these models beat us at math proofs?

  • Chess and Go have very restrictive rules. It seems a lot more obvious to me why a computer can beat a human at them: they have a huge advantage just from being able to calculate very deep lines in a very short time. I actually find it impressive how long humans were able to beat computers at Go. Math proofs seem a lot more open-ended to me.
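
    That advantage is mechanical; a bare-bones minimax sketch makes it obvious (legal_moves, apply, and evaluate are placeholders you'd supply per game):

      def minimax(state, depth, maximizing):
          moves = legal_moves(state)
          if depth == 0 or not moves:
              return evaluate(state)  # static position score
          scores = [minimax(apply(state, m), depth - 1, not maximizing)
                    for m in moves]
          return max(scores) if maximizing else min(scores)

      # A computer just cranks depth far beyond what any human can hold.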

  • AlphaGo and Stockfish were specifically designed and trained to win at those games.

    • And we can train models specifically on math proofs? I think the only difference is that math is bigger...

It's pattern matching, which is actually what we measure in IQ tests. Just saying.

  • There's some nuance. IQ tests measure pattern matching and, in an underlying way, other facets of intelligence: memory, for example. How well can an LLM 'remember' a thing? Sometimes Claude will perform compaction when its context window reaches 200k tokens, and afterwards it seems a little colder to me, but maybe that's just my imagination. I'm kind of a "power user".

  • I call it matching. "Pattern matching" already has a different meaning.

    • What are you referring to? LLMs are neural networks at their core, and even the simplest neural networks are all about reproducing patterns observed during training.
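
      Even the textbook toy case shows it: a single "neuron" fitted by gradient descent does nothing but bend its weight toward the pattern in the data (minimal sketch):

        # Train a single weight to reproduce the pattern y = 2x.
        w = 0.0
        data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
        for _ in range(100):
            for x, y in data:
                w -= 0.1 * (w * x - y) * x  # gradient step on squared error
        print(w)  # ~2.0: the pattern observed during training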

As someone who doesn't understand this shit, and given that it's always the experts fiddling with the LLMs to get good outputs, it feels natural to attribute the intelligence to the operator (or the training set) rather than to the LLM itself.