
Comment by CamperBob2

1 year ago

85 IQ: LLMs are magic

110 IQ: LLMs are "just" next token predictors that can output structured content to interface with tools all wrapped in a nice UI

140 IQ: LLMs are magic

The word "just" can trivialize so much. Rockets are "just" explosives pointed in one direction. Computers are "just" billions of transistors in a single package. Humans are "just" a protein shell for DNA.

There is a lot of this "LLMs are just(x)" going around, but it seems to me that it misses the point, in the extreme.

The “magic” is that yes, LLMs are “just” statistical next token predictors.
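To make "statistical next token predictor" concrete, here is a toy sketch -- a hypothetical bigram model over a made-up ten-word corpus, nothing like a real transformer, but the interface is the same: given context, emit a distribution over next tokens.

```python
from collections import Counter, defaultdict

# Toy "next token predictor": count which word follows which
# in a tiny corpus, then predict the most frequent follower.
# Real LLMs replace these counts with a learned neural network,
# but the job description is identical.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token and its probability."""
    following = counts[token]
    total = sum(following.values())
    word, n = following.most_common(1)[0]
    return word, n / total

print(predict_next("the"))  # "the" is most often followed by "cat"
```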

And as code alone, before training on any data, LLMs produce garbage.

When you feed them human cultural-linguistic data, they “magically” can communicate useful ideas, reason, maintain an internal world state, and use tools.

The LLM architecture is just a mechanism for imprinting and representing human cultural data. Human cultural data is the "magic", somehow embodying the ability to reason, maintain state, use tools, and communicate.

Learning how to represent language data in vector space allowed us to actually encode the meaning embedded in cultural data, since written language is just a shorthand.
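A toy sketch of the vector-space idea -- the three-dimensional "embeddings" below are invented by hand for illustration (real models learn hundreds of dimensions from data), but the principle is the same: meaning becomes geometry, so related words end up near each other.

```python
import math

# Hypothetical hand-made "embeddings"; real embeddings are learned,
# not hand-written. The point: semantic relatedness shows up as
# closeness in vector space.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "king" sits closer to "queen" than to "apple"
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))
```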

Actually representing meaning allows us to run culture as code. Transformer boxes are a target for that code.

The magic is human culture.

Culture matters. We should be curating our culture.

140 IQ: LLMs are magic ... token predictors that can output structured content

  • Which is all we are. Next-token prediction isn't just all you need, it's all there is.

    That's the most interesting part of what we're learning now, I think. So many people refused to accept that for any number of reasons -- religious, philosophical, metaphysical, personal -- and now they have no choice.