Comment by K0balt

1 year ago

There is a lot of “LLMs are just (x)” going around, but it seems to me that this misses the point, in the extreme.

The “magic” is that yes, LLMs are “just” statistical next token predictors.

And as code alone, with no training data, LLMs produce garbage.
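To make that concrete, here is a toy sketch of “statistical next-token prediction” (a hypothetical lookup-table model, nothing like a real transformer): the mechanism is just sampling the next token from a probability distribution, and with random, untrained probabilities the output is gibberish. Everything interesting has to come from the data that shapes those probabilities.

```python
import numpy as np

np.random.seed(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

# Toy "model": next_token_probs[i][j] = P(vocab[j] follows vocab[i]).
# Random (untrained) rows, each summing to 1 -- this stands in for an
# architecture with no cultural-linguistic data imprinted on it.
next_token_probs = np.random.dirichlet(np.ones(len(vocab)), size=len(vocab))

def generate(start: str, length: int = 5) -> list[str]:
    """Sample one token at a time, each conditioned only on the previous one."""
    tokens = [start]
    for _ in range(length):
        probs = next_token_probs[vocab.index(tokens[-1])]
        tokens.append(np.random.choice(vocab, p=probs))
    return tokens

print(" ".join(generate("the")))  # gibberish: the mechanism alone carries no meaning
```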

When you feed them human cultural-linguistic data, they “magically” can communicate useful ideas, reason, maintain an internal world state, and use tools.

The LLM architecture is just a mechanism for imprinting and representing human cultural data. Human cultural data is the “magic”, somehow embodying the ability to reason, maintain state, use tools, and communicate.

Learning how to represent language data in vector space allowed us to actually encode the meaning embedded in cultural data, since written language is just a shorthand for that meaning.
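A minimal sketch of what “representing language in vector space” buys you, using hand-made toy embeddings (real models learn these from text, in hundreds or thousands of dimensions): words with related meanings end up pointing in similar directions, which the raw written symbols never expressed on their own.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings, hand-picked purely for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.8, 0.1, 0.9]),
    "apple": np.array([0.1, 0.2, 0.9, 0.5]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction in vector space, ignoring magnitude."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related meaning
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated meaning
```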

Actually representing meaning allows us to run culture as code. Transformer boxes are a target for that code.

The magic is human culture.

Culture matters. We should be curating our culture.