Comment by armcat
3 months ago
How I see LLMs (which have roots in early word embeddings like word2vec) is not as statistical machines, but as geometric machines. When you train LLMs you are essentially moving concepts around in a very high dimensional space. If we take a concept such as “a barking dog” in English, then in this learned geometric space we have the same thing in French, Chinese, hex and Morse code, simply because the fundamental constituents of all of those languages are in the training data, and the model has managed to squeeze all their commonalities into the same regions. The statistical part really comes from sampling this geometric space.
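To see that “same region” effect directly, here is a minimal sketch, assuming the sentence-transformers package and one of its off-the-shelf multilingual models (any multilingual embedding model would illustrate the same point):

    from sentence_transformers import SentenceTransformer, util

    # Off-the-shelf multilingual embedding model (an assumption for this
    # illustration; substitute any multilingual model you have available).
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    sentences = [
        "a barking dog",              # English
        "un chien qui aboie",         # French
        "一只狂吠的狗",                # Chinese
        "the stock market crashed",   # unrelated concept, for contrast
    ]
    embeddings = model.encode(sentences)

    # Cosine similarities: the three translations of the same concept score
    # much higher with each other than any of them does with the unrelated
    # sentence, i.e. they occupy the same region of the space.
    print(util.cos_sim(embeddings, embeddings))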
That part I understand, and it is quite easy to imagine, but that mental model implies that novel data, not present in the dataset in a semantic sense, cannot be mapped to any exact point in that latent space except a random one, because that point quite literally does not exist in the space, so no clever statistical sampling would be able to produce it from other points. Sure, we could include a hex-encoded knowledge base in the dataset, increase dimensionality, then include double-hex encoding and so on, but it would be enough to go one level deeper, to (n+1)-fold hex encoding, and the model would fail. Sorry to repeat the hex-encoding example; you can substitute it with any other. However, it seems that our minds do not have any built-in limit on indirection (other than time and space).
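To make the indirection concrete, here is a toy sketch of what I mean by n-fold hex encoding (pure illustration, not a claim about any particular model): every extra layer re-encodes the previous layer's output, so the surface form drifts further and further from anything that plausibly appears in training data, even though the underlying content never changes.

    def hex_encode(text: str, layers: int) -> str:
        # Each layer hex-encodes the UTF-8 bytes of the previous layer's output.
        for _ in range(layers):
            text = text.encode("utf-8").hex()
        return text

    def hex_decode(text: str, layers: int) -> str:
        # Inverting it just means peeling off the same number of layers.
        for _ in range(layers):
            text = bytes.fromhex(text).decode("utf-8")
        return text

    msg = "a barking dog"
    for n in range(1, 4):
        # The encoded string doubles in length with every layer.
        print(n, hex_encode(msg, n))

    assert hex_decode(hex_encode(msg, 3), 3) == msg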
> novel data, not present in the dataset in a semantic sense
This is your error, afaik.
The idea of the architecture design / training data is to produce a space that spans the entirety of possible inputs, regardless of whether a particular input was or wasn't in the training data.
Or to put it another way, a model trained on the entirety of human knowledge should be able to infer a lot of things about cats, even if you leave out every definition of a cat.
See the other comments about pre-decoding though, as I expect there are some translation-like layers, especially for hardcodable transforms (e.g. common, standard encodings).
People seem to get really hung up on the fact that words have meaning to them when thinking about what an LLM is doing.
It creates all sorts of illusions about the model having a semantic understanding of the training data or of its interactions with users. It's fascinating, really, how easily people suspend disbelief just because the model can produce output that is meaningful to them and semantically related to the input.
It's a hard illusion to break. I was discussing professors' use of LLMs with a colleague who teaches at a top European university, and she was jarred by my change in tone when we went from "LLMs are great for shuffling exam content" (because it's such a chore to do it manually to keep students from trading answers with people who have already taken the course) to "LLMs could grade the exam". It took some back and forth to convince her that language models have no concept of factuality, and that a student complaining about a grade and getting back "ah ok, I've reviewed it; previously I had just used an LLM to grade it" might be career ending.
> not as statistical machines, but as geometric machines. When you train LLMs you are essentially moving concepts around in a very high dimensional space.
That's intriguing, and would make a good discussion topic in itself. Although I doubt the "we have the same thing in [various languages]" bit.
Mother/water/bed/food/etc easily translates into most (all?) languages. Obviously such concepts cross languages.
In this analogy they are objects in high dimensional space, but we can also translate concepts that don’t have a specific word associated with them. People everywhere have a way to refer to “corrupt cop” or “chess opening” and so forth.
> Mother/water/bed/food/etc easily translates into most (all?) languages. Obviously such concepts cross languages.
See also: the Swadesh list and its variations (https://en.wikipedia.org/wiki/Swadesh_list), an attempt to make a list of such basic and common concepts.
"Bed" and "food" don't seem to be in those lists though, but "sleep" and "eat" are.
What do you mean, exactly, about the doubting part? I thought it was fairly well known that LLMs possess superior translation capabilities.
Sometimes you do not have the same concepts; life experiences are different.