Comment by famouswaffles
10 days ago
1. LLMs are transformers, and transformers are next-state predictors. LLMs are not language models (in the sense you are trying to imply), because even when training is restricted to text alone, text is much more than language.
2. People need to let go of this strange and erroneous idea that humans somehow have this privileged access to the 'real world'. You don't. You run on a heavily filtered, tiny slice of reality. You think you understand electromagnetism? Tell that to the birds that innately navigate by sensing the Earth's magnetic field. To them, your brain only somewhat models the real world, and evidently quite incompletely. You'll never truly understand electromagnetism, they might say.
LLMs are language models; something being a transformer or next-state predictor does not make it a language model. You can also have e.g. convolutional language models or LSTM-based language models. This is a basic point that anyone with a proper understanding of these models would know.
Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language. But, yes, as I said in another comment on this thread, it isn't that simple, because LLMs today are trained on tokens from tokenizers, and these tokenizers are trained on text that includes e.g. natural language, mathematical symbolism, and code.
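To make "trained on tokens from tokenizers" concrete, here is a rough sketch using tiktoken's cl100k_base encoding as just one example (other model families use their own tokenizers, so treat the exact IDs as illustrative):

```python
# The same tokenizer segments prose, math notation, and code into integer IDs.
# This is only an illustration of what "tokens" means, not a claim about any
# particular model's training pipeline.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = [
    "The cat sat on the mat.",   # natural language
    "e^{i\\pi} + 1 = 0",         # mathematical notation
    "def f(x): return x ** 2",   # code
]

for text in samples:
    ids = enc.encode(text)
    print(ids, "->", [enc.decode([i]) for i in ids])
```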
Yes, humans have incredibly limited access to the real world. But they experience and model this world with far more tools and machinery than language. Sometimes they attempt to messily translate this multimodal understanding into tokens, and then make those tokens available on the internet.
An LLM (in the sense everyone means it, which, again, is largely a natural language model, but in any case just a tokenized-text model) has access only to these messy tokens, so, yes, far less capacity than humanity collectively. And though the LLM can integrate knowledge from a massive number of tokens from a huge number of humans, even a single human has access to more kinds of sensory information and modality-specific knowledge than the LLM does. So humans DO have more privileged access to the real world than LLMs (even though we can barely access a slice of reality at all).
>LLMs are language models; something being a transformer or next-state predictor does not make it a language model. You can also have e.g. convolutional language models or LSTM-based language models. This is a basic point that anyone with a proper understanding of these models would know.
'Language Model' has no inherent meaning beyond 'predicts natural language sequences'. You are trying to make it mean more than that. You can certainly make something you'd call a language model with convolution or LSTMs, but that's just a semantics game. In practice, they would not work like transformers and would in fact perform much worse than them with the same compute budget.
>Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language.
The major LLMs today are trained on trillions of tokens of text (much of which has nothing to do with language beyond being the medium of communication), millions of images, and million(s) of hours of audio.
The problem, as I tried to explain, is that you're packing more meaning into 'Language Model' than you should. Being trained on text does not mean all your responses are modelled via language, as you seem to imply. Even for a model trained on text, only the first and last few layers of an LLM concern language.
You clearly have no idea about the basics of what you are talking about (like almost everyone who can't grasp the simple distinction between transformer architectures and LLMs generally) and are ignoring most of what I am saying.
I see no value in engaging further.
> People need to let go of this strange and erroneous idea that humans somehow have this privileged access to the 'real world'.
This is irrelevant; the point is that you do have access to a world which LLMs don't, at all. They only get the text we produce after we interact with the world. They are working with "compressed data" at all times, and have absolutely no idea what we subconsciously internalized but decided not to write down, or why.
All of the SOTA LLMs today are trained on more than text.
What matters isn't whether LLMs have "complete" (nothing does) or human-like world access, but whether the compression in text is lossy in ways that fundamentally prevent useful world modeling or reconstruction. And empirically... it doesn't seem to be. Text contains an enormous amount of implicit structure about how the world works, precisely because the humans writing it did interact with the world and encoded those patterns.
And your subconscious is far leakier than you imagine. Your internal state will bleed into your writing one way or another, whether you're aware of it or not. Models can learn to reconstruct arithmetic algorithms given just the operation and the answer, with no instruction. What sort of things have LLMs reconstructed after being trained on trillions of tokens of data?
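For context, the "operation and answer, no instruction" setups look roughly like the modular-arithmetic datasets in the grokking literature; the sketch below is illustrative only (the modulus and string format are assumptions, not any specific paper's exact configuration):

```python
# The model only ever sees strings like "a + b = c" (modular addition here),
# with no description of the algorithm, yet learns to compute it.
P = 97  # prime modulus, chosen for illustration

def make_examples():
    for a in range(P):
        for b in range(P):
            yield f"{a} + {b} = {(a + b) % P}"

examples = list(make_examples())
print(len(examples), "training strings, e.g.:", examples[:3])
```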
> 2. People need to let go of this strange and erroneous idea that humans somehow have this privileged access to 'the real world'. You don't.
You are denouncing a claim that the comment you're replying to did not make.
They made it implicitly; otherwise this:
>(2) language only somewhat models the world
is completely irrelevant.
Everyone is only 'somewhat modeling' the world: humans, animals, and LLMs.
Completely relevant, because LLMs only "somewhat model" humans' "somewhat modeling" of the world...
You're wrong about this: "People need to let go of this strange and erroneous idea that humans somehow have this privileged access to the 'real world'. You don't."
People do have privileged access to the 'real world' compared to, for example, LLMs and any future AI. It's called consciousness, and it is how we experience and come to know and understand the world. Consciousness is the privileged access that AI will never have.
A language model, in computer science, is a model that predicts the probability of a sentence, or of a word given a preceding sentence. This definition predates LLMs.
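To make that definition concrete, here is a toy sketch: it factors a sentence probability into next-word conditionals using a made-up bigram table (purely illustrative, not how any real LLM stores its probabilities):

```python
# log P(sentence) = sum over positions of log P(word_i | word_{i-1}),
# using a hypothetical bigram lookup table as the "model".
import math

bigram_probs = {
    ("<s>", "the"): 0.4,
    ("the", "cat"): 0.1,
    ("cat", "sat"): 0.3,
    ("sat", "</s>"): 0.5,
}

def sentence_log_prob(tokens):
    logp = 0.0
    for prev, cur in zip(["<s>"] + tokens, tokens + ["</s>"]):
        logp += math.log(bigram_probs.get((prev, cur), 1e-9))
    return logp

print(sentence_log_prob(["the", "cat", "sat"]))
```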
A 'language model' only has meaning insofar as it tells you this thing 'predicts natural language sequences'. It does not tell you how these sequences are being predicted or anything about what's going on inside, so all the extra meaning OP is trying to add by calling them Language Models is, well... misplaced. That's the point I was trying to make.