Comment by mort96
1 day ago
Before I start typing, I think abstractly about the topic and decide what I shall write in response. Due to the linear nature of time, typing necessarily happens one word at a time, but I am never producing a probability distribution of words (at least not in a way that my conscious self can determine); I consider an entire idea and then decide what tokens to enter into the computer in order to communicate the idea to you.
And while I am typing, and while I am thinking before I type, I experience an array of non-textual sensory input, and my whole experience of self is to a significant extent non-linguistic. Sometimes I experience an inner monologue; sometimes I think thoughts which aren't expressed in language, such as the structure of the data flow in a computer program; sometimes I don't think at all and just experience feelings, like a kiss, or the sun on my skin, or the euphoria of a piece of music which hits just right. These experiences shape who I am and how I think.
When I solve difficult programming problems or other difficult problems, I build abstract structures in my mind which represent the relevant information, and I consider things like how data flows, which parts impact which other parts, what the constraints are, etc., without language coming into play at all. This process seems completely detached from words. In contrast, for a language model, there is no thinking outside of producing words.
It seems self-evident to me that at least parts of the human experience fundamentally can not be reduced to next token prediction. Further, it seems plausible to me that some of these aspects may be necessary for what we consider general intelligence.
Therefore, my position is: it is plausible that next token prediction won't give rise to general intelligence, and I do not find your argument convincing.
But an LLM shows similar effects.
COCONUT, PCCoT, PLaT, and co. are directly linked to 'thinking in latent space'. Yann LeCun is working on this too; we have JEPA now.
Also, how do you describe or explain how an LLM generates the next token when it should add a feature to an existing code base? In my opinion it has structures which allow it to create a temporary model of that code.
For sure an LLM lacks the emotional component, but look at what we humans do, which indicates to me that we are a lot closer to LLMs than we want to be: if you have a weird body feeling (stress, hot flashes, anger, etc.), your 'text area/LLM/speech area' also just tries to make sense of it. It's not always very good at doing so. That emotional body feeling is not well aligned with it, and it takes time to either understand or ignore these types of inputs to the text/LLM/speech part of our brain.
I'm open to looking back in 5 years and saying 'man, that was a wild ride, but no AGI', but given the current quality of LLMs, and all the other architectures, types of models, money, etc. being thrown at AGI, for now I don't see a ceiling at all. I only see crazy, unprecedented progress.
I don't understand what part of what I said you disagree with.
You state how you think and plan and have thoughts on how to do things, etc., and I assumed you mentioned your way of thinking because you assume an LLM is not doing any of it.
I then showed counterexamples.
> I am never producing a probability distribution of words (at least not in a way that my conscious self can determine)
Inability to introspect your own word selections does not mean it’s meaningfully different from what an LLM does. There is plenty of evidence that humans do a lot of things that are not driven by conscious choice and we rationalize it after the fact.
> I consider an entire idea and then decide what tokens to enter into the computer in order to communicate the idea to you.
And how is that different? You are not so subtly implying that an LLM can’t consider an idea but you haven’t established this as fact. i.e. You are starting with the assumption that an LLM cannot possibly think and therefore cannot be intelligent, but this is just begging the question.
> sometimes I don't think and just experience feelings like a kiss or the sun on my skin or the euphoria of a piece of music which hits just right. These experiences shape who I am and how I think.
You cannot spin experience as intelligence. LLMs have the experience of reading the entire internet, something you cannot conceive of. Certainly your experiences shape who you are. This is a different axis from intelligence, though.
> This process seems completely detached from words. In contrast, for a language model, there is no thinking outside of producing words.
Both sides of this claim seem dubious. The second half in particular seems to be founded on nothing. Again, you are asserting with no support that there is no thinking going on.
> It seems self-evident to me that at least parts of the human experience fundamentally can not be reduced to next token prediction. Further, it seems plausible to me that some of these aspects may be necessary for what we consider general intelligence.
I don’t think anyone sane is claiming an LLM can have a human experience. But it is not clear that a human experience is necessary for intelligence.
> Inability to introspect your own word selections does not mean it’s meaningfully different from what an LLM does. There is plenty of evidence that humans do a lot of things that are not driven by conscious choice and we rationalize it after the fact.
This is correct and also completely irrelevant. I am describing what I experience, and describing how my experience seems very different to next token prediction. I therefore conclude that it's plausible that there is more involved than something which can be reduced to next token prediction.
> And how is that different? You are not so subtly implying that an LLM can’t consider an idea but you haven’t established this as fact. i.e. You are starting with the assumption that an LLM cannot possibly think and therefore cannot be intelligent, but this is just begging the question.
Language models can't think outside of producing tokens. There is nothing going on within an LLM when it's not producing tokens. The only thing it does is take in tokens as input and produce a token probability distribution as output. It seems plausible that this is not enough for general intelligence.
> You cannot spin experience as intelligence.
Correct, but I can point out that the only generally intelligent beings we know of have these sorts of experiences. Given that we know next to nothing about how a human's general intelligence works, it seems plausible that experience might play a part.
> LLMs have the experience of reading the entire internet, something you cannot conceive of.
I don't know that LLMs have an experience. But correct, I cannot conceive of what it feels like to have read and remembered the entire Internet. I am also a general intelligence and an LLM is not, so there's that.
> Certainly your experiences shape who you are. This is a different axis from intelligence, though.
I don't know enough about what makes up general intelligence to make this claim. I don't think you do either.
> Both sides of this claim seem dubious. The second half in particular seems to be founded on nothing. Again, you are asserting with no support that there is no thinking going on.
I'm telling you how these technologies work. When a language model isn't performing inference, it is not doing anything. A language model is a function which takes a token stream as input and produces a token probability distribution as output. By definition, there is no thinking outside of producing words. The function isn't running.
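To make that concrete, here is roughly the whole interface as a toy sketch (the names, the tiny vocabulary, and the uniform distribution are made up purely for illustration; real systems add a tokenizer, a KV cache, and smarter sampling, but the shape of the loop is the same):

    import random

    VOCAB_SIZE = 256   # toy vocabulary; real models use tens of thousands of tokens
    END_TOKEN = 0

    def next_token_distribution(tokens):
        # Stand-in for the transformer: map the token sequence seen so far
        # to a probability distribution over the vocabulary. Uniform here;
        # in a real model this comes from billions of learned parameters,
        # but the type signature is the same.
        return [1.0 / VOCAB_SIZE] * VOCAB_SIZE

    def generate(prompt_tokens, max_new_tokens=32):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = next_token_distribution(tokens)  # the only step where anything "happens"
            token = random.choices(range(VOCAB_SIZE), weights=probs)[0]
            if token == END_TOKEN:
                break
            tokens.append(token)
        # Between calls, nothing is running; the only state that persists
        # is the token sequence itself.
        return tokens

    print(generate([5, 17, 42]))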
> I don’t think anyone sane is claiming an LLM can have a human experience. But it is not clear that a human experience is necessary for intelligence.
I 100% agree. It is not clear whether a human experience is necessary for intelligence. It is plausible that something approximating a human-like experience is necessary for intelligence. It is also plausible that something approximating human-like experience is completely unnecessary and you can make an AGI without such experiences.
It's plausible that next token prediction is sufficient for AGI. It's also plausible that it isn't.
> I don't know enough about what makes up general intelligence to make this claim. I don't think you do either.
This is the fundamental issue. No one seems capable of defining general intelligence. Ten years ago most scientists would probably have agreed that the Turing Test was sufficient, but the goalposts shifted when ChatGPT passed it.
If it’s not clear what AGI even means, it’s hard to say whether an LLM can achieve it, because it devolves into pointing out that an LLM is not a human.
> I'm telling you how these technologies work. When a language model isn't performing inference, it is not doing anything. A language model is a function which takes a token stream as input and produces a token probability distribution as output. By definition, there is no thinking outside of producing words. The function isn't running.
If what you are saying is true, then LLMs wouldn't be able to handle out-of-distribution math problems without resorting to tool use. Yet they can. When you ask a current-generation model to multiply some 8-digit numbers, and forbid it from using tools or writing a script, it will almost certainly give you the right answer. That includes local models that can't possibly cheat. LLMs are stochastic, but they are not parrots.
At the risk of sounding like an LLM myself, whatever process makes this possible is not simply next-token prediction in the pejorative sense you're applying to it. It can't be. The tokens in a transformer network are evidently not just words in a Markov chain but a substrate for reasoning. The model is generalizing processes it learned, somehow, in the course of merely being trained to predict the next token.
Mechanically, yes, next-token prediction is what it's doing, but that turns out to be a much more powerful mechanism than it appeared at first. My position is that our brains likely employ similar mechanism(s), albeit through very different means.
It is scarcely believable that this abstraction process is limited to keeping track of intermediate results in math problems. The implications should give the stochastic-parrot crowd some serious cognitive dissonance, but...
(Edit: it occurs to me that you are really arguing that the continuous versus discrete nature of human thinking is what's important here. If so, that sounds like a motte-and-bailey thing that doesn't move the needle on the argument that originally kicked off the subthread.)
(Edit 2, again due to rate-limiting: it does sound like you've fallen back to a continuous-versus-discrete argument, and that's not something I've personally thought much about or read much about. I stand by my point that the ability to do arithmetic without external tools is sufficient to dispense with the stochastic-parrot school of thought, and that's all I set out to argue here.)
> I consider an entire idea and then decide what tokens to enter into the computer in order to communicate the idea to you.
This overestimates introspective access.
The brain is very good at producing a coherent story after the fact. Touch the hot stove and your hand moves before the conscious thought of "too hot" arrives. The hot message hits your spinal cord and you move before it reaches your brain. Your conscious mind fills in the rest afterwards.
I don't think that means that conscious thought is fake. But it does make me skeptical of the claim that we first possess a complete idea and only then does it serialize into words. A lot of the "idea" may be assembled during the act of expression, with consciousness narrating the process as if it had the whole thing in advance.
With writing, as in this comment, there's also a lot of backtracking and rewording that LLMs don't have the ability to do, so there's that.
> Before I start typing, I think abstractly about the topic
Before you start typing, an fMRI machine can tell you which finger you'll lift first, before you know it yourself.
We are not special. Consciousness is literally a continuous hallucination that we make up to explain what we do and what we think, after the fact. A machine can be trained to behave identically, but it's not clear if that's the best way forward or not.
Edit due to rate limiting: to answer your question, the substrate your mind uses to drive this process can be considered an array of tokens that, themselves, can be considered 'words.'
It's hard to link sources -- what am I supposed to do, send you to Chomsky and other authorities who have predicted none of what's happening and who clearly understand even less?
> (Edit: to answer your question, the substrate your mind uses to drive this process can be considered an array of tokens that, themselves, can be considered 'words.')
This seems like a factual claim. Can you link a source?
(Also why respond in the form of an edit?)
What's your argument? An fMRI can tell which finger I will lift first before that information makes its way to my consciousness, ergo next word prediction is sufficient for general intelligence? Do you hear yourself?
The statement is that your perception of your own cognition isn’t necessarily reality. That isn’t a statement that token prediction is sufficient for general intelligence. It’s a statement that your subjective experience is misleading you.