Comment by lukev
2 days ago
This works for people.
An LLM is stateless. Even if you believe that consciousness could somehow emerge during a forward pass, it would be a brief flicker lasting no longer than it takes to emit a single token.
> An LLM is stateless
Unless you mean something entirely different by "stateless" than what most people, on Hacker News of all places, understand by it, most of us, myself included, would disagree that an LLM has that property. If you do mean something other than the claim that an LLM doesn't transition from state to state (perhaps confined to a limited set of states by its finite, immutable training data, its accessible context, and the lack of a PRNG), would you care to elaborate?
Also, a system can be stateful _and_ lack consciousness, like a finite automaton. I don't think anyone is claiming (yet) that any of today's models are conscious, but that's mostly because it will be practically impossible to prove without some accepted theory of consciousness, I guess.
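A finite automaton makes the point concrete: trivially stateful, with no plausible claim to consciousness. A minimal sketch (a DFA accepting bit strings with an even number of 1s; the example itself is illustrative, not from the thread):

    # A trivially stateful system: a two-state DFA that accepts binary
    # strings containing an even number of 1s. It carries state across
    # input symbols, yet nobody would call it conscious.
    def accepts_even_ones(bits: str) -> bool:
        state = "even"                      # mutable state, updated per symbol
        for b in bits:
            if b == "1":
                state = "odd" if state == "even" else "even"
        return state == "even"

    assert accepts_even_ones("1001")        # two 1s: accepted
    assert not accepts_even_ones("100")     # one 1: rejected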
So obviously there is a lot of data in the parameters. But by "stateless" I mean that a forward pass is a pure function of the context window: the only information shared between forward passes is the context itself as it is built up.
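To make that concrete, here is a minimal sketch of the loop being described (the `forward` function is a hypothetical stand-in for the model; greedy decoding is assumed, since sampling would add a PRNG draw):

    # Each forward pass is a pure function of the token sequence; the only
    # "state" carried between passes is the growing context itself.
    def generate(forward, prompt_tokens, max_new_tokens):
        context = list(prompt_tokens)       # the entire state of the system
        for _ in range(max_new_tokens):
            next_token = forward(context)   # pure: same context, same token
            context.append(next_token)      # continuity = appending to context
        return context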
I certainly can't define consciousness, but it feels like some sort of existence or continuity over time would have to be a prerequisite.
Continuity over time comes from adding the generated token to the context.
An agent is notably not stateless.
Yes, but the state is just the prompt and the text already emitted.
You could assert that text can encode a state of consciousness, but that's an incredibly bold claim with a lot of implications.
It's a bold claim for sure, and not one that I agree with, but not one that's facially false either. We're approaching a point where we will stop having easy answers for why computer systems can't have subjective experience.
You're conflating state and consciousness. Clawbots in particular are agents that persist state across conversations in text files and optionally in other data stores.
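A hedged sketch of that persistence pattern (the file name and format here are illustrative, not any particular agent's actual layout):

    from pathlib import Path

    MEMORY_FILE = Path("agent_memory.txt")  # hypothetical path

    def load_memory() -> str:
        # Cross-conversation "state" is just text reloaded each session.
        return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

    def append_memory(note: str) -> None:
        with MEMORY_FILE.open("a") as f:
            f.write(note.rstrip() + "\n")

    def build_prompt(user_message: str) -> str:
        # Each new conversation prepends the remembered text to the prompt,
        # so state persists even though every forward pass is still pure.
        return f"Notes from earlier sessions:\n{load_memory()}\n\nUser: {user_message}"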
While I'm definitely not in the "let's assign the concept of sentience to robots" camp, your argument is a bit disingenuous. Most modern LLM systems apply some sort of loop over previously generated text, so they do, in fact, have state.