Comment by tovej

2 months ago

This one is even easier. LLMs record objective data about n-gram distributions; there is no room for any "subjective state" in their working set.

Or put another way: an LLM will respond the same whether you wait one second or a decade between prompts. The only way to interact with it is through a stream of tokens. There is exactly one stream at any time, and on each call the whole stream is fed in again, barely different from the previous input. The LLM does not behave differently depending on anything but the contents of that stream. It may produce the exact same token for two different streams, and if it encounters the same stream twice, it will act the same both times. If it were a "subject" "experiencing", say, a discussion, it would draw on its past "experience". But it does not.
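A toy sketch of the point, assuming greedy (temperature 0) decoding: generation is a pure function of the input token stream, so nothing persists between calls. The `next_token` hash is obviously a stand-in for a real model, not actual inference.

```python
import hashlib

# Stand-in for greedy LLM decoding: a pure function of the input token
# stream. Real inference at temperature 0 has the same shape -- no
# hidden state survives between calls.
def next_token(stream: tuple[str, ...]) -> str:
    # Deterministic "prediction" derived only from the stream contents.
    digest = hashlib.sha256(" ".join(stream).encode()).hexdigest()
    return f"tok_{digest[:6]}"

def generate(stream: tuple[str, ...], n: int) -> tuple[str, ...]:
    for _ in range(n):
        stream = stream + (next_token(stream),)
    return stream

# The same stream "encountered" twice -- whether one second or a decade
# apart -- yields the same continuation; there is nowhere for a past
# "experience" to live.
a = generate(("hello", "world"), 3)
b = generate(("hello", "world"), 3)
assert a == b

# A chat "session" is just the whole stream fed back in with a few
# tokens appended; the model itself carries nothing across turns.
turn1 = generate(("user:", "hi"), 2)
turn2 = generate(turn1 + ("user:", "bye"), 2)
```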