By comparing an LLM’s inner mental state to a light fixture, I am saying, in a deliberately absurd way, that I don’t think LLMs are sentient, and nothing more than that. I am not saying an LLM and a light switch are equivalent in functionality; a single-pole switch has only two states.
I don’t really understand your response to my post. My interpretation is that you think LLMs have an inner mental state and that I’m wrong about this? I may be misreading you.
LLMs have an inner/internal state: https://arxiv.org/abs/2304.13734
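To make the "internal state" claim concrete, here is a minimal sketch in the spirit of the linked paper's probing approach: extract a model's hidden activations for true and false statements and fit a linear probe on them. The model name ("gpt2"), layer index, and toy statements are illustrative assumptions of mine, not the paper's actual setup or results.

```python
# Minimal sketch: probe an LLM's hidden activations with a linear classifier.
# Assumptions (not from the paper): gpt2, layer 6, and a tiny toy dataset.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"   # stand-in; the paper works with larger LLMs
LAYER = 6             # arbitrary middle layer chosen for this sketch

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy illustrative dataset: (statement, label) with 1 = true, 0 = false.
data = [
    ("The capital of France is Paris.", 1),
    ("The capital of France is Berlin.", 0),
    ("Water freezes at 0 degrees Celsius.", 1),
    ("Water freezes at 50 degrees Celsius.", 0),
]

def hidden_state(text: str) -> torch.Tensor:
    """Return the chosen layer's activation at the final token."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

X = torch.stack([hidden_state(text) for text, _ in data]).numpy()
y = [label for _, label in data]

# Linear probe: if truth-related structure exists in the activations,
# even a simple classifier can find it (given far more data than this).
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

Whether a readable activation pattern like this amounts to a "mental state" is exactly what the rest of the thread is arguing about; the sketch only shows that there is structured internal information to read.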
Deep neural networks are weird, and there is a lot going on inside them that makes them very different from the state machines we're used to in binary programs.
The honest position seems underrepresented here: we don't have tools to know what "counts" as experience. Brains are physics too — the question is which configurations matter, and we can't explain why we ourselves aren't philosophical zombies.
The paper is interesting not for proving LLMs suffer, but for showing they have structured internal dynamics that correlate with psychological constructs. Whether correlation implies anything about subjective states is the hard problem. We can't skip it by declaring "just linear algebra" or "obviously sentient."