Comment by baq
1 month ago
> qualia, which we do not currently know how to precisely define, recognize or measure
> which could house qualia.
I postulate this is a self-negating argument, though.
I'm not suggesting that LLMs think, feel or anything else of the sort, but these arguments are not convincing. If I only had the transcript and knew nothing about who wiped the drive, would I be able to tell it was an entity without qualia? Does it even matter? I further postulate these are not obvious questions.
Unless there is an active sensory loop, no matter how fast or slow, I don't see how qualia can enter the picture.
Transformers attend to different parts of their input based on the input itself. Currently, if you want an LLM to be sad, in the sense of altering its future token predictions in a way you could label "feelings" that change how the model interprets and acts on the world, you have to tell the model that it is sad or provide an input whose tokens activate "sad" circuits that color the model's predictive process.
You make the distribution flow such that it predicts "sad" tokens, but every bit of information affecting that flow is contained in the input prompt. This is exceedingly different from how, say, mammals process emotion. We form new memories and brain structures which constantly alter our running processes and color our perception.
It's easy to draw certain individual parallels between these two processes, but holistically they are different processes with different effects.
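To make the "it all lives in the prompt" point concrete, here's a rough sketch using a Hugging Face causal LM (gpt2 picked as an arbitrary stand-in; the prompts and probe words are made up). The only thing that makes the next-token distribution "sad" is text in the input, and nothing persists between the two calls:

    # Sketch: the "emotion" is injected entirely by the prompt; the weights
    # are frozen and no state carries over between calls.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def next_token_prob(prompt: str, word: str) -> float:
        """Probability mass the model puts on `word` as the next token."""
        inputs = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits           # (1, seq_len, vocab)
        probs = torch.softmax(logits[0, -1], dim=-1)  # next-token distribution
        return probs[tok.encode(word)[0]].item()      # first BPE piece of word

    neutral = "The weather today made me feel"
    primed  = "You are very sad. The weather today made me feel"

    for word in [" sad", " happy"]:
        print(word, next_token_prob(neutral, word), next_token_prob(primed, word))
    # Same parameters in both calls; drop the priming sentence and the
    # "sadness" is gone. Nothing analogous to a new memory was formed.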
It's crazy how strong the Eliza effect is. Seemingly half or more of tech people (who post online, anyway) are falling for it, yet again.
A lot of tech people online also don't know how to examine their own feelings, and so think they are mysterious and undefined, when really they are an actual feedback mechanism that can totally be quantified just like any control loop. This whole 'unknowable qualia' argument is bunk.
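For what I mean by "quantified like any control loop", here's a toy sketch (the numbers and update rule are entirely invented) of a single persistent state variable that gets nudged by stimuli and pulled back toward a setpoint, and that then biases everything downstream; that persistence is what the stateless prompt-conditioning case above lacks.

    # Toy feedback loop: persistent "mood" state, pushed by stimuli and
    # decaying back toward a baseline setpoint. Illustrative only.
    def step(mood: float, stimulus: float, setpoint: float = 0.0,
             gain: float = 0.3, decay: float = 0.1) -> float:
        error = setpoint - mood              # feedback term pulls toward baseline
        return mood + gain * stimulus + decay * error

    mood = 0.0
    for stimulus in (-1.0, -1.0, 0.0, 0.0, 0.5, 0.0, 0.0):
        mood = step(mood, stimulus)
        print(round(mood, 3))
    # The state survives between steps and colors later behavior, which is
    # exactly what a single prompt-conditioned forward pass doesn't have.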