Comment by soulofmischief

1 month ago

The point of qualia is that we seem to agree that certain neuronal states "feel" like something, that being alive and conscious is an experience. Yes, it's exceedingly likely that all of the necessary components for "feeling" something are encoded right in the neuronal state. But we still need a framework for asking questions such as, "Does your red look the same as my red?" and "Why do I experience sensation, sometimes physical in nature, when I am depressed?"

It is absolutely an ill-defined concept, but it's another blunt tool in the toolbox we use to explore the world. Sometimes our observations lead to better tools, and "artificial" intelligence is a fantastic sandbox for exploring these ideas. I'm glad this discussion is taking place.

What’s stopping people from also describing LLM systems with “qualia”?

  • Empirical evidence, for one. And the existence of fine-tuning, which allows you to artificially influence how a model responds to questions. This means we can't just ask an LLM, "Do you see red?" I can't really even ask you that. I just know that I see red, that many philosophers and scientists in the past seem to agree with my experience, and that it's a deep, deep discussion from which only shallow spectators are currently drawing hard conclusions.