Comment by HarHarVeryFunny
6 months ago
As noted, consciousness seems to just be the ability to self-observe, which is useful as another predictive input.
I would expect that all intelligent animals are conscious, and that any AI we build with a roughly brain-like architecture in terms of connections, looping, and being prediction-based would also report itself to be conscious and describe a similar subjective experience. LLMs seem much too simple (just layer-wise pass-through data flow) to be conscious.
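To make the architectural contrast concrete, here's a minimal, purely illustrative sketch (not any actual LLM or brain model; all names, weights, and dimensions are made up) of the two data flows: a layer-wise pass-through where input flows straight to output once, versus a loop where the network's own previous hidden state is fed back in as an extra predictive input, i.e. a crude form of self-observation.

```python
# Illustrative toy only: contrasting a layer-wise pass-through network
# with a recurrent loop that "observes" its own previous hidden state.
# All weights are random; nothing here is a real model.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # one dense layer with a tanh nonlinearity
    return np.tanh(x @ w)

def feed_forward(x, weights):
    # data flows through once, layer by layer, then stops
    for w in weights:
        x = layer(x, w)
    return x

def self_observing_loop(inputs, w_in, w_rec):
    # the previous hidden state re-enters alongside each new input,
    # so part of what the network predicts from is its own prior state
    hidden = np.zeros(w_rec.shape[1])
    outputs = []
    for x in inputs:
        combined = np.concatenate([x, hidden])
        hidden = layer(combined, np.vstack([w_in, w_rec]))
        outputs.append(hidden)
    return outputs

d_in, d_hid = 4, 8
x = rng.normal(size=d_in)
weights = [rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_hid))]
print("pass-through:", feed_forward(x, weights))

w_in = rng.normal(size=(d_in, d_hid))
w_rec = rng.normal(size=(d_hid, d_hid))
seq = [rng.normal(size=d_in) for _ in range(3)]
print("self-observing:", self_observing_loop(seq, w_in, w_rec))
```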
It's possible that some of the neural connections supporting consciousness evolved, or were enhanced, because of the evolutionary value of better self-prediction (i.e. that self-prediction is the reason they exist), but as noted I expect consciousness basically "comes for free" with any complete enough cognitive/sensory architecture.
> As noted, consciousness seems to just be the ability to self-observe, which is useful as another predictive input.
As far as I know, "consciousness" refers to something other than self-referential systems, especially with regard to the hard problem of consciousness.
The [philosophical zombie](https://en.wikipedia.org/wiki/Philosophical_zombie) thought experiment is well known for imagining something with all the structural characteristics you mention but without conscious experience, as in the "what-it's-like" of being someone.
It seems entirely possible that the "philosophical zombie" is an impossible/illogical construct, and that in fact anything with all the structure necessary for consciousness will of necessity be conscious.
When considering the structural underpinnings of consciousness, it's interesting to note the phenomenon of "blindsight", which is essentially a loss of visual consciousness without an actual loss of vision!
Note that anything with mental access to its own deliberations and sensory inputs will by definition always be able to report "what it's like" to be themselves: what they are experiencing (what's in their mind). If something reports its qualia of vision or hearing to you, isn't that exactly what we mean by "what it's like" to be them, i.e. how they feel they are experiencing the world?!
> It seems entirely possible that the "philosophical zombie" is an impossible/illogical construct, and that in fact anything with all the structure necessary for consciousness will of necessity be conscious.
Yes, and that's pretty much exactly the point: we don't know of any way of determining whether someone is a p-zombie or a being with conscious phenomenal experience. We can certainly hold the opinion or belief that sufficient structure entails consciousness, and that's a perfectly reasonable stance many would take. But we have to be careful to recognize that it isn't a scientific stance, since it isn't testable or falsifiable; that's why this has been called the "hard problem" of consciousness. It's an unfounded belief we choose for reasons like psychological comfort.
With regard to your latter point, I think you are making some sophisticated distinctions about the "map and territory" relation, and it seems you've hit upon the crux of the matter: how can we report "what it's like" for us to experience something the other person hasn't experienced, if it's not deconstructible into phenomenal states they've already experienced (and therefore constructible by them from our report)? The landmark paper here is "What Is It Like to Be a Bat?" by Thomas Nagel; if you're ever curious, it's a pretty short read.
With regard to "blindsight": since I'm not familiar with it and am curious, how do we distinguish between a loss of visual consciousness and a loss of information transfer between conscious regions, or a loss of memory of the conscious experience?