Comment by root_axis
12 hours ago
There are a lot of people vulnerable to AI psychosis.
As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.
No, LLMs are fundamentally designed as probabilistic engines for next-token prediction, from which intelligence-like functions have emerged as a byproduct. That emergence was not designed in, and its underlying mechanisms are not fully understood. Consequently, one cannot dismiss the possibility of consciousness arising.
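The next-token machinery the comment describes can be sketched in a few lines. This is a toy illustration with made-up logits; a real LLM computes scores over a vocabulary of tens of thousands of tokens:

```python
import math

# Toy sketch of next-token prediction. These logits are hypothetical;
# a real model produces a score for every token in its vocabulary.
logits = {"mat": 2.0, "roof": 1.0, "moon": 0.1}  # scores after "the cat sat on the ..."

# Softmax turns raw scores into a probability distribution.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# Greedy decoding picks the highest-probability token.
prediction = max(probs, key=probs.get)
print(prediction)  # -> mat
```

Everything "intelligent" an LLM does is layered on top of repeating this step, one token at a time.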
If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.
As you say, it’s static, fixed, deterministic, and so on; and if you know how it works, it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.
So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.
I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.
What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.
Why would current AI be an argument for panpsychism? I don’t understand the connection.
AI is stochastic, not static and deterministic.
As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and of others. Rocks, just like most of nature, lack both sensory and language systems.
> AI is stochastic, not static and deterministic.
LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers deliberately inject a randomised seed into the inference stack so that the sampled output varies across runs, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but that randomness is injected at sampling time; it is not an inherent property of the software.
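A toy sketch of this point, with a hypothetical fixed distribution standing in for a model’s softmax output: all the apparent stochasticity lives in the externally supplied seed, not in the model.

```python
import random

# Stand-in for an LLM's output distribution: a fixed function of the
# input, exactly like the deterministic forward pass described above.
probs = {"cat": 0.6, "dog": 0.3, "fish": 0.1}

def sample_token(seed: int) -> str:
    # All apparent randomness is injected via the externally chosen seed;
    # the "model" (probs) never changes.
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Fix the seed and the "stochastic" output is perfectly reproducible.
assert sample_token(42) == sample_token(42)
```

Providers simply pick a fresh seed per request, which is why the same prompt yields different completions in practice.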
I think it's the opposite argument
IF current AI is conscious, so are trees, rocks, turbulent flows, etc.
The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.
There is evidence that awareness is an emergent property of sensory experience. And consciousness is an emergent property of a language that has grammatical forms for self and other.
These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.
I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.
How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?
Sensory input is nothing but data.
Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.
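A minimal sketch of what "hooking up a thermometer" amounts to in practice (the function names and prompt wording here are hypothetical): the reading is serialized into text, so the model still only ever sees tokens.

```python
# Hypothetical sketch: a sensor reading enters the model only as text.
def read_thermometer() -> float:
    return 21.5  # stand-in for a real sensor driver

def build_prompt(temp_c: float) -> str:
    # The "sense" is just more tokens appended to the context window.
    return f"The room temperature is {temp_c} C. Should the heating turn on?"

prompt = build_prompt(read_thermometer())
print(prompt)
```

Whether tokenized sensor readings count as a "sense" in the relevant way is exactly what the thread is disputing.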
LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.
What is different about the human neural network? People have given LLMs sensors, and they respond to stimuli. The sense of self can be expressed as a linguistic artifact that results in emergent pattern recognition of distinct entities. For example, merely by saying "I am sitting under the tree with a friend," I have invoked the self as a pointer to me, the speaker. There is evidence from early childhood development that language acquisition correlates with awareness of the self as distinct from the other. And there is evidence from anthropology indicating that language structures shape exactly what the self is perceived to be.
Your best argument is that the weights are fixed, because that means it’s not a system that can self-reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.
What you’re missing is a “self” to have the “experience”.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of “you,” “me,” and “others,” records that in a memory system, and can reliably identify itself as a discrete entity distinct from you and others, then on what basis would we say it doesn’t have a sense of self?
> the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
Can such an algorithm reason about itself in relation to others?
How do I know you have this "self"?
How do you know other humans do?