Comment by ofjcihen
12 hours ago
What you’re missing is a “self” to have the “experience”.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
12 hours ago
> What you’re missing is a “self” to have the “experience”.
> LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of “you”, “me”, and “others”, documents that in a memory system, and can reliably identify itself as a discrete entity distinct from you and others, then on what basis would we say it doesn’t have a sense of self?
> the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
Can such an algorithm reason about itself in relation to others?
> Can such an algorithm reason about itself in relation to others?
No, but an LLM doesn't do that either. An LLM is an algorithm for generating text output that can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weights of linguistic symbols based on a composite of the human-generated text samples in their training data.
LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.
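To make the "statistical weights over symbols" point concrete, here is a rough, illustrative sketch of a single next-token step, assuming Python with the Hugging Face transformers library; the "gpt2" checkpoint and the prompt are just placeholders I picked for the example, nothing specific to the argument:

    # Illustrative only: the model scores every vocabulary token by statistical
    # weights learned from human-written text; nothing here "experiences" the prompt.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("I feel that I", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # scores over the whole vocabulary

    # Turn the scores for the final position into a probability distribution
    # and show the five most likely next tokens.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")

Every token of output comes from repeating that step; nothing in the loop refers back to anything the model "experienced", which is the contrast being drawn with human self-reports.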
> LLMs do it by ...
That they do it at all is the point, and it's what separates them from MP3 encoding algorithms. The "how" doesn't seem to me to be as important as you're suggesting.
You asked a hypothetical above about a different algorithm, and now we've established why that comparison was reductive.
> LLMs never experienced ...
What is experience beyond taking input from the world around you and holding an understanding of it?
Toddlers learn over the course of several years of observing training data, and for the first few years they misspeak about themselves and others. What’s the difference?
How are you sure it doesn’t reason about itself? The grammar of language encodes the concepts of self and others. LLMs operate with those grammatical structures, and do so in increasingly accurate ways. Why would we say humans who exhibit the same behavior are inherently more likely to be conscious?
How do I know you have this "self"?
How do you know other humans do?
By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it; we feel like there's something riding around inside. We grant that other people have the same magic, because each of us is sure we have it ourselves. But rocks, trees, LLMs, those are not people, and clearly, clearly not conscious, because they don't have our magic.
Hard disagree. We reliably operate with the concept of a self that’s distinct from others. The chemical and physical processes change in response to stimuli.
Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.
[flagged]
Ad hominems are always a nice way of getting out of answering something you have no answer to.