Comment by siglesias
2 months ago
The argument doesn’t have to go that far. I think most people have the intuitive, ha, understanding that “understanding” is grounded in some kind of conscious certainty that words have meanings, associations, and even valences like pleasantness or unpleasantness. One of the cruxes of the Chinese Room is that this grounding has physical causes (as all biological phenomena do) rather than computational, purely abstract causes.
There has to be a special motivation to instead cast understanding as “competent use of a given word or concept” (judged by whom, btw?). The practical upshot is that without this grounding, we keep seeing AI, even advanced AI, make trivial mistakes, and a human is still required to supply an account of value (good/bad, pleasant/unpleasant), because these programs obviously don’t have conscious feelings of goodness and badness. Nobody had to teach me that delicious things include Oreos and not cardboard.
> Nobody had to teach me that delicious things include Oreos and not cardboard.
Well, no, that came from billions of years of pre-training that got mostly hardcoded into us by survival and evolutionary pressure. If anything, the fact that AI has come as far as it has, after less than 100 years of development, is shocking. I recall my uncle trouncing our C64 in chess and going on to explain how machines don't have intuition, and how the search space explodes combinatorially, which is why they would never beat a competent human. This was ~10 years before Deep Blue. Oh, sure, that's just a party trick. 10 years ago, we didn't have GPT-style language understanding or image generation (at least nothing widely available, or anything beyond middling quality). I wonder what we will have in 10, 20, or 100 years; whatever it is, I am fairly confident that architectural improvements will eventually lead to large capability improvements, and that current behavior and limitations are just that: current.

So the argument is that they can't ever be truly intelligent or conscious because it's somehow intuitively obvious? I disagree; I don't think we have any real, scientific idea of what consciousness is, nor any way to differentiate "real" from "fake".
On the other end of the spectrum, I have seen humans with dementia who could no longer make sense of the world. Are they conscious? What about a dog, rabbit, cricket, or bacterium? I am pretty sure that, at their own level, they feel alive and conscious. I don't have any real answers, but it certainly seems to be a spectrum, and holding on to some magical or esoteric differentiator, like emotions or feelings, seems like wishful thinking to me.
Your vocabulary presupposes the categories you’re asserting are equivalent. The processes of evolution and AI training are vastly different. One confers a survival advantage and is suffused with values essential to humans, such as morality and the primacy of vision, taste, and smell; the other is an attempt to transfer functions that allow for human survival and flourishing to objects that are not human. AI training, and especially the Turing Test featured in the Chinese Room, is about mimicking humans; human evolution is about survival and forms the basis of our aesthetic and moral judgments. One is simply a simulation of the other. Consciousness might not matter to what you concern yourself with as somebody amazed by AI (I am as well), but surely you believe there is a moral difference between harming a human and harming an LLM, even verbally. What do you think accounts for that, if not consciousness?
> but surely you believe that there is a moral difference between harming a human and harming an LLM, even verbally.
I'm becoming less sure of this over time. As AI becomes more capable, it might start being more comparable to smaller mammals or birds, and then larger ones. It's not a boolean function, but rather a sliding scale.
Despite starting out from very skeptical roots, ethology has over time found empirical evidence for some form of intelligence in more and more species.
I do think that this should also inform our ethics somewhat.