Comment by mgraczyk
2 days ago
Do you believe that AI systems could be conscious in principle? Do you think they ever will be? If so, how long do you think it will take from now before they are conscious? How early is too early to start preparing?
I firmly believe that we are not even close and that it is pretty presumptuous to start "preparing" when such mental energy could be much better spent on the welfare of our fellow humans.
Such mental energy could always have been spent on the welfare of our fellow humans, and yet that has been a fight throughout the ages. The same goes for the welfare and treatment of animals.
So yeah, humans can work on more than one problem at a time, even ones that don't fully exist yet.
> Do you believe that AI systems could be conscious in principle?
Yes.
> Do you think they ever will be?
Yes.
> how long do you think it will take from now before they are conscious?
Timelines are unclear; there are still too many missing components, at least based on what has been publicly disclosed. Consciousness will probably end up being defined as any system that satisfies a set of rules, once we figure out how that set of rules is defined.
> How early is too early to start preparing?
It's one of those "I know it when I see it" things. But it's probably too early as long as these systems are spun up for one-off conversations rather than running in a continuous loop with self-persistence. This seems closer to "worried about NPC welfare in video games" than to "worried about semi-conscious entities".
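To make that distinction concrete, here is a minimal sketch (purely illustrative; `call_model`, the memory file, and all names are hypothetical stand-ins, not any real API) contrasting a one-off invocation, where all state vanishes when the call returns, with a continuous loop that persists its own state between turns:

```python
import json
from pathlib import Path

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for any LLM API call."""
    return f"response to: {prompt!r}"

# One-off conversation: nothing survives once the function returns.
def one_off(user_message: str) -> str:
    return call_model(user_message)

# Continuous loop with self-persistence: state accumulates across
# turns and survives restarts by being written back to disk.
def continuous_agent(memory_file: str, observations: list[str]) -> None:
    path = Path(memory_file)
    memory = json.loads(path.read_text()) if path.exists() else []
    for observation in observations:  # stands in for an endless loop
        memory.append(observation)
        reply = call_model("\n".join(memory))
        memory.append(reply)
        path.write_text(json.dumps(memory))  # persist self-state
```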
We haven't even figured out a good definition of consciousness in humans, despite thousands of years of trying.
Whether or not a non-biological system is conscious is a red herring. Any test we could apply would either be internally inconsistent, include something obviously not conscious, or exclude something obviously conscious.
The only practical way to deal with emergent behavior that demonstrates agency indistinguishable from that of a biological system (which we tautologically deem to have agency) is to treat the system as if it had a sense of self, and to apply the same rights and responsibilities we would apply to a human of the age of majority: legal rights and legal responsibilities as appropriately determined by an authorized legal system. Once that is done, we can ponder philosophy all day knowing that we haven't potentially restarted legally sanctioned slavery.
AI systems? Yes, if they are designed in ways that support that development. (I am, as I have mentioned before, a big fan of the work of Steve Grand.)
LLMs? No.