Comment by DennisP

2 hours ago

Since well before LLMs, people have been talking about "philosophical zombies": hypothetical beings that could emulate human behavior perfectly but would have no inner experience.

Some philosophers (one modern example being Kastrup) point out that the only thing we really know is our own conscious experience. We don't go full-on solipsist because other people appear to be built the same way we are, so it's a small jump to think they're conscious as well. Over the past few decades, scientists have found that other animals' brains, especially mammals', are quite similar to our own in important ways, and have become more willing to credit them with consciousness.

But AIs run on completely different hardware with different algorithms. It's entirely possible that they're philosophical zombies. It's a bigger leap to say they're conscious like us, precisely because they're more different from us.