
Comment by 0_____0

1 day ago

Looking at this thread, it's pretty obvious that most folks here haven't really given much thought to the nature of consciousness. There are people who are thinking, really thinking, about what it means to be conscious.

Thought experiment: if you create an indistinguishable replica of yourself, atom by atom, is the replica alive? I reckon if you met it, you'd think it was. If you put your replica behind a keyboard, would it still be alive? Now what if you just took the neural net and modeled it?

Being personally annoyed at a feature is fine. Worrying about how it might be used in the future is fine. But before you disregard the idea of conscious machines wholesale, there's a lot of really great reading you can do that might spark some curiosity.

This gets explored in fiction like 'Do Androids Dream of Electric Sheep?' and in my personal favorite short story on the matter, by Stanislaw Lem [0]. If you want to read more musings on the nature of consciousness, I recommend the compilation put together by Dennett and Hofstadter [1]. If you've never wondered where the seat of consciousness is, give it a try.

Thought experiment: if your brain is in a vat, but connected to your body by a lossless radio link, where does it feel like your consciousness is? What happens when you stand next to the vat and see your own brain? What about when the radio link suddenly fails and you're now just a brain in a vat?

[0] The Seventh Sally or How Trurl's Own Perfection Led to No Good, https://home.sandiego.edu/~baber/analytic/Lem1979.html (a five-minute read, and fun to boot).

[1] The Mind's I: Fantasies and Reflections on Self and Soul. Douglas R. Hofstadter and Daniel C. Dennett.

You don't have to "disregard the idea of conscious machines" to believe it's unlikely that current LLMs are conscious.

As such, most of your comment is beside the point. People are objecting to statements like this one from the post, which is about a current LLM, not some imaginary future conscious machine:

> As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm.

I suppose it's fitting that the company is named Anthropic, since they can't seem to resist anthropomorphizing their product.

But when you talk about "people who are thinking, really thinking about what it means to be conscious," I promise you none of them are at Anthropic.