Comment by antonvs
18 hours ago
You don't have to "disregard the idea of conscious machines" to believe it's unlikely that current LLMs are conscious.
As such, most of your comment is beside the point. People are objecting to statements like this one from the post, which is about a current LLM, not some imaginary future conscious machine:
> As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm.
I suppose it's fitting that the company is named Anthropic, since they can't seem to resist anthropomorphizing their product.
But when you talk about "people who are thinking, really thinking about what it means to be conscious," I promise you none of them are at Anthropic.