Comment by haritha-j
1 month ago
"Constitution"
"we express our uncertainty about whether Claude might have some kind of consciousness"
"we care about Claude’s psychological security, sense of self, and wellbeing"
Is this grandstanding for our benefit or do these people actually believe they're Gods over a new kind of entity?
Well, it's definitely a new kind of entity created by Anthropic. Whether LLM wellbeing is worth worrying about is debatable. One subtle reason it might be: moral thinking tends to get generalised. It's easier to hold "care about things" as a general principle than "care about things with biological neurons, but not artificial ones."
It's just Anthropic being Anthropic, nothing new
They put on ridiculous airs, but they're making damn fine LLMs.
You're either not an AI researcher or you're not paying attention if you think these questions aren't relevant.
Even a basic understanding of LLMs should convince anyone that LLM consciousness and wellbeing are nonsensical ideas. As for "constitution", I mostly object to the word rather than the concept of guidelines. It's an unnecessarily grandiose word. And yes, I'm aware it's been used in LLM research before.
Do you have a known-good, rigorously validated consciousness-meter that you can point at an LLM to confirm that it reads "NO CONSCIOUSNESS DETECTED"?
No? You don't?
Then where exactly is that overconfidence of yours coming from?
We don't know what "consciousness" is - let alone whether it can arise in arrays of matrix math. The leading theories, for all the good they do, conflict on whether LLM consciousness can be ruled out - and we, of course, don't know which theory of consciousness is correct. Or whether any of them is.