Comment by fc417fc802
1 day ago
In the context of the linked article, the discourse seems reasonable to me. These are experts who clearly know (link in the article) that we have no real idea about these things. The framing comes across to me as a clearly mentally unwell position (i.e. strong anthropomorphization) being adopted for PR reasons.
Meanwhile, there are several entirely reasonable motivations to implement what's being described.
All of the posts in question explicitly say that it's a hard question and that they don't know the answer. Their policy seems to be to take steps whose cost is small enough to be justified even when the chance is tiny. In this case the feature is useful regardless, so it should be an easy decision.
The impression I get of Anthropic's culture is that they're EA types who are used to applying utilitarian calculations against long odds. A minuscule chance of a large harm might justify some interventions that seem silly.
> These are experts who clearly know (link in the article) that we have no real idea about these things
Yep!
> The framing comes across to me as a clearly mentally unwell position (i.e. strong anthropomorphization) being adopted for PR reasons.
This doesn't at all follow. If we don't understand what creates the qualities we're concerned with, or how to measure them explicitly, and the _external behaviors_ of these systems are ones we've previously observed only in things that do have those qualities, it seems very reasonable to move carefully. (Also, the post in question hedges quite a lot, so I'm not even sure what text you think you're describing.)
Separately, we don't need to posit galaxy-brained conspiratorial explanations for Anthropic taking an institutional stance that model welfare is a real concern. That stance is fully explained by the actual beliefs of Anthropic's leadership and employees, many of whom think these concerns are real (among others, like the non-trivial likelihood of sufficiently advanced AI killing everyone).