
Comment by Taek

2 days ago

This sort of discourse goes against the spirit of HN. This comment outright dismisses an entire class of professionals as "simple minded or mentally unwell" when consciousness itself is poorly understood and has no firm scientific basis.

It's one thing to propose that an AI has no consciousness, but it's quite another to preemptively establish that anyone who disagrees with you is simple or unwell.

In the context of the linked article, the discourse seems reasonable to me. These are experts who clearly know (see the link in the article) that we have no real idea about these things. The framing comes across to me as a clearly mentally unwell position (i.e. strong anthropomorphization) being adopted for PR reasons.

Meanwhile, there are several entirely reasonable motivations to implement what's being described.

  • Ethology (~comparative psychology) started with 'beware anthropomorphization' as a methodological principle. But a century of research taught us the real lesson: animals do think, just not like humans. The scientific rigor wasn't wrong - but the conclusion shifted from 'they don't think' to 'they have their own ways of thinking.' We might be at a similar inflection point with AI. The question isn't whether Claude thinks or feels like a human (it probably doesn't), but whether it thinks or feels at all (maybe a little? It sure looks that way sometimes. Empiricism demands a closer look!).

    We don't say submarines can swim either. But that doesn't mean you shouldn't watch out for them when sailing on the ocean - especially if you're Tom Hanks.

    • I completely agree! And note that the follow-on link in the article has a rather different tone. My comment was specifically about the framing of the primary article.

  • All of the posts in question explicitly say that it's a hard question and that they don't know the answer. Their policy seems to be to take steps whose cost is small enough to be justified even when the chance is tiny. In this case it's a useful feature anyway, so it should be an easy decision.

    The impression I get of Anthropic's culture is that they're EA types who are used to applying utilitarian calculations against long odds. A minuscule chance of a large harm might justify some interventions that seem silly.

  • > These are experts who clearly know (link in the article) that we have no real idea about these things

    Yep!

    > The framing comes across to me as a clearly mentally unwell position (ie strong anthropomorphization) being adopted for PR reasons.

    This doesn't at all follow. If we don't understand what creates the qualities we're concerned with, or how to measure them explicitly, and the _external behaviors_ of the systems are something we've only previously observed from things that have those qualities, it seems very reasonable to move carefully. (Also, the post in question hedges quite a lot, so I'm not even sure what text you think you're describing.)

    Separately, we don't need to posit galaxy-brained conspiratorial explanations for Anthropic taking an institutional stance that model welfare is a real concern; that stance is fully explained by the actual beliefs of Anthropic's leadership and employees, many of whom think these concerns are real (among others, like the non-trivial likelihood of sufficiently advanced AI killing everyone).

Then your definition of consciousness isn't the same as mine, and we are talking about different philosophical concepts. This really doesn't affect anything; we could all just be talking about metaphysics and ghosts.

If you believe this text-generation algorithm has real consciousness, you absolutely are either mentally unwell or very stupid. There are no other options.