Comment by 6gvONxR4sf7o
1 day ago
I'm surprised to see such a negative reaction here. Anthropic's not saying "this thing is conscious and has moral status," but the reaction is acting as if they are.
It seems like if you think AI could have moral status in the future, are trying to build general AI, and have no idea how to tell when it has moral status, you ought to start thinking about it and learning how to navigate it. This whole post is couched in so much language of uncertainty and experimentation that it seems clear they're just trying to start wrapping their heads around it and getting some practice thinking and acting on it, which seems reasonable?
Personally, I wouldn't be all that surprised if, in the next decade, we start seeing AI that's person-ey enough to make reasonable people question its moral status, and if so, Anthropic might still be around to have to navigate it as an org.
>if you think AI could have moral status in the future
I think the negative reactions are because they see this and want to make their pre-emptive attack now.
The depth of feeling from so many on this issue suggests that they find even the suggestion of machine intelligence offensive.
I have seen so many complaints about AI hype and the dangers of big tech show their hand by declaring that thinking algorithms are outright impossible. There are legitimate issues with corporate control of AI, of information, and of the ability to automate determinations about individuals, but I don't think they are being addressed, because of this driving assertion that the machines cannot be thinking.
Few people are saying they are thinking. Some are saying they might be, in some way. Just as Anthropic are not (despite their name) anthropomorphising the AI in the sense where anthropomorphism implies mistaking actions that resemble human behaviour for actions driven by the same intentional forces. Anthropic's claims state, more explicitly, that they have enough evidence to say they cannot rule out concerns for its welfare. They are not misinterpreting signs; they are interpreting them and claiming that you can't definitively rule out their ability.
You'd have to commit yourself to believing a massive number of implausible things in order to treat AI consciousness as even remotely plausible.
If there weren't a long history of science fiction, going back to the ancients, about humans creating intelligent human-like things, we wouldn't be taking this possibility seriously. Couching language in uncertainty and addressing possibility still implies such a possibility is worth addressing.
It's not right to assume that the negative reactions are due to offense (over, say, the uniqueness of humanity) rather than to recognizing that the idea of AI consciousness is absurdly improbable, and that otherwise intelligent people are fooling themselves into believing a fiction to explain this technology's emergent behavior, which we can't currently fully explain.
It's a kind of religion taking itself too seriously -- model welfare, long-termism, the existential threat of AI. It's enormously flattering to AI technologists to believe that humanity's existence or non-existence, and the existence or non-existence of trillions of future persons, rests almost entirely on the work this small group of people does over the course of their lifetimes.
>You'd have to commit yourself to believing a massive number of implausible things in order to treat AI consciousness as even remotely plausible.
We have a few data points. We generally accept that human consciousness exists, and thus we accept that there can be conscious things. We can either accept or deny that the human brain operates entirely by cause and effect.

If we deny it, then we are arguing that some required part of its nature is uncaused. Any uncaused thing must be random, because anything you can observe that enables you to discern a pattern of behaviour is, by definition, a cause. I have not seen a compelling argument that this randomness could in any way give rise to intention.

The other path is sometimes called neurophysiological determinism. While acknowledging that there are elements of quantum randomness in existence, it considers them to play no part in the cause-and-effect chain of human consciousness other than providing noise. A decision can be made to follow the result of the noise, as one might flip a coin, but the determination to do so must be causal in nature; otherwise you are left with nothing but randomness.
In short, we make decisions based upon what is, not what isn't. If we accept that human consciousness is the result of causal effects, by what means can we declare a machine that processes things in a causal manner incapable of doing the same?
The easy out is to invoke magic: say that we have a soul, that God did it, or any manner of, by definition, unprovable influences that make it just so. Doing that requires you to declare that the mechanism for consciousness is unprovable, and that it is an article of faith that computers are incapable of thinking. As soon as you can prove it, it ceases being magic and becomes a real-world cause.
I don't claim to know that any computer exists that has an experience comparable to a human's, but I find it very hard to accept that it could never be the case.