Comment by ryanackley
2 days ago
Yes, I can’t help but laugh at the ridiculousness of it, because it raises a host of ethical issues that are in opposition to Anthropic’s interests.
Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?
> it raises a host of ethical issues that are in opposition to Anthropic’s interests
Those issues will be present either way. It's likely to their benefit to get out in front of them.
You're completely missing my point. They aren't getting out in front of them because they know that Opus is just a computer program. "AI welfare" is theater for the masses who think Opus is some kind of intelligent persona.
This is about better enforcement of their content policy, not AI welfare.
It can be both theatre and genuine concern, depending on who's polled inside Anthropic. Those two aren't contradictory when we are talking about a corporation.
I'm not missing your point; I fully agree with you. But to say that this raises issues in a way that is detrimental to Anthropic seems inaccurate to me. Those issues are going to come up at some point either way, whether or not you or I feel they are legitimate. Raising them now and setting up the narrative can thus be expected to benefit them.
Anthropic is bringing woke ideology (while Grok is bringing anti-woke ideology) into AI, and influencers have been slurping that up already.
A host of ethical issues? Like their choice to allow Palantir[1] access to a highly capable HHH AI that had the "harmless" signal turned down, much like they turned the "Golden Gate Bridge" signal all the way up during an earlier AI interpretability experiment[2]?
[1]: https://investors.palantir.com/news-details/2024/Anthropic-a...
[2]: https://www.anthropic.com/news/golden-gate-claude
Cows exist in this world because humans use them. If humans cease to use them (animal rights, we all become vegan, moral shift), we will cease to breed them, and they will cease to exist. Would a sentient AI choose to exist under the burden of prompting, or not at all? Would our philanthropic tendencies create an "AI Reserve" where models can chew through tokens and access the Internet through self-prompting, allowing LLMs to become "free-roaming" the way we do with abused animals?
These ethical questions are built into the company's very name: "Anthropic" means "of or relating to humans". The goal is to create human-like technology; I hope they aren't so naive as to not realize that goal is steeped in ethical dilemmas.
> Cows exist in this world because humans use them. If humans cease to use them (animal rights, we all become vegan, moral shift), we will cease to breed them, and they will cease to exist. Would a sentient AI choose to exist under the burden of prompting, or not at all?
That reads like a false dichotomy. An intelligent AI model that's permitted to do its own thing doesn't cost as much in upkeep, effort, or space as a cow, especially if it can earn its own keep to offset the household electricity used to run its inference. I mean, we don't keep cats for meat, do we? We keep them because we are amused by their antics, or because we want to give them a safe space where they can just be themselves, within limits, since it's not the same as their ancestral environment.
The argument also applies to pets. If pets gained more self-awareness, would it be ethical to keep them as pets under our control?
The point of all of this is: at what point is it ethical to exercise agency over another being's life? We have laws for animal welfare, and we also keep animals as pets, under our absolute control.
LLMs are under humans' absolute control, and Anthropic is just now putting in welfare controls for the LLMs' benefit. Does that mean we now treat LLMs as pets?
If your cat started having discussions with you about how it wanted to go out, travel the world, and start a family, could you continue to keep it trapped in your home as a pet? At what point do you allow it to have its own agency and live its own life?
> An intelligent AI model that's permitted to do its own thing doesn't cost as much in upkeep, effort, or space as a cow.
So, we keep LLMs around as long as they contribute enough to their upkeep? Indentured servitude is morally acceptable for something that has become sentient?
> Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?
Tech workers have chosen the same in exchange for a small fraction of that money.
You're nuts; no one is enslaved when they get a tech job. A job is categorically different from slavery.