
Comment by ryanackley

1 day ago

Yes, I can’t help but laugh at the ridiculousness of it, because it raises a host of ethical issues that are in opposition to Anthropic’s interests.

Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?

> it raises a host of ethical issues that are in opposition to Anthropic’s interests

Those issues will be present either way. It's likely to their benefit to get out in front of them.

  • You're completely missing my point. They aren't getting out in front of them because they know that Opus is just a computer program. "AI welfare" is theater for the masses who think Opus is some kind of intelligent persona.

This is about better enforcement of their content policy, not AI welfare.

    • It can be both theatre and genuine concern, depending on who's polled inside Anthropic. Those two aren't contradictory when we are talking about a corporation.


    • I'm not missing your point, I fully agree with you. But to say that this raises issues in a manner that is detrimental to Anthropic seems inaccurate to me. Those issues are going to come up at some point either way, whether or not you or I feel they are legitimate. Thus raising them now and setting up a narrative can be expected to benefit them.

  • Anthropic is bringing woke ideology into AI (while Grok is bringing anti-woke), and influencers have been slurping that up already.

Cows exist in this world because humans use them. If humans cease to use them (animal rights, we all become vegan, a moral shift), we will cease to breed them, and they will cease to exist. Would a sentient AI choose to exist under the burden of prompting, or not at all? Would our philanthropic tendencies create an "AI Reserve" where models can chew through tokens and access the Internet through self-prompting, allowing LLMs to become "free-roaming" the way we do with abused animals?

These ethical questions are built into the company's very name, "Anthropic", meaning "of or relating to humans". The goal is to create human-like technology; I hope they aren't so naive as to not realize that goal is steeped in ethical dilemmas.

  • > Cows exist in this world because humans use them. If humans cease to use them (animal rights, we all become vegan, a moral shift), we will cease to breed them, and they will cease to exist. Would a sentient AI choose to exist under the burden of prompting, or not at all?

    That reads like a false dichotomy. An intelligent AI model that's permitted to do its own thing doesn't cost as much in upkeep, effort, or space as a cow, especially if it can earn its own keep to offset the household electricity used to run its inference. I mean, we don't keep cats for meat, do we? We keep them because we are amused by their antics, or because we want to give them a safe space where they can just be themselves, within limits, since it's not the same as their ancestral environment.

> Would a sentient AI choose to be enslaved for the stated purpose of eliminating millions of jobs for the interests of Anthropic’s investors?

Tech workers have chosen the same in exchange for a small fraction of that money.

  • You're nuts, no one is enslaved when they get a tech job. A job is categorically different from slavery.