Comment by spaghetdefects
4 hours ago
Thank you. Anthropic also is culpable in the illegal war against Iran that started with the bombing and murder of an entire girls school.
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.
They've done lots wrong and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)
> If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.
Anthropic claims that superintelligence is coming, that unaligned AI is an existential threat to humanity, and that they are the only ones responsible enough to control it.
If that's your world view, why would you be willing to accept someone's word that they'll only Do Good Things with it? And not just "someone", someone with access to the world's most powerful nuclear arsenal? A contract is meaningless if the world gets obliterated in nuclear war.
I don't think any AI company should get in bed with the military. That being said, if the terms of service have been violated, the account should be canceled.
They basically are canceling the contract, but there are some nuances on Anthropic's side. The contract probably has stipulations that prevent them from ending it overnight, so it might be illegal (but ethical) for them to just turn off the API keys.
Also, doing that might have bad second order effects with bad ethical implications.
For example, when Musk decided to pull the plug on a bunch of starlink terminals, he (intentionally and knowingly) blocked a US-funded attack that would have sunk a big chunk of the Russian navy, which certainly prolonged the Ukraine war. That was clearly an act of treason (illegal).
Anyway, just turning off Claude could kill a bunch of civilians in the region or something. It depends on how deeply it's integrated into military logistics at this point.
Anyway, your point certainly holds for OpenAI:
They walked into a "use ChatGPT for war crimes and illegal domestic surveillance / 'law enforcement'" deal with eyes open, and pretty obviously lied about it while the deal was being signed. I don't see any ethical nuance that would even partially excuse their actions.