Comment by itsyonas
6 days ago
> but I don’t think violence is funny or justified
Well, that's okay, because even Sam Altman disagrees with you. He absolutely believes that violence, including deadly violence, is justified - hence his contract with the US Department of War to use their systems in kill chains.
Perhaps the problem is that whoever threw the cocktail didn't use AI to select him as a target, or maybe he didn't receive payment for throwing it? Because what other difference is there?
I mostly agree with you - he seemed happy for the chance to play the victim. When the system is working, war is different because there is a democratic process behind its approval (Iran is obviously showing that the system is breaking down).
But just because horrible people exist in positions of power doesn't mean I have to become horrible myself. I accept that there is a threshold where that changes, but I think we would disagree about whether we've hit that threshold. If anything, violence now just gives more excuse to justify further consolidation of power ("Look, I got attacked! The anti-AI people are crazy; any criticism of me is just encouraging them!"). Imagine if it had been a serious attack on sama; they could spin it into some serious gains for themselves.
Could you explain how the Vietnamese were involved in the US democratic process that resulted in around 3 million of their people dying? Similarly, how are the Iranians currently involved in the US democratic process to veto the use of AI targeting against them? As a German citizen, how can I object to being surveilled by OpenAI products used by US agencies?
It turns out that those affected by this are actually excluded from the process by design.
One of the more curious perks of being a democracy seems to be that you can also democratically (within your own country) decide the fate of people in other, nondemocratic countries, and then get to enforce those decisions by military force...
I don't think that OpenAI necessarily enforces or fundamentally respects the democratic process. After the recent Pentagon spat with Anthropic, OpenAI did not change its stance to even conditionally demand lawful usage of its products.
OpenAI can market democratic values very easily; I'm sure the White House loves that kind of dog-and-pony show. But it's pretty clear that OpenAI does not genuinely care about the rule of law, let alone about preventing humanitarian disasters abetted by ChatGPT.
There isn't anybody who wants to solve problems for people to vote for anymore.