OpenAI just took a major US military contract from Anthropic because Anthropic had morals and wouldn't let the US military use Claude to surveil or attack US citizens ...
... and OpenAI didn't. The military said (effectively) "we need to be able to use AI illegally against our own citizens", and OpenAI said "we'll help!"