Comment by bhouston

16 hours ago

> Isn’t that the opposite of what he’s saying?

The quote was from 2022 and describes the first pivot: getting into AI to prevent it from becoming a Terminator-style entity. The latest pivot was not in the quote but is the topic of this current Hacker News post, where he takes credit for dropping the safety pledge:

"That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance."

I expect the next pivot will be that we need to allow the US military to use Anthropic to kill people, because otherwise they will use a less pure AI to kill people, and our Anthropic is better at only killing the bad guys, thus making it the lesser evil.