
Comment by cyrusradfar

5 days ago

Something is deeply troubling when a company proclaims "We want to protect people" and the government's response is "we can't work with you."

It's baffling that countless use cases for real government efficiency, ones that would actually help people, would be sacrificed because Anthropic refused to build killer robots.

Note that the threat in the Axios report the OP is based on is no longer "we can't work with you" but now to "invoke the Defense Production Act to force the company to tailor its model to the military's needs."

On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h...

  • These AI companies and billionaires won't learn any lessons, but I hope the likes of Marc Andreessen stop getting treated as intelligent, well-reasoned actors in the media and on podcasts after they bitched and moaned about the Biden administration overstepping. If someone thinks the pragmatic approach to resisting reporting requirements and export controls is to cozy up with the devil who will force you into worse capitulations or just seize your whole company, then that someone (looking at you, Marc) is a fucking moron.

In a way it's a testament that the safeguards are working for someone, because the internet at large seems to be full of bypasses.

The military is about killing people tho

  • Isn't there a saying about the US military being a logistics firm that sometimes carries guns? There are a lot of military activities that don't involve violence.

    • But all the activities pursue the threat of violence, although almost all of it never materializes. It's logistics to build things so you might do violence.