
Comment by buppermint

8 hours ago

Anthropic already has lower guardrails for DoD usage: https://www.theverge.com/ai-artificial-intelligence/680465/a...

It's interesting to me that a company claiming to be all about the public good:

- Sells LLMs for military usage + collaborates with Palantir

- Releases by far the least useful research of all the major US and Chinese labs, minus vanity interp projects from their interns

- Is the only major lab in the world that releases zero open weight models

- Actively lobbies to restrict Americans from access to open weight models

- Discloses zero information on safety training despite this supposedly being the whole reason for their existence

This comment reminded me of a GitHub issue filed last week on Claude Code's GitHub repo.

It alleged that Claude was used to draft a memo from Pam Bondi, and that in the process Claude's constitution was bypassed or simply absent.

https://github.com/anthropics/claude-code/issues/17762

To be clear, I don't believe or endorse most of what that issue claims; it just came to mind.

One of my new pastimes has been morbidly browsing Claude Code issues, as a few issues filed there seem to be from users exhibiting signs of AI psychosis.

Both weapons manufacturers like Lockheed Martin ("defending freedom") and cigarette makers like Philip Morris ("Delivering a Smoke-Free Future") also claim to be for the public good. Maybe don't believe or rely on anything you hear from business people.

> Releases by far the least useful research of all the major US and Chinese labs, minus vanity interp projects from their interns

From what I've seen, the Anthropic interp team is the most advanced in the industry. What makes you think otherwise?

Military technology is a public good. The only way to stop a russian soldier from launching yet another missile at my house is to kill him.

  • I don't think U.S.-Americans would be quite so fond of this mindset if every nation and people their government needlessly destroyed thought this way.

    It doesn't matter whether it happened through collusion with foreign threats such as Israel or through direct military engagement.

    • Somehow I don’t get the impression that US soldiers killed in the Middle East are stoking American bloodlust.

      Conversely, russian soldiers are here in Ukraine today, murdering Ukrainians every day. And then when I visit, for example, a tech conference in Berlin, there are somehow always several high-powered nerds with equal enthusiasm for both Rust and the hammer and sickle, who believe all defence tech is immoral, and that forcing Ukrainian men, women, and children to roll over and die is a relatively more moral path to peace.


  • It's not the only way.

    An alternative is to organize the world in a way that makes it not just unnecessary but actively detrimental to said soldier's interests to launch a missile at your house in the first place.

    The sentence you wrote isn't something you'd write about present-day German or French soldiers. Why? Because there are cultural and economic ties to those countries and their people. Shared values. Mutual understanding. You wouldn't claim that the only way to prevent a Frenchman from killing you is to kill him first.

    It's hard to achieve. It's much easier to play the strongman, to fantasize about a strong military with killing machines that defend the good against the evil. And those Hollywood-esque views are pushed by populists and military industries alike. But they ultimately make all our societies poorer, less safe, and arguably less moral.

You just need to hear the guy's stance on Chinese open models to understand they're not the good guys.

Do you think the DoD would use Anthropic even with lower guardrails?

"How can I kill this terrorist in the middle of a crowd of civilians with at most 20% casualties?"

If Claude answers "Sorry, can't help with that," it won't be useful, right?

So the logic is that they need to answer all the hard questions.

Therefore, as I've said many times already, they're sketchy.