Comment by bambax

9 hours ago

This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.

What I don't get, though, is why the so-called "Department of War" targeted Anthropic specifically. What about the others, especially OpenAI? Have they already agreed to cooperate? Or already refused? Why aren't they part of this?

> What I don't get, though, is why the so-called "Department of War" targeted Anthropic specifically.

Because Anthropic told them no, and this administration plays by authoritarian rules: ten people saying yes doesn't matter, but one person saying no is a threat and an affront. It doesn't matter that there are equivalent or even better alternatives; it wouldn't even matter if the DoD had no interest in using Anthropic. Anthropic told them no, and they cannot abide that.

  • More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.

    • This administration^Wregime has a lot of experience applying high-stakes public pressure and then following up with backroom deals that would make even Jared Kushner blush.

      This is protection racketeering 101! So much so that if any form of a functioning US judicial system makes it past 2028, I'm willing to put money on more than a handful of people in the upper echelons of today's administration getting slapped with RICO charges.

I'm a bit underwhelmed tbh. Here is Anthropic's motto:

"At Anthropic, we build AI to serve humanity’s long-term well-being."

Why does Anthropic even deal with the Department of @#$%ing WAR?

And what does Amodei mean by "defeat" in his first paragraph?

  • DoD and American exceptionalists also believe American foreign policy is in service of humanity's long-term well-being

    • It is all for the benefit of man. We even get to see the man himself daily on television.

    • Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, one that puts zero-sum ethnonationalism at its core.

    • I think the last few months have shown pretty clearly whose service this policy is in. If China were to attack Taiwan, the West would have no moral high ground left.

    • One of the hallmarks of fascist thinking is the dehumanizing of opponents and minorities, so within their own messed up framework, they might even mean it.

  • There was a time (1943?) when dealing with the US Department of War meant serving humanity's long-term well-being.

    • Look, I'm not going to disagree, obviously. But even in those times, you could argue that helping the Department of War would in some ways contribute to deaths you might not want to be a part of. The bombing of Hiroshima and Nagasaki is still widely discussed today for a myriad of reasons, as is the conventional bombing of cities in both Nazi Germany and Japan. We can both agree that fighting the Nazis was a good thing while at the same time having a moral objection to participating in the war effort.

      And I think the stakes have changed today: it's one thing to make bombs that might or might not hit civilians; it's another to make an AI system that gives humans a "score" the military then uses to decide whether they live or die, as some systems already do ("Lavender," used by the IDF, is exactly this).

      Even with the best intentions in mind, you don't know how the systems you build today will be used by the governments of tomorrow.

  • Look up when Anthropic signed a contract with Palantir, and then look up what Palantir does, if you want an even better reality check on how well they follow their ideals. I chuckle every time.

    And nobody knows what he means by "defeat" because no journalist interrogates or pushes back on his grand statements when they hear them. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets to it first, but he never elaborates on why, or on what he expects to happen if the opposite comes to pass. I assume he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such an "empowered democracy" than of China. Because of Greenland; because of "our hemisphere." Hard nope to that.

    Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says so outright in the text. Humanity stops at America's borders.

Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.

Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.

Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions, and the DoD likes their stuff so much that it wants to use it more broadly, so it wants to change the terms of the agreement(s). Anthropic disagrees on some points; the DoD wants to force them to agree.