
Comment by NickAndresen

1 day ago

"They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." from Dario's statement (https://www.anthropic.com/news/statement-department-of-war)

Supply chain risk? Seems the risk here is the US Gov't wanting free rein to do whatever they want, whenever they want.

Look no further than the famous exposé by Mark Klein, the former AT&T technician and whistleblower who exposed the NSA's mass surveillance program in 2006, revealing the existence of "Room 641A" in San Francisco. He discovered that AT&T was using a "splitter" to copy and divert internet traffic to the NSA, proving the government was monitoring massive amounts of domestic communication.

  • And I think one big difference between 2006 and now is that back then nobody knew about it; now they just request it in public.

  • I served on the eboard of CWA local 9410 when all of that was going down.

    Words cannot describe how crazy things were at that time.

    I feel like someone will make a movie about it someday.

The real question we should be asking is what others HAVE agreed to. Has OpenAI just agreed to let the government go crazy with their models?

  • If you read Anthropic's statement carefully, they explicitly confirm they are already working with the U.S. government on a range of military and national security use cases, including many areas that clearly relate to real-world lethal operations.

    They are only refusing two narrow, but important categories. Framing this as blanket "refusal to support the DoD" feels like an angry, reactive own goal rather than a careful reading of what they actually said.

    So far the march toward dictatorship keeps being detoured by sheer incompetence. In any case, it's hard to seize power when you can't organize a group chat...

    • Basically now all those projects are screwed and need to restart with another provider. I'm sure that's not going to be a massive PITA and delay for all involved.

  • Elon has agreed to all demands and can't wait for gigahitler to take the reins. I swear there is no room for good guys in this, is there?

  • Can someone in plain terms explain what this is really about?

    Anyone can use Claude afaik?

    • From the public comments over the last few days, my guess is they want a militarized version of Claude. Starting with a box they want to put in the basement of the Pentagon, where Anthropic can't just switch off the AI. Then some guardrails are probably quite bothersome for the military, and they want them removed. Concretely, if you try to vibe-target your ICBMs, Claude will hopefully tell you that's a bad idea.

      Now, my guess is that in the ensuing lawsuit Anthropic's defense will be that this is just not a product they offer, somewhat akin to ordering Ford to build a tank variant of the F-150.


    • Claude won't answer questions about what cities you should nuke in what order. The Pentagon wants Claude to answer those sorts of questions for them.

      Edit: oops, I misunderstood. This seems to be more about contractual restrictions.


    • They want Claude to process tasks like "identify the terrorists in this photo" and "steer this drone towards the terrorists" — Anthropic refused.

    • I started to answer but idk what you mean by the second question. Long story short, the Department of "War" wants Anthropic to say there are no restrictions on their use of Claude; Anthropic wants to say you can't use Claude for domestic mass surveillance or for automating killing people, domestically or in foreign countries. The rest is just complication. And don't peer too closely at the "Do"W" wants Anthropic to say $X" part: the Team Red line (or whatever's left of them publicly after this last year) is basically "you can't tell the gov't what it can and can't do, that's it," not that the Do"W" will actually use it for that.

    • > Can someone in plain terms explain what this is really about?

      This administration built almost entirely of dunces and conmen has convinced itself/been convinced that chatbots will help them in deciding where to send nukes, and/or they are invested in the incredibly over-leveraged companies engaged in the AI-boom and stand to profit directly by siphoning taxpayer dollars to said companies. My money is on the latter more than the former, but they're also incredibly stupid, so who's to say, maybe they actually think Claude can give strategic points.

      The Republicans have abandoned any pretense of actual governance in favor of pulling the copper out of the White House walls to sell, since they will have an extremely hard time winning any election ever again. After decades of crowing about the cabal of pedophiles that runs the world, we now know not only how true that actually is, but that the vast majority are Conservatives and their billionaire buddies: the entire foundation and financial backing of what's now called the alt-Right, with some liberals in there for flavor too, of course.

      If this shit was going down in France, the entire capital would have been burned to the ground twice over by now.


  • Yes. All companies that deal with the government have agreed to let the government do whatever it wants within the bounds of whatever it is those companies do.

It's scary to me that there is a significant voting bloc out there that doesn't see this kind of zero-integrity (and self-serving) behavior as disqualifying in anyone wielding authority.

Worse, they act like it's virtuous.

Is this the same Administration that reversed a previous block and allowed NVIDIA to sell H200s to China?

That's a shame. They might at least continue to work together to spy on foreigners. I don't understand the fuss anyway, what do claude models do that gpt and gemini can't?

  • As a foreigner, I see this as a great thing! I was about to cancel my Claude sub, but now I might hold on to it for a little while and see how this plays out.

  • it's more the way they do them... you've used them, right?

    • Sure, but I don't find them irreplaceable. Actually, Anthropic models have dropped out of my top-ten usage this month. I only use Opus occasionally for writing plans; it's been pretty unreliable at executing.

It feels like negotiating a contract for a job with a toxic employer you don't yet know is toxic.

Trump wrote a long rant on Truth Social and ordered ALL federal agencies to stop using Anthropic, not just the Department of Defense. This is straight-up authoritarian.

Meanwhile, irrelevant "AI Czar" David Sacks, member of the PayPal mafia alongside known Epstein affiliates Elon Musk and Peter Thiel, is furiously retweeting all the posts from Trump, Hegseth, and other accounts. He is such a coward and anti American:

https://xcancel.com/davidsacks

[flagged]

  • Circus-grade contortionism here.

    • Is it? Are you claiming nuclear bombs are not both essential and also a risk to national security?

      Aren't all the AI companies saying that AI poses even a greater threat to humanity than nukes?

      How can these two not be deeply connected? If a technology poses humanity extinction level of risk of course it will also be a matter of national security - how can it not be?


I don’t see a contradiction here. If control is out of the hands of decision makers, that’s a supply chain risk. Were it not for that, the service is seen as critical to national security.

I dunno, safeguard seems like a weasel word here. It’s just reserving control to one party over another. It’s understandable why the DoD(W) wouldn’t like that.