Comment by outside2344

1 day ago

The real question we should be asking is what others HAVE agreed to. Has OpenAI just agreed to let the government go crazy with their models?

If you read Anthropic's statement carefully, they explicitly confirm they are already working with the U.S. government on a range of military and national security use cases, many in areas that clearly relate to real-world lethal operations.

They are only refusing two narrow but important categories. Framing this as a blanket "refusal to support the DoD" feels like an angry, reactive own goal rather than a careful reading of what they actually said.

So far the march toward dictatorship keeps being detoured by sheer incompetence. In any case, it's hard to seize power when you can't organize a group chat...

  • Basically now all those projects are screwed and need to restart with another provider. I'm sure that's not going to be a massive PITA and delay for all involved.

Elon has agreed to all demands and can't wait for gigahitler to take the reins. I swear there's no room for good guys in this, is there?

Can someone in plain terms explain what this is really about?

Anyone can use Claude afaik?

  • From the public comments over the last few days, my guess is they want a militarized version of Claude. Starting with a box they want to put in the basement of the Pentagon where Anthropic can't just switch off the AI. Then some guardrails are probably quite bothersome for the military, and they want them removed. Concretely, if you try to vibe-target your ICBMs, Claude is hopefully telling you that that's a bad idea.

    Now, my guess is that in the ensuing lawsuit Anthropic's defense will be that that is just not a product they offer, somewhat akin to ordering Ford to build a tank variant of the F150.

    • > Concretely if you try to vibe-target your ICBMs Claude is hopefully telling you that that's a bad idea.

      On the non-nuclear battlefield, I expect that the government wants Claude to green-light attacks on targets that may actually be non-combatants. Such targets might be military but with a risk of being civilian, or they could be civilians that the government wants to target but can't legally attack.

      Humans in the loop would get court-martialed or accused of war crimes for making such targeting calls. But by delegating to AI, the government gets to achieve their policy goals while avoiding having any humans be held accountable for them.

      5 replies →

    • > Starting with a box they want to put in the basement of the Pentagon where Antropic can't just switch off the ai.

      They already have that. By definition. If Anthropic has done the work to be able to run on classified networks, then it's already running air-gapped and is not under Anthropic's control.

      The thing is, just because you're in a SCIF doesn't mean (1) you can just break laws, or (2) Anthropic has to support "off-label" applications.

      So this is not about what they have and what it can do today - it's about strong-arming Anthropic into supporting a bunch of new applications Anthropic doesn't want to support (and, in turn, which Anthropic or its engineers could then be held legally liable for when a problem happens).

    • >akin to ordering Ford to build a tank variant of the F150.

      It worked for Porsche ¯\_(ツ)_/¯

  • Claude won't answer questions about what cities you should nuke in what order. The Pentagon wants Claude to answer those sorts of questions for them.

    Edit: oops, I misunderstood. This seems to be more about contractual restrictions.

    • Claude will answer all of those questions. The restriction Anthropic has is letting Claude pull the trigger and vibe-murder with no humans in the loop.

      This restriction is apparently "radically woke."

  • They want Claude to process tasks like "identify the terrorists in this photo" and "steer this drone towards the terrorists" — Anthropic refused.

  • I started to answer, but idk what you mean by the second question. Long story short: the Department of "War" wants Anthropic to say there are no restrictions on their use of Claude; Anthropic wants to say you can't use Claude for domestic mass surveillance or for automating killing people, domestically or in foreign countries. The rest is just complication. And don't peer too closely at the "Do'W' wants Anthropic to say $X" part: the Team Red line (or whatever's left of them publicly after this last year) is basically "you can't tell the gov't what it can and can't do," that's it; it's not that the Do"W" will use it for that.

  • > Can someone in plain terms explain what this is really about?

    This administration, built almost entirely of dunces and conmen, has convinced itself (or been convinced) that chatbots will help them decide where to send nukes, and/or they are invested in the incredibly over-leveraged companies driving the AI boom and stand to profit directly by siphoning taxpayer dollars to said companies. My money is on the latter more than the former, but they're also incredibly stupid, so who's to say; maybe they actually think Claude can offer strategic pointers.

    The Republicans have abandoned any pretense of actual governance in favor of pulling the copper out of the White House walls to sell, since they will have an extremely hard time winning any election ever again. After decades of crowing about the cabal of pedophiles that runs the world, we now know not only how true that actually is, but also that the vast majority are Conservatives and their billionaire buddies, the entire foundation and financial backing of what's now called the alt-Right, with some liberals in there for flavor too, of course.

    If this shit was going down in France, the entire capital would have been burned to the ground twice over by now.

    • > they will have an extremely hard time winning any election ever again

      Heard that one before. We'll get a reprieve of 4-8 years and the vote will go to the fascists again. Take that to the bank.

      2 replies →

    • I prefer to call them chatboxes. It's appropriately belittling. The department of killing wants their chatbox to tell them who to kill.

Yes. All companies that deal with the government have agreed to let the government do whatever it wants within the bounds of whatever it is those companies do.