
Comment by arugulum

1 day ago

> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

But they did.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that. In both cases, the two parties can claim to agree on the principles, but when push comes to shove, they differ on who decides whether the principles have been violated.

  • Seems Anthropic did not understand the questions they were asked. From the WaPo:

    >A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

    >It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.

    >An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.

    I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.

    https://web.archive.org/web/20260227182412/https://www.washi...

    • "It’s the kind of situation where technological might and speed could be critical to detection and counterstrike"

      Missile detection and the decision to make a (nuclear) counterstrike are 2 different things to me, but apparently the department of war wants both, so it seems not "just" about missile detection.

    • Is there any reason at all to believe the account of the unnamed "defence official"? Whatever your position on this administration, you know that it lies like the rest of us breathe. With a denial from the other side and a lack of any actual evidence, why should I give it non-negligible credence?


    • Why the fuck would you use an LLM to determine whether a nuclear missile was hurtling towards you? The question makes no sense, and so you get a nonsensical answer.

      Seems not unlikely that Anthropic was manipulated into this position for purposes of invalidating their contract.

    • > If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

      I'm sorry but lol

    • Are you serious? This is the kind of thing you'd ask a clarifying question on and get information back immediately. Further, the huge overreaction from Hegseth shows this is a fundamental disagreement.


    • > could the military use Anthropic’s Claude AI system to help shoot it down?

      What a joke. I suggest folks read up on the very poor performance of US ICBM interceptor systems. They're barely a coin flip, in ideal conditions. How is Claude going to help with that? Push the launch interceptor button faster? Maybe Claude can help design a better system, but it's not turning our existing poor systems into super capable systems by simply adding AI.

  • This. Sam is going to pretend they aren’t going to use it for that because his company is collapsing in losses. He will never audit.

    Probably also got assurances about a bailout when OpenAI collapses.

I'm sure it's a matter of interpretation. Anthropic thinks the DoW's demands will lead to mass surveillance and auto-kill bots. The DoW probably disagrees with that interpretation, and all OpenAI needs to do is agree with the DoW.

My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.

  • Why do you choose to call it the "DoW"? Its official name is the Department of Defense; it was titled that way by Congress, and only Congress can change it. What is your motivation in using a term that the current administration has started to use? Do you also use the Gulf of America when referring to the body of water that defines the southern edge of the USA?

    • Don't you think it is more to-the-point to call it what it is, and what the people running it (with, I'll bet everything I have, absolute immunity) are doing and intend to do with it?

      It's like the one honest thing they've done


    • It's the term used by Sam Altman in the announcement. Maybe aim your anger there, to someone knowingly helping them in their attempt to turn the department into one of aggression.

    • Exactly this! Just like the Gulf of Mexico is still called the Gulf of Mexico, if we just ignore his ramblings and continue calling it the Department of Defense, we undermine his whole point. If we fall for all their crap and just accept it, then we lose in the end. Any resistance to a Fascist government is good resistance. Anything that makes their lives a little shittier is good. Better that they go around having tantrums about how they renamed it but no one is paying attention.

  • > The DoW probably disagrees with that interpretation

    Or perhaps, maybe, just a little maybe, DoW is getting absolutely excited about mass surveillance and kill-bots?

  • Not that this will matter on any individual level, but I canceled my ChatGPT subscription after this.

    I didn't have much of an opinion of Altman before but now I think he's a grifting douche.

Anthropic has safeguards baked into the model; this is the only way to make sure it's harder for the DoD to misuse it. A pinky swear from the DoD means nothing.

Human responsibility is not the same as human decision making.

And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.

This is too transparent even for sama.

  • >Human responsibility is not the same as human decision making.

    this is going to end up being interpreted as "well, the president signed off on the operation. See, there's a human in the loop!", isn't it?

Unrelated, but want to buy a bridge?

You could recoup your investment in a year by collecting toll. Expedited financing available on good credit!