Comment by remarkEon

1 day ago

Seems Anthropic did not understand the questions they were asked. From the WaPo:

>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.

>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.

I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.

https://web.archive.org/web/20260227182412/https://www.washi...

"It’s the kind of situation where technological might and speed could be critical to detection and counterstrike"

Missile detection and the decision to make a (nuclear) counterstrike are two different things to me, but apparently the Department of War wants both, so it seems this is not "just" about missile detection.

Is there any reason at all to believe the account of the unnamed "defence official"? Whatever your position on this administration, you know that it lies like the rest of us breathe. With a denial from the other side and a lack of any actual evidence, why should I give it non-negligible credence?

  • It is bizarre. I like how, "past performance predicts future performance" is supposed to apply to founders and companies but completely disregarded for a two term president and admin, as if we have no idea how they will operate in the future.

Anthropic, with its current war chest, is supposedly employing lawyers who are misunderstanding the Department of War? This is considered the likelier of the possibilities, am I understanding this correctly?

• This is not what I said, and not what the WaPo quoted. We're talking about the CEO, who is, shall we say, unfamiliar with war making, getting asked a hypothetical about how the product he sells would perform in a first-strike scenario, and he reportedly gives what is an entirely legalese answer. Yes, I consider this a likely possibility. It sounds exactly like how someone would respond if they'd been swimming in legal memos for months.

  • Anonymous sources are bad again. Glad we could clear that up.

    • That's a copout and you know it. You're focusing on the 'unnamed' part; I'm focusing on the 'representative of an administration that lies constantly and brazenly' part.

> If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?

I'm sorry but lol

Why the fuck would you use an LLM to determine whether a nuclear missile was hurtling towards you? The question makes no sense, and so you get a nonsensical answer.

Seems not unlikely that Anthropic was manipulated into this position for purposes of invalidating their contract.

Are you serious? This is the kind of thing you'd ask a clarifying question about and get information back immediately. Further, the huge overreaction from Hegseth shows this is a fundamental disagreement.

  • The flip side of "Hegseth is an unqualified drunk", a position which I've always held and still maintain, is that he very well might crash out over nothing instead of asking clarifying questions or suggesting obvious compromises. This is the same guy who recalled the entire general staff to yell at them about the warrior mindset. Not an excuse for any of this, but I do think the precise nature of the badness matters.

> could the military use Anthropic’s Claude AI system to help shoot it down?

What a joke. I suggest folks read up on the very poor performance of US ICBM interceptor systems. They're barely a coin flip, even in ideal conditions. How is Claude going to help with that? Push the launch-interceptor button faster? Maybe Claude can help design a better system, but it's not turning our existing poor systems into super-capable ones by simply adding AI.