Obviously Anthropic does make a product that could do that -- just give Claude classified data and ask it who to target.
Obviously the military wants to use it for that purpose since they couldn't accept Anthropic's extremely limited terms.
One can easily and immediately infer the answers to both your questions are yes.
The DoW has explicitly said they don’t want this, and what you are describing is not an automated kill drone.

Anthropic’s safeguards already prevent what you are describing, which is, again, the thing the DoW has said they don’t want.
I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).
Giving precedence to words over actions is how you get taken advantage of, abused, deceived, etc.
https://x.com/SeanParnellASW/status/2027072228777734474?s=20
Here's the Chief Pentagon Spokesman pointing to the same verbiage and reiterating that they won't agree to those terms of use.
The first sentence of that post is:
> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.
Saying something on twitter is not a guarantee.
Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." The issue is that he wants Anthropic to change the use terms, because "We will not let ANY company dictate the terms regarding how we make operational decisions."
This administration would never lie, no siree! And especially not on Twitter!
I'm torn here. Who should we believe? The normal people or the people who operate exclusively in dishonesty?
And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.
Is a pundit/politician lying to you a new experience?
The DoD is explicitly asking for those things, by forcing a contract renegotiation toward a contract that is identical in every way except for removing the prohibition on those things.

If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.
No, the DoW may be implicitly asking for those things.
That’s the point I’m trying to make here: Anthropic should just say the unsaid thing here.
DoW asked for the following thing: $foo. We won’t give that to them.
That thing is removing the restrictions from the contract.
> Anthropic should just say the unsaid thing here.
> DoW asked for the following thing: $foo. We won’t give that to them.
Anthropic has explicitly said that multiple times, including in the letter we are presently discussing.
$foo is the ability to use Claude for domestic mass surveillance and analysis, and/or fully-autonomous killbots.
I certainly wouldn’t give them the benefit of the doubt.
Then Anthropic should say: this is what the DoW has asked for, and we aren’t able to do it, or don’t want to.
They may not be legally allowed to.