Comment by nilkn

1 day ago

Obviously Anthropic does make a product that could do that -- just give Claude classified data and ask it who to target.

Obviously the military wants to use it for that purpose since they couldn't accept Anthropic's extremely limited terms.

One can easily and immediately infer that the answers to both your questions are yes.

The DoW has explicitly said they don’t want this, and what you are describing is not automated kill drones.

Anthropic’s safeguards already prevent what you are describing — again, the very thing the DoW has said it doesn’t want.

  • I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).

    Giving precedence to words over actions is how you get taken advantage of, abused, deceived, etc.

    • GOOD. I don’t want Anthropic, or anybody else to have their tools used for these things either.

      But Dario is showing weakness here by talking around it. Whatever they were asked to do, they should just be upfront about it.

      6 replies →