Comment by jedberg

1 day ago

From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.

It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.

I think it's dumber than that; the terms of the contract, as posted by OpenAI (https://openai.com/index/our-agreement-with-the-department-o...), are basically just "all lawful purposes" plus some extra words that don't modify that in any significant way.

> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.

  • > will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

    That says it all. Those laws get issued the same way the tariffs did.

That isn't my understanding. OpenAI and others want to limit the government to doing what is lawful based on what laws the government writes. Anthropic wants to draw its own line on what is allowed regardless of laws passed.

  • I’m so confused by the focus on “all lawful use.” Yeah, of course a contract without terms of use is implicitly restricted by laws. But contracts with terms of use are incredibly common, present in almost every contract ever signed.

    • The administration objected to those terms of use. Anthropic refused to compromise on them. OpenAI agreed to permit "all lawful use" but claims to have insisted on what at first glance appears to be terms of use in their contract. But in reality those terms permit all lawful use and thus are a noop.

    • If the president does it, it's not illegal.

      These were words issued by the president - which means at face value, if Trump orders it, it's not illegal - that was the fight that was lost today.

    • "All lawful use" is the weasel word that makes the whole contract useless for the purposes of safety.

      That is why it is the focus of this debate.

    • "All lawful use" in the hands of those who decide what is lawful is effectively a blank check. They want terms of use that say "I do what I want."

The key difference is that Anthropic aired their disagreement with the DoD publicly, and the DoD is not going to work with a company that tries to exert any amount of control over their relationship via the public sphere. Same goes for Trump.

I think Anthropic knew full well that publishing their disagreement would sink the deal and the relationship, and I think they also calculated (correctly) that that act of defiance would get them good publicity and potentially peel away some of OpenAI's user base. I think this profit incentive happened to align with their morals, and now here we are.

Anthropic wants to enforce them via the language of the contracts and take a hands-off approach. OpenAI has a contract that is paired with humans in the room (FDEs) who can pull the plug.

No, it’s significantly worse than that. OpenAI has required zero actual guarantees from the government, and Sam, the psychopath, is lying to you. All the government has to do is have a lawyer say it’s legal, and most of the government’s lawyers are folks who were involved in attempting to overthrow the last election and should’ve been convicted of treason, so that means very little.

Sam stands for nothing except his own greed