Comment by _alternator_

18 hours ago

The language allows for the DoD to use the model for anything that they deem legal. Read it carefully.

It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.

As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.

Sam Altman is either a fool, or he thinks the rest of us are.

No, that is incorrect.

This is an objective standard as a matter of contract interpretation. If it were the government’s right to determine the lawfulness of a usage, the contract would say so. Perhaps it does elsewhere in the agreement, but that’s not the case here.

  • Ok, honest question: Can you point to language in the contract that definitively limits the use of OAI tools that’s beyond what current laws or regulations require?

    • Sorry, I think we may be talking past each other. The language you quoted is an objective standard. If, for example, a court ruled that the government had violated the Constitution using the tool, that language would be breached. I don’t think anything I’ve seen (though we haven’t seen the whole agreement!) allows the government to use the product in violation of the law. Anthropic wanted to go further by limiting the uses in specific cases.