Comment by 3eb7988a1663

6 hours ago

I have no idea what exactly Anthropic was offering the DoD, but if there was an LLM product involved, it's possible that the existing guardrails prevented the model from executing on the DoD's vision.

"Find all of the terrorists in this photo", "Which targets should I bomb first?"

Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. The DoD would have required a specially trained product without those limitations.