
Comment by cheema33

3 hours ago

> OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

I believe this understanding is correct. The issue many people have these days with the Dept. of War, and with most of the Trump administration, is that they have little respect for the law. They follow only the rules they like and openly ignore the ones that are inconvenient.

The Dept. of "War" should have zero problem agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Even calling them dishonest brokers seems charitable.

I don’t care who is in the White House. Snowden revealed the crimes of the NSA in 2013, when Obama was president. They’re all going to want to use AI for mass surveillance.

I find it confusing in most directions.

Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may be quite true, but there would still need to be more to the story for it to make sense.

Ex: DoW officials state that they were shocked their vendor checked in on whether signed contractual safety terms were being violated: they require a vendor who won't do such a check. But that opens up other confusing oversight questions. E.g., instead of a backchannel check, would they have preferred going straight to the IG? Or the IG more aggressively checking these things unasked, so vendors don't? It's hard to imagine such an important and publicly visible negotiation being driven by internal regulatory politicking.

I wonder if there's a straighter line for all these things. Irrespective of whether folks like or dislike the administration, they love hardball negotiations and to make money. So as with most things in business and government, follow the money...

  • I have no idea what exactly Anthropic was offering the DoD, but if there were an LLM product involved, it's possible that the existing guardrails prevented the model from executing on the DoD's vision.

    "Find all of the terrorists in this photo", "Which targets should I bomb first?"

    Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. The DoD would require a specially trained product without limitations.

Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adversaries through any means necessary.

  • There's a reason it's unpopular.

    If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)?

    Also, the core mission of the military is not "killing its adversaries through any means necessary". It is to defend state interests. Some people have a belief that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong.

    Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome. It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly for everyone, because nobody has any other reason to give you what you want.

    The actual military is not evil. Your conception of it is.

    • My conception is that the world would be a much simpler place if war were total. No one would start one unless they were 200% sure they could win. And we would all go through military training, just in case, you know, a neighbor drank too much last night and thinks he can win against you.

      > The threat of violence is much more powerful than actually committing violence.

      While I agree with this statement, the threat only works if, from time to time, you apply violence to reinforce your capability and willingness to actually do it. And the US is really good at actually being violent, so others don't even think about doing something against it, at least the majority of countries anyway.

    • >> Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adversaries through any means necessary.

      > The actual military is not evil. Your conception of it is.

      You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically elected government?

      To give a different hypothetical example: should Microsoft be allowed to put terms in its Windows contracts with the government stipulating that Windows cannot be used to create or enforce certain tax policies or regulations that Microsoft disagrees with? Windows is everywhere, and pretty much every government process touches it at some point, so such a term would carry a lot of power.

  • Suppose I start a small business that sells apples, and the US government comes to me and says "we want to buy your apples and fire them at high speed to" (these are now your words) "kill adversaries through any means necessary."

    If I say, no, then am I stopping the military?

    I feel like it is reasonable that I can say "no, I don't want to sell you my apples."

    I cannot for the life of me figure out why that means I am stopping the military from killing people. The US Military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it.

    • More to the point, if everyone stopped selling anything to the military, they would still be able to kill people with their bare hands. People are arguably very good at killing people, and it takes civilization to train us not to kill each other.

    • In the context of the larger discussion, if you already sold apples to the military, you cannot go to them and say you don't like how they're using the apples you sold them.

  • Any company is free to choose its business partners and to set terms for them. "Don't like our terms? Don't partner with us."

    If the government can force any private company to work specially for it, then the US is no better than the PRC.

  • Yes, Musk is guilty of treason for exactly that reason. He directly sabotaged a major US military operation in Ukraine.

    However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract.

    On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic's models. Misusing their products in the way Trump has been (in violation of the contract) is a war crime, because it has already led to major mistakes, targeted civilians, etc.