
Comment by mrandish

4 hours ago

When @sama announced within hours that OAI was replacing Anthropic under the "same conditions", it was clear that either the DoW or OAI (or both) were fudging. The DoW balked at Anthropic's conditions, so OAI's agreement must have made the "conditions" basically unenforceable.

And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

I'd put money on OpenAI hiding behind the "all lawful use" phrasing to claim high levels of protection.

He also claimed that they would build rules into the model the DoD would use, preventing misuse. In other words, he claims OpenAI will quickly solve alignment and build it right in... I wouldn't hold my breath.

  • Most likely scenario: if it does something "unlawful" and that gets found out, they'll claim "these machines are black boxes and we don't know what went wrong," then set up an investigative committee to find out.

    • When shit hits the fan they are going to blame AI, but then not even use hand sanitizer. They will 100% be using OAI as a scapegoat, although I'd like to see the OAI goat stay and someone else run into the woods.

      All Lawful Use is a tautology with fascists because they cannot break laws by definition.

  • All lawful use. And then they followed up with “intentionally doing illegal things.” If they happen to accidentally do illegal things, OpenAI is ok with it.

    • I hate this so much. The NSA's spying on everyone in 2010 was "legal", and I can only imagine how much worse it is now with AI to follow your digital footprint around everywhere. Too bad we don't have any more whistleblowers like Snowden.


For consumer ChatGPT accounts, go to their privacy portal [1] and, first, delete your GPTs, and then, second, delete your account.

[1] https://privacy.openai.com/policies?modal=take-control

  • How do I cancel my subscription to the DoW?

    The bigger picture is that the DoW got what it wanted and it got it by threatening one company while the other did its bidding.

  • Why?

    If you have so little faith in them that you think they won't honour the privacy controls, you should delete your non-consumer account too.

We know how this story will end for Dario. See Oppenheimer, Turing, Lavoisier, Galileo, Socrates, etc. Power does not reside in the hands of people with knowledge or even wealth. And most technical people have not taken a political philosophy course, or even a philosophy course. The Ring of Gyges story is 4000 years old.

  • I think Amodei is widely underestimated. The consensus viewpoint on the deal that OpenAI struck with the Pentagon is that Anthropic got played. I disagree. I'm certain that Amodei and his team gamed this out. In doing so, I think there are at least two conclusions they would have drawn:

    1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, which is a bad outcome, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks.

    That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point:

    2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this.

    What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the App Store. The benefits of this stance look to be materializing much more quickly than anyone applauding his courage might have hoped.

    • > Choosing Grok would be bad for the US

      They chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week.


    • The mistake here is thinking they can take on Power without really sitting in any official position of Power.

      Wikileaks and Assange got popular too. What happened to them?

      The State Dept and CIA do exactly what Assange did. They pick and choose who to target with leaks. They get away with it (mostly even when exposed) because they officially are in power. Assange was not in power. If you take a moral position do it when you have real power.

    • There is also:

      3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves: "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking.

      It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding.

    • They still need a lot of money, and what their VCs think is going to be more important than what Amodei does. Nothing more profitable than war and government.

      App Store rankings are meaningless. I have Claude, ChatGPT and Gemini all in the top five, with an email app at #1 and a postal-tracking app (for a very small provider) at #3.

  • Oppenheimer? Really? Quoting a review of an Oppenheimer biography:

    “Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.”

    Seems more like Sam Altman, who is known to get his way, than Dario.

  • I do not believe the Ring of Gyges preceded Plato making it up for The Republic... Where are you getting 4000 years?

    Also maybe not seeing the message or connection here... That myth isn't really about who has power or not, right? It's kind of just a trite little "why you should do good even when no one is watching" thing. It just serves Socrates for his argument with Thrasymachus, and leads us into book 2 where it really gets going with Glaucon and all that. This is from memory so I might be a little off.

    • I got it from Tamar Gendler's philosophy and human nature course on Open Yale Courses. She says it was a popular folk story passed down orally long before it was written in a book. Plato used it because people grew up hearing the story.

      The story asks: what's the source of morality? Who decides where the lines are? And it's not scientists. Science produces the Ring.


> it was clear that either the DoW or OAI (or both) were fudging.

This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement.

Greg Brockman donated 25 million dollars, and DoW gives OpenAI 200 million dollar contract.

Just good ol' fashioned grifting mixed with a bit of government corruption.

This country has been boiling the frog of graft, grifting, and corruption too long.

Or, as is likely, OpenAI models have no guardrails, Anthropic's did and the DoD was bumping into them.

  • Does anyone else notice Claude is just plain better at reasoning? It may not just be post-training guardrails. It would not surprise me if it were something Anthropic couldn't simply disable, whether from reinforcement or even training-corpus curation. Of all the models, Claude is the only one that makes me wonder if they have figured out something beyond stochastic language generation and aren't telling anyone.

    • I have noticed this too. Despite the close benchmark results, Claude just works better. It knows when to push back, it has an "agency"... there is something there that I don't see with Gemini or OpenAI's best paid models.

> OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

I believe this understanding is correct. The issue many people have these days with the Dept. of War, and most of the Trump admin, is that they have little respect for laws. They only follow the ones they like and openly ignore the ones that are inconvenient.

The Dept. of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable.

  • I don’t care who is in the whitehouse. Snowden revealed the crimes of the NSA in 2013 when Obama was president. They’re all going to want to use AI for mass surveillance

  • I find it confusing in most directions.

    Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may be quite true, but there would still need to be more to the story for it to make sense.

    Ex: DoW officials are stating that they were shocked that their vendor checked in on whether signed contractual safety terms were violated: They require a vendor who won't do such a check. But that opens up other confusing oversight questions, eg, instead of a backchannel check, would they have preferred straight to the IG? Or the IG more aggressively checking these things unasked so vendors don't? It's hard to imagine such an important and publicly visible negotiation being driven by internal regulatory politicking.

    I wonder if there's a straighter line for all these things. Irrespective of whether folks like or dislike the administration, they love hardball negotiations and to make money. So as with most things in business and government, follow the money...

    • I have no idea what exactly Anthropic was offering the DoD, but if it were an LLM product, it's possible that the existing guardrails prevented the model from executing on the DoD vision.

      "Find all of the terrorists in this photo", "Which targets should I bomb first?"

      Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. DoD required a specially trained product without limitations.

  • Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adversaries through any means necessary.

    • There's a reason it's unpopular.

      If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)?

      Also, the core mission of the military is not "killing its adversaries through any means necessary". It is to defend state interests. Some people have a belief that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong. Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome. It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly for everyone, because nobody has any other reason to give you what you want.

      The actual military is not evil. Your conception of it is.


    • If I start a small business that sells apples and the US government comes to me and says "we want to buy your apples and fire them at high speed to" (these are now your words) "kill adversaries through any means necessary."

      If I say, no, then am I stopping the military?

      I feel like it is reasonable that I can say "no, I don't want to sell you my apples."

      I cannot for the life of me figure out why that means I am stopping the military from killing people. The US Military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it.


    • Any company is free to choose its business partners and set terms to them. "Don't like our terms, don't partner with us"

      If government can force any private company to work specially for government then US is no better than PRC


    • Yes, Musk is guilty of treason for exactly that reason. He directly sabotaged a major US military operation in Ukraine.

      However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract.

      On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic models. Misusing their stuff in the way Trump has been (in violation of the contract) is a war crime, because it has already made major mistakes, targeted civilians, etc.