Comment by blueblisters

1 day ago

My knee-jerk reaction was that this looks like the kind of opportunistic maneuver Sam is known for, and I'm considering cancelling my subscriptions and business with OpenAI.

But what's the most charitable / objective interpretation of this?

For example - https://x.com/UnderSecretaryF/status/2027594072811098230

Does it suggest that the determination of "lawful use", and of Dario's concerns, falls to the government rather than the AI provider?

Other folks have claimed that Anthropic planned to burn the contentious red lines into Claude's constitution.

Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective, Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.

I think Altman probably rationalised it to himself by thinking that if he doesn’t do it, Musk/xAI will, and they give zero fucks about safety. So maybe he told himself that it’s better if OpenAI does it.

As people have repeatedly mentioned, if the War Department was unhappy with Anthropic's terms, they could have refused to sign the contract. But they didn't: they were fine with it for over a year. And if they changed their mind, they could've ended the contract and both sides could've walked away. Anthropic said that would've been fine. But that's not what happened either: they threatened Anthropic with both SCR designation and a DPA takeover if Anthropic didn't agree to unilateral renegotiation of terms that the War Department had already agreed were fine.

It's absurd, and doubly so if OAI's deal includes the same or even similar red lines to what Anthropic had.

  • it seems like the oai deal does include the same red lines, plus some more, and the ability for oai to deploy safety systems that limit the model's use cases via technical means

    this seems strictly better than what anthropic had. anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand

    the oai folks are good at making deals, just look at all the complex funding arrangements they have

    • "OAI wins by playing the government's game" is such a catastrophically bad take.

      > anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand

      You want to try defending this ridiculous statement a bit more thoroughly?

      For a start, the government's designation of a company as a supply chain risk is not a negotiating tool. It may well be found to be arbitrary and capricious once the courts look at it. Businesses have rights too.

      For another, why do you think OAI was able to make what looks like the same deal? Anthropic was willing to say yes to anything lawful up to their red lines, and it was still a no. Why turn around and give OAI exactly the same thing, unless it's not really what it looks like?

      And Altman is always looking for the next buck.

      All these supposedly impressive complex funding arrangements have OAI on the hook to firms like Oracle for hundreds of billions of dollars. There is no indication at all of how this unprofitable business will become a trillion-dollar juggernaut.


Unless you're using an enterprise plan or pay per token, you're not hurting their business at all by cancelling. The consumer plans are heavily subsidised.

  • Cancelling is the only language these companies understand.

    Even Disney couldn't ignore the mass cancellations after dropping Kimmel, and Disney+ barely turns a profit.

  • I think their consumer plans are gross-margin positive, but OpenAI has ~50M paying subscribers driving >$10B in revenue.

    Realistically, you need at least ~1M subscribers to cancel to make this painful.

    But I suspect this will get drowned out in the face of other news.
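Back-of-envelope, using only the rough figures above (~50M subscribers, >$10B revenue, ~1M cancellations; these are the comment's estimates, not official numbers):

```python
# Rough sketch of why ~1M cancellations is roughly the pain threshold.
# All inputs are the estimates from the comment above, not official figures.
subscribers = 50_000_000          # ~50M paying subscribers (estimate)
annual_revenue = 10_000_000_000   # >$10B consumer revenue (estimate)

avg_per_sub = annual_revenue / subscribers   # average $ per subscriber per year
cancels = 1_000_000                          # hypothetical mass cancellation

lost_revenue = cancels * avg_per_sub         # annual revenue at risk
share = cancels / subscribers                # fraction of the subscriber base

print(f"~${avg_per_sub:.0f}/yr per sub, ~${lost_revenue/1e6:.0f}M lost, {share:.0%} of base")
```

So 1M cancellations is about 2% of the base and on the order of $200M/yr, which is where it starts to show up in the metrics.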

  • It will hurt in future funding rounds if their subscriber metric is stalling or going backwards, regardless of how many of those subscriptions are profitable.

  • This is ultimately about drawing moral lines, isn't it? In that case it wouldn't matter if it hurts their business or not.

  • Does it matter? These AI companies need to be able to prove that users are willing to pay at all, even if they're not paying a profitable amount of money. If investors see that they're dumping money into something that's not selling, why continue to do so?

  • There is value tied to free users, but also, I'm not sure I want my work and data in a product that's OK with DoD mass surveillance, and I'm not sure my customers would want their data pumping through it either.

  • AI companies are growth companies: the whole point is that they tolerate extreme losses and a lack of profitability so long as they keep growing.

    If you stop using ChatGPT, you throw a wrench in their growth numbers.

    Training data could have important info as well. And to be honest, with their circular financing (Nvidia <-> OpenAI, with GPUs being the main cost), the fact that OpenAI isn't facing the RAM crisis (heck, it created the RAM crisis by pre-ordering 20%), and their recent deals, money isn't an issue for them for some time now. Growth is.

    You are also forgetting that OpenAI is planning to add ads, in which case you would be the product; so perhaps it's better not to discourage anyone who wishes to cancel.

    Other commenters have made some good points as well. I used to think the same as you, but I now think that cancelling might make the most sense.

    That, or if you want to cause maximum damage, you could burn as many tokens as you physically can asking random things to drain OpenAI's money. But remember that serving the model still consumes energy, so you'd be wasting energy on something quite pointless.

    IMO, it might be better to cancel/not use OpenAI.