The $100B megadeal between OpenAI and Nvidia is on ice

6 hours ago (wsj.com)

Not only has OpenAI's market share dropped significantly in the last six months, but Nvidia has been using its newfound liquidity to train its own family of models[1]. An alliance with OpenAI just makes less sense today than it did six months ago.

[1] https://blogs.nvidia.com/blog/open-models-data-tools-acceler...

  • > Nvidia has been using its newfound liquid funds to train its own family of models

    Nvidia has always had its own family of models; it's nothing new and not something you should read too much into, IMHO. They use those as templates other people can leverage, and they are of course optimized for Nvidia hardware.

    Nvidia has been training models in the Megatron family, as well as many others, since at least 2019, and Megatron was used as a blueprint by many players. [1]

    [1] https://arxiv.org/abs/1909.08053

  • And the whole AI craze is becoming nothing but a commodity business where all kinds of models pop in and out, one better this update, another better the next, etc. In short, they're basically indistinguishable to the average layman.

    Commodity businesses compete on price; that's the only thing left to compete on when product offerings are similar enough. AI valuations are not set up for this. They are priced for a 'winner takes all' outcome, and that assumption is clearly falling apart now.

  • I think there are two things that happened

    1. OpenAI bet largely on consumer. Consumers have mostly rejected AI, and in a lot of cases they even hate it (you can't go on TikTok or Reddit without people calling something slop or hating on AI-generated content). Anthropic, on the other hand, went all in on B2B and coding. That seems to be the much better market to be in.

    2. Sam Altman is profoundly unlikable.

    • Instead of anecdotes about “what you saw on TikTok and Reddit”, it’s really not that hard to look up how many paid users ChatGPT has.

      Besides, OpenAI was never going to recoup the billions of dollars it has spent through advertising or $20/month subscriptions.

    • You have to give Sam credit: he’s charismatic enough to the right people to climb man-made corporate structures. He was also smart enough to be in the right place at the right time (Silicon Valley) to enrich himself, and he seems to be pretty good at cutting deals. Unfortunately, all of the above seems to be at odds with having any sort of moral core.

      4 replies →

    • I actually think Sam is “better” than, say, Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some $600k TC FAANG worker, I mean entrepreneurs).

      He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally, but he comes across as an average person, if that makes sense (in this environment, that is).

      I think I personally prefer that over Elon’s self-induced mental illnesses and Dario being a doomer, promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd, so Sam is kinda in the middle there.

      I hope OpenAI continues to dominate even if the margins of winning tighten.

      21 replies →

  • Yeah. Even if OpenAI models were the best, I still wouldn't use them, given how despicable the Sam Altman persona is (constantly hyping, lying, asking for no regulations, then asking for regulations, leaked emails where the founders say they just wanna get rich without any consideration for their initial "open" claims...). I know the other companies are not better, but at least they have a business model and something to lose.

    • > leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims

      Point me to these? Would like to have a look.

      4 replies →

  • [flagged]

    • ChatGPT has nowhere near the lead it used to have. Gemini is excellent, Google and Anthropic are very serious competitors, and open-weight models are slowly catching up.

      1 reply →

    • ChatGPT is a goner. OpenAI will probably rule the scam creation, porn bot, and social media slop markets.

      Gemini will own everything normie as well as professional services, and Anthropic will own engineering (at least software).

      Honestly as of the last few months anyone still hyping ChatGPT is outing themselves.

      1 reply →

The last paragraph of the article is informative:

> Anthropic relies heavily on a combination of chips designed by Amazon Web Services known as Trainium, as well as Google’s in-house designed TPU processors, to train its AI models. Google largely uses its TPUs to train Gemini. Both chips represent major competitive threats to Nvidia’s best-selling products, known as graphics processing units, or GPUs.

So which leading AI company is going to build on Nvidia, if not OpenAI?

  • "Largely" is doing a lot of heavy lifting here. Yes Google and Amazon are making their own GPU chips, but they are also buying as many Nvidia chips as they can get their hands on. As are Microsoft, Meta, xAI, Tesla, Oracle and everyone else.

  • Nvidia had the chance to build its own AI software and chose not to. It has been a good choice so far; better to sell shovels than go to the mines. But they could still go mining if the other miners start making their own shovels.

    If I were Nvidia, I would be hedging my bets a little. OpenAI looks like it's on shaky ground; it might not be around in a few years.

  • OpenAI will keep using Nvidia GPUs but they may have to actually pay for them.

  • Literally all the other companies that still believe they can be the leading ones one day?

This video is fascinating: it breaks down the crazy financial positions of all the AI companies and how they are all entangled with one called CoreWeave (which could easily bring the whole thing tumbling down): https://youtu.be/arU9Lvu5Kc0?si=GWTJsXtGkuh5xrY0

It’s probably not really related, but this bug and the saga of OpenAI trying and failing to fix it for two weeks are not indicative of a functional company:

https://github.com/openai/codex/issues/9253

OTOH, if Anthropic did that to Claude Code, if there weren’t a moderately straightforward workaround, and if Anthropic didn’t revert it quickly, it might actually be a risk-the-whole-business issue. Nothing makes people jump ship quite like the ship refusing to go anywhere for weeks while the skipper fumbles around and keeps claiming to have fixed the engines.

Also, the fact that it’s not major news that most business users cannot log in to the agent CLI for two weeks running suggests that OpenAI has rather less developer traction than they would like. (Personal users are fine. Users who are running locally on an X11-compatible distro, and thus have DISPLAY set, are okay because the new behavior doesn’t trigger. Everyone else seems to get nonsense errors out of the login flow, with the precise failures changing every couple of days while OpenAI fixes yet another bug.)

  • I don't know what you're so surprised about. The ticket reads like any other typical [Big] enterprise ticket. The UI works, headless doesn't (headless is what only hackers use, so not a priority, etc.). Oh, they found the support guy who knows what headless is, and the doc page with a number of workarounds. There is even an ssh tunnel (how did that make it into enterprise docs?!) and the classic: copy logged-in credentials from a UI machine once you've logged in there. Bla-bla-bla and again the classic:

    "Root Cause

    The backend enforces an Enterprise-only entitlement for codex_device_code_auth on POST /backend-api/accounts/{account_id}/beta_features. Your account is on the Team plan, so the server rejects the toggle with {"detail":"Enterprise plan required."} "

    and so on and so forth. On any given day I have several such long-term tickets that ultimately get escalated to me (I'm in dev, and usually the guy who would pull out the page with the ssh tunnel or credential copying :). A rough sketch of that tunnel trick is below, for the curious.
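
    The idea is nothing exotic: the login flow spins up a localhost callback server on the headless box, so you forward that port from a machine that does have a browser and finish the flow there. A minimal Python sketch of the forwarding side; the port number, the hostname, and the assumption that the callback is plain localhost HTTP are guesses on my part, not anything from the docs:

      import subprocess

      # Assumption: the CLI's login callback listens on this localhost port on the
      # headless box. Replace it with whatever port the login command reports.
      CALLBACK_PORT = 1455
      HEADLESS_HOST = "headless-box.example.com"  # hypothetical hostname

      # Run this on the machine with a browser: forward the local port to the same
      # port on the headless box, then open the login URL the CLI printed. It points
      # at localhost, which now tunnels through ssh to the headless machine.
      subprocess.run(
          ["ssh", "-N", "-L", f"{CALLBACK_PORT}:localhost:{CALLBACK_PORT}", HEADLESS_HOST],
          check=True,
      )

    Same effect as the credential-copying workaround, just without moving files around.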

    • Sort of?

      The backstory here is that codex-rs (OpenAI’s CLI agent harness) launched an actual headless login mechanism, just like Claude Code has had forever. And it didn’t work, from day one. And they can’t be bothered to revert it for some reason.

      Sure, big enterprises are inept. But this tool is fundamentally a command line tool. It runs in a terminal. It’s their answer to one of their top two competitors’ flagship product. For a company that is in some kind of code red, the fact that they cannot get their ducks in a row to fix it is not a good sign.

      Keep in mind that OpenAI is a young company. They shouldn’t have a thicket of ancient garbage to wade through to fix this; it’s not as if this is some complex Active Directory issue that no one knows how to fix because the design is 30-40 years old and supports layers and layers of legacy garbage.

  • Funny that they can't just get the "AI" to fix it.

    • You still need to get engineers to actually dispatch that work, test it, and possibly update the backend. Each of those can already be done via AI, but for actually doing all of that in a large environment, we're not there yet.

All these giant non-binding investment announcements are just a massive confidence scam.

  • We know that it is all a grift before the inevitable collapse, so everyone is racing for the exit before that happens.

    I guarantee you that in 10 years' time, you will get claims of unethical conduct at those companies, but only after the mania has ended (and by then the claimants will have sold all their RSUs).

Many of us predicted that OpenAI's insistence that the model was the product was the wrong path.

The tools on top of the models are the path, and people building things faster is where the value is.

I wonder how much the indications of Altman's duplicitous behavior in the deposition findings have been relevant here.

Google has the data and the TPUs and the massive cash to advance.

Microsoft has GitHub - the world’s biggest pile of code training data, plus infinite cash.

OpenAI has … none of these advantages.

  • This is the argument I continue to have with people. First mover isn't always an advantage; I think OpenAI will be sold for pennies on the dollar someday (within the next 5 years, after they run out of funding).

    Google has data, TPUs, and a shitload of cash to burn

> He[Jensen Huang] has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic, some of the people said.

The article references an “undisciplined” business. I wonder if this is speaking to projects like Sora. Sora is technically impressive and was fun for a moment, but it’s nowhere near the cultural relevance of TikTok, while being, I believe, significantly more expensive, harder to monetize, and consuming a significant share of their precious GPU capacity. Maybe I’m just not the demo and am missing something.

And yes, Sam is incredibly unlikable. Every time I see him give an interview, I am shocked at how poorly prepared he is. Not to mention his “ads are distasteful, but I love my supercar and ridiculous sunglasses” attitude.

...and the merry-go-round stopped

  • Not for all the players. Not everyone has over-raised their fundamentals.

    • Literally the whole economy has "over-raised its fundamentals" though. Not everyone is going to fail in exactly this way, but (again, pretty much literally) everyone is exposed to a feedback-driven crash from "everyone else" that ended up too exposed.

      We all know this is a speculative run-up. We all know it'll end somehow. Crashes always start with something like this. Is this the tipping point? Damned if I know. But it'll come.