Comment by gip

14 hours ago

There is no doubt that OpenAI is taking a lot of risk by betting that AI adoption will translate into revenue in the very short term. And that could really happen imo (with a low probability, sure, but worth the risk for VCs? Probably).

What OpenAI is promising is mathematically impossible, and they know it. The goal is to be too big to fail and get bailed out by US taxpayers, who have been groomed into viewing AI as a Cold War-style arms race that America cannot lose.

  • > The goal is to be too big to fail and get bailed out by US taxpayers

    I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean? OpenAI failing wouldn’t mean AI disappears and all of their customers go bankrupt, too. It’s not like a bank. If OpenAI became insolvent or declared bankruptcy, their intellectual property wouldn’t disappear or become useless. Someone would purchase it and run it again under a new company. We also have multiple AI companies, and switching costs for customers are not that high, although some adjustment is necessary when changing models.

    I don’t even know what people think the mechanism would be. The US government just hands them money to keep them from filing for bankruptcy? The analogy to bank bailouts doesn’t hold.

    • I think what Altman is aiming for is becoming so codependent with Nvidia and Microsoft that they'd all go down together, meaning the US government would have to deal with the biggest software company and the biggest chip company imploding at the same time.

      If you look at the financial crisis, the US government decided to bail out AIG, after passing on Bear Stearns, because big banks like Goldman Sachs and Morgan Stanley (and even Jack Welch's General Electric) all had huge counterparty risk with AIG.

    • > I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean?

      Someone else put it succinctly:

      "When A million dollar company fails, it's their problem. When a billion dollar company fails, it's our problem"

      In essence, there's so much investment in AI that it makes up a significant part of US GDP. If AI falters, the entire stock market will feel it, and by extension so will all Americans, no matter how detached from tech they are. In other words, there's the potential for another Great Depression.

      The government wants to avoid that, so it will at least provide a small bailout to soften the crash. More likely (as seen with the Great Financial Crisis), it will supply billions upon billions to prop up companies that by all business logic deserved to fail, because the alternative would be too politically damaging to tolerate.

      ----

      That's the theory. None of this is certain, and there are arguments that an AI crash wouldn't be as bad as the crashes mentioned above. But that's what people mean by "become too big to fail and get bailed out".


    • > Someone would purchase it and run it again under a new company.

      That happened a long time ago! Microsoft already owns the model weights!

    • > If OpenAI became insolvent or declared bankruptcy, their intellectual property wouldn’t disappear or become useless

      Yes, but with nearly all stock market growth being in AI companies, it would tank the market, for one. Secondly, all of those dollars they're spending are backed by creditors who would face a default. Short of another TARP (likely IMO, since the US NEEDS to keep pumping AI to compete with China), it could scare investors off too.

      Plus, with the growth in AI affecting the overall makeup of the stock market, something like this would hurt every American's 401(k).

  • Bailing out OAI would be entirely unnecessary (it's a crowded field) and political suicide (how many hundreds of billions could have gone to health care instead?).

    If a collapse happens in the next 3 years, tho, and Altman promises enough pork to the man, a bailout could happen.

    • This administration has "committed political suicide" dozens of times this year. What's one more to add to the pile?

    • > Bailing out OAI would be ... political suicide (how many hundreds of billions could have gone to health care instead?)

      Not that I have an opinion one way or another regarding whether or not they'd be bailed out, but this particular argument doesn't really seem to fit the current political landscape.

  • On the one hand, I understand you are making a stylized comment. On the other hand, as soon as I started writing something reasonable, I realized this is an "upvote lame catastrophizing takes about" (checking my notes) "some company" thread, which means reasonable stuff will get downvoted. For example: where is there actual scarcity in their product inputs? Will they really be paying retail prices to infrastructure providers forever? Is that a valid forecast? There are many reasonable ways to look at this. Even if I take your cynical take at 100% face value, the thing about bailouts is that they're more complicated than what you're saying, but your instinct is to say they're not complicated, "grooming" this and "cold war" that, because your goal is to concern troll, not advance this site's goal of curiosity.

    • They've already spent so much money that even if they get new hardware at a deep discount, they will have a very hard time breaking even.

Apparently we all have enough money to put into OpenAI.

Some players have to play, like Google; some players want to play, like the USA vs. China.

Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.

  • > Some players have to play, like Google

    I don't think you have the faintest clue what you're talking about right now. Google authored the transformer architecture, which is the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is; they do it because they get results. The same cannot be said of OpenAI.

Correction: OpenAI's investors do take that risk. Some of them (e.g. Microsoft, Nvidia) dampen it by conditioning their investment on boosting their own revenue, a stock buyback of sorts.