AI is going to be a highly-competitive, extremely capital-intensive commodity market that ends up in a race to the bottom competing on cost and efficiency of delivering models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
Something nobody's talking about: OpenAI's losses might actually be attractive to certain investors from a tax perspective.
Microsoft and other corporate investors can potentially use their share of OpenAI's operating losses to offset their own taxable income through partnership tax treatment. It's basically a tax-advantaged way to fund R&D - you get the loss deductions now while retaining upside optionality later. This is why the "cash burn = value destruction" framing misses the mark. For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation.
The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?" If you're sitting on a pile of profitable cloud revenue like Microsoft, suddenly OpenAI's burn rate starts looking like a pretty efficient way to minimize your tax bill while getting a free option on the AI leader. This also explains why big tech is so eager to invest at nosebleed valuations. They're not just betting on AI upside, they're getting immediate tax benefits that de-risk the whole thing.
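The "$10B in losses → $2-3B in tax shields" claim above is just loss × marginal tax rate. A minimal back-of-envelope sketch with hypothetical numbers (and assuming, as the comment does, that losses actually flow through to investors, which replies below dispute for a corporate structure):

```python
def tax_shield(loss: float, tax_rate: float) -> float:
    """Value of a deductible loss to an investor at a given marginal rate."""
    return loss * tax_rate

annual_loss = 10e9  # hypothetical $10B annual loss

# At the 21% U.S. statutory corporate rate the shield is ~$2.1B; at an
# assumed 30% effective blended rate it approaches $3B -- which is where
# the "$2-3B" range comes from.
low = tax_shield(annual_loss, 0.21)
high = tax_shield(annual_loss, 0.30)

# Net capital at risk if the full loss were deductible at 21%:
net_at_risk = annual_loss - low  # ~$7.9B
```

This also makes the later objection concrete: the shield reduces the downside by 20-30%, it doesn't eliminate it.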
> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation
I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.
Amazon already hasn't been paying any sort of income tax in the EU. There was a lawsuit in Belgium, but Amazon won it in late 2024 since they had a separate agreement with Luxembourg.
Speaking for the EU, all of big tech is already not paying taxes one way or another, using either Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon and Microsoft, as far as I can tell) to avoid such corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity as payments for "IP rights".
> OpenAI's losses might actually be attractive to certain investors from a tax perspective.
OpenAI is in any case seeking a Govt Bailout for "National Security" reasons. Wow, I scoffed earlier at "Privatize Profits, Socialize Losses", but this now appears to be Standard Operating Procedure in the U.S.
So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
This is not accurate. Microsoft recognizes OpenAI losses on their income statement, proportionate to their ownership stake. This has created a huge drag on EPS, along with a lot more EPS volatility than in the past. It's gotten so bad that Microsoft now points people to adjusted net income, which is notable, as they had always avoided those games. None of this has been welcomed.
OpenAI is a corporation, so their losses do not flow up to their owners.
Their investors, if publicly traded like Microsoft do have to take write-downs on their financial statements but those aren't realized losses for tax purposes. The only tax "benefit" Microsoft might get from the OpenAI investment is writing off the amount it invested if/when OpenAI goes bankrupt.
Can you explain it another way? Are you saying that instead of losing 100% they lose 70%, and losing 70% is somehow good? Or are you saying the risk-adjusted returns are then 30% better on the downside than previously thought? Because if you are, I think people here are saying the risk is so high that it is a given they will fail.
Whilst that is an option, it won't cover the share-price hit from the fallout, which would wipe out more than the debt; when the big domino falls, others will follow as market panic spreads.
So we're kinda looking at a bank-run-level event on tech companies if they go broke.
Lmao, this is ridiculous. If MSFT really wanted the tax benefits, they should've just wholly acquired OAI long ago to capture the financial synergy you speak of.
There is a pretty big moat for Google: extreme amounts of video data on their existing services, and absolutely no dependence on Nvidia and its 90% margins.
Google has several enviable positions: if not moats, at least redoubts. TPUs, mass infrastructure and their own cloud services, and they own the delivery mechanisms on mobile (Android) and every device (Chrome). And Google and YouTube are still the #1 and #2 most visited websites in the world.
I have yet to be convinced the broader population has an appetite for AI-produced cinematography or videos. Dependence on Nvidia is no more of a liability than dependence on electricity rates; it's not as if it's in Nvidia's interest to see one of its large customers fail. And pretty much any of the other Mag7 companies are capable of developing in-house TPUs, and are already independently profitable, so Google isn't alone here.
On paper, Google should never have allowed the ChatGPT moment to happen; how did a then-nonprofit create what was basically a better search engine than Google?
Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What is worse is that search users are not the customers. The customers of Google Search are the advertisers, and Google will always prioritise the needs of the customers and squander their moats as soon as the threat is gone.
And yes, all their competitors are making custom chips. Google is on TPU v7. Absolutely nobody among their competitors is going to get this right on the first try - Google didn't.
The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.
Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.
The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.
Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.
There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.
> Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.
Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?
If the “moat” is not AI technology itself but merely sufficient other lines of business to deploy it well, then that’s further evidence that venture investments in AI startups will yield very poor returns.
I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).
What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.
> AI is going to be a highly-competitive, extremely capital-intensive commodity market
It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI o1.
The majority of the cost/innovation now is in training this 1-2 year old technology on increasingly large amounts of content, and in developing more hardware capable of running these larger models at greater scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and related research, or increasingly powerful GPUs and TPUs.
But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.
> that ends up in a race to the bottom competing on cost and efficiency of delivering
One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications for it become more practical.
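For scale, here's a rough sketch of what "an entry-level laptop running a local 30B model" implies in memory terms. The sizes are assumed back-of-envelope figures (weights only, ignoring KV cache and runtime overhead):

```python
PARAMS = 30e9  # 30B-parameter model

def model_size_gb(params: float, bits_per_param: int) -> float:
    """Approximate weight footprint in GB at a given quantization level."""
    return params * bits_per_param / 8 / 1e9

fp16 = model_size_gb(PARAMS, 16)  # ~60 GB: workstation territory today
q8   = model_size_gb(PARAMS, 8)   # ~30 GB
q4   = model_size_gb(PARAMS, 4)   # ~15 GB: plausible on a 32 GB laptop
```

So the claim mostly hinges on 4-bit-class quantization holding up and 32 GB of unified memory becoming the entry-level baseline, which doesn't seem far-fetched on a 5-10 year horizon.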
---
Regarding OpenAI, I think it definitely stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing other than expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups still need to establish profitability. Although initial expectations were for B2C models for these AI companies, I think most of the ones that survive will do so by pivoting to a B2B structure. It's fair to say that most businesses are more inclined to spend money chasing AI than individuals are, and that'll lead to an increase in AI-consulting-type firms.
> in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model
I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years time.
> One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.
I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.
> I don't think we've seen any groundbreaking new research or architecture since the introduction of inference time compute ("thinking") in late 2024/early 2025
It was model improvements, followed by inference time improvements, and now it's RLVR dataset generation driving the wheel.
I haven't read much about it to understand what's going on, but the development of multi-modal models has also felt like a major step. Being able to paste an image into a chat and have it "understand" the image to a comparable extent to language is very powerful.
> But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off
The railroads provided something of enduring value. They did something materially better than previous competitors (horsecarts and canals) could. Even today, nothing beats freight rail for efficient, cheap modest-speed movement of goods.
If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.
We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.
It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.
The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.
Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)
Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of production of tangible items. They won't solve housing.
My best bet is medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicines controlled by a medical agent.
As stated in TFA, this simply has not been demonstrated, nor are there any artifacts of proof. It's reasonable to suspect that there is no special apparatus behind the curtain in this Oz.
From TFA: "One vc [sic] says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030."
Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more. Once you fully integrate, you will not switch. Also, being capital-intensive is a form of moat.
I think we will end up with a market similar to cloud computing: a few big players with great margins forming a cartel.
>Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting of existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. Other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
I thought that, too, but lately I've been using OpenCode with Claude Opus, rather than Claude Code, and have been loving it.
OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.
So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).
Except most of their product line is oriented towards software development which has historically been dominated by free software. I don't see developers moving away from this tendency and IMO Anthropic will find themselves in a similar position to JetBrains soon enough (profitable, but niche)... assuming things pan out as you describe.
I, personally, use ChatGPT for search more than I do Google these days. More often than not, it gives me more exact results for what I'm looking for, and it produces links I can visit to get more information. I think this is where their competitive advantage lies, if they can figure out how to monetize it.
We don’t need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn’t seen any decrease in search traffic. Apple also hinted at the fact that it also didn’t see any decreased revenues from the Google Search deal.
AI answers are good enough, and there is a long history of companies that couldn't monetize traffic via ads. The canonical example is Yahoo: one of the most-trafficked sites for 20 years, and it couldn't monetize.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs whether users are using Safari or Chrome. It’s hard to get people to switch
3rd issue: any money that OpenAI makes off search ads, I'm sure Microsoft is going to want their cut. ChatGPT uses Bing.
4th issue: OpenAI's costs are a lot higher than Google's, and they probably won't be able to command a premium on ads. Google has its own search engine, its own servers, its own "GPUs" (TPUs).
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
I personally know people that used ChatGPT a lot but have recently moved to using Gemini.
There’s a couple of things going on but put simply - when there is no real lock in, humans enjoy variety. Until one firm creates a superior product with lock in, only those who are generating cash flows will survive.
I'm genuinely curious: why do you do this instead of a Google search, which also has an AI Overview / answer at the top? That's basically the same as putting your query into a chatbot, but it ALSO has all the links from a regular Google search, so you can quickly corroborate the info, even using sources not in the original AI result (so you also see sources that disagree with the AI answer).
Like the railroads, the internet, electricity, aviation, or the car industry before: they were all indeed the future, and they all peaked (in relative terms) at the very early stages of those industries' futures.
And the overwhelming majority of companies in those sectors died. Out of the ~2000 car-related companies that existed in 1925, only 3 survived to today. And none of those 3 ended up a particularly good long-term investment.
This will remain the case until we have another transformer-level leap in ML technology. I don’t expect such an advancement to be openly published when it is discovered.
>That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
I don't know why people always imply that "the bubble will burst" means "literally all AI will die out and nothing of use will remain". The dot-com bubble didn't kill the internet. But it was a bubble, and it burst nonetheless, with ramifications that spanned decades.
All it really means when you believe a bubble will pop is "this asset is overvalued and will soon, rapidly, deflate to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause over the next few years.
But some people do believe that AI is all hype and it will all go away. It’s hard to find two people who actually mean the same thing when they talk about a “bubble” right now.
I don't think anyone seriously believes AI will disappear without a trace. At the very least, LLMs will remain as the state of the art in high-level language processing (editing, translation, chat interfaces, etc.)
The real problem is the massive over-promises of transforming every industry, replacing most human labor, and eventually reaching super-intelligence based on current models.
I hope we can agree that these are all wholly unattainable, even from a purely technological perspective. However, we are investing as if there were no tomorrow without these outcomes, building massive data-centers filled with "GPUs" that, contrary to investor copium, will quickly become obsolete and are increasingly useless for general-purpose datacenter applications (Blackwell Ultra has NO FP64 hardware, for crying out loud...).
We can agree that the bubble deflating, one way or another, is the best outcome long term. That said, the longer we fuel these delusions, the worse the fallout will be when it does. And what I fear is that one day, a bubble (perhaps this one, perhaps another) will grow so large that it wipes out globalized free-market trade as we know it.
Have you considered that there was a massive physical infrastructure left behind by the original railroad builders, all compatible with future vehicles? Other companies were able to buy the railroads at low prices and put them to use.
Large Language Models change their power consumption requirements monthly, the hardware required to run them is replaced at a rapid rate too. If it were to stop tomorrow, what would you be left with? Out of date hardware, massively wasted power, and a gigantic hole in your wallet.
You could argue you have the blueprints for LLM building, known solutions, and it could all be rebuilt. The thing is, would you want to rebuild, and invest so much again for arguably little actual, tangible output? There isn't anything you can reuse, like others that came after could reuse the railroads.
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
Practically, what I'm finding is that whenever I ask Claude to search stuff on Reddit, it can't but Gemini can. So I think the practical advantages are where certain organizations have unfair data advantages. What I found out is that LLMs work a lot better when they have quality data.
This is different because now the cat's out of the bag: AI is big money!
I don't expect AGI or Super intelligence to take that long but I do think it'll happen in private labs now. There's an AI business model (pay per token) that folks can use also.
> I don't expect AGI or Super intelligence to take that long
I appreciate the optimism for what would be the biggest achievement (and possibly disaster) in human history. I wish other technologies like curing cancer, Alzheimer's, solving world hunger and peace would have similar timelines.
> AI is a world-changing technology, just like the railroads were
This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now- after just three years- around 900 million users on a weekly basis. Even if you think that railroads and AI are equally impactful (I don't, I think AI will be far more impactful) the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.
Railroads carried the goods that everybody used. That’s like almost 100% in a given country.
The pace was slower indeed. It takes time to build the railroads. But at that time advancements also lasted longer. Now it is often cash grabs until the next thing. Not comparable indeed but for other reasons.
> just three years- around 900 million users on a weekly basis.
Well, I rotate about a dozen free accounts because I don't want to send one cent their way, and I imagine I'm not the only one. I do the same for Gemini, Claude and DeepSeek, so all in all I account for like 50 "unique" weekly users.
Apparently they have about 5% of paying customers, the amount of total users is meaningless, it just tells you how much money they burn and isn't an indication of anything else.
> user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now- after just three years- around 900 million users on a weekly basis.
Doesn't have anything to do with AI itself. Consider Instagram and then TikTok before this, WhatsApp before that, etc. There is a clear adoption-curve timeline: everything goes worldwide faster. AI is not special in that sense. It doesn't mean AI itself isn't special (arguable; in fact Narayanan argues precisely that it's "normal"), but rather that its adoption pace is precisely on track with everything else.
That comparison is not correct IMO. Those are two very different areas. The impact of railroads on transport and everything transport-related cannot be overstated. By now roads and cars have taken over much of it, and ships and airplanes are doing much more, but you have to look at the context at the time.
I think we'll find that that asymptote only holds for cases where the end user is not really an active participant in creating the next model:
- take your data
- make a model
- sell it back to you
Eventually all of the available data will have been squeezed for all it's worth, and the only way to differentiate oneself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.
I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.
Yeah except that models don't propel communities towards new heights. They drive towards the averages. They take from the best to give to the worst, so that as much value is destroyed as created. There's no virtuous cycle there...
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result.
I think this conflates together a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.
It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).
> The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did.
Why "soon"? All your arguments may be correct, but none of them imply when the pending implosion will happen.
As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern.
In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash.
For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
In 1989, we also bet that land prices would outrun gravity forever.
But usually, Physics (and Debt) wins in the end.
When the railway bubble bursts, only those with "Oxygen" will survive.
I'm aware this means leaving the original topic of this thread, but would you mind giving us a rundown of this whole Japan 1989 thing? I would love to read a first-person account.
> Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
For OpenAI, cash is oxygen too; they're burning it all to reach escape velocity. They could use it to weather the upcoming storm, but I don't think they will.
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
I think this analysis is too surface-level. We are seeing Google Gemini pull away in terms of image generation, and their access to billions of organic user images gives them a huge moat. And in terms of training data, Google also has a huge advantage.
The moat is the training data, capital investment, and simply having a better AI that others cannot recreate.
If performance indeed asymptotes, and if we are not at the end of silicon scaling or decreasing cost of compute, then it will eventually be possible to run the very best models at home on reasonably priced hardware.
Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.
The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.
This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
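The "eventually the curves cross" argument can be made concrete with a toy calculation. Both inputs below are illustrative assumptions, not measurements: a hypothetical $40,000 for the hardware to run today's best model, and a 2-year cost-halving period for compute:

```python
import math

cost_today = 40_000.0     # assumed hardware cost to run the best model now
target = 2_000.0          # the "reasonably priced" machine from the comment
halving_years = 2.0       # assumed cost-halving period for compute

# Solve cost_today * 0.5**(t / halving_years) = target for t:
years = halving_years * math.log2(cost_today / target)
# 2 * log2(20) ≈ 8.6 years under these assumptions
```

The point survives varying the inputs: as long as the model's requirements asymptote while cost per unit of compute keeps falling geometrically, the crossover arrives in years, not decades.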
People seem to have the assumption that OpenAI and Anthropic dying would be synonymous with AI dying, and that's not the case. OpenAI and Anthropic spent a lot of capital on important research, and if the shareholders and equity markets cannot learn to value and respect that and instead let these companies die, new companies will be formed with the same tech, possibly by the same general group of people, thrive, and conveniently leave out the said shareholders.
Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applaud them for their contributions to society.
You weren’t around pre-Google, were you? The only thing Google learned from other search engines is what not to do - like rank based on the number of times a keyword appeared, and not to use expensive bespoke servers.
Isn't it really the other way around? Not to say OpenAI and Anthropic haven't done important work, but the genesis of this entire market was the attention paper that came out of Google. We have the private messages inside OpenAI saying they needed to get to market ASAP or Google would kill them.
I like to tell people that all the AI stuff happening right now is capitalism actually working as intended for once. People competing on features and price, where we aren't yet in a monopoly/duopoly situation. Will it eventually go rotten? Probably — but it's nice that right now, for the first time in a while, it feels like companies are actually competing for my dollar.
Aaahh the beautiful free market where the energy prices keep increasing and if it all fails they will be saved by the government that they bribed before. Don't forget the tax subsidies. AKA your money. Pure honest capitalism....
Perhaps it would be useful to define what we mean by "commoditization" in terms of software. I would say a software product that is not commoditized is one where the brand still can command a premium, which in the world of software, generally means people are willing to pay non-zero dollars for it. Once software is commoditized it generally becomes free or ad-supported or is bundled with another non-software product or service. By this standard I would say there are very few non-commoditized consumer software products. People pay for services that are delivered via software (e.g. Spotify, Netflix) but in this case the software is just the delivery mechanism, not the product. So perhaps one viable path for chatbots to avoid commoditization would be to license exclusive content, but in this scenario the AI tech itself becomes a delivery mechanism, albeit a sophisticated one. Otherwise it seems selling ads is the only viable strategy, and precedents show that the economics of that only work when there is a near monopoly (e.g. Meta or Google). So it seems unlikely that a lot of the current AI companies will survive.
Um, Meta didn't achieve the same results yet. And does it matter if they can all achieve the same results if they all manage high enough payoffs? I think subscription-based income is only the beginning. The next stage is AI-based subcompanies encroaching on other industries (e.g. DeepMind's drug company).
What exactly is "second" place? No-one really knows what first place looks like. Everyone is certain that it will cost an arm, a leg and most of your organs.
For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and fail.
The rest of us self-hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient and have all those free offerings to play with for now to keep us going, and even the subs are so far somewhat reasonable, but we will flee in droves as soon as you try to ratchet up the price.
It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!
Translation is a big thing, maybe not the same scale as railroads, but still important. The rest is of dubious economic utility (as in, you can do it with an LLM more easily than without, but if you think a little you could just as well not do it at all without losing anything). On the other hand, disrupting signalling will have pretty long-lasting consequences. People used to assume that a long formal-sounding text is a signal of seriousness, certainly so if it's personally addressed. Now it's just a sign of sloppiness. School essays are probably dead as a genre (good riddance). Hell, maybe even some edgy censorable language will enter the mainstream as definite proof of non-LLMness - and stay.
Eh, I wouldn't be so sure. Chips with brain matter and/or light are on their way, and/or quantum chips; one of those, or even a combination, will give AI a gigantic boost in performance, finally replacing a lot more humans, and whoever implements it first will rule the world.
You seem to have forgotten that the ruling class requires tax payers to fund their incomes. If we're all out of work, there's nobody to buy their products and keep them rich.
Pretty much every major historical trend of Western societies in the second half of the nineteenth century, from the development of the modern corporation to the advent of total war, was intimately tied to railroad transportation.
Aside from the fact that freight is still universally carried by rail when possible, railroads changed the world just like vacuum valves did. If not for them, nobody would have invested in developing road transport or transistors.
Umm, yes? The metro, even if not a big deal in the States, is a small but quiet way it has changed public transport; plus moving freight, plus people over large distances, plus the bullet train that mixed luxury, speed and efficiency into trains. All of these are quietly disruptive transformations that I think we all take for granted.
This is why I think China will win the AI race, as once it becomes a commodity, no other country is capable of bringing down manufacturing and energy costs the way China is today. I am also rooting for them to reach parity on chip node size for the same reason, as they can crash the prices of PC hardware.
"AI is going to be a highly-competitive" - In what way?
It is not a railroad and the railroads did not explode in a bubble (OK a few early engines did explode but that is engineering). I think LLM driven investments in massive DCs is ill advised.
> There's no evidence of a technological moat or a competitive advantage in any of these companies.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month.
I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.
The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.
I still find it so fascinating how experiences with these models are so varied.
I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.
There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.
No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.
I’m not saying that no company will ever have an advantage. But with the pace of advances slowing, even if others are 6-12 months behind OpenAI, the conclusion is the same.
Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).
For me OpenAI is the worst of all. Claude Code and Gemini Deep Research are much, much better in terms of quality, while ChatGPT hallucinates and says “sorry, you’re right”.
I use both and ChatGPT will absolutely glaze me. I will intentionally say some BS and ChatGPT will say “you’re so right.” It will hilariously try to make me feel good.
But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.
AI is turning into the worst possible business setup for AI startups. A commodity that requires huge capital investment and ongoing innovation to stay relevant. There’s no room for someone to run a small but profitable gold mine or couple of oil wells on the side. The only path to survival is investing crazy sums just to stay relevant and keep up. Meanwhile customers have virtually zero brand loyalty so if you slip behind just a bit folks will swap API endpoints and leave you in the dust. It’s a terrible setup business wise.
There’s also no real moat with all the major models converging to be “good enough” for nearly all use cases. Far beyond a typical race to the bottom.
Those like Google with other products will just add AI features and everyone else trying to make AI their product will just get completely crushed financially.
For consumers, the chat history is the moat. Why switch to a different provider for a marginal model improvement when ChatGPT already “knows” you? The value of sticking to a single provider is already there, even with the limited memory features they’ve implemented thus far.
There is clearly a very strong moat. OpenAI is close to 1 billion active users on ChatGPT while Claude barely has any non-business users. Even though Anthropic had better models at different times this year, I never stopped using ChatGPT and paying for Plus.
We just don't know who will win in which area yet. It doesn't mean there is no moat.
I don’t think it’s a question of moat. The usage limits on the chat interface with the more advanced Claude models are brutal. I feel like I can barely start a conversation before I get shut down. However, I switched over to Gemini almost completely and barely ever check in with ChatGPT these days.
OpenAI has close to 1 billion users, which are mostly free users who will switch provider the moment OpenAI starts charging them or adding ads. Which they will, as OpenAI themselves said they are losing money even with $200 subs. So that amount of users is pretty meaningless.
If you think of it like cloud, where it's a commodity that reaches competitive prices, then you can use it to build products and applications, instead of competing for infrastructure (see also: railroads, optical fiber)
There is tons of money to be made at the application layer, and VCs will start looking at that once the infrastructure layer collapses.
Not really though. The cloud has some stickiness. It’s pretty hard to move once you’ve settled in. For a lot of AI integrations, though, it’s just swapping some API endpoints and maybe tweaking the prompting a bit. For probably 95% of AI use cases there's almost no barrier to switching.
Well, Claude has the best personality in a field where the rest are in a race to make the most awful personality. That's kind of a moat. The models were smarter too though the others have largely caught up, especially Gemini.
Because almost everyone involved in the AI race grew up in "winner takes all" environments, typical for software, and they try really hard to make that a reality. This means your model should do everything, to just take 90% of the market share, or at least 90% of a specific niche.
The problem is, they can't find the moat, despite searching very hard; whatever you bake into your AI, your competitors will be able to replicate in a few months. This is why OpenAI is striking a deal with Disney: copyright provides such a moat.
Alice changed things such that code monkeys' algorithms were not patentable (except in some narrow cases where true runtime novelty can be established). Since the transformers paper, the potential of self-authoring content was obvious to those who can afford to think about things rather than hustle all day.
Apple wants to sell AI in an aluminum box while VCs need to prop up data center agrarianism; they need people to believe their server farms are essential.
Not an Apple fanboy but in this case, am rooting for their "your hardware, your model" aspirations.
Altman, Thiel, the VC model of making the serfs tend their server fields, their control of foundation models: it's a gross feeling. It comes with the most religious-like sense of fealty to political hierarchy and social structure that only exists as a hallucination in the dying generations. The 50+ year old crowd cannot generationally churn fast enough.
> your competitors will be able to replicate in few months.
Will they really be able to replicate the quality while spending significantly less in compute investment? If not then the moat is still how much capital you can acquire for burning on training?
OpenAI is (was?) extremely good at making things that go viral. The successful ones for sure boost subscriber count meaningfully
Studio Ghibli, the Sora app. Go viral, juice the numbers, then turn the knobs down on copyrighted material. Atlas, I believe, was less successful than they would've hoped for.
And because of too-frequent version bumps that are sometimes released as an answer to Google's launches rather than as a meaningful improvement, I believe they're also having a harder time going viral that way.
Overall OpenAI throws stuff at the wall and sees what sticks. Most of it doesn't and gets (semi) abandoned. But some of it does, and it makes for a better consumer product than Gemini.
It seems to have worked well so far, though I'm sceptical it will be enough for long
Going viral is great when you're a small team or even a million dollar company. That can make or break your business.
Going viral as a billion dollar company spending upward of 1T is still not sustainable. You can't pay off a trillion dollars on "engagement". The entire advertising industry is "only" worth 1T as is: https://www.investors.com/news/advertising-industry-to-hit-1...
I guess we'd have to see the graph with the evolution of paying customers: I don't see the number of potential-but-not-yet clients being that high, certainly not one order of magnitude higher. And everyone already knows OpenAI, they don't have the benefit of additional exposure when they go viral: the only benefit seems to be to hype up investors.
And there's something else about the diminishing returns of going viral... AI kind of breaks the usual assumptions in software: that building it is the hard part and that scaling is basically free. In that sense, AI looks more like regular commodities or physical products, in that you can't just Ctrl-C/Ctrl-V: resources are O(N) on the number of users, not O(log N) like regular software.
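That O(N)-versus-sublinear point can be sketched with a toy cost model (all constants here are invented for illustration): classic software pays mostly fixed costs plus slowly growing infrastructure, while LLM serving pays real GPU time per user.

```python
import math

# Toy serving-cost model; every constant is invented for illustration.

def classic_software_cost(users, fixed=1_000_000):
    # infra grows roughly with log(users): a few more shards, not one per user
    return fixed + 10_000 * math.log2(max(users, 1))

def llm_serving_cost(users, fixed=1_000_000,
                     gpu_seconds_per_user=50, dollars_per_gpu_second=0.01):
    # each active user consumes real GPU time, so cost grows linearly with N
    return fixed + users * gpu_seconds_per_user * dollars_per_gpu_second
```

Whatever constants you pick, the ratio between the two only widens as users grow, which is the sense in which AI serving looks more like a physical commodity than like shrink-wrapped software.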
Because as with the internet 99% of the usage won’t be for education, work, personal development, what have you. It will be for effing kitten videos and memes.
If Gemini can create or edit an image, ChatGPT needs to be able to do this too. Who wants to copy&paste prompts between AI agents?
Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.
OpenAI is also significantly bigger than Anthropic and is known as a generic 'helper'. Anthropic probably saw the benefit of being more focused on developers, which allows it to stay in the game longer on the amount of money they have.
It'll just end up spreading itself too thin and be second or third best at everything.
The 500lb gorilla in the room is Google. They have endless money and maybe even more importantly they have endless hardware. OpenAI are going to have an increasingly hard time competing with them.
That Gemini 3 is crushing it right now isn't the problem. It's Gemini 4 or 5 that will likely leave them in the dust for the general use case, meanwhile specialist models will eat what remains of their lunch.
> Who wants to copy&paste prompts between AI agents?
An AI!
The specialist vs generalist debate is still open. And for complex problems, sure, having a model that runs on a small galaxy may be worth it. But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
>Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.
I think you are confusing generation with analysis. As far as I am aware, your model does not need to be good at generating images to be able to decode an image.
I think you're partially right, but I don't think being an AI leader is the main motivation -- that's a side effect.
I think it's important to OpenAI to support as many use-cases as possible. Right now, the experience that most people have with ChatGPT is through small revenue individual accounts. Individual subscriptions with individual needs, but modest budgets.
The bigger money is in enterprise and corporate accounts. To land these accounts, OpenAI will need to provide coverage across as many use-cases as they can so that they can operate as a one-stop AI provider. If a company needs to use OpenAI for chat, Anthropic for coding, and Google for video, what's the point? If Google's chat and coding is "good enough" and you need to have video generation, then that company is going to go with Google for everything. For the end-game I think OpenAI is playing for, they will need to be competitive in all modalities of AI.
Because for all the incessant whining about "slop," multimodal AI i/o is incredibly useful. Being able to take a photo of a home repair issue, have it diagnosed, and return a diagram showing you what to do with it is great, and it's the same algos that power the slop. "Sorry, you'll have to go to Gemini for that use case, people got mad about memes on the internet" is not really a good way for them to be a mass consumer company.
But how much more profitable are they? We see revenue but not profits / spending. Anthropic seems to be growing faster than OpenAI did but that could be the benefit of post-GPT hype.
Because the general idea here is that image and video models, when scaled way up, can generalize like text models did[1], and eventually be treated as "world models"[2]; models that can accurately model real-world processes. These "world models" could then be used to train embodied agents with RL in a scalable way[3]. The video-slop and image-slop generators are just a way to take advantage of the current research in world models and get more out of it.
I get the allure of the hypothetical future of video slop. Imagine if you could ask the AI to redo Lord of the Rings but with Magneto instead of Gandalf. Imagine watching The Shawshank Redemption but in the end we get a "Hot Fuzz" twist where Andy fights everyone. Imagine a Dirty Harry-style police movie where the protagonist is a xenomorph which is only barely acknowledged.
You could imagine an entirely new cultural engine where entire genres are born off of random reddit "hey have you guys ever considered" comments.
However, the practical reality seems to be that you get TikTok-style shorts that cost a bunch to create and have a dubious grasp on causality, and that have to compete with actual TikTok, a platform that gets its endless content produced for free.
You and I see the TikTok slop. But as that functionality improves, it's going to make its way into the toolchain of every digital image and video editing program in existence, the same way it's finding its way into programming IDEs. And that type of feature is worth $. It might be a matter of time until we start seeing major Hollywood movies (for example) doing things that were unthinkable, the same way CGI revolutionized cinema in the 80s. Even if it doesn't, from my layman's perception, it seems that Hollywood has spent the last ~20 years differentiating itself from the rest of global cinema largely on a moat built from IP ownership and capital-intensive production value (largely name-brand actors and expensive CGI). AI already threatens to remove one of those pillars, which I have to think in turn makes it very valuable.
Because these are mostly the same players as in the 2010s. So when they can't get more investor money and the hard problems still haven't been cracked, the easiest fallback is the same social media slop they used to become successful 10-15 years prior. Falling back on old ways to maximize engagement and grind out (eventually) ad revenue.
This article doesn’t add anything to what we know already. It’s still an open question what happens with the labs this coming year, but I personally think Anthropic’s focus on coding represents the clearest path to subscriber-based success (typical SaaS) whereas OpenAI has a clear opportunity with advertising. Both of these paths could be very lucrative. Meanwhile I expect Google will continue to struggle with making products that people actually want to use, irrespective of the quality of its models.
1. Google Books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all legally.
2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.
3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.
The difference in model capability seems to be marginal at best, or even in Google's favour.
OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.
Google's most significant advantage in this space is its organizational experience in providing services at this scale, as well as its mature infrastructure to support them. When the bubble pops, it's not lights-out or permanently degraded performance.
What Google AI products do people not want to use? Gemini is catching up to ChatGPT from a MAU perspective, AI Overviews in Search are super popular and staggeringly more used than any other AI-based product out there, Google's AI Mode has decent usage, and Google Lens has surprisingly high usage. These products together dwarf everyone else out there by like 10x.
> ai overviews in search are super popular and staggeringly more used than any other ai-based product out there
This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".
When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.
I use it several times a day just to change text in image form to text form so you can search it and the like.
It's built into chrome but they move the hidden icon about regularly to confuse you. This month you click the url and it appears underneath, helpfully labeled "Ask Google about this page" so as to give you little idea it's Google Lens.
>Gemini is catching up to ChatGPT from a MAU perspective
It is far behind, and GPT hasn't exactly stopped growing either. Weekly active users, monthly visits... Gemini is nowhere near. They're comfortably second, but second is still well below first.
>ai overviews in search are super popular and staggeringly more used than any other ai-based product out there
Is it? How would you even know? It's a forced feature you cannot opt out of or not use. I ignore AI Overviews, but I would still count as a 'user' to you.
Bard was a flop.
Google search is losing market share to other LLM providers.
Gemini adoption is low, people around me prefer OpenAI because it is good enough and known.
But on the contrary, Nano Banana is very good, so I don't know.
And in the end, I'm pretty confident Google will be the AI race winner, because they've got the engineers, the tech background and the money. Unless Google AdSense dies, they can continue the race forever.
What "we" know already is hard to add to, as a forum that has a dozen AI articles a day on every little morsel of news.
>whereas OpenAI has a clear opportunity with advertising.
Personally, having "a clear opportunity with advertising" feels like a last ditch effort for a company that promised the moon in solving all the hard problems in the world.
There are other avenues of income. You can invade other industries which are slow on AI uptake and build an AI-from-the-ground-up competitor with large advantages over peers. There are hints of this (not AI-from-the-ground-up, but with more AI) with DeepMind's drug research labs. But this can be a huge source of income. You can kill entire industries which inevitably cannot incorporate AI as fast as AI companies can internally.
Their acquisition of Jony Ive's organization for a ton of money and that creepy webpage https://openai.com/sam-and-jony/ makes me think OpenAI is just racing for headlines and groping in the dark for some magic fairy dust.
ChatGPT isn't bad, I use it for some things / pay for it, but their spend and moves make me think that they don't seem confident in it ...
Is all the doomer-ism about AI companies not being profitable right? Do the AI companies believe it? Seems like it sometimes.
Sam wants so bad for OpenAI to be a proper big tech company, probably one that's more culturally similar to Apple-y than Google/MSFT-y so I guess they are cargo-culting some parts of Apple. That website reminds me of a very low quality version of Apple's myth-making ala Think Different. Ive is obviously also a big part of the cargo cult.
The best case I can see is they integrate shopping and steal the best high-intent cash cow commercial queries from G. It's not really about AI, it's about who gets to be the next toll road.
Google already puts AI summaries at the top of search. It would be trivial for them to incorporate shopping. And they have infinitely more traffic than OpenAI does. I just don’t see how OpenAI could possibly compete with that. What are you seeing that I’m not?
ChatGPT has already won a lot of people away from Google like my mum, who now defaults to ChatGPT when she has a question. I was just talking to one of their friends last night who is in his 90s and he loves using Perplexity to learn about cooking and gardening.
A lot of people now reach for ChatGPT by default instead of Google, even with the AI summaries. I wonder whether they just prefer the interface of the chat apps to Google that can be a bit cluttered in comparison.
I can see users preferring GPT for big ticket items like cars, travel or service companies where you don't have a rec and want something a bet better curated than sponsored results. Especially if they improve the integration so you can book your entire iterary through the chat interface.
The fact is nobody has any idea what OpenAI's cash burn is. Measuring how much they're raising is not an adequate proxy.
For all we know, they could be accumulating capital to weather an AI winter.
It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
I think you are messing up things here, and I think your comment is based on the article from semi analysis. [1]
It said:
> OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome.
However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely still have done a lot of fine-tuning, RLHF, alignment and tool-calling improvements. All that stuff is training too. And it is totally fine; just look at the great results they got with Codex-high.
If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.
> The fact is nobody has any idea what OpenAI's cash burn is.
Their investors surely do (absent outrageous fraud).
> For all we know, they could be accumulating capital to weather an AI winter.
If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.
> For all we know, they could be accumulating capital to weather an AI winter.
Right, this is nonsense. Even if investors wanted to be complicit in fraud, it's an insane investment. "Give us money so we can survive the AI winter" is a pitch you might try with the government, but a profit-motivated investor will... probably not actually laugh in your face, but tell you they'll call you and laugh about you later.
The GPT-5 series is a new model, based on the o1/o3 series. It's very much inaccurate to say that it's a routing system and prompt chain built on top of 4o. 4o was not a reasoning model and reasoning prompts are very weak compared to actual RLVR training.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
Prior to 5.2 you couldn’t expect to get good answers to questions about anything after March 2024. It was arguing with me that Bruno Mars did not have two hit songs in the last year. It’s clear that in 2025 OpenAI used the old 4o base model and tried to supercharge it using RLVR. That had very mixed results.
Didn't they create Sora and other models, and burn huge amounts of money on their AI video app? They wanted to make it a social media platform, but what ended up happening was that they burned billions of dollars.
I wonder what happens to people who make these hilariously bad business decisions? Like the person at Twitter who decided to kill Vine. Do they spin it and get promoted? Something else?
I'd love a blog or coffee table book of "where are they now" for the director level folks who do dumb shit like this.
> It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
This isn't really accurate.
Firstly, GPT4.5 was a new training run, and it is unclear how many other failed training runs they did.
Secondly "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after gpt4o were all post-trained differently using reinforcement learning. That is a substantial expense.
Finally, it seems like GPT5.2 is a new training run - or at least the training cut off date is different. Even if they didn't do a full run it must have been a very large run.
I'm sure OpenAI and their investors know what the cash burn is. It's also been well reported by The Information with no pushback from the company or investors. They have also reported that OpenAI is forecasting $9B in training compute spending for 2025, up from $3B last year. This more or less lines up with Epoch's estimate that training compute has reliably grown by ~4x per year. The vast majority of that is just from building bigger data centers rather than chip performance improvements. You obviously need to grow revenue pretty quickly to absorb that.
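As a sanity check on those figures (treating the reported numbers as exact, which they aren't): $3B to $9B is 3x year over year, in the same ballpark as the cited ~4x trend, and a couple more years of either rate puts training spend alone at a scale very few revenue streams can absorb.

```python
# Rough arithmetic on the reported training-compute spend. Inputs are the
# figures quoted above; the extrapolation is illustrative, not a forecast.
spend_2024 = 3e9
spend_2025 = 9e9
growth = spend_2025 / spend_2024     # 3x year over year

# extrapolate two more years at the observed 3x and at the cited ~4x trend
at_3x_2027 = spend_2025 * growth**2  # $81B
at_4x_2027 = spend_2025 * 4**2       # $144B
```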
>It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4)
At the very least they made GPT 4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be but they made a wrong scaling prediction, people simply weren't ready to pay that much money.
RAG? Even for a "fresh" model, there is no way to keep it up to date, so there has to be a mechanism for referencing, e.g., last night's football game.
Yes it was; op didn't read the reporting closely enough. It said something to the effect of "didn't pretrain a new, broadly released, generally available model."
There is no doubt that OpenAI is taking a lot of risks by betting that AI adoption will translate into revenues in the very short term. And that could really happen imo (with a low probability sure, but worth the risk for VCs? Probably).
What OpenAI is promising is mathematically impossible. They know it. The goal is to be too big to fail and get bailed out by US taxpayers, who have been groomed into viewing AI as a cold-war-style arms race that America cannot lose.
> The goal is to be too big to fail and get bailed out by US taxpayers
I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean? OpenAI failing wouldn’t mean AI disappears and all of their customers go bankrupt, too. It’s not like a bank. If OpenAI became insolvent or declared bankruptcy, their intellectual property wouldn’t disappear or become useless. Someone would purchase it and run it again under a new company. We also have multiple AI companies, and switching costs are not that high for customers, although some adjustment is necessary when changing models.
I don’t even know what people think this is supposed to mean. The US government gives them money for something to prevent them from filing for bankruptcy? The analogy to bank bailouts doesn’t hold.
Bailing out OAI would be entirely unnecessary (crowded field) and political suicide (how many hundreds of billions that could have gone to health care instead?)
If it happens in the next 3 years, tho, and Altman promises enough pork to the man, it could happen.
on the one hand, i understand you are making a stylized comment, on the other hand, as soon as i started writing something reasonable, i realized this is an "upvote lame catastrophizing takes about" (checking my notes) "some company" thread, which means reasonable stuff will get downvoted... for example, where is there actual scarcity in their product inputs? for example, will they really be paying retail prices to infrastructure providers forever? is that a valid forecast? many reasonable ways to look at this. even if i take your cynical stuff at 100% face value, the thing about bailouts is that they're more complicated than what you are saying, but your instinct is to say they're not complicated, "grooming" this and "cold war" that, because your goal is to concern troll, not advance this site's goal of curiosity...
Apparently we all have enough money to put it into OpenAI.
Some players have to play, like Google; some players want to play, like the USA vs. China.
Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy-to-sell future.
I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.
Correction: OpenAI investors do take that risk. Some of the investors (e.g. Microsoft, Nvidia) dampen that risk by making such investment conditioned on boosting the investor's own revenue, a stock buyback of sorts.
It is a large spinning plate that can only keep spinning with more money, so the plate gets bigger and bigger, with everyone betting it will carry on spinning by itself once it has become too big to fail: the fallout, the impact on the stock market and on other companies, would wipe out more than the sum of their debts. It's kind of at that stage now, as when one domino falls, the impact on others will follow.
Just a case of too many companies have skin in OpenAI's game for it to be allowed to fail now.
A second, less likely bubble: IP rights enforcement. While the existing content hosts might have a neatly sewn-up content agreement with their users such that all their group chats and cat photos can be used for training, I am a lot less confident that OAI came by its training data legitimately.
(Adjacent to this is how crazy it was that Meta was accused of torrenting ebooks. Did they need them for the underlying knowledge? I can’t imagine they needed them for natural language examples.)
OpenAI has #5 traffic levels globally. Their product-market fit is undeniable. The question is monetization.
Their cost to serve each request is roughly three orders of magnitude higher than a conventional website's.
While it is clear people see value in the product, we only know they see value at today’s subsidized prices. It is possible that inference prices will continue their rapid decline. Or it is possible that OAI will need to raise prices and consumers will be willing to pay more for the value.
Yes, but that is the standard methodology for startups in their boost phase. Burn vast piles of cash to acquire users, then find out at the end if a profitable business can be made of it.
Only in as much as their product is a pure commodity like oil. Like yes it’s trivial to get customers if you sell gas for half the price, but I don’t think LLMs are that simple right now. ChatGPT has a particular voice that is different from Gemini and Grok.
it's a simple problem really. what is actually scarce?
a spot on the iOS home screen? yes.
infrastructure to serve LLM requests? no.
good LLM answers? no.
the economist can't tell the difference between scarcity and real scarcity.
it is extremely rare to buy a spot on the iOS home screen, and the price for that is only going up - think of the trend of values of tiktok, whatsapp and instagram. that's actually scarce.
that is what openai "owns." you're right, #5 app. you look at someone's home screen, and the things on it are owned by 8 companies, 7 of which are the 7 biggest public companies in the world, and the 8th is openai.
whereas infrastructure does in fact get cheaper. so does energy. they make numerous mistakes - you can't forecast retail prices Azure is "charging" openai for inference. but also, NVIDIA participates in a cartel. GPUs aren't actually scarce, you don't actually need the highest process nodes at TSMC, etc. etc. the law can break up cartels, and people can steal semiconductor process knowledge.
but nobody can just go and "create" more spots on the iOS home screen. do you see?
depends if they can monetize that spot. So either ads or subscription. It is as yet unclear whether ads/subscription can generate sufficient revenue to cover costs and return a profit. Perhaps 'enough ads' will be too much for users to bear, perhaps 'enough subscription' will be too much for users to afford.
I think a super important aspect people are overlooking is that every VC wants to invest in the next "big" AI company, and the probability is in your favor if you only fund AI companies, because any one of them could be the next big thing. With a downturn in VC investment, I think we will see more investment in companies that aren't AI-native but use AI as a tool in the toolbox to deliver insights.
Personally I use ChatGPT a lot, it is a wonderful service.
I use it in conjunction with Claude. I’ve gotten pretty good results using both of them in tandem.
However, as a matter of principle I prefer to self-host, so I wonder whether one upside of OpenAI imploding would be basement-level prices on useful chips. Ideally I want to run my own LLM and train it on my data.
For what I use them for, the LLM market has become a two player game, and the players are Anthropic and Google. So I find it quite interesting that OpenAI is still the default assumption of the leader.
And at one point in the 90s, Internet=Netscape Navigator.
I see Google doing to OpenAI today what Microsoft did to Netscape back then, using their dominant position across multiple channels (browser, search, Android) to leverage their way ahead of the first mover.
From what I've seen, 99% of people are using the free version of ChatGPT. Those who are using Claude are on the subscription, very often the $100/month one.
ChatGPT dominates the consumer market (though Nano Banana is singlehandedly breathing some life into consumer Gemini).
A small anecdote: when ChatGPT went down a few months ago, a lot of young people (especially students) just waited for it to come back up. They didn't even think about using an alternative.
When ChatGPT starts injecting ads or forcing payment or doing anything else that annoys its userbase then the young people won't have a problem looking for alternatives
codex cli with gpt-5.2-codex is so reliably good, it earns the default position in my book. I had cancelled my subscription in early 2024 but started back up recently and have been blown away at how terse, smart, and effective it is. Their CLI harness is top-notch and it manages to be extremely efficient with token usage, so the little plan can go for much of the day. I don’t miss Claude’s rambling or Gemini’s random refactorings.
Interestingly, Claude is so far down in traffic that it's below things like CharacterAI. It may be the best model, but the split is something like 70% ChatGPT, 10% Gemini, and only about 1% Claude.
You are already paying for several national lab HPC centers. These are used for government/university research - no idea if commercial interests can rent time on them. The big ones are running weather, astronomy simulations, nuclear explosions, biological sequencing, and so on.
if datacenters are built by the government, then i think it's fair to assume there will be some level of democratic control of what those datacenters will be used for.
That's like every government initiative. Same as healthcare? Schools? If you don't have children, why pay taxes for schools, or for roads if you don't drive? The examples are endless. Why bring up the argument that if something doesn't benefit you directly, right now, today, it shouldn't be done?
Well, people bid for USA government resources all the time. It's why the Washington DC suburbs have some of the country's most affluent neighborhoods among their ranks.
In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.
The questions of public and private distinctions in these various schemes are very interesting and, imo, underexplored. Especially when you consider how these private LLMs are trained on public data.
In a completely alternate dimension, a quarter of the capital being invested in AI literally just goes towards making sure everyone has quality food and water.
Without capital invested in the past we wouldn’t have much of modern technology. That has done a lot more for everyone, including food affordability, than simply buying people food to eat once.
As we all know, throwing money at a problem solves it completely. Remember how Live Aid saved Ethiopia from starvation and it never had any problems again?
Datacenters are not a natural monopoly, you can always build more. Beyond what the public sector itself might need for its own use, there's not much of a case for governments to invest in them.
That could make sense in some steady state regime where there were stable requirements and mature tech (I wouldn’t vote for it but I can see an argument).
I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?
Given where we are posting, the motive is obvious: to socialize the riskiest part of AI while the investors retain all the potential upside. These people have no sense of shame so they'll loudly advocate for endless public risk and private rewards.
In a better parallel universe, we found a different innovation, one that doesn't use brute-force computation to train systems that compute things unreliably and inefficiently, and that still leaves us able to understand what we're building.
Same reason they should own access lines: everyone needs rackspace/access, so it should be treated like a public service to avoid rent seeking. Having a datacenter in every city where all of the local lines terminate could open the door to a lot of interesting use cases, really help local resiliency/decentralization efforts, and provide a great alternative to cloud providers that doesn't break the bank.
That's malinvestment. Too much overhead, disconnected from long term demand. The government doesn't have expertise, isn't lean and nimble. What if it all just blows over? (It won't? But who knows?)
Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.
The government has better things to do. Geopolitics, trade, transportation, resources, public health, consumer safety, jobs, economy, defense, regulatory activities, etc.
Prediction: on this thread you'll get a lot of talk about how government would slow things down. But when the AI bubble starts to look shaky, see how fast all the tech bros line up for a "public private partnership."
Burn rate often gets treated as a hard signal, but it is mostly about expectations. Once people get used to the idea of cheap intelligence, any slowdown feels like failure, even if the technology is still moving forward. That gap is usually where bubbles begin.
Why does the article use words like burn and incinerate, implying that OpenAI is somehow making money disappear? They’re spending it; someone is profiting here, even if it’s not OpenAI. Is it all Nvidia?
Because typically one expects a return on investment with that level of spending. Not only have they run at a loss for years, their spending is expected to increase, with no path to profitability in sight.
Tbh, this whole AI thing probably has negative ROI overall, but it will still pay off in a sense. Even if the debt is written off, the AI advances this misallocation of capital created are now "sunk" and here to stay; the assets and techniques have been built.
There's an element of arms race between players, and the genie is out of the bottle now so have to move with it. Game theory is more driving this than economics in the short term.
Marginal gains on top of these investments probably have a ROI now (i.e. new investments from this point).
I suspect most of it is going to utilities for power, water and racking.
That being said, if I was Sam Altman I'd also be stocking up on yachts, mansions and gold plated toilets while the books are still private. If there's $10bn a year in outgoings no one's going to notice a million here and there.
“Burn rate” is a standard financial term for how much money a startup is losing. If you have $1 cash on hand and a burn rate of $2 a year, then you have six months before you either need to get profitable, raise more money, or shut down.
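That arithmetic is easy to sketch (a toy illustration; the dollar figures come from the example above, not from any company's actual numbers):

```python
def runway_months(cash_on_hand: float, annual_burn: float) -> float:
    """Months of runway left at a constant burn rate."""
    return cash_on_hand / annual_burn * 12

# The example above: $1 on hand, burning $2/year -> 6 months of runway.
print(runway_months(1, 2))  # 6.0
```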
I don't see a bubble, I see a rapidly growing business case.
MS Office has about 345 million active users. Those are paying subscriptions. IMHO that's roughly the total addressable market for OpenAI for non-coding users. Coding users add another 20-30 million.
If OpenAI can convert double digit percentages of those onto 20$ and 50$ per month subscriptions by delivering good enough AI that works well, they should be raking in cash by the billions per month adding up to close to the projected 2030 cash burn per year. That would be just subscription revenue. There is also going to be API revenue. And those expensive models used for video and other media creation are going to be indispensable for media and advertising companies which is yet more revenue.
The Office-sized market at $20/month is worth about $82 billion per year in subscription revenue. Add a few premium tiers at $50/month and $100/month, and the projected $130 billion per year of spending in 2030 suddenly seems quite reasonable.
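The back-of-the-envelope math here checks out (a sketch; the subscriber count and price are the assumptions stated above, not reported figures):

```python
def annual_subscription_revenue(subscribers: int, monthly_price: float) -> float:
    """Annual revenue from a flat monthly subscription."""
    return subscribers * monthly_price * 12

# 345M Office-scale subscribers at $20/month, expressed in billions per year.
print(annual_subscription_revenue(345_000_000, 20) / 1e9)  # 82.8
```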
I've been quite impressed with Codex in the last few months. I only pay 20$/month for that currently. If that goes up, I won't lose sleep over it, as it is valuable enough to me. Most programmers I know are on some paid subscription to that, Anthropic's Claude, or similar. Quite a few spend quite a bit more than that. My Chat GPT Plus subscription feels like really good value to me currently.
Agentic tooling for business users is currently severely lacking in capability. Most of the tools are crap. You can get models to generate text. But forget about getting them to format that text correctly in a word processor. I'm constantly fixing bullets, headings and what not in Google docs for my AI assisted writings. Gemini is close to ff-ing useless both with the text and the formatting.
But I've seen enough technology demos of what is possible to know that this is mostly a UX and software development problem, not a model quality problem. It seems companies are holding back from fully integrating things mainly for liability reasons (I suspect). But unlocking AI value like that is where the money is. Something similarly useful as codex for business usage with full access to your mail, drive, spreadsheets, slides, word processors, CRMs, and whatever other tools you use, running in YOLO mode (which is how I use codex in a virtual machine currently, --yolo). That would replace a shit ton of manual drudgery for me. It would be valuable to me and lots of other users. Valuable as in "please take my money".
Currently doing stuff like this is a very scary thing to do because it might make expensive/embarrassing mistakes. I do it for code because I can contain the risk to the vm. It actually seems to be pretty well behaved. The vm is just there to make me feel good. It could do all sorts of crazy shit. It mostly just does what I ask it to. Clearly the security model around this needs work and instrumentation. That's not a model training problem though.
Something like this for business usage is going to be the next step in agent powered utility that people will pay for at MS office levels of numbers of users and revenue. Google and MS could do it technically but they have huge legal exposure via their existing SAAS contracts and they seem scared shitless of their own lawyers. OpenAI doing something aggressive in this space in the next year or so is what I'm expecting to happen.
Anyway, the bubble predictors seem to be ignoring the revenue potential here. Could it go wrong for OpenAI? Sure, if somebody else shows up and takes most of the revenue. But I think we're past the point where that revenue looks unrealistic. Five years is a long time for them to get to 130 billion per year in revenue; Chat GPT did not exist five years ago. OpenAI can mess this up by letting somebody else take most of that revenue. The question is who? Google, maybe, but I'm underwhelmed so far. MS seems to want to but is unable to. Apple is flailing. Anthropic seems increasingly like an also-ran.
There is a hardware cost bubble though. I'm betting OpenAI will get a lot more bang for its buck in terms of hardware by 2030. It won't be NVidia taking most of that revenue. They'll have competition and enter a race to the bottom in terms of hardware cost. If OpenAI is burning 130 billion per year, it will probably be getting a lot more compute for it than currently projected. IMHO that's a reasonable cost level given the total addressable market for them. They should be raking in hundreds of billions by then.
> There is a hardware cost bubble though. I'm betting OpenAI will get a lot more bang for its buck in terms of hardware by 2030. It won't be NVidia taking most of that revenue.
Whoever has the most compute will ultimately be the winner. This is why these companies are projecting hundreds of billions in infrastructure spend.
With more compute, you can train better models, serve them to more users, serve them faster. The more users, the more compute you can buy. It's a run away cycle. We're seeing only 3 (4 if you count Meta) frontier LLM providers left in the US market.
Nvidia's margins might come down by 2030. It won't stay in the 70s. But the overall market can expand quicker than Nvidia's profits shrink so that they can be more profitable in 2030 despite lower market share.
Banks get bailed out because if confidence in the banking system disappears and everyone tries to withdraw their money at once, the whole economy seizes up. And whoever is Treasury Secretary (usually an ex Wall Street person) is happy to do it.
I don't see OpenAI having the same argument about systemic risk or the same deep ties into government.
Even in a bank bailout, the equity holders typically get wiped out. It's really not that different from a bankruptcy proceeding, there's just a whole lot more focus on keeping the business itself going smoothly. I doubt OpenAI want to be in that kind of situation.
In 2008 the US government ended up making more money than it spent, though (at least with TARP), because it invested a ton of money after everything collapsed, when assets were extremely cheap. Once the markets recovered, it made a hefty sum selling everything it had bought at the lowest point. Seems like the epitome of buy low, sell high, tbh.
Even if there is a bailout, will it happen in time? Once confidence is lost, it is lost, and valuations have already dropped. A bailout would just mean that whoever put in money ends up as the bag holder of something now worth a lot less.
Banks needed a bailout to keep lending money. The auto industry needed one to keep employing a lot of people. AI doesn't employ that many.
I just don't believe bailout can happen before it is too late for it to be effective in saving the market.
Not really. It was not about stocks. The collapse of insurance companies was at the core of the 2008 crisis.
The same can happen now on the side of private credit that gradually offloads its junk to insurance companies (again):
> As a result, private credit is on the rise as an investment option to compensate for this slowdown in traditional LBO (Figure 2, panel 2), and PE companies are actively growing the private credit side of their business by influencing the companies they control to help finance these operations. Life insurers are among these companies. For instance, KKR’s acquisition of 60 percent of Global Atlantic (a US life insurer) in 2020 cost KKR approximately $3 billion.
What does it mean for the AI bubble to pop? Everyone stops using AI en masse and we go back to the old ways? Cloud based AI no longer becomes an available product?
I think it mostly just means a few hundred billion dollars of value wiped from the stock market - all the models that have been trained will still exist, as well as all the datacentres, even if the OpenAI entity itself and some of the other startups shut down and other companies else get their assets for pennies on the dollar.
But it might mean that LLMs don't really improve much from where they are today, since there won't be the billions of dollars to throw at training for small incremental improvements that consumers mostly don't care to pay anything for.
On the radio they mentioned that the total global chocolate market is ~$100B; I googled it when I got home and it seems to be about ~$135B. Apparently that is all chocolate, everywhere. OpenAI's valuation is about $500B, maybe going up to something like $835B.
I'd love to see the rationale that OpenAI (not "AI" everywhere) is more valuable than chocolate globally.
Ignoring that those numbers aren't directly comparable, it did make me wonder, if I had to give up either "AI" or chocolate tomorrow, which would I pick?
Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.
OpenAI has alternatives, but also I do spend more money on OpenAI than I do on chocolate currently.
I am just trying to help you write better. Your writing says "if I had to give up either AI or chocolate [...] I would probably choose AI". However, your language and intent seems to be that you would give up chocolate.
If you really wanted to know you could stop eating chocolate or stop using ai and see if you break. Or do both at different times and see how long you last without one or the other.
The comparison to railroad bubble economics is apt. OpenAI's infrastructure costs are astronomical - training runs, inference compute, and scaling to meet demand all burn through capital at an incredible rate.
What's interesting is the strategic positioning. They need to maintain leadership while somehow finding a sustainable business model. The API pricing already feels like it's in a race to the bottom as competition intensifies.
For startups building on top of LLM APIs, this should be a wake-up call about vendor lock-in risks. If OpenAI has to dramatically change their pricing or pivot their business model to survive, a lot of downstream products could be impacted. Diversifying across multiple model providers isn't just good engineering - it's business risk management.
AI is going to be a highly-competitive, extremely capital-intensive commodity market that ends up in a race to the bottom competing on cost and efficiency of delivering models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
Something nobody's talking about: OpenAI's losses might actually be attractive to certain investors from a tax perspective.

Microsoft and other corporate investors can potentially use their share of OpenAI's operating losses to offset their own taxable income through partnership tax treatment. It's basically a tax-advantaged way to fund R&D - you get the loss deductions now while retaining upside optionality later. This is why the "cash burn = value destruction" framing misses the mark. For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation.

The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?" If you're sitting on a pile of profitable cloud revenue like Microsoft, suddenly OpenAI's burn rate starts looking like a pretty efficient way to minimize your tax bill while getting a free option on the AI leader. This also explains why big tech is so eager to invest at nosebleed valuations. They're not just betting on AI upside, they're getting immediate tax benefits that de-risk the whole thing.
> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation
I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.
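To make that question concrete, here is the arithmetic under the parent's (disputed) assumption that the losses actually flow through and offset other income. The 21% figure is the current US federal corporate rate, used purely for illustration:

```python
def net_at_risk(loss: float, tax_rate: float) -> float:
    """Capital effectively at risk after a loss deduction,
    assuming the loss can offset other taxable income."""
    tax_shield = loss * tax_rate
    return loss - tax_shield

# $10B of losses at a 21% rate: a ~$2.1B shield leaves ~$7.9B at risk.
# A cheaper bet, but the same zero-to-huge range of outcomes.
print(net_at_risk(10e9, 0.21))
```

So the intuition above seems right: the shield discounts the bet by the tax rate; it doesn't change the bet's shape.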
>> For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields
So just a loss for governments, or in other words, socializing the losses.
Amazon already has not been paying any income tax to the EU. There was a lawsuit in Belgium, but Amazon won it in late 2024 since they had a separate agreement with Luxembourg.
Speaking for the EU, all of big tech already avoids taxes one way or another, using either Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon and Microsoft, as far as I can tell) to avoid corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity as "IP rights".
> OpenAI's losses might actually be attractive to certain investors from a tax perspective.
OpenAI is anyway seeking a government bailout for "National Security" reasons. Wow, I used to scoff at "privatize profits, socialize losses", but this now appears to be standard operating procedure in the U.S.
https://www.citizen.org/news/openais-request-for-massive-gov...
So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
this is not accurate. microsoft recognizes openai losses on their income statement, proportionate to their ownership stake. this has created a huge drag on eps, along with a lot more eps volatility than in the past. it's gotten so bad that microsoft now points people to adjusted net income, which is notable as they had always avoided those games. none of this has been welcomed
OpenAI is a corporation, so their losses do not flow up to their owners.
Their investors, if publicly traded like Microsoft do have to take write-downs on their financial statements but those aren't realized losses for tax purposes. The only tax "benefit" Microsoft might get from the OpenAI investment is writing off the amount it invested if/when OpenAI goes bankrupt.
Can you explain it another way? Are you saying that instead of losing 100% they lose 70%, and losing 70% is somehow good? Or are you saying the risk-adjusted returns are then 30% better on the downside than previously thought? Because if you are, I think people here are saying the risk is so high that it's a given they will fail.
While that is an option, it won't cover the share-price hit from the fallout, which would wipe out more than the debt: when the big domino falls, others will follow as market panic spreads.
So we're kind of looking at a bank-run-level event across tech companies if they go broke.
> The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?"
Was that an organic "it's not A, it's B" or synthetic?
It’s hardly a free option; by your numbers it’d be a 20-30% discount.
> Something nobody's talking about
Nobody is talking about this because it's not a thing.
People here will shit on LLMs all day for being confidently incorrect, then upvote aggressively financially illiterate comments like this.
None of this is how taxes work.
Lmao this is ridiculous. If MSFT really wanted the tax benefits, they should've just wholly acquired OAI long ago to capture the financial synergy you speak of.
There is a pretty big moat for Google: extreme amounts of video data on their existing services and absolutely no dependence on Nvidia and it's 90% margin.
Google has several enviable assets - if not moats, at least redoubts. TPUs, mass infrastructure, and their own cloud services; they own delivery mechanisms on mobile (Android) and on every device (Chrome). And Google and YouTube are still the #1 and #2 most visited websites in the world.
17 replies →
I have yet to be convinced the broader population has an appetite for AI-produced cinematography or videos. Dependence on Nvidia is no more of a liability than dependence on electricity rates; it's not as if it's in Nvidia's interest to see one of its large customers fail. And pretty much any of the other Mag7 companies are capable of developing in-house TPUs and are already independently profitable, so Google isn't alone here.
33 replies →
On paper, Google should never have allowed the ChatGPT moment to happen; how did a then non-profit create what was basically a better search engine than Google?
Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What is worse is that search users are not the customers. The customers of Google Search are the advertisers, and Google will always prioritise the needs of the customers and squander their moats as soon as the threat is gone.
7 replies →
And yes, all their competitors are making custom chips. Google is on TPU v7. Absolutely nobody among their competitors is going to get this right on the first try - Google didn't.
1 reply →
In this case the difference between its and it’s does alter the meaning of the sentence.
Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.
2 replies →
The TAM for video generation isn't as big as the other use cases.
3 replies →
Hasn't it all been scraped by other ai companies already?
Your premise is wrong in a very important way.
The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.
Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.
The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.
Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.
There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.
> Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.
2 replies →
Google’s moat:
Try “@gmail” in Gemini
Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?
If the “moat” is not AI technology itself but merely sufficient other lines of business to deploy it well, then that’s further evidence that venture investments in AI startups will yield very poor returns.
4 replies →
That kind of makes it sound like AI is a feature and not a product, which supports avalys' point.
> Try “@gmail” in Gemini
I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).
What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.
4 replies →
Also, Google doesn't have to finance Gemini using venture capital or debt, it can use its own money.
2 replies →
I tried it, but nothing happened. It said that it sent an email but didn't. What is supposed to happen?
> AI is going to be a highly-competitive, extremely capital-intensive commodity market
It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI o1.
The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at more scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and research related to that, or increasingly powerful GPUs and TPUs.
But these components are applicable to a lot of areas other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.
> that ends up in a race to the bottom competing on cost and efficiency of delivering
One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications for it become more practical.
---
Regarding OpenAI, I think it definitely stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing but expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups still need to establish profitability. I think although initial expectations were for B2C models for these AI companies, most of the ones that survive will do so by pivoting to a B2B structure. I think it's fair to say that most businesses are more inclined to spend money chasing AI than individuals are, and that'll lead to an increase in AI-consulting-type firms.
> in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model
I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years time.
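The "laptop-sized 30B model" claim is mostly a memory question, and the back-of-envelope math is easy to check. This is an illustrative sketch of weight-only footprint (ignoring KV cache and runtime overhead), not a benchmark:

```python
# Rough weight-memory arithmetic for local models: a transformer's weights take
# roughly params * bits_per_param / 8 bytes; quantization (e.g. 4-bit) is what
# makes 30B-class models plausible on consumer RAM.

def weight_footprint_gib(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight storage in GiB, ignoring activations and KV cache."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

for params in (1.7, 30):
    for bits in (16, 4):
        print(f"{params}B @ {bits}-bit: {weight_footprint_gib(params, bits):.1f} GiB")
```

By this estimate a 30B model is ~56 GiB at fp16 but ~14 GiB at 4-bit quantization - which is why the "entry-level laptop in 5-10 years" prediction hinges on quantized models and RAM growth rather than any exotic breakthrough.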
3 replies →
> One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.
I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.
2 replies →
> I don't think we've seen any groundbreaking new research or architecture since the introduction of inference time compute ("thinking") in late 2024/early 2025 circa OpenAI o1
It was model improvements, followed by inference time improvements, and now it's RLVR dataset generation driving the wheel.
I haven't read much about it to understand what's going on, but the development of multi-modal models has also felt like a major step. Being able to paste an image into a chat and have it "understand" the image to a comparable extent to language is very powerful.
> But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off
Citation needed!
The railroads provided something of enduring value. They did something materially better than previous competitors (horsecarts and canals) could. Even today, nothing beats freight rail for efficient, cheap modest-speed movement of goods.
If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.
We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.
It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.
You're absolutely correct! ( ;) )
The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.
Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)
Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help get down the cost of production of tangible items. They won't solve housing.
My best bet is on medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicine controlled by a medical agent.
Yes, exactly -- AI would only be analogous to railroads if passenger trains took you to the wrong location roughly 50% of the time.
I was really hoping, and with a different administration I think there was a real shot, for a huge influx of cash into clean energy infrastructure.
Imagine a trillion dollars (frankly it might be more, we'll see) shoved into clean energy generation and huge upgrades to our distribution.
With a bubble burst all we'd be left with is a modern grid and so much clean energy we could accelerate our move off fossil fuels.
Plus a lot of extra compute, that's less clear of a long term value.
Alas.
> AI is a world-changing technology
As stated in TFA, this simply has not been demonstrated, nor are there any artifacts of proof. It's reasonable to suspect that there is no special apparatus behind the curtain in this Oz.
From TFA: "One vc [sic] says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030."
Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more. Once you fully integrate, you will not switch. Also, being capital intensive is a form of moat.
I think we will end up with a market similar to cloud computing: a few big players with great margins creating a cartel.
>Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. Other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
I thought that, too, but lately I've been using OpenCode with Claude Opus, rather than Claude Code, and have been loving it.
OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.
So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).
10 replies →
A GPT wrapper isn't a moat.
3 replies →
If AI is capable of doing what they claim, then these aren't moats, because they are just one prompt away from being replicated.
All those features are basically prompts in various formats, not much of a moat.
Except most of their product line is oriented towards software development which has historically been dominated by free software. I don't see developers moving away from this tendency and IMO Anthropic will find themselves in a similar position to JetBrains soon enough (profitable, but niche)... assuming things pan out as you describe.
I, personally, use ChatGPT for search more than I do Google these days. More often than not, it gives me more exact results based on what I'm looking for, and it produces links I can visit to get more information. I think this is where their competitive advantage lies, if they can figure out how to monetize it.
We don’t need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn’t seen any decrease in search traffic. Apple also hinted at the fact that it also didn’t see any decreased revenues from the Google Search deal.
AI answers are good enough, and there is a long history of companies who couldn't monetize traffic via ads. The canonical example is Yahoo. Yahoo was one of the most trafficked sites for 20 years and couldn't monetize.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs whether users are using Safari or Chrome. It’s hard to get people to switch
3rd issue: any money that OpenAI makes off search ads, I'm sure Microsoft is going to want their cut. ChatGPT uses Bing.
4th issue: OpenAI's costs are a lot higher than Google's, and they probably won't be able to command a premium in ads. Google has its own search engine, its own servers, its own "GPUs" [sic].
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
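The marginal-cost gap in #5 can be sketched with simple per-query arithmetic. All numbers here are rough public-ballpark assumptions for illustration (a few hundred generated tokens at typical per-million-token API pricing versus a fraction of a cent for a classic search query), not disclosed figures from either company:

```python
# Illustrative marginal-cost comparison: one LLM-generated answer versus one
# traditional search query. Both cost figures are assumptions, not disclosures.

def llm_query_cost(out_tokens: int, price_per_mtok: float) -> float:
    """Marginal cost of one LLM answer, dominated by output-token generation."""
    return out_tokens / 1e6 * price_per_mtok

traditional_search = 0.0002  # assumed: a fraction of a cent per classic query
llm_answer = llm_query_cost(out_tokens=800, price_per_mtok=2.0)

print(f"LLM answer ~${llm_answer:.4f} vs classic search ~${traditional_search:.4f}")
print(f"~{llm_answer / traditional_search:.0f}x the marginal cost per query")
```

Even if the exact inputs are off by a lot, the structural point stands: generation scales with tokens produced, while a cached index lookup does not, so the LLM's marginal cost per answer is a multiple of classic search.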
2 replies →
I personally know people that used ChatGPT a lot but have recently moved to using Gemini.
There’s a couple of things going on but put simply - when there is no real lock in, humans enjoy variety. Until one firm creates a superior product with lock in, only those who are generating cash flows will survive.
OAI does not fit that description as of today.
I'm genuinely curious. Why do you do this instead of Google Searches which also have an AI Overview / answer at the top, that's basically exactly the same as putting your search query into a chat bot, but it ALSO has all the links from a regular Google search so you can quickly corroborate the info even using sources not from the original AI result (so you also see discordant sources from what the AI answer had)?
8 replies →
Like the railroad, internet, electricity, aviation, or car industries before them: they were all indeed the future, and they all peaked (in relative terms) at the very early stages of those industries' futures.
And among them, the overwhelming majority of companies in the sector died. Of the 2000-ish car-related companies that existed in 1925, only 3 survived to today. And none of those 3 ended up a particularly good long-term investment.
This will remain the case until we have another transformer-level leap in ML technology. I don’t expect such an advancement to be openly published when it is discovered.
>That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
I don't know why people always imply that "the bubble will burst" means that "literally all AI will die out and nothing will remain that is of use". The dot-com bubble didn't kill the internet. But it was a bubble and it burst nonetheless, with ramifications that spanned decades.
All it really means when you believe a bubble will pop is "this asset is over-valued and it will soon, rapidly deflate in value to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause for the next few years.
But some people do believe that AI is all hype and it will all go away. It’s hard to find two people who actually mean the same thing when they talk about a “bubble” right now.
I don't think anyone seriously believes AI will disappear without a trace. At the very least, LLMs will remain as the state of the art in high-level language processing (editing, translation, chat interfaces, etc.)
The real problem is the massive over-promises of transforming every industry, replacing most human labor, and eventually reaching super-intelligence based on current models.
I hope we can agree that these are all wholly unattainable, even from a purely technological perspective. However, we are investing as if there were no tomorrow without these outcomes, building massive data-centers filled with "GPUs" that, contrary to investor copium, will quickly become obsolete and are increasingly useless for general-purpose datacenter applications (Blackwell Ultra has NO FP64 hardware, for crying out loud...).
We can agree that the bubble deflating, one way or another, is the best outcome long term. That said, the longer we fuel these delusions, the worse the fallout will be when it does. And what I fear is that one day, a bubble (perhaps this one, perhaps another) will grow so large that it wipes out globalized free-market trade as we know it.
4 replies →
I thought the same.
Have you considered that there was a massive physical infrastructure left behind by the original railroad builders, all compatible with future vehicles? Other companies were able to buy the railroads for low prices and put them to use.
Large Language Models change their power consumption requirements monthly, the hardware required to run them is replaced at a rapid rate too. If it were to stop tomorrow, what would you be left with? Out of date hardware, massively wasted power, and a gigantic hole in your wallet.
You could argue you have the blueprints for LLM building, known solutions, and it could all be rebuilt. The thing is, would you want to rebuild, and invest so much again for arguably little actual, tangible output? There isn't anything you can reuse, like others that came after could reuse the railroads.
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
Practically, what I'm finding is that whenever I ask Claude to search stuff on Reddit, it can't but Gemini can. So I think the practical advantages are where certain organizations have unfair data advantages. What I found out is that LLMs work a lot better when they have quality data.
This is different because now the cat's out of the bag: AI is big money!
I don't expect AGI or Super intelligence to take that long but I do think it'll happen in private labs now. There's an AI business model (pay per token) that folks can use also.
> don't expect AGI or Super intelligence to take that long
I appreciate the optimism for what would be the biggest achievement (and possibly disaster) in human history. I wish other technologies like curing cancer, Alzheimer's, solving world hunger and peace would have similar timelines.
1 reply →
> AI is a world-changing technology, just like the railroads were
This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now - after just three years - around 900 million users on a weekly basis. Even if you think that railroads and AI are equally impactful (I don't; I think AI will be far more impactful), the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.
Railroads carried the goods that everybody used. That’s like almost 100% in a given country.
The pace was slower indeed. It takes time to build the railroads. But at that time advancements also lasted longer. Now it is often cash grabs until the next thing. Not comparable indeed but for other reasons.
> just three years- around 900 million users on a weekly basis.
Well, I rotate about a dozen free accounts because I don't want to send one cent their way, and I imagine I'm not the only one. I do the same for Gemini, Claude, and DeepSeek, so all in all I account for like 50 "unique" weekly users.
Apparently they have about 5% paying customers; the total user count is meaningless - it just tells you how much money they burn and isn't an indication of anything else.
6 replies →
> user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now- after just three years- around 900 million users on a weekly basis.
This doesn't have anything to do with AI itself. Consider Instagram, then TikTok before this, WhatsApp before that, etc. There is a clear adoption-curve timeline: things go worldwide faster. AI is not special in that sense. It doesn't mean AI itself isn't special (arguable; in fact Narayanan argues precisely that it's "normal"), but rather that its adoption pace is precisely on track with everything else.
It is beside the point, but
> I think AI will be far more impactful
is not correct IMO. Those are two very different areas. The impact of railroads on transport and everything transport-related cannot be overstated. By now roads and cars have taken over much of it, and ships and airplanes are doing much more, but you have to look at the context at the time.
Paid user base or free user base? Because free user base on a very expensive product is next to meaningless.
4 replies →
Railroads enabled people and goods to move from one place to another much easier and faster.
AI enables people to... produce even more useless slop than before?
12 replies →
I think we'll find that that asymptote only holds for cases where the end user is not really an active participant in creating the next model:
- take your data
- make a model
- sell it back to you
Eventually all of the available data will have been squeezed for all it's worth, and the only way to differentiate yourself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.
I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.
Yeah except that models don't propel communities towards new heights. They drive towards the averages. They take from the best to give to the worst, so that as much value is destroyed as created. There's no virtuous cycle there...
5 replies →
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result.
I think this conflates together a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.
It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).
Whilst those other layers are useful, none of them are particularly hard to build or rebuild when you have many millions of dollars on hand.
One doesn't need tens of billions for them.
2 replies →
> The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did.
Why "soon"? All your arguments may be correct, but none of them imply when the pending implosion will happen.
Also, what will happen if/when a different lab (or a current lab) develops a new architecture that can actually achieve AGI?
The other, highly invested companies (like OpenAI and Anthropic) may be in for a free fall.
You never want to be left in the wake of "the next big thing".
The "Railway Bubble" analogy is spot on.
As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern. In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
In 1989, we also bet that land prices would outrun gravity forever. But usually, Physics (and Debt) wins in the end. When the railway bubble bursts, only those with "Oxygen" will survive.
I'm aware this means leaving the original topic of this thread, but would you mind giving us a rundown of this whole Japan 1989 thing? I would love to read a first-person account.
6 replies →
> Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
For OpenAI, cash is oxygen too; they're burning it all to reach escape velocity. They could use it to weather the upcoming storm, but I don't think they will.
1 reply →
> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
I think this analysis is too surface-level. We are seeing Google Gemini pull away in terms of image generation, and their access to billions of organic user images gives them a huge moat. And in terms of training data, Google also has a huge advantage there.
The moat is the training data, capital investment, and simply having a better AI that others cannot recreate.
I don't see how Google doesn't win this thing.
If performance indeed asymptotes, and if we are not at the end of silicon scaling or decreasing cost of compute, then it will eventually be possible to run the very best models at home on reasonably priced hardware.
Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.
The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.
This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
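The "eventually the curves cross" argument is just compound growth against a fixed target, which can be sketched in a few lines. The model size, the $2000 machine's memory, and the growth rate below are all made-up illustration inputs, not forecasts:

```python
# Toy model of the "curves cross" claim: if frontier model requirements
# asymptote while memory-per-dollar keeps compounding, a fixed-budget machine
# eventually fits the best model. All inputs are illustrative assumptions.
import math

def years_until_local(model_gb: float, budget_gb_today: float,
                      annual_growth: float) -> float:
    """Years until a fixed-budget machine fits a fixed-size model, assuming
    memory per dollar compounds at annual_growth per year."""
    if budget_gb_today >= model_gb:
        return 0.0
    return math.log(model_gb / budget_gb_today) / math.log(1 + annual_growth)

# e.g. a 400 GB quantized frontier-class model vs a $2000 box with 32 GB today,
# with memory per dollar improving ~30%/year:
print(round(years_until_local(400, 32, 0.30), 1))  # ~9.6 years
```

The point of the sketch is that the conclusion only fails under the two escape hatches the comment names: if model requirements don't asymptote (`model_gb` keeps growing) or if hardware stops getting cheaper (`annual_growth` goes to zero, making the crossover never arrive).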
People seem to have the assumption that OpenAI and Anthropic dying would be synonymous with AI dying, and that's not the case. OpenAI and Anthropic spent a lot of capital on important research, and if the shareholders and equity markets cannot learn to value and respect that and instead let these companies die, new companies will be formed with the same tech, possibly by the same general group of people, thrive, and conveniently leave out the said shareholders.
Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applaud them for their contributions to society.
You weren't around pre-Google, were you? The only thing Google learned from other search engines is what not to do - like ranking based on the number of times a keyword appeared, and not using expensive bespoke servers.
5 replies →
Isn't it really the other way around? Not to say OpenAI and Anthropic haven't done important work, but the genesis of this entire market was the attention paper that came out of Google. We have the private messages inside OpenAI saying they needed to get to market ASAP or Google would kill them.
I like to tell people that all the AI stuff happening right now is capitalism actually working as intended for once. People competing on features and price, where we aren't in a monopoly/duopoly situation yet. Will it eventually go rotten? Probably - but it's nice that right now, for the first time in a while, it feels like companies are actually competing for my dollar.
Aaahh the beautiful free market where the energy prices keep increasing and if it all fails they will be saved by the government that they bribed before. Don't forget the tax subsidies. AKA your money. Pure honest capitalism....
Have you thought about what happens if we get a new improvement in model architecture like transformers that grows the compute needs even further
Or the airlines. Airlines have created a huge amount of economic value that has mostly been captured by other entities.
Very little software has commoditized; I doubt it will be the fate of the AI tech stack.
Perhaps it would be useful to define what we mean by "commoditization" in terms of software. I would say a software product that is not commoditized is one where the brand can still command a premium, which in the world of software generally means people are willing to pay non-zero dollars for it. Once software is commoditized, it generally becomes free or ad-supported, or is bundled with another non-software product or service.

By this standard I would say there are very few non-commoditized consumer software products. People pay for services that are delivered via software (e.g. Spotify, Netflix), but in this case the software is just the delivery mechanism, not the product.

So perhaps one viable path for chatbots to avoid commoditization would be to license exclusive content, but in this scenario the AI tech itself becomes a delivery mechanism, albeit a sophisticated one. Otherwise it seems selling ads is the only viable strategy, and precedent shows that the economics of that only work when there is a near monopoly (e.g. Meta or Google). So it seems unlikely that a lot of the current AI companies will survive.
For me it's clear OpenAI and Anthropic have a lead. I don't buy Gemini 3 being good. It isn't, whatever the benchmarks said. Same for Meta and Deepseek.
Um, Meta didn't achieve the same results yet. And does it matter if they can all achieve the same results, if they all manage high enough payoffs? I think subscription-based income is only the beginning. The next stage is AI-based subcompanies encroaching on other industries (e.g. DeepMind's drug company).
Also, open source models are just months behind.
Deepseek has invested the same amount as OpenAI?
Just in time for a Government guaranteed backstop.
I’m waiting to get an RTX 5090 on the cheap.
A penny saved is a penny earned
Massive upfront costs and second place is just first loser. It’s like building fabs but your product is infinitely copyable. Seems pretty rough.
What exactly is "second" place? No-one really knows what first place looks like. Everyone is certain that it will cost an arm, a leg and most of your organs.
For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and failing.
The rest of us self-hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient, and for now we have all those free offerings to play with to keep us going. Even the subs are so far somewhat reasonable, but we will flee in droves as soon as you try to ratchet up the price.
It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!
I still don't understand how it's world-changing apart from considerably degrading the internet. It's laughable to compare it to railroads.
Translation is a big thing, maybe not the same scale as railroads, but still important. The rest is of dubious economic utility (as in, you can do it with an LLM more easily than without, but if you think a little you could just as well not do it at all without losing anything). On the other hand, disrupting signalling will have pretty long-lasting consequences. People used to assume that a long, formal-sounding text is a signal of seriousness, certainly so if it's personally addressed. Now it's just a sign of sloppiness. School essays are probably dead as a genre (good riddance). Hell, maybe even some edgy, censorable language will enter the mainstream as definite proof of non-LLMness - and stay.
Did you try asking chatgpt to explain?
When it gets a bit better two robots can make four robots and so on to infinity.
AI is capital intensive because autodiff kinda sucks.
This is so obviously right.
I may add that investors are mostly US-centric, and so will the bubble-bursting chaos that ensues.
Eh, I wouldn't be so sure. Chips with brain matter and/or light are on their way, and/or quantum chips; one of those, or even a combination, will give AI a gigantic boost in performance, finally replacing a lot more humans. Whoever implements it first will rule the world.
> chips with brain matter and or light
The... what now?
You seem to have forgotten that the ruling class requires tax payers to fund their incomes. If we're all out of work, there's nobody to buy their products and keep them rich.
Did railroads change the world though?
They only lasted a couple of decades as the main transportation method. I'd say the internal combustion engine was a lot more transformative.
Pretty much every major historical trend of Western societies in the second half of the nineteenth century, from the development of the modern corporation to the advent of total war, was intimately tied to railroad transportation.
Transportation of people, yeah, but it still carries a majority of inter-city freight in North America.
Aside from the fact that freight is still carried by rail whenever possible, railroads changed the world just like vacuum valves did. If not for them, nobody would have invested in developing road transport or transistors.
Railroads built America and won multiple large wars.
Umm, yes? The metro, even if not a big deal in the States, is a small but quiet way it has changed public transport. Plus moving freight, plus people over large distances, plus the bullet train that mixed luxury, speed, and efficiency on trains. All of these are quietly disruptive transformations that I think we all take for granted.
This is why I think China will win the AI race: once AI becomes a commodity, no other country is capable of bringing down manufacturing and energy costs the way China is today. I am also rooting for them to reach parity on chip node size, for the same reason: they can crash the prices of PC hardware.
"AI is going to be a highly-competitive" - In what way?
It is not a railroad, and the railroads did not explode in a bubble (OK, a few early engines did explode, but that is engineering). I think LLM-driven investment in massive DCs is ill advised.
Yes they did, at least twice in the 19th century. It was the largest financial crisis before 1929
Your view is ahistorical.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month.
I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.
The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.
I still find it so fascinating how experiences with these models are so varied.
I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.
There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.
No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.
There is always a comment like this in these threads. It’s just 50-50 whether it’s Claude or OpenAI.
I’m not saying that no company will ever have an advantage. But with the pace of advances slowing, even if others are 6-12 months behind OpenAI, the conclusion is the same.
Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).
For me OpenAI is the worst of all. Claude Code and Gemini Deep Research are much better in terms of quality, while ChatGPT keeps hallucinating and saying “sorry, you’re right”.
Codex is sooo slow, but it is good at planning; Opus is good at coding but not as good at seeing the big picture.
I use both and ChatGPT will absolutely glaze me. I will intentionally say some BS and ChatGPT will say “you’re so right.” It will hilariously try to make me feel good.
But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.
Truthfully I just use both.
AI is turning into the worst possible business setup for AI startups. A commodity that requires huge capital investment and ongoing innovation to stay relevant. There’s no room for someone to run a small but profitable gold mine or couple of oil wells on the side. The only path to survival is investing crazy sums just to stay relevant and keep up. Meanwhile customers have virtually zero brand loyalty so if you slip behind just a bit folks will swap API endpoints and leave you in the dust. It’s a terrible setup business wise.
There’s also no real moat with all the major models converging to be “good enough” for nearly all use cases. Far beyond a typical race to the bottom.
Those like Google with other products will just add AI features and everyone else trying to make AI their product will just get completely crushed financially.
For consumers, the chat history is the moat. Why switch to a different provider for a marginal model improvement when ChatGPT already “knows” you? The value of sticking to a single provider is already there, even with the limited memory features they’ve implemented thus far.
The chat history feature makes the product worse, I had to turn it off.
Do we know as a matter of fact that people don't want to leave ChatGPT for Gemini just because of chat history?
Just wait until you ask chatgpt where the export chat history button is!
There is clearly a very strong moat. OpenAI is close to 1 billion active users on ChatGPT, while Claude barely has any non-business users. Even though Anthropic had better models at different times this year, I never stopped using ChatGPT and paying for Plus.
We just don't know who will win in which area yet. It doesn't mean there is no moat.
I don’t think it’s a question of moat. The usage limits on the chat interface with the more advanced Claude models are brutal. I feel like I can barely start a conversation before I get shut down. However, I switched over to Gemini almost completely and barely ever check in with ChatGPT these days.
OpenAI has close to 1 billion users, but they are mostly free users who will switch provider the moment OpenAI starts charging them or adding ads. Which it will, as OpenAI itself has said it is losing money even on $200 subs. So that number of users is pretty meaningless.
I did, moved off to Claude in 2024 and many others around me did the same. Do you have data for non-business claim besides anecdotal evidence?
If you think of it like cloud, where it's a commodity that reaches competitive prices, then you can use it to build products and applications instead of competing on infrastructure (see also: railroads, optical fiber).
There is tons of money to be made at the application layer, and VCs will start looking at that once the infrastructure layer collapses.
Here's a blog post I wrote about that: https://parsnip.substack.com/p/models-arent-moats
It's a good take. I think both trajectories are occurring simultaneously.
OpenAI challenging Google search is a winner takes all situation, not to mention the vast amounts of user data.
On the other hand, us lesser mortals can leverage AI like a commoditized service to build applications with it.
Not really though. The cloud has some stickiness. It’s pretty hard to move once you’ve settled in. For a lot of AI integrations though it’s just swapping some API endpoints and maybe tweaking the prompting a bit. For probably 95% of AI use cases there almost no barrier to switching.
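To make that "just swapping some API endpoints" point concrete, here's a toy sketch. Everything below (vendor names, URLs, model names) is an invented placeholder, not any real provider's endpoint; the point is only that the prompt and message schema stay identical while two config strings change:

```python
# Toy sketch: switching between chat-completions-style LLM APIs is often
# just a config change. All URLs and model names below are invented
# placeholders, not any vendor's real endpoints.
PROVIDERS = {
    "vendor_a": {"base_url": "https://api.vendor-a.example/v1", "model": "model-a"},
    "vendor_b": {"base_url": "https://api.vendor-b.example/v1", "model": "model-b"},
}

def build_chat_request(provider: str, prompt: str) -> dict:
    """Assemble a request payload; only the two config strings vary by vendor."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Compare that to migrating a database or a fleet of VMs out of a cloud provider: the switching cost for most LLM integrations really is close to zero.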
Well, Claude has the best personality in a field where the rest are in a race to make the most awful personality. That's kind of a moat. The models were smarter too though the others have largely caught up, especially Gemini.
I'll be sad when $20 a month Claude goes away.
Not sure why they put so much investment into videoSlop and imageSlop. Anthropic seems to be more focused at least.
Because almost everyone involved in the AI race grew up in "winner takes all" environments, typical for software, and they try really hard to make that a reality. This means your model should do everything, so as to take 90% of the market, or at least 90% of a specific niche.
The problem is, they can't find the moat, despite searching very hard: whatever you bake into your AI, your competitors will be able to replicate in a few months. This is why OpenAI is striking a deal with Disney, because copyright provides such a moat.
> copyright provides such a moat.
Been saying this since the 2014 Alice case. Apple jumped into content production in 2017. They saw the long-term value of copyright interests.
https://arstechnica.com/information-technology/2017/08/apple...
Alice changed things such that code monkeys' algorithms were not patentable (except in some narrow cases where true runtime novelty can be established). Since the transformers paper, the potential of self-authoring content has been obvious to those who can afford to think about things rather than hustle all day.
Apple wants to sell AI in an aluminum box while VCs need to prop up data center agrarianism; they need people to believe their server farms are essential.
Not an Apple fanboy but in this case, am rooting for their "your hardware, your model" aspirations.
Altman, Thiel, the VC model of make the serfs tend their server fields, their control of foundation models, is a gross feeling. It comes with the most religious like sense of fealty to political hierarchy and social structure that only exists as hallucination in the dying generations. The 50+ year old crowd cannot generationally churn fast enough.
> your competitors will be able to replicate in few months.
Will they really be able to replicate the quality while spending significantly less in compute investment? If not then the moat is still how much capital you can acquire for burning on training?
Striking deals without a proper vision is a waste of resources. And that’s the path OAI is on.
It's also why they bought 40% of the world's RAM supply.
OpenAI is (was?) extremely good at making things that go viral. The successful ones for sure boost subscriber count meaningfully
Studio Ghibli, the Sora app. Go viral, juice the numbers, then turn the knobs down on copyrighted material. Atlas, I believe, was less successful than they would've hoped.
And because of too frequent version bumps that are sometimes released as an answer to Google's launches rather than as a meaningful improvement, I believe they're also having a harder time going viral that way.
Overall, OpenAI throws stuff at the wall and sees what sticks. Most of it doesn't, and gets (semi-)abandoned. But some of it does, and it makes for a better consumer product than Gemini.
It seems to have worked well so far, though I'm sceptical it will be enough for long
Going viral is great when you're a small team or even a million dollar company. That can make or break your business.
Going viral as a billion-dollar company spending upward of $1T is still not sustainable. You can't pay off a trillion dollars on "engagement". The entire advertising industry is "only" worth $1T as is: https://www.investors.com/news/advertising-industry-to-hit-1...
I guess we'd have to see the graph with the evolution of paying customers: I don't see the number of potential-but-not-yet clients being that high, certainly not one order of magnitude higher. And everyone already knows OpenAI, they don't have the benefit of additional exposure when they go viral: the only benefit seems to be to hype up investors.
And there's something else about the diminishing returns of going viral... AI kind of breaks the usual assumptions in software: that building it is the hard part and that scaling is basically free. In that sense, AI looks more like regular commodities or physical products, in that you can't just Ctrl-C/Ctrl-V: resources are O(N) on the number of users, not O(log N) like regular software.
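As a back-of-the-envelope sketch of the scaling difference described above (all the dollar figures are invented for illustration, not real unit economics):

```python
import math

# Classic software: serving cost is dominated by shared infrastructure that
# amortizes across users, so total cost grows slowly with the user base.
def classic_serving_cost(users: int, infra_unit: float = 10_000.0) -> float:
    return infra_unit * math.log2(max(users, 2))   # roughly O(log N)

# LLM inference: every request burns GPU time, so total cost grows
# roughly linearly with the user base.
def inference_serving_cost(users: int, cost_per_user: float = 0.50) -> float:
    return cost_per_user * users                   # roughly O(N)

# Growing from 1k to 1M users: ~2x total cost for the classic model,
# 1000x for the inference model.
for n in (1_000, 1_000_000):
    print(n, round(classic_serving_cost(n), 2), inference_serving_cost(n))
```

The exact functional forms are hand-waved, but the qualitative gap is the commenter's point: with per-request compute, user growth stops being nearly free.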
Selling a bunch of $20 a month subscriptions isn’t going to make a dent in OpenAI losses. Going viral for a day or two doesn’t help.
Normal people are already getting tired of AI Slop
Because as with the internet 99% of the usage won’t be for education, work, personal development, what have you. It will be for effing kitten videos and memes.
That’s an unusual way of saying uh…adult entertainment
Are the posters of effing kitten videos a customer base with a significant LTV?
(The obvious well-paying market would be erotic / furry / porn, but it's too toxic to publicly touch, at least in the US.)
If only 99% of the Internet was kitten videos and memes
It is a matter of who will actually pay for compute. Is it people who care about work, or entertainment?
Even if developers are 1:1000 of your users, I'm going to guess that ratio shifts a lot when you look at subscribers.
Because OpenAI stands for AI leader.
If Gemini can create or edit an image, ChatGPT needs to be able to do this too. Who wants to copy&paste prompts between AI agents?
Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.
OpenAI is also considerably bigger than Anthropic and is known as a generic "helper". Anthropic probably saw the benefit of being more focused on developers, which allows it to stay in the game longer with the amount of money it has.
> Because OpenAI stands for AI leader.
It'll just end up spreading itself too thin and be second or third best at everything.
The 500lb gorilla in the room is Google. They have endless money and maybe even more importantly they have endless hardware. OpenAI are going to have an increasingly hard time competing with them.
That Gemini 3 is crushing it right now isn't the problem. It's Gemini 4 or 5 that will likely leave them in the dust for the general use case, meanwhile specialist models will eat what remains of their lunch.
> Who wants to copy&paste prompts between ai agents?
An AI!
The specialist vs generalist debate is still open. And for complex problems, sure, having a model that runs on a small galaxy may be worth it. But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
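The "fleet of tailor-made smaller models behind an agent" idea can be sketched as a simple router. Everything here is invented for illustration (the model names and the keyword classifier are placeholders; a real system would route on embeddings or a small classifier model):

```python
# Invented specialist registry: cheap, tailor-made models per task type.
SPECIALISTS = {
    "code": "tiny-coder-7b",
    "math": "tiny-mather-7b",
    "translation": "tiny-translator-7b",
}

# Crude keyword classifier, purely for illustration.
KEYWORDS = {
    "code": ("function", "bug", "compile"),
    "math": ("integral", "prove", "equation"),
    "translation": ("translate", "french", "german"),
}

def route(prompt: str, fallback: str = "big-generalist") -> str:
    """Pick the matching specialist if one exists, else fall back to the big model."""
    lowered = prompt.lower()
    for task, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return SPECIALISTS[task]
    return fallback
```

The design bet being debated is exactly this: most traffic gets handled by the cheap specialists, and the expensive generalist only sees the long tail.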
>Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.
I think you are confusing generation with analysis. As far as I am aware, your model does not need to be good at generating images to be able to decode an image.
I think you're partially right, but I don't think being an AI leader is the main motivation -- that's a side effect.
I think it's important to OpenAI to support as many use-cases as possible. Right now, the experience that most people have with ChatGPT is through small revenue individual accounts. Individual subscriptions with individual needs, but modest budgets.
The bigger money is in enterprise and corporate accounts. To land these accounts, OpenAI will need to provide coverage across as many use-cases as they can so that they can operate as a one-stop AI provider. If a company needs to use OpenAI for chat, Anthropic for coding, and Google for video, what's the point? If Google's chat and coding is "good enough" and you need to have video generation, then that company is going to go with Google for everything. For the end-game I think OpenAI is playing for, they will need to be competitive in all modalities of AI.
Because those and world models are the endgame, way way more than text is.
When they released their first good image model is when they got a new 100 million users in a week.
Because for all the incessant whining about "slop," multimodal AI i/o is incredibly useful. Being able to take a photo of a home repair issue, have it diagnosed, and return a diagram showing you what to do with it is great, and it's the same algos that power the slop. "Sorry, you'll have to go to Gemini for that use case, people got mad about memes on the internet" is not really a good way for them to be a mass consumer company.
Can Claude not do that? I've sent it pictures for simpler things and got answers, usually Id of bugs and plants.
Because there is only so much programmers and companies will pay for AI coders. The big prize is AI-generated TikTok.
The entertainment industry is by far the easiest way to tap into global discretionary income.
But how much more profitable are they? We see revenue but not profits / spending. Anthropic seems to be growing faster than OpenAI did but that could be the benefit of post-GPT hype.
Because the general idea here is that image and video models, when scaled way up, can generalize like text models did[1], and eventually be treated as "world models"[2]: models that can accurately model real-world processes. These "world models" could then be used to train embodied agents with RL in a scalable way[3]. The video-slop and image-slop generators are just a way to take advantage of the current research in world models and get more out of it.
[1] https://arxiv.org/pdf/2509.20328
[2] https://deepmind.google/blog/genie-3-a-new-frontier-for-worl...
[3] https://arxiv.org/pdf/2509.24527
The fact that they do this isn't very bullish for them achieving whatever they define as AGI.
You don't expect AGI to be multi-modal?
I get the allure of the hypothetical future of video slop. Imagine if you could ask the AI to redo lord of the rings but with magneto instead of gandalf. Imagine watching shawshank redemption but in the end we get a "hot fuzz" twist where andy fights everyone. Imagine a dirty harry style police movie but where the protagonist is a xenomorph which is only barely acknowledged.
You could imagine an entirely new cultural engine where entire genres are born off of random reddit "hey have you guys ever considered" comments.
However, the practical reality seems to be that you get TikTok-style shorts that cost a bunch to create and have a dubious grasp on causality, and that have to compete with actual TikTok, a platform that gets its endless content produced for free.
You and I see the TikTok slop. But as that functionality improves, it's going to make its way into the toolchain of every digital image and video editing software in existence, the same way that it's finding its way into programming IDEs. And that type of feature build is worth $. It might be a matter of time until we start seeing major Hollywood movies (for example) doing things that were unthinkable, the same way that CGI revolutionized cinema in the 80s.

Even if it doesn't, from my layman perception, it seems that Hollywood has spent the last ~20 years differentiating itself from the rest of global cinema largely based on a moat built on IP ownership and capital-intensive production value (largely around name-brand actors and expensive CGI). AI already threatens to remove one of those pillars, which I have to think in turn makes it very valuable.
Because their main use is for advertising/propaganda, which is largely videoSlop & imageSlop even without AI.
Outside of this: https://openai.com/index/disney-sora-agreement/ I don't think there has been much of a win for them even in advertising for image/video slop.
Because these are mostly the same players as in the 2010s. So when they can't get more investor money and the hard problems still haven't been cracked, the easiest fallback is the same social media slop they used to become successful 10-15 years prior. Falling back on old ways to maximize engagement and grind out (eventually) ad revenue.
This article doesn’t add anything to what we know already. It’s still an open question what happens with the labs this coming year, but I personally think Anthropic’s focus on coding represents the clearest path to subscriber-based success (typical SaaS) whereas OpenAI has a clear opportunity with advertising. Both of these paths could be very lucrative. Meanwhile I expect Google will continue to struggle with making products that people actually want to use, irrespective of the quality of its models.
I don't. Google has at least a few advantages:
1. Google books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all legally.
2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.
3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.
The difference in model capability seems to be marginal at best, or even in Google's favour.
OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.
And they have hardware as well, and their own cloud platform.
In my view Google is uniquely well positioned because, contrary to the others, it controls most of the raw materials for Ai.
Google's most significant advantage in this space is its organizational experience in providing services at this scale, as well as its mature infrastructure to support them. When the bubble pops, it's not lights-out or permanently degraded performance.
What Google AI products do people not want to use? Gemini is catching up to ChatGPT from a MAU perspective, AI Overviews in search are super popular and staggeringly more used than any other AI-based product out there, Google's AI Mode has decent usage, and Google Lens has surprisingly high usage. These products together dwarf everyone else out there by like 10x.
> ai overviews in search are super popular and staggeringly more used than any other ai-based product out there
This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".
When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.
>Google Lens has surprisingly high usage
I use it several times a day just to change text in image form to text form so you can search it and the like.
It's built into Chrome, but they move the hidden icon about regularly to confuse you. This month you click the URL and it appears underneath, helpfully labeled "Ask Google about this page" so as to give you little idea it's Google Lens.
Is Gemini, as a chatbot, a product that sustains current valuations and investment?
>Gemini is catching up to ChatGPT from a MAU perspective
It is far behind, and GPT hasn't exactly stopped growing either. Weekly Active Users, Monthly visits...Gemini is nowhere near. They're comfortably second, but second is still well below first.
>ai overviews in search are super popular and staggeringly more used than any other ai-based product out there
Is it? How would you even know? It's a forced feature you cannot opt out of or choose not to use. I ignore AI Overviews, but would still count as a 'user' to you.
Where does Google struggle to make products people want to use? Is that a personal opinion?
Bard was a flop. Google Search is losing market share to other LLM providers. Gemini adoption is low; people around me prefer OpenAI because it is good enough and known.
But on the contrary, Nano Banana is very good, so I don't know. And in the end, I'm pretty confident Google will be the AI race winner, because they have the engineers, the tech background and the money. Unless Google AdSense dies, they can continue the race forever.
Antigravity is a flop. I mean, it uses Gemini under the hood.
But you cannot use it with an API key.
If you're on a Workspace account, you can't have a normal individual plan.
You have to take the team plan at $100/month or nothing.
Google's product management tier is beyond me.
What "we" know already is hard to add to, as a forum that has a dozen AI articles a day on every little morsel of news.
>whereas OpenAI has a clear opportunity with advertising.
Personally, having "a clear opportunity with advertising" feels like a last-ditch effort for a company that promised the moon in solving all the hard problems in the world.
There are other avenues of income. You can invade other industries which are slow on AI uptake and build an AI-from-the-ground-up competitor with large advantages over peers. There are hints of this (not AI-from-the-ground-up, but with more AI) with DeepMind's drug research labs. This can be a huge source of income. You can kill entire industries which inevitably cannot incorporate AI as fast as AI companies can internally.
Probably more people use Google's AI than anything else. Every search result has an LLM-generated summary at the top.
Their acquisition of Jony Ive's organization for a ton of money and that creepy webpage https://openai.com/sam-and-jony/ makes me think OpenAI is just racing for headlines and groping in the dark for some magic fairy dust.
ChatGPT isn't bad, I use it for some things / pay for it, but their spend and moves make me think that they don't seem confident in it ...
Is all the doomerism about AI companies not being profitable right? Do the AI companies believe it? Seems like it sometimes.
This is such a bizarre page. Thanks for sharing!
Sam wants so badly for OpenAI to be a proper big tech company, probably one that's culturally more Apple-y than Google/MSFT-y, so I guess they are cargo-culting some parts of Apple. That website reminds me of a very low-quality version of Apple's myth-making à la Think Different. Ive is obviously also a big part of the cargo cult.
https://archive.ph/rHPk3
The best case I can see is they integrate shopping and steal the best high-intent cash cow commercial queries from G. It's not really about AI, it's about who gets to be the next toll road.
Google already puts AI summaries at the top of search. It would be trivial for them to incorporate shopping. And they have infinitely more traffic than OpenAI does. I just don’t see how OpenAI could possibly compete with that. What are you seeing that I’m not?
ChatGPT has already won a lot of people away from Google, like my mum, who now defaults to ChatGPT when she has a question. I was just talking to one of her friends last night, who is in his 90s, and he loves using Perplexity to learn about cooking and gardening.
A lot of people now reach for ChatGPT by default instead of Google, even with the AI summaries. I wonder whether they just prefer the interface of the chat apps to Google that can be a bit cluttered in comparison.
I can see users preferring GPT for big-ticket items like cars, travel or service companies where you don't have a rec and want something a bit better curated than sponsored results. Especially if they improve the integration so you can book your entire itinerary through the chat interface.
The fact is nobody has any idea what OpenAI's cash burn is. Measuring how much they're raising is not an adequate proxy.
For all we know, they could be accumulating capital to weather an AI winter.
It's also worth noting that OpenAI has not trained a new model since GPT-4o (all subsequent models are routing systems and prompt chains built on top of 4o), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
I think you are mixing things up here, and I think your comment is based on the article from SemiAnalysis. [1]
It said: OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome.
However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely have still done a lot of fine-tuning, RLHF, alignment and tool-calling improvements. All of that is training too. And it is totally fine; just look at the great results they got with Codex-high.
If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.
[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
> The fact is nobody has any idea what OpenAI's cash burn is.
Their investors surely do (absent outrageous fraud).
> For all we know, they could be accumulating capital to weather an AI winter.
If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.
[1] Again c.f. fraud
> For all we know, they could be accumulating capital to weather an AI winter.
Right, this is nonsense. Even if investors wanted to be complicit in fraud, it's an insane investment. "Give us money so we can survive the AI winter" is a pitch you might try with the government, but a profit-motivated investor will... probably not actually laugh in your face, but tell you they'll call you and laugh about you later.
The GPT-5 series is a new model, based on the o1/o3 series. It's very much inaccurate to say that it's a routing system and prompt chain built on top of 4o. 4o was not a reasoning model and reasoning prompts are very weak compared to actual RLVR training.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
Prior to 5.2 you couldn't expect to get good answers to questions about anything after March 2024. It was arguing with me that Bruno Mars did not have two hit songs in the last year. It's clear that in 2025 OpenAI used the old 4o base model and tried to supercharge it using RLVR. That had very mixed results.
1 reply →
Didn't they create Sora and other models, and burn so much money on their AI video app? They wanted to make it into a social network, but what ended up happening was that they burned billions of dollars.
I wonder what happens to people who make these hilariously bad business decisions? Like the person at Twitter who decided to kill Vine. Do they spin it and get promoted? Something else?
I'd love a blog or coffee table book of "where are they now" for the director level folks who do dumb shit like this.
1 reply →
Why do you think they have not trained a new model since 4o? You think the GPT-5 release is /just/ routing to differently sized 4o models?
they're incorrect about the routing statement but it is not a newly trained model
> It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
This isn't really accurate.
Firstly, GPT4.5 was a new training run, and it is unclear how many other failed training runs they did.
Secondly "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after gpt4o were all post-trained differently using reinforcement learning. That is a substantial expense.
Finally, it seems like GPT5.2 is a new training run - or at least the training cut off date is different. Even if they didn't do a full run it must have been a very large run.
i'm sure openai and their investors know what the cash burn is. it's also been well reported by The Information with no pushback from the company or investors. they have also reported that openai is forecasting $9b in training compute spending for 2025, up from $3b last year. this more or less lines up with Epoch's estimate that training compute has reliably grown by ~4x per year. the vast majority of that is just from building bigger data centers rather than chip performance improvements. you obviously need to grow revenue pretty quickly to absorb that.
https://www.theinformation.com/articles/openai-says-business...
https://epoch.ai/blog/training-compute-of-frontier-ai-models...
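That growth claim is easy to sanity-check. A rough sketch, using only the figures cited above ($3B/$9B from The Information's reporting, ~4x/year from Epoch's trend estimate; nothing here is an OpenAI disclosure):

```python
# Project training-compute spend forward at a fixed yearly growth multiple.
def project_spend(base_spend_bn: float, growth_per_year: float, years: int) -> list[float]:
    """Projected annual spend ($B) for years 0..years."""
    return [base_spend_bn * growth_per_year ** y for y in range(years + 1)]

# The Information: ~$9B of training compute in 2025, up from ~$3B in 2024.
spend_2024, spend_2025 = 3.0, 9.0
observed_growth = spend_2025 / spend_2024  # 3x year-over-year

# Epoch's ~4x/year trend seeded with the 2024 figure: [3, 12, 48, 192] ($B).
projection = project_spend(spend_2024, 4.0, 3)
```

At either multiple the spend is an order of magnitude higher within two or three years, which is the point: revenue has to grow comparably fast to absorb it.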
>It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4)
At the very least they made GPT 4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be but they made a wrong scaling prediction, people simply weren't ready to pay that much money.
People would have paid it if it was actually significantly better. It was a huge cost increase for a pretty minor performance increase.
they're paying million dollar salaries to engineers and building data centers, it's not a huge mystery where their expenses are
They have not successfully trained a new model since 4o. That doesn’t mean they haven’t burned a pile of cash trying.
I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure.
GPT-5.2 was a new pretrain run, I believe.
lol the typical AI boosters are down voting you.
How are they updating the data then? Wouldn’t the cutoff date be getting further away from today?
RAG? Even for a "fresh" model, there is no way to keep it up to date, so there has to be a mechanism by which to reference eg last night's football game.
They’re just feeding a little bit of slop in every so often. Fine tuning rather than training a new one.
Well we do know their consumption of energy is not insignificant and comes at great cost.
> they could be accumulating capital to weather an AI winter
Doubtful. This would be the very antithesis of the Silicon Valley way.
wasn't 4.5 new
Yes it was, op didn't read the reporting closely enough. It said something to the effect of "Didn't pretrain a new broadly released, generally available model"
1 reply →
Wasn't 4.5 before 4o?
There is no doubt that OpenAI is taking a lot of risks by betting that AI adoption will translate into revenues in the very short term. And that could really happen imo (with a low probability sure, but worth the risk for VCs? Probably).
What OpenAI is promising is mathematically impossible. They know it. The goal is to be too big to fail and get bailed out by US taxpayers, who have been groomed into viewing AI as a cold-war-style arms race that America cannot lose.
> The goal is to be too big to fail and get bailed out by US taxpayers
I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean? OpenAI failing wouldn't mean AI disappears and all of their customers go bankrupt, too. It's not like a bank. If OpenAI became insolvent or declared bankruptcy, their intellectual property wouldn't disappear or become useless. Someone would purchase it and run it again under a new company. We also have multiple AI companies, and switching costs are not that high for customers, although some adjustment is necessary when changing models.
I don’t even know what people think this is supposed to mean. The US government gives them money for something to prevent them from filing for bankruptcy? The analogy to bank bailouts doesn’t hold.
14 replies →
> It's mathematically impossible what OpenAI is promising
Citation is needed
8 replies →
Bailing out OAI would be entirely unnecessary (crowded field) and political suicide (how many hundreds of billions that could have gone to health care instead?)
If it happens in the next 3 years, tho, and Altman promises enough pork to the man, it could happen.
2 replies →
on the one hand, i understand you are making a stylized comment, on the other hand, as soon as i started writing something reasonable, i realized this is an "upvote lame catastrophizing takes about" (checking my notes) "some company" thread, which means reasonable stuff will get downvoted... for example, where is there actual scarcity in their product inputs? for example, will they really be paying retail prices to infrastructure providers forever? is that a valid forecast? many reasonable ways to look at this. even if i take your cynical stuff at 100% face value, the thing about bailouts is that they're more complicated than what you are saying, but your instinct is to say they're not complicated, "grooming" this and "cold war" that, because your goal is to concern troll, not advance this site's goal of curiosity...
1 reply →
Unlikely, Elon bought the presidency and owns a competitor.
Apparently we all have enough money to put it into OpenAI.
Some players have to play, like Google; some players want to play, like the USA vs. China.
Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.
> Some players have to play, like google
I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.
1 reply →
Correction: OpenAI investors do take that risk. Some of the investors (e.g. Microsoft, Nvidia) dampen that risk by making such investment conditioned on boosting the investor's own revenue, a stock buyback of sorts.
It is a large spinning plate that can only keep spinning with more money, so the plate gets bigger and bigger, with everyone betting that it will carry on spinning by itself once it has become too big to fail: the fallout, the impact on the stock market and on other companies, would wipe out more than the sum of their debts. It's kind of at that stage now; when one domino falls, the impact on the others will follow.
Just a case of too many companies having skin in OpenAI's game for it to be allowed to fail now.
A second, less likely bubble: IP rights enforcement. While the existing content hosts might have a neatly sewn-up content agreement with their users such that all their group chats and cat photos can be used for training, I am a lot less confident that OAI came by its training data legitimately.
(Adjacent to this is how crazy it was that Meta were accused of torrenting ebooks. Did they need them for the underlying knowledge? I can't imagine they needed them for natural language examples.)
OpenAI has #5 traffic levels globally. Their product-market fit is undeniable. The question is monetization.
Their cost to serve each request is roughly 3 orders of magnitude higher than conventional web sites.
While it is clear people see value in the product, we only know they see value at today’s subsidized prices. It is possible that inference prices will continue their rapid decline. Or it is possible that OAI will need to raise prices and consumers will be willing to pay more for the value.
It's easy to get product-market fit when you give away dollars for the price of pennies.
Yes, but that is the standard methodology for startups in their boost phase. Burn vast piles of cash to acquire users, then find out at the end if a profitable business can be made of it.
7 replies →
Only in as much as their product is a pure commodity like oil. Like yes it’s trivial to get customers if you sell gas for half the price, but I don’t think LLMs are that simple right now. ChatGPT has a particular voice that is different from Gemini and Grok.
That's a cool-sounding phrase you heard somewhere; unfortunately, it's not related to this at all.
Does that cost-to-serve multiplier stay the same when conventional sites are forced to shovel AI into each request? e.g. the new Google search
it's a simple problem really. what is actually scarce?
a spot on the iOS home screen? yes.
infrastructure to serve LLM requests? no.
good LLM answers? no.
The Economist can't tell the difference between scarcity and real scarcity.
it is extremely rare to buy a spot on the iOS home screen, and the price for that is only going up - think of the trend of values of tiktok, whatsapp and instagram. that's actually scarce.
that is what openai "owns." you're right, #5 app. you look at someone's home screen, and the things on it are owned by 8 companies, 7 of which are the 7 biggest public companies in the world, and the 8th is openai.
whereas infrastructure does in fact get cheaper. so does energy. they make numerous mistakes - you can't forecast retail prices Azure is "charging" openai for inference. but also, NVIDIA participates in a cartel. GPUs aren't actually scarce, you don't actually need the highest process nodes at TSMC, etc. etc. the law can break up cartels, and people can steal semiconductor process knowledge.
but nobody can just go and "create" more spots on the iOS home screen. do you see?
depends if they can monetize that spot. So either ads or subscription. It is as yet unclear whether ads/subscription can generate sufficient revenue to cover costs and return a profit. Perhaps 'enough ads' will be too much for users to bear, perhaps 'enough subscription' will be too much for users to afford.
1 reply →
I think a super important aspect that people are overlooking is that every VC wants to invest in the next "big" AI company, and the probability is in your favor if you only give funding to AI companies, because any one of them could be the next big thing. I think, with a downturn of VC investment, we will see some more investment in companies that aren't AI native, but use AI as a tool in the toolbox to deliver insights.
The bubble is not in question. What's in question is how big the pop will be.
Archive/Paywall: <https://archive.is/rHPk3>
thank you!
Personally I use ChatGPT a lot, it is a wonderful service.
I use it in conjunction with Claude. I’ve gotten pretty good results using both of them in tandem.
However, on principle I prefer to self-host. I wonder whether an OpenAI implosion would generate basement-level prices on useful chips? Ideally I want to run my own LLM and train it on my data.
For what I use them for, the LLM market has become a two player game, and the players are Anthropic and Google. So I find it quite interesting that OpenAI is still the default assumption of the leader.
Only in HN and some reddit subs I even see the name claude. In many countries AI=ChatGPT.
And at one point in the 90s, Internet=Netscape Navigator.
I see Google doing to OpenAI today what Microsoft did to Netscape back then, using their dominant position across multiple channels (browser, search, Android) to leverage their way ahead of the first mover.
1 reply →
From what I've seen, 99% of people are using the free version of ChatGPT. Those who are using Claude are on the subscription, very often the $100/month one.
ChatGPT dominates the consumer market (though Nano Banana is singlehandedly breathing some life into consumer Gemini).
A small anecdote: when ChatGPT went down a few months ago, a lot of young people (especially students) just waited for it to come back up. They didn't even think about using an alternative.
When ChatGPT starts injecting ads or forcing payment or doing anything else that annoys its userbase then the young people won't have a problem looking for alternatives
This "moat" that OpenAI has is really weak
1 reply →
That's pretty nuts. With the models changing so much and so often, you have to switch it up sometimes just to see what the other company is offering.
4 replies →
The consumer market is a loss leader.
codex cli with gpt-5.2-codex is so reliably good, it earns the default position in my book. I had cancelled my subscription in early 2024 but started back up recently and have been blown away at how terse, smart, and effective it is. Their CLI harness is top-notch and it manages to be extremely efficient with token usage, so the little plan can go for much of the day. I don’t miss Claude’s rambling or Gemini’s random refactorings.
Interestingly, Claude is so far down in traffic that it's below things like CharacterAI. It may be the best model, but the split is something like 70% ChatGPT, 10% Gemini, and only 1% or so Claude.
What do you use them for?
GPT5.2 Codex is the best coding model right now in benchmarks. I use it exclusively now.
How much of this capital is cheap printed credit from the covid era?
In a parallel universe, governments invest in the compute/datacenters (read: infra), and let model makers compete on the same playing field.
I’d rather stay far away from this parallel universe.
Why would you want my money to be used to build a datacenter that won't benefit me? I might use an LLM once a month; many people never use one.
Let the ones who use it pay for it.
You are already paying for several national lab HPC centers. These are used for government/university research - no idea if commercial interests can rent time on them. The big ones are running weather, astronomy simulations, nuclear explosions, biological sequencing, and so on.
3 replies →
if datacenters are built by the government, then i think it's fair to assume there will be some level of democratic control of what those datacenters will be used for.
3 replies →
Sure. Same for healthcare and education right? If you don't have a child or need medical attention, why should you pay for them?
That's like every government initiative. Same as healthcare? School? I mean if you don't have children why do you pay taxes... and roads if you don't drive? I mean the examples are so many... why do you bring this argument that if it doesn't benefit you directly right now today, it shouldn't be done?
10 replies →
If that did happen, how would the government then issue those resources?
OpenAI ask for 1m GPUs for a month, Anthropic ask for 2m, the government data center only has 500,000, and a new startup wants 750,000 as well.
Do you hand them out to the most convincing pitch? Hopefully not to the biggest donor to your campaign.
Now the most successful AI lab is the one that's best at pitching the government for additional resources.
UPDATE: See comment below for the answer to this question: https://news.ycombinator.com/item?id=46438390#46439067
National HPC labs have been over subscribed for decades with extensive queueing/time sharing allocation systems.
It would still likely devolve into most-money-wins, but it is not an insurmountable political obstacle to arrange some sort of sharing.
Edit: I meant to say over subscribed, not over provisioned. There are far more jobs in the queue than can be handled at once
1 reply →
Well, people bid for USA government resources all the time. It's why the Washington DC suburbs have some of the country's most affluent neighborhoods among their ranks.
In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.
The question of public and private distinctions in these various schemes are very interesting and imo, underexplored. Especially when you consider how these private LLMs are trained on public data.
In a completely alternate dimension, a quarter of the capital being invested in AI literally just goes towards making sure everyone has quality food and water.
I'd rather live in a universe where that money is taken out of the military budget.
1 reply →
Without capital invested in the past we wouldn’t have almost anything of modern technology. That has done a lot more for everyone, including food affordability, than actually simply buying food for people to eat once.
As we all know, throwing money at a problem solves it completely. Remember how Live Aid saved Ethiopia from starvation and it never had any problems again?
Datacenters are not a natural monopoly, you can always build more. Beyond what the public sector itself might need for its own use, there's not much of a case for governments to invest in them.
That could make sense in some steady state regime where there were stable requirements and mature tech (I wouldn’t vote for it but I can see an argument).
I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?
Given where we are posting, the motive is obvious: to socialize the riskiest part of AI while the investors retain all the potential upside. These people have no sense of shame so they'll loudly advocate for endless public risk and private rewards.
In a better parallel universe, we found a different innovation: one that doesn't rely on brute-force computation to train systems that compute things unreliably and inefficiently, and that still leaves us able to understand what we're building.
> governments invest in the compute/datacenters (read: infra), and let model makers compete on the same playing field
Hmm, what about member-owned coöperatives? Like what we have for stock exchanges.
That sounds like a nightmare.
why would they do that? not to mention governments are already doing that indirectly by taking equity stakes in some of the companies.
Same reason they should own access lines: everyone needs rackspace/access, so it should be treated like a public service to avoid rent seeking. Having a data center in every city where all of the local lines terminate could open the doors to a lot of interesting use cases, really help with local resiliency/decentralization efforts, and provide a great alternative to cloud providers that doesn't break the bank.
3 replies →
Socialized losses, private profits
Do you like this idea?
That seems like a terrible idea. Data centers aren’t a natural monopoly. Regulate the externalities and let it flourish.
Don't forget that the internet exists because of government agencies.
2 replies →
That's malinvestment. Too much overhead, disconnected from long term demand. The government doesn't have expertise, isn't lean and nimble. What if it all just blows over? (It won't? But who knows?)
Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.
The government has better things to do. Geopolitics, trade, transportation, resources, public health, consumer safety, jobs, economy, defense, regulatory activities, etc.
Prediction: on this thread you'll get a lot of talk about how government would slow things down. But when the AI bubble starts to look shaky, see how fast all the tech bros line up for a "public private partnership."
Burn rate often gets treated as a hard signal, but it is mostly about expectations. Once people get used to the idea of cheap intelligence, any slowdown feels like failure, even if the technology is still moving forward. That gap is usually where bubbles begin.
In other words, OpenAI is a freight train without functioning brakes. There I said it.
And thats brave.
Surprised they burn cash advertising on Reddit to “make a mini me” version of yourself where you hold your body in your hand. What a waste of AI lol
I believe in OpenAI but they were running turn your cat into studio ghibili ads on Reddit up until a week or 2 ago.
They need a better marketing strategy.
Why does the article use words like burn and incinerate, implying that OpenAI is somehow making money disappear or something? They're spending it; someone is profiting here, even if it's not OpenAI. Is it all Nvidia?
Because those are normal idioms in financial analysis and reporting.
Because one typically expects a return on investment at that level of spending. Not only have they run at a loss for years, their spending is expected to increase, with no path to profitability in sight.
Tbh this whole AI thing probably has a negative ROI, but it will pay off anyway. Even if the debt is written off, the AI enhancements that this whole misallocation of capital created are now "sunk" and are here to stay: the assets and techniques have been built.
There's an element of arms race between players, and the genie is out of the bottle now so have to move with it. Game theory is more driving this than economics in the short term.
Marginal gains on top of these investments probably have a ROI now (i.e. new investments from this point).
Not that I disagree, but would it be fair to say that we have seen this before and it turned out OK? Say, Uber? Amazon?
18 replies →
I suspect most of it is going to utilities for power, water and racking.
That being said, if I was Sam Altman I'd also be stocking up on yachts, mansions and gold plated toilets while the books are still private. If there's $10bn a year in outgoings no one's going to notice a million here and there.
How many gold toilets do you need? I mean, I don't even own one.
1 reply →
“Burn rate” is a standard financial term for how much money a startup is losing. If you have $1 cash on hand and a burn rate of $2 a year, then you have six months before you either need to get profitable, raise more money, or shut down.
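The runway arithmetic in that example is just division; a minimal sketch:

```python
def runway_months(cash: float, annual_burn: float) -> float:
    """Months until cash runs out at the current net burn rate."""
    return cash / annual_burn * 12.0

# The example above: $1 on hand, a $2/year burn rate -> 6 months of runway.
print(runway_months(1.0, 2.0))  # 6.0
```

The same function works at any scale, since only the ratio of cash to burn matters.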
Your burn is the money you spend that exceeds the money you earn, see also "burn rate".
> They’re spending it
That's what the words mean in this context.
I don't see a bubble, I see a rapidly growing business case.
MS Office has about 345 million active users. Those are paying subscriptions. IMHO that's roughly the total addressable market for OpenAI for non-coding users. Coding users are another 20-30 million.
If OpenAI can convert double-digit percentages of those onto $20 and $50 per month subscriptions by delivering good enough AI that works well, they should be raking in billions of dollars per month, adding up to something close to the projected 2030 cash burn per year. That would be just subscription revenue. There is also going to be API revenue. And those expensive models used for video and other media creation are going to be indispensable for media and advertising companies, which is yet more revenue.
The office market at $20/month is worth about 82 billion per year in subscription revenue. Add a few premium tiers at $50/month and $100/month, and that projected 130 billion per year of cash burn in 2030 suddenly seems quite reasonable.
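A quick check of that arithmetic (every input here is this comment's assumption, not a reported figure, and the 80/20 tier split is purely hypothetical):

```python
def annual_revenue(subscribers: int, monthly_price: float) -> float:
    """Annual subscription revenue in dollars."""
    return subscribers * monthly_price * 12

office_scale_users = 345_000_000
# Everyone on a $20/month tier: ~$82.8B/year, the figure cited above.
base_tier = annual_revenue(office_scale_users, 20.0)

# A hypothetical 80/20 split between $20 and $50 tiers lands near $108B/year,
# within sight of the projected $130B/year of cash burn.
blended = (annual_revenue(int(office_scale_users * 0.8), 20.0)
           + annual_revenue(int(office_scale_users * 0.2), 50.0))
```

So the claim only needs full conversion of an Office-sized user base, which is of course the heroic assumption doing all the work.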
I've been quite impressed with Codex in the last few months. I only pay $20/month for that currently. If that goes up, I won't lose sleep over it, as it is valuable enough to me. Most programmers I know are on some paid subscription to that, Anthropic's Claude, or similar. Quite a few spend quite a bit more than that. My ChatGPT Plus subscription feels like really good value to me currently.
Agentic tooling for business users is currently severely lacking in capability. Most of the tools are crap. You can get models to generate text. But forget about getting them to format that text correctly in a word processor. I'm constantly fixing bullets, headings and what not in Google docs for my AI assisted writings. Gemini is close to ff-ing useless both with the text and the formatting.
But I've seen enough technology demos of what is possible to know that this is mostly a UX and software development problem, not a model quality problem. It seems companies are holding back from fully integrating things mainly for liability reasons (I suspect). But unlocking AI value like that is where the money is. Something similarly useful as codex for business usage with full access to your mail, drive, spread sheets, slides, word processors, CRMs, and whatever other tools you use running in YOLO mode (which is how I use codex in a virtual machine currently, --yolo). That would replace a shit ton of manual drudgery for me. It would be valuable to me and lots of other users. Valuable as in "please take my money".
Currently doing stuff like this is a very scary thing to do because it might make expensive/embarrassing mistakes. I do it for code because I can contain the risk to the vm. It actually seems to be pretty well behaved. The vm is just there to make me feel good. It could do all sorts of crazy shit. It mostly just does what I ask it to. Clearly the security model around this needs work and instrumentation. That's not a model training problem though.
Something like this for business usage is going to be the next step in agent-powered utility that people will pay for, at MS Office levels of users and revenue. Google and MS could do it technically, but they have huge legal exposure via their existing SaaS contracts and they seem scared shitless of their own lawyers. OpenAI doing something aggressive in this space in the next year or so is what I'm expecting to happen.
Anyway, the bubble predictors seem to be ignoring the revenue potential here. Could it go wrong for OpenAI? Sure, if somebody else shows up and takes most of the revenue. But I think we're past the point where that revenue looks unrealistic. Five years is a long time for them to get to 130 billion per year in revenue; ChatGPT did not exist five years ago. OpenAI can mess this up by letting somebody else take most of that revenue. The question is who? Google, maybe, but I'm underwhelmed so far. MS seems to want to but is unable to. Apple is flailing. Anthropic seems increasingly like an also-ran.
There is a hardware cost bubble though. I'm betting OpenAI will get a lot more bang for its buck in terms of hardware by 2030. It won't be Nvidia taking most of that revenue; they'll have competition and enter a race to the bottom in terms of hardware cost. If OpenAI is burning 130 billion per year, it will probably be getting a lot more compute for it than currently projected. IMHO that's a reasonable cost level given the total addressable market for them. They should be raking in hundreds of billions by then.
Whoever has the most compute will ultimately be the winner. This is why these companies are projecting hundreds of billions in infrastructure spend.
With more compute, you can train better models, serve them to more users, serve them faster. The more users, the more compute you can buy. It's a run away cycle. We're seeing only 3 (4 if you count Meta) frontier LLM providers left in the US market.
Nvidia's margins might come down by 2030. It won't stay in the 70s. But the overall market can expand quicker than Nvidia's profits shrink so that they can be more profitable in 2030 despite lower market share.
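A toy illustration of that last point (every number here is invented for the example, not a forecast): absolute profit can grow even while margin and share shrink, provided the market expands faster.

```python
def profit(market_bn: float, share: float, margin: float) -> float:
    """Absolute profit ($B) = market size x market share x margin."""
    return market_bn * share * margin

# Today: dominant share, margin "in the 70s" (toy values).
p_now = profit(100.0, 0.90, 0.75)
# 2030: share and margin both down, but the market 4x larger.
p_2030 = profit(400.0, 0.60, 0.50)
# p_2030 exceeds p_now: the market grew faster than share and margin shrank.
```

The crossover depends entirely on whether market growth outpaces the combined decline in share and margin, which is exactly the bet being described.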
It's like watching a train wreck waiting to happen, while everyone praises the madman who's operating the locomotive at full steam.
I don't think that's diagnosis (as a clinical term); closer to defamation.
Is it necessary to a point you want to make?
You can just point to behavior of a given entity, such as to conclude it's untrustworthy, without the problematic area of armchair psychoanalysis.
I redacted the comment because you’re right. I need a better form to express the point.
1 reply →
2008: US Banks pump stocks -> market correction -> taxpayer bailout
2026: US AI companies pump stocks -> market correction -> taxpayer bailout
Mark my words. OpenAI will be bailed out by US taxpayers.
I'll take that bet.
Banks get bailed out because if confidence in the banking system disappears and everyone tries to withdraw their money at once, the whole economy seizes up. And whoever is Treasury Secretary (usually an ex Wall Street person) is happy to do it.
I don't see OpenAI having the same argument about systemic risk or the same deep ties into government.
Even in a bank bailout, the equity holders typically get wiped out. It's really not that different from a bankruptcy proceeding, there's just a whole lot more focus on keeping the business itself going smoothly. I doubt OpenAI want to be in that kind of situation.
In 2008 the US government ended up making more money than it spent, though (at least with TARP), because it invested a ton of money after everything collapsed, when assets were extremely cheap. Once the markets recovered, it made a hefty sum selling all the derivatives it got at the lowest point. Seems like the epitome of buy low, sell high, tbh.
Even if there is a bailout, will it happen in time? Once confidence is lost, it is lost, and valuations have already dropped. A bailout would just mean that whoever gave the money ends up as the bag holder of something now worth a lot less.
Banks needed a bailout to keep lending money. The auto industry needed one to keep a lot of people employed. AI doesn't employ that many.
I just don't believe a bailout can happen before it is too late to be effective in saving the market.
Not really. It was not about stocks. The collapse of insurance companies was at the core of the 2008 crisis.
The same can happen now on the private credit side, which is gradually offloading its junk to insurance companies (again):
As a result, private credit is on the rise as an investment option to compensate for this slowdown in traditional LBO (Figure 2, panel 2), and PE companies are actively growing the private credit side of their business by influencing the companies they control to help finance these operations. Life insurers are among these companies. For instance, KKR’s acquisition of 60 percent of Global Atlantic (a US life insurer) in 2020 cost KKR approximately $3 billion.
https://www.imf.org/en/Publications/global-financial-stabili...
I don't think so.
Elon owns a competitor and bought the White House.
What does it mean for the AI bubble to pop? Everyone stops using AI en masse and we go back to the old ways? Cloud-based AI is no longer available as a product?
I think it mostly just means a few hundred billion dollars of value wiped from the stock market. All the models that have been trained will still exist, as will all the datacentres, even if the OpenAI entity itself and some of the other startups shut down and other companies get their assets for pennies on the dollar.
But it might mean that LLMs don't really improve much from where they are today, since there won't be the billions of dollars to throw at training for small incremental improvements that consumers mostly don't care to pay anything for.
Sounds like a big nothingburger then.
I don’t really have faith the current LLMs will improve dramatically anyway, not without totally new approaches to AI.
It probably means a price correction on cloud based AI, rather than it disappearing entirely.
It's a good question. Sad that you're downvoted.
It happens a lot. People here just want to silence voices that don’t fit their agenda.
I would call it the ELIZA bubble. https://en.wikipedia.org/wiki/ELIZA_effect
The Microsoft effect, everything they touch turns to shit
It’ll be interesting to see if and how the latest release of Gemini fits into the story of this bubble.
On the radio they mentioned that the total global chocolate market is ~100B. I googled it when I was home, and it seems to be about ~135B. Apparently that is all chocolate, everywhere. OpenAI's valuation is about 500B, maybe going up to like 835B.
I'd love to see the rationale that OpenAI (not "AI" everywhere) is more valuable than chocolate globally.
... so crash early 2026?
Wait, aren't you comparing revenue and market cap?
People take old things for granted often. Explains the Coolidge effect, and why plenty of people cheat.
yes. stock v flow error again ("company X cap bigger than country Y GDP" another all too common version).
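A quick sketch of the mismatch, using the thread's rough figures: market cap is a stock (a one-time price on all future profits), while the chocolate number is a flow (sales per year). The revenue multiple below is purely illustrative.

```python
chocolate_sales_per_year = 135e9   # flow: ~$135B of sales each year
openai_market_cap = 500e9          # stock: a one-time price tag

# To compare like with like, capitalize the flow with an assumed
# (illustrative) revenue multiple, turning it into a stock.
revenue_multiple = 2.0
chocolate_as_a_stock = chocolate_sales_per_year * revenue_multiple  # $270B
```

At a 2x multiple the chocolate industry "values" at $270B, still below OpenAI's cap, but the comparison is now at least dimensionally consistent.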
Ignoring that those numbers aren't directly comparable, it did make me wonder, if I had to give up either "AI" or chocolate tomorrow, which would I pick?
Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.
OpenAI has alternatives, but also I do spend more money on OpenAI than I do on chocolate currently.
I am just trying to help you write better. Your writing says "if I had to give up either AI or chocolate [...] I would probably choose AI". However, your language and intent seem to be that you would give up chocolate.
It’s a bit of a weird comparison, AI vs a luxury sweet.
Maybe instead of the chocolate market, look at the global washing machine market of $65 billion.
I’d rather give up both AI and chocolate than my washing machine.
If you really wanted to know, you could stop eating chocolate or stop using AI and see if you break. Or do both at different times and see how long you last without one or the other.
I love AI, and ChatGPT has been transformative for me. But would I give it up for chocolate? I honestly don't think I could.
I spend a lot more time using AI for work than I do eating chocolate
Can't wait for this stupid bubble to finally burst so we can all move on
Well they got $40B more to burn lol
paywall, no upvote
Someone posted already the non-paywall version: https://news.ycombinator.com/item?id=46438679
The comparison to railroad bubble economics is apt. OpenAI's infrastructure costs are astronomical - training runs, inference compute, and scaling to meet demand all burn through capital at an incredible rate.
What's interesting is the strategic positioning. They need to maintain leadership while somehow finding a sustainable business model. The API pricing already feels like it's in a race to the bottom as competition intensifies.
For startups building on top of LLM APIs, this should be a wake-up call about vendor lock-in risks. If OpenAI has to dramatically change their pricing or pivot their business model to survive, a lot of downstream products could be impacted. Diversifying across multiple model providers isn't just good engineering - it's business risk management.
Railroads peaked at 6% of GDP in the US.
AI is at 1% of total US GDP right now.
We have 6x more to go.
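Back-of-envelope for the "6x more to go" claim; the GDP figure is a rough assumption (~$27T for the US), and the shares are the ones stated above.

```python
us_gdp = 27e12            # rough US GDP assumption, ~$27T
ai_share_now = 0.01       # AI today, per the comment
rail_share_peak = 0.06    # railroads at their peak

ai_spend_now = us_gdp * ai_share_now         # ~$270B
ai_spend_at_peak = us_gdp * rail_share_peak  # ~$1.62T implied ceiling
assert round(ai_spend_at_peak / ai_spend_now) == 6
```

That implied ceiling assumes the railroad peak is the right benchmark, which is itself the contested premise.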