Comment by avalys

12 hours ago

AI is going to be a highly-competitive, extremely capital-intensive commodity market that ends up in a race to the bottom competing on cost and efficiency of delivering models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.

The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result: OpenAI, Anthropic, Google, Meta, DeepSeek, etc. There's no evidence of a technological moat or a competitive advantage at any of these companies.

The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.

Something nobody's talking about: OpenAI's losses might actually be attractive to certain investors from a tax perspective. Microsoft and other corporate investors can potentially use their share of OpenAI's operating losses to offset their own taxable income through partnership tax treatment. It's basically a tax-advantaged way to fund R&D - you get the loss deductions now while retaining upside optionality later.

This is why the "cash burn = value destruction" framing misses the mark. For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation. The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?"

If you're sitting on a pile of profitable cloud revenue like Microsoft, suddenly OpenAI's burn rate starts looking like a pretty efficient way to minimize your tax bill while getting a free option on the AI leader. This also explains why big tech is so eager to invest at nosebleed valuations. They're not just betting on AI upside, they're getting immediate tax benefits that de-risk the whole thing.

  • > For the right investor base, $10B in annual losses at OpenAI could be worth $2-3B in tax shields (depending on their bracket and how the structure works). That completely changes the return calculation

    I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.

    • It all depends on the actual numbers. Consider this simplified example: If you are offered a deal that requires you to lay down 10 billion today and it has a 5% chance to pay out 150 billion tomorrow, your accountants will tell you not to take this deal because your expected return is -2.5 billion. But if you can offset 3 billion in cost to the tax payer, your expected return suddenly becomes $500 million, making it a good deal that you should take every time.
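
      To make the arithmetic concrete, here is the same calculation as a quick sketch (all numbers are the hypothetical ones above, not real figures):

```python
# Hypothetical numbers from the example above: a $10B stake, a 5% chance
# of a $150B payout, and a possible $3B offset from tax loss deductions.
def expected_return(stake, p_win, payout, tax_shield=0.0):
    """Expected value: win probability times payout, minus the
    effective (after-shield) cost of the stake."""
    return p_win * payout - (stake - tax_shield)

no_shield = expected_return(10e9, 0.05, 150e9)         # about -2.5 billion
with_shield = expected_return(10e9, 0.05, 150e9, 3e9)  # about +0.5 billion
print(no_shield, with_shield)
```

      Swap in your own probability and shield estimates; the point is only that a tax shield can flip the sign of the expected return.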

      4 replies →

    • That just doesn't sound right. This kind of thought process only works if you believe you are guaranteed more than that the next year. It only works in crony capitalism, where your friends in government put money in your pockets. That's where we are right now, but it's definitely not sustainable or something to aspire to.

  • Amazon already has not been paying any sort of income tax to the EU. There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.

    Speaking for EU, all big tech already not paying taxes one way or another, either using Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) and Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. Simply possible because all the earnings go back to the U.S. entity in terms of "IP rights".

    • The EU doesn't collect income/corporate tax; the individual countries do.

      These big corps use holding companies in low-tax jurisdictions like Ireland and Luxembourg, funnel all their EU subsidiaries’ revenues there, and end up paying zero tax in the individual EU countries.

      This system is actually legal; EU lawmakers should pass laws to prevent it.

    • > Amazon already has not been paying any sort of income tax to the EU.

      That should be expected, because

      https://european-union.europa.eu/priorities-and-actions/acti...

      > The EU does not have a direct role in collecting taxes or setting tax rates.

      > There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.

      Dec 2023.

      > Speaking for EU, all big tech already not paying taxes one way or another, either using Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) and Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. Simply possible because all the earnings go back to the U.S. entity in terms of "IP rights".

      Ireland (due to pressure from EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figure 5 and 6 in link below.

      https://budgetmodel.wharton.upenn.edu/issues/2024/10/14/the-...

      1 reply →

  • > OpenAI's losses might actually be attractive to certain investors from a tax perspective.

    OpenAI is anyway seeking a government bailout for "National Security" reasons. Wow, I used to scoff at "privatize profits, socialize losses", but this now appears to be standard operating procedure in the U.S.

    https://www.citizen.org/news/openais-request-for-massive-gov...

    So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)

    • There's already a lot that the US taxpayer is on the hook for that's a lot less valuable than a bet on the next big thing in software, productivity, and warfare.

      It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test, and it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.

      7 replies →

  • > The real question isn't "can OpenAI justify its valuation" but rather "what's the blended tax rate of its investor base?"

    Was that an organic "it's not A, it's B" or synthetic?

  • Can you explain it another way? Are you saying that instead of losing 100% they lose 70%, and losing 70% is somehow good? Or are you saying the risk-adjusted returns are then 30% better on the downside than previously thought? Because if you are, I think people here are saying the risk is so high that it is a given they will fail.

  • Whilst that is an option, it won't cover the share-price hit from the fallout, which would wipe out more than the debt: when the big domino falls, others will follow as the market panic spreads.

    So we're kinda looking at a bank-run-level event across tech companies if they go broke.

  • It’s hardly a free option; by your numbers it’d be a 20-30% discount.

    • Sure but if there's no moat would you rather pay 100% or 80% until the credits run out? You reap the 100% spend in the meantime. Not everyone even has the no moat discount.

  • Lmao, this is ridiculous. If MSFT really wanted the tax benefits, it should’ve just wholly acquired OAI long ago to capture the financial synergy you speak of.

There is a pretty big moat for Google: extreme amounts of video data on their existing services and absolutely no dependence on Nvidia and its 90% margins.

  • Google has several enviable advantages: if not moats, at least redoubts. TPUs, massive infrastructure and their own cloud services, plus delivery mechanisms on mobile (Android) and on every device (Chrome). And Google and YouTube are still the #1 and #2 most visited websites in the world.

    • Not to mention security. I'd trust Google more not to have a data breach than OpenAI / whomever. Email accounts are hugely valuable, but I haven't seen a Google data breach in the 20+ years I've been using them. This matters because I don't want my chats out there in public.

      Also integration with other services. I just had Gemini summarize the contents of a Google Drive folder and it was effortless & effective

      8 replies →

    • Don't forget the other moat.

      While their competitors have to deal with actively hostile attempts to stop scraping training data, in Google's case almost everyone bends over backwards to give them easy access.

      2 replies →

    • The biggest moat is the amount of money. Google has infinite amounts of money they print out of thin air (ads). They don't need complex, entangled schemes with circular debts to prop up their operations.

    • They also have one of the biggest negatives in that they abandon almost everything they build, so it’s hard to get invested in their products.

      I agree with the rest though

      1 reply →

  • I have yet to be convinced the broader population has an appetite for AI-produced cinematography or videos. Dependence on Nvidia is no more of a liability than dependence on electricity rates; it's not as if it's in Nvidia's interest to see one of its large customers fail. And pretty much any of the other Mag7 companies are capable of developing in-house TPUs and are already independently profitable, so Google isn't alone here.

    • The value of YouTube for AI isn't making AI videos, it's that it's an incredibly rich source for humanity's current knowledge in one place. All of the tutorials, lectures, news reports, etc. are great for training models.

      9 replies →

    • If you think they are going to catch up with Google's software and hardware ecosystem on their first chip, you may be underestimating how hard this is. Google is on TPU v7. Meta has already tried with MTIA v1 and v2; those haven't been deployed at scale for inference.

      3 replies →

    • It's in Nvidia's interest to charge the absolute maximum they can without their customers failing. Every dollar of Nvidia's margin is your own lost margin. Utilities don't do that. Nvidia is objectively a way bigger liability than electricity rates.

      5 replies →

    • I think it will be accepted by the broader population. But if generation is easy and cheap, I wonder if there is demand, and I mean total demand in the segment. Will there be enough impressions to go around to actually profit from the content? Especially if storage is also considered.

    • Given that Apple and Coke have both rushed to produce AI slop, and the agreements with Disney, we are going to see a metric fuck-ton of AI-generated cinema in the next decade. The broader population's tastes are absolute garbage when it comes to cinema, so I don't see why you need convincing. 40+ superhero films should be enough.

  • And yes, all their competitors are making custom chips. Google is on TPU v7. Absolutely nobody among their competitors is going to get this right on the first try. Google didn't.

    • A bigger problem for late starters now is that it will be hard to match the performance and cost of Google/Nvidia. It's an investment that had to have started years ago to be competitive now.

  • On paper, Google should never have allowed the ChatGPT moment to happen; how did a then non-profit create what was basically a better search engine than Google?

    Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What's worse is that search users are not the customers. The customers of Google Search are the advertisers, and Google will always prioritise the needs of its customers and squander its moats as soon as the threat is gone.

    • Exactly, Google's business isn't search, it's ads. Is ChatGPT a more profitable system for delivering ads? That doesn't appear so, which means there's really no reason for Google to have created it first.

      3 replies →

    • Think about it in terms of the research they put out into the ether though. The research grows into something viable, they sit back and watch the response and move when it makes sense.

      It's like that old concept of saying something wrong in a forum on purpose to have everyone flame you for being wrong and needing to prove themselves better by each writing more elaborate answers.

      You catch more fish with bait.

  • Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.

    • >Agreed. Even xAI's (Grok's) access to live data on x.com and millions of live video inputs from Tesla is a moat not enjoyed by OpenAI.

      Tesla does not have a live video feed from (every) Tesla car.

  • The TAM for video generation isn't as big as the other use cases.

    • I agree, but isn't the TAM for video generation all of movies, TV, and possibly video games, or all entertainment? That's a pretty big market.

    • What you’re competing for is people’s attention, and the TAM for that is the biggest there is.

> AI is a world-changing technology, just like the railroads were

This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now, after just three years, around 900 million users on a weekly basis. Even if you think that railroads and AI are equally impactful (I don't; I think AI will be far more impactful), the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.

  • Railroads carried the goods that everybody used. That’s like almost 100% in a given country.

    The pace was slower indeed. It takes time to build the railroads. But at that time advancements also lasted longer. Now it is often cash grabs until the next thing. Not comparable indeed but for other reasons.

  • > just three years- around 900 million users on a weekly basis.

    Well, I rotate about a dozen free accounts because I don't want to send one cent their way, and I imagine I'm not the only one. I do the same for Gemini, Claude, and DeepSeek, so all in all I account for like 50 "unique" weekly users.

    Apparently about 5% of their users are paying customers; the total user count is meaningless. It just tells you how much money they burn and isn't an indication of anything else.

    • > I rotate about a dozen of free accounts .. I do the same for gemini, claude and deepseek

      For someone who doesn't like the product and doesn't care about it, you surely make a lot of effort to use it.

      2 replies →

    • I'm going to go out on a limb here and say that users who put that much effort into using this stuff for free, using a dozen different accounts, are very rare.

  • It is beside the point, but

    > I think AI will be far more impactful

    is not correct IMO. Those are two very different areas. The impact of railroads on transport and everything transport-related cannot be overstated. By now roads and cars have taken over much of it, and ships and airplanes are doing much more, but you have to look at the context at the time.

  • Paid user base or free user base? Because free user base on a very expensive product is next to meaningless.

    • It's meaningful because it shows that people like the product a lot, and for a lot of different reasons. There are only a few products that can reach such market penetration, let alone in only three years. As the quality of AI increases, people will quickly realise that they are willing to pay for it as much as they pay for electricity. And the same goes for businesses.

  • Railroads enabled people and goods to move from one place to another much easier and faster.

    AI enables people to... produce even more useless slop than before?

    • At this point I'm taking the word "slop" as a sign meaning "I really didn't think this through and I'm just autocompleting based on a gut feeling and the first word that comes to mind".

      3 replies →

Anthropic is building a moat around its models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more. Once you fully integrate, you will not switch. Being capital-intensive is also a form of moat.

I think we will end up with a market similar to cloud computing: a few big players with great margins forming a cartel.

  • I thought that, too, but lately I've been using OpenCode with Claude Opus, rather than Claude Code, and have been loving it.

    OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.

    So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).

  • >Anthropic is building moat around theirs models with claude code, Agent SDK, containers, programmatic tool use, tool search, skills and more.

    I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting of existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. The other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on this niche, Anthropic's "moat" will disappear.

  • A GPT wrapper isn't a moat.

    • A generic wrapper is not a moat, but the context is. Both the LLM provider and the wrapper provider depend on local context for task activities. The value flows to the context; the LLMs and wrappers are commodities. Whoever sets the prompts stands to benefit, not whoever serves the AI.

Google’s moat:

Try “@gmail” in Gemini

Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?

  • If the “moat” is not AI technology itself but merely sufficient other lines of business to deploy it well, then that’s further evidence that venture investments in AI startups will yield very poor returns.

    • It's funny that a decade ago the exit strategy of many of these startups would have been to get acquired by MSFT / META / GOOG. Now, the regulators have made a lot of these acquisitions effectively impossible for antitrust reasons.

      Is it better for society for promising startups to die on the open market, or get acquired by a monopoly? The third option -- taking down the established players -- appears increasingly unlikely.

      1 reply →

  • > Try “@gmail” in Gemini

    I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).

    What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.

    • > I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible.

      Google is telling you this in about a hundred different popups and inline hints when you use any of its products.

      2 replies →

  • That kind of makes it sound like AI is a feature and not a product, which supports avalys' point.

  • I tried it, but nothing happened. It said that it sent an email but didn't. What is supposed to happen?

  • Also, Google doesn't have to finance Gemini using venture capital or debt, it can use its own money.

> AI is going to be a highly-competitive, extremely capital-intensive commodity market

It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025 with OpenAI's o1.

The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at more scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and research related to that, or increasingly powerful GPUs and TPUs.

But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.

> that ends up in a race to the bottom competing on cost and efficiency of delivering

One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.

> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.

I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications for it become more practical.
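
As a rough sanity check on that laptop claim, a model's weight footprint scales linearly with parameter count and quantization width. Here's a sketch (the quantization levels and the laptop RAM figure are my own assumptions, not from the comment above):

```python
# Back-of-envelope weight footprint for a 30B-parameter model,
# ignoring KV cache and runtime overhead.
def model_size_gb(params, bits_per_weight):
    # params * bits / 8 gives bytes; divide by 1e9 for GB
    return params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_size_gb(30e9, bits):.0f} GB")
# 4-bit quantization (~15 GB of weights) is what would put a 30B model
# within reach of a laptop with 24-32 GB of unified memory.
```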

---

Regarding OpenAI, I think it stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing less than expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups still need to establish profitability. Although initial expectations were for B2C models, I think most of the AI companies that survive will do so by pivoting to a B2B structure. It's fair to say that most businesses are more inclined to spend money chasing AI than individuals are, and that'll lead to an increase in AI-consulting-type firms.

  • > I don't think we've seen any groundbreaking new research or architecture since the introduction of inference time compute ("thinking") in late 2024/early 2025 circa GPT-o4

    It was model improvements, followed by inference time improvements, and now it's RLVR dataset generation driving the wheel.

  • > in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model

    I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling could do on a personal device in two years’ time.

    • Very interested in this! I'm mainly a ChatGPT user; for me, o3 was the first sign of true "intelligence" (not 'sentience' or anything like that, just actual, genuine usefulness). Are these models at that level yet? Or are they o1? Still GPT4 level?

      1 reply →

  • > One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.

    I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.

    I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.

    • I don't think anyone knows for sure how much mileage/scalability LLMs have. Given what we do know, I suspect if you can afford to spend more compute on even longer training runs, you can still get much better results compared to SOTA, even for "simple" domains like text/language.

      1 reply →

  • > But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off

    Citation needed!

Like the railroads, the internet, electricity, aviation, and the car industry before it: they were all indeed the future, and they all peaked (in relative terms) at the very early stages of their industries' futures.

And among them, the overwhelming majority of companies in those sectors died. Of the 2,000-ish car-related companies that existed in 1925, only 3 survived to today. And none of those 3 ended up being a particularly good long-term investment.

I personally use ChatGPT for search more than I do Google these days. More often than not, it gives me more exact results for what I'm looking for, and it produces links I can visit to get more information. I think this is where their competitive advantage lies, if they can figure out how to monetize it.

  • We don’t need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn’t seen any decrease in search traffic. Apple also hinted that it hasn’t seen decreased revenues from the Google Search deal.

    AI answers are good enough, and there is a long history of companies that couldn’t monetize traffic via ads. The canonical example is Yahoo. Yahoo was one of the most trafficked sites for 20 years and couldn’t monetize.

    2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs whether users are using Safari or Chrome. It’s hard to get people to switch

    3rd issue: any money that OpenAI makes off search ads, I’m sure Microsoft is going to want their cut. ChatGPT uses Bing.

    4th issue: OpenAI’s costs are a lot higher than Google’s, and it probably won’t be able to command a premium in ads. Google has its own search engine, its own servers, its own “GPUs” [sic].

    5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.

  • I personally know people that used ChatGPT a lot but have recently moved to using Gemini.

    There are a couple of things going on, but put simply: when there is no real lock-in, humans enjoy variety. Until one firm creates a superior product with lock-in, only those who are generating cash flows will survive.

    OAI does not fit that description as of today.

  • I'm genuinely curious: why do you do this instead of a Google search, which also has an AI Overview / answer at the top (basically the same as putting your query into a chatbot) but ALSO has all the links from a regular search, so you can quickly corroborate the info using sources beyond the original AI result (including sources that disagree with the AI answer)?

    • The regular google search AI doesn’t do thinky thinky mode. For most buying decisions these days I ask ChatGPT to go off and search and think for a while given certain constraints, while taking particular note of Reddit and YouTube comments, and come back with some recommendations. I’ve been delighted with the results.

      1 reply →

This will remain the case until we have another transformer-level leap in ML technology. I don’t expect such an advancement to be openly published when it is discovered.

> The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result.

I think this conflates together a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.

It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).

  • Whilst those other layers are useful, none of them are particularly hard to build or rebuild when you have many millions of dollars on hand.

    One doesn't need tens of billions for them.

    • Yeah, because making good chips (TPU) and compilers (XLA) is notoriously easy, right?

The railroads provided something of enduring value. They did something materially better than previous competitors (horsecarts and canals) could. Even today, nothing beats freight rail for efficient, cheap modest-speed movement of goods.

If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.

We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.

It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.

  • You're absolutely correct! ( ;) )

    The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.

    Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)

    Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of producing tangible items. They won't solve housing.

    My best bet is medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicines controlled by a medical agent.

  • I was really hoping, and with a different administration I think there was a real shot, for a huge influx of cash into clean energy infrastructure.

    Imagine a trillion dollars (frankly it might be more, we'll see) shoved into clean energy generation and huge upgrades to our distribution.

    With a bubble burst all we'd be left with is a modern grid and so much clean energy we could accelerate our move off fossil fuels.

    Plus a lot of extra compute, that's less clear of a long term value.

    Alas.

Have you thought about what happens if we get a new improvement in model architecture, like transformers, that grows the compute needs even further?

>That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.

I don't know why people always imply that "the bubble will burst" means that "literally all AI will die out and nothing will remain that is of use". The Dotcom bubble didn't kill the internet. But it was a bubble and it burst nonetheless, with ramifications that spanned decades.

All it really means when you believe a bubble will pop is "this asset is over-valued and it will soon, rapidly deflate in value to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause for the next few years.

  • But some people do believe that AI is all hype and it will all go away. It’s hard to find two people who actually mean the same thing when they talk about a “bubble” right now.

This is different because now the cat's out of the bag: AI is big money!

I don't expect AGI or Super intelligence to take that long, but I do think it'll happen in private labs now. There's also an AI business model (pay per token) that folks can use.

  • > don't expect AGI or Super intelligence to take that long

    I appreciate the optimism for what would be the biggest achievement (and possibly disaster) in human history. I wish other technologies like curing cancer, Alzheimer's, solving world hunger and peace would have similar timelines.

I think we'll find that that asymptote only holds for cases where the end user is not really an active participant in creating the next model:

- take your data

- make a model

- sell it back to you

Eventually, all of the available data will have been squeezed for all it's worth, and the only way to differentiate yourself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.

I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.

  • Yeah except that models don't propel communities towards new heights. They drive towards the averages. They take from the best to give to the worst, so that as much value is destroyed as created. There's no virtuous cycle there...

    • Is that constraint fundamental to what they are? Or are they just reflecting the behavior of markets when there's low hanging fruit around?

      When you look at models that were built for a specific purpose, closely intertwined with experts who care about that purpose, they absolutely propel communities to new heights. Consider the impact of alphafold, it won a Nobel prize, proteomics is forever changed.

      The issue is that that's not currently the business model that's aimed at most of us. We have to have a race to the bottom first. We can have nice things later, if we're lucky, once a certain sort of investor goes broke and a different sort takes the helm. It's stupid, but it's a stupidity that predates AI by a long shot.

The "Railway Bubble" analogy is spot on.

As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern. In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.

In 1989, we also bet that land prices would outrun gravity forever. But usually, Physics (and Debt) wins in the end. When the railway bubble bursts, only those with "Oxygen" will survive.

  • I‘m aware this means leaving the original topic of this thread, but would you mind giving us a rundown of this whole Japan 1989 thing? I would love to read a first-person account.

    • I am honored to receive a question from a fellow "Craftsman" (I assume from your name).

      To be honest, in 1989, I was just a child. I didn't drink the champagne. But as a banker today, I am the one cleaning up the broken glass. So I can tell you about 1989 from the perspective of a "Survivor's Loan Officer."

      I see two realities every day.

      One is the "Zombie" companies. Many SMEs here still list Golf Club Memberships on their books at 1989 prices. Today, they are worth maybe 1/20th of that value. Technically, these companies are insolvent, but they keep the "Ghost of 1989" on the books, hoping to one day write it off as a tax loss. It is a lie that has lasted 30 years.

      But the real estate is even worse. I often visit apartment buildings built during the bubble. They are decaying, and tenants have fled to newer, modern buildings. The owner cannot sell the land because demolition costs hundreds of thousands of dollars—more than the land is worth.

      The owner is now 70 years old. His family has drifted apart. He lives alone in one of the empty units, acting as the caretaker of his own ruin.

      The bubble isn't just a graph in a history book. It is an old man trapped in a concrete box he built with "easy money." That is why I fear the "Cash Burn" of AI. When the fuel runs out, the wreckage doesn't just disappear. Someone has to live in it.

  • > Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.

    For OpenAI, cash is oxygen too; they're burning it all to reach escape velocity. They could use it to weather the upcoming storm, but I don't think they will.

    • Exactly. They have chosen to burn the lifeboats to power the engine.

      It is a magnificent gamble. If they reach escape velocity (AGI), they own the future. But if they run out of fuel mid-air, gravity is unforgiving.

      As a loan officer, I prefer businesses that don't need to leave the atmosphere to survive.

Or the airlines. Airlines have created a huge amount of economic value that has mostly been captured by other entities.

Your premise is wrong in a very important way.

The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.

Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.

Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.

The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.

Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.

There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.

  • > Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.

    Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.

    • DeepSeek certainly managed that on the training side but in terms of inference, the actual product was unusably slow and unreliable at launch and for several months after. I have not bothered revisiting it.

Eh, I wouldn't be so sure. Chips with brain matter and/or light are on their way, and/or quantum chips; one of those, or even a combination, will give AI a gigantic boost in performance, finally replacing a lot more humans. Whoever implements it first will rule the world.

  • You seem to have forgotten that the ruling class requires tax payers to fund their incomes. If we're all out of work, there's nobody to buy their products and keep them rich.

    • Not sure this equation works out. If demand for labor goes towards zero it really means there is no demand. In other words, when AI and robots fulfil every desire of their owners there really is no need for “tax payers”

Did railroads change the world though?

They only lasted a couple of decades as the main transportation method. I'd say the internal combustion engine was a lot more transformative.

  • Pretty much every major historical trend of Western societies in the second half of the nineteenth century, from the development of the modern corporation to the advent of total war, was intimately tied to railroad transportation.

  • Transportation of people, yeah, but it still carries a majority of inter-city freight in North America.

  • Umm, yes? The metro, even if not a big deal in the States, is a small but quiet way it has changed public transport; plus moving freight, plus people over large distances, plus the bullet train that mixed luxury, speed, and efficiency. All of these are quietly disruptive transformations that I think we all take for granted.

Um, Meta didn't achieve the same results yet. And does it matter if they can all achieve the same results, if they all manage high enough payoffs? I think subscription-based income is only the beginning. The next stage is AI-based subcompanies encroaching on other industries (e.g. DeepMind's drug company).

Massive upfront costs and second place is just first loser. It’s like building fabs but your product is infinitely copyable. Seems pretty rough.

  • What exactly is "second" place? No-one really knows what first place looks like. Everyone is certain that it will cost an arm, a leg and most of your organs.

    For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and fail.

    The rest of us self-hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient, and we have all those free offerings to play with for now to keep us going. Even the subs are so far somewhat reasonable, but we will flee in droves as soon as you try to ratchet up the price.

    It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!

For me it's clear OpenAI and Anthropic have a lead. I don't buy Gemini 3 being good. It isn't, whatever the benchmarks say. Same for Meta and DeepSeek.

I still don't understand how it's world-changing apart from considerably degrading the internet. It's laughable to compare it to railroads.

People seem to have the assumption that OpenAI and Anthropic dying would be synonymous with AI dying, and that's not the case. OpenAI and Anthropic spent a lot of capital on important research. If the shareholders and equity markets cannot learn to value and respect that and instead let these companies die, new companies will be formed with the same tech, possibly by the same general group of people; they will thrive, and conveniently leave out said shareholders.

Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applaud them for their contributions to society.

  • You weren't around pre-Google, were you? The only thing Google learned from other search engines is what not to do, like ranking based on the number of times a keyword appeared, and not using expensive bespoke servers.

    • I was around pre-Google.

      Ranking was Google's 5% contribution to it. They stood on the shoulders of people who invented physical server and datacenter infrastructure, Unix/Linux, file systems, databases, error correction, distributed computing, the entire internet infrastructure, modern Ethernet, all kinds of stuff.

  • Isn't it really the other way around? Not to say OpenAI and Anthropic haven't done important work, but the genesis of this entire market was the paper on attention that came out of Google. We have the private messages inside OpenAI saying they needed to get to market ASAP or Google would kill them.

If performance indeed asymptotes, and if we are not at the end of silicon scaling or decreasing cost of compute, then it will eventually be possible to run the very best models at home on reasonably priced hardware.

Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.

The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.

This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
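The crossover claim above can be sketched with toy numbers. Everything here is a hypothetical assumption for illustration (the plateau requirement, today's $2000-PC throughput, and the yearly hardware improvement rate are made up, not projections):

```python
# Sketch of the "curves cross" argument: frontier model compute needs
# asymptote while fixed-price consumer hardware keeps improving.
# All numbers are hypothetical placeholders, not real measurements.

def years_until_crossover(
    model_req_tflops: float = 5000.0,   # assumed plateau for "best model" inference
    consumer_tflops: float = 100.0,     # assumed capability of a $2000 PC today
    hw_growth_per_year: float = 1.35,   # assumed ~35%/year perf-per-dollar gain
) -> int:
    """Years until a fixed-price consumer machine matches the plateaued requirement."""
    years = 0
    while consumer_tflops < model_req_tflops:
        consumer_tflops *= hw_growth_per_year
        years += 1
    return years

print(years_until_crossover())  # with these made-up inputs: 14 years
```

The point is structural, not the specific number: as long as the numerator (model requirement) stops growing and the denominator (hardware per dollar) keeps compounding, the loop terminates. If models don't asymptote, `model_req_tflops` grows too and the curves may never cross.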

"AI is going to be a highly-competitive" - In what way?

It is not a railroad, and the railroads did not explode in a bubble (OK, a few early engines did explode, but that is engineering). I think LLM-driven investment in massive DCs is ill-advised.

  • Yes they did, at least twice in the 19th century. It was the largest financial crisis before 1929.

    • It did. I question the issue of "what problem am I trying to solve" with AI, though. Transportation across a huge swath of land had a clear problem space, and trains offered a very clear solution: lay dedicated rail and you can transport 100x the resources at 10x the speed of a horseman (and I'm probably underselling these gains). In times where trekking across a continent took months, the efficiencies in communication and supply lines are immediately clear.

      AI feels like a solution looking for a problem. Especially with 90% of consumer facing products. Were people asking for better chatbots, or to quickly deepfake some video scene? I think the bubble popping will re-reveal some incredible backend tools in tech, medical, and (eventually) robotics. But I don't think this is otherwise solving the problems they marketed on.

This is why I think China will win the AI race: once AI becomes a commodity, no other country is capable of bringing down manufacturing and energy costs the way China is today. I am also rooting for them to reach parity on chip node size for the same reason, as they can crash the prices of PC hardware.

  > There's no evidence of a technological moat or a competitive advantage in any of these companies.

I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month.

I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.

The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.

  • I still find it so fascinating how experiences with these models are so varied.

    I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.

    There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.

    No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.

  • There is always a comment like this in these threads. It’s just 50-50 whether it’s Claude or OpenAI.

    • We never hear what the actual questions are. I reckon it's Claude being great at coding in general and GPT being good at niche cases. "Spikey intelligence"

  • I’m not saying that no company will ever have an advantage. But with the pace of advances slowing, even if others are 6-12 months behind OpenAI, the conclusion is the same.

    Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).

  • For me, OpenAI is the worst of all. Claude Code and Gemini Deep Research are much, much better in terms of quality, while ChatGPT hallucinates and says "sorry, you're right".

  • I use both and ChatGPT will absolutely glaze me. I will intentionally say some BS and ChatGPT will say “you’re so right.” It will hilariously try to make me feel good.

    But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.

    Truthfully I just use both.

      I told ChatGPT via my settings that I often make mistakes and to call out my assumptions. So now it:

      1. Glazes me

      2. Lists a variety of assumptions (some can be useful / interesting)

      3. Answers the question

      At least this way I don't spend a day pursuing an idea the wrong way because ChatGPT never pointed out something obvious.

  • Codex is sooo slow, but it is good at planning; Opus is good at coding but not as good at seeing the big picture.