He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct. There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.
The biggest problem is that the infrastructure left behind by the Dotcom boom, the high-speed fiber that laid the path for the current world, doesn't translate to computer chips. Are you still using Intel chips from 1998? The chips are a huge cost, they're being backed by debt, and they depreciate in value exponentially. It's not the same, because so much of the current debt-fueled spending is on an asset with a very short shelf life. I think AI will be huge; I don't doubt the endgame once it matures. But the bubble now, spending huge amounts on these data centers using debt without a path to profitability (and inordinate spending on these chips), is dangerous. You can think AI will be huge and still see how dangerous the current manifestation of the bubble is. A lot of people will get hurt very, very badly. This is going to maim the economy in a generational way.
And a lot of the gains from the Dotcom boom are being paid back in negative value for the average person at this point. We have automated systems that waste our time when we need support, product features that should have a one-time cost being turned into subscriptions, a complete usurping of the ability to distribute software or build compatible replacements, etc.
The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.
If you look at year over year chip improvements in 2025 vs 1998, it's clear that modern hardware just has a longer shelf life than it used to. The difficulties in getting more performance for the same power expenditure are just very different than back in the day.
There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.
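To put rough numbers on that (the doubling times here are purely illustrative assumptions, say performance per dollar doubling every ~18 months in 1998 versus every ~5 years now), a quick sketch:

    # Back-of-the-envelope: how much relative value a chip keeps after 5 years.
    # The doubling times are illustrative assumptions, not measurements.
    def relative_value(years, doubling_time_years):
        # A chip's value relative to the state of the art roughly halves
        # every time the industry doubles performance per dollar.
        return 0.5 ** (years / doubling_time_years)

    for era, doubling in [("1998 era", 1.5), ("2025 era", 5.0)]:
        print(f"{era}: a 5-year-old chip retains ~{relative_value(5, doubling):.0%} of its relative value")
    # 1998 era: ~10%; 2025 era: ~50% - a much longer effective shelf life.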
Honestly I think the most surprising thing about this latest investment boom has been how little debt there is. VC spending and big tech's deep pockets keep banks from being too tangled in all of this, so the fallout will be much more gentle imo.
Companies want GW-scale data centers, which are a new thing that will last decades, even if GPUs are consumable and have high failure rates. Also, depending on how far it takes us, it could upgrade the electric grid and make electricity cheaper.
And there will also be software infrastructure which could be durable. There will be improvements to software tooling and the ecosystem. We will have enormous pre-trained foundation models. These model weight artifacts could be copied for free, distilled, or fine tuned for a fraction of the cost.
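For what it's worth, the "distilled for a fraction of the cost" part is a standard recipe rather than speculation. A minimal sketch of the usual distillation loss (PyTorch; teacher and student here are generic stand-ins, not any particular lab's models):

    # Hinton-style knowledge distillation loss, sketched in PyTorch.
    # Works for any teacher/student pair with matching output vocabularies.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        t = temperature
        soft_teacher = F.softmax(teacher_logits / t, dim=-1)      # softened targets
        soft_student = F.log_softmax(student_logits / t, dim=-1)
        # KL divergence pushes the student toward the teacher's distribution;
        # the t^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(soft_student, soft_teacher, reduction="batchmean") * t * t

    # In the training loop, the teacher runs under torch.no_grad() and the
    # student trains on this loss, optionally mixed with plain cross-entropy.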
About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.
That 40% has a very long shelf life.
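Taking those shares at face value, a toy straight-line depreciation model (the ~5-year and ~25-year useful lives are assumptions for illustration) shows how much of each dollar really has a short shelf life:

    # Hypothetical depreciation of $1 of AI capex, split 60% chips / 40%
    # datacenter + energy as above. The 5- and 25-year lives are assumed.
    CHIP_SHARE, BUILDING_SHARE = 0.60, 0.40
    CHIP_LIFE, BUILDING_LIFE = 5, 25  # years

    for year in (1, 5, 10):
        chips = CHIP_SHARE * max(0.0, 1 - year / CHIP_LIFE)
        building = BUILDING_SHARE * max(0.0, 1 - year / BUILDING_LIFE)
        print(f"year {year:2}: ${chips + building:.2f} of book value per $1 invested")
    # year 1: $0.86, year 5: $0.32, year 10: $0.24 - the chip share is gone
    # after five years while most of the 40% is still on the books.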
Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.
At this point, geoengineering is the only thing that can earn us a bit of time to figure...idk, something out, and we can only hope the oceans don't acidify too much in the meantime.
While yes, I sure look forward to the flood of cheap graphics cards we will see 5-10 years from now. I don't need the newest card, but I don't mind five-year-old top-of-the-line at discount prices.
They're only replacing GPUs because investors will give "free" money to do so. Once the bubble pops people will realize that GPUs actually last a while.
I think you partially answer your own question, though. Is the value in the depreciating chips, or in the huge datacenters, with their cooling, energy supply, and so on at such scale?
Personally I think people should stop trying to reason from the past.
As tempting as it is, it leads to false outcomes because you are not thinking about how this particular situation is going to impact society and the economy.
It's much harder to reason this way, but isn't that the point? Personally I don't want to hear or read analogies based on the past - I want to see and read stuff that comes from original thinking.
This won’t even come close to maiming the economy, that’s one of the more extreme takes I’ve heard.
AI is already making us wildly more productive. I vibe coded 5 deep ML libraries over the last month or so. This would have taken me maybe years before when I was manually coding as an MLE.
We have clearly hit the stage of exponential improvement, and to not invest basically everything we have in it would be crazy. Anyone who doesn’t see that is missing the bigger picture.
The leap of faith necessary for LLMs to achieve the same feat is so large it's very difficult to imagine it happening, particularly given the well-known constraints on what the technology is capable of.
The whole investment thesis of LLMs is that they will be able to a) be intelligent and b) produce new knowledge. If those two things don't happen, what has been delivered is not commensurate with the risk relative to the money invested.
Many AI startups around LLMs are going to crash and burn.
This is because many people have mistaken LLMs for AI, when they're just a small subset of the technology - and this has driven myopic focus in a lot of development, and has led to naive investors placing bets on golden dog turds.
I disagree on AI as a whole, however - as unlike previous technologies this one can self-ratchet and bootstrap. ML designed chips, ML designed models, and around you go until god pops out the exit chute.
Cisco, Level3 and WorldCom all saw astronomical valuation spikes during the dotcom bubble and all three saw their stock prices and actual business prospects collapse in the aftermath of it.
Perhaps the most famous implosion of all was AOL who merged (sort of) with TimeWarner gaining the lion's share of control through market cap balancing. AOL fell so destructively that it nearly wiped out all the value of the actual hard assets that TW controlled pre-merger.
I would add more metrics to think about. For example, very few people used the Internet in the dotcom era, while AI use is now spread across the entire Internet-using population, which will probably not grow much more. In this case, if the Internet population is the driver, and it will not grow significantly, we are just redistributing attention. Assuming "all" of society becomes more productive, we will all be on the same train at relatively the same speed.
The 90s bubble also had massive financial fraud, and it laid down capital that wasn't at 100% utilization when it hit the ground, like what we are seeing now.
It’s different enough that it probably isn’t relevant.
> [At dotcom time] There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.
It did, but not for the better. Quality of life and standard of living both declined while income inequality skyrocketed and that period of time is now known as The Great Divergence.
> He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct.
He's got no downside if he's wrong or doesn't deliver; his pitch amounts to selling you a brand new bridge in exchange for taking half of your money... and you're ecstatic about it.
Thank you for acknowledging this. The internet was created around a lot of lofty idealism, and none of that has been realized other than opening up the world's information to a great many. It made society and the global economy worse (in the occidental West; the Chinese middle class might disagree) and has paralleled the destabilization of geopolitics. I am not a luddite, but until we can "get our moral shit together", new technologies are nothing but fuel on the proverbial fire.
Then why has my experience with AI started to see such dramatically diminishing returns?
2022-2023: AI changed enough for me to convert from skeptic to believer. I started working as an AI Engineer and wanted to be on the front lines.
2023-2024: Again, major changes, especially as far as coding goes. I started building very promising prototypes for companies and was able to knock out a laundry list of projects that were just boring to write.
2024-2025: My day-to-day usage has decreased. The models seem better at fact-finding but worse for code. None of those "cool" prototypes from me or anyone else I knew seemed able to become more than just that. Many of the cool companies I started learning about in 2022 have reduced staff and are running into financial troubles.
The only area where I've been impressed is the relatively niche improvements in open source text/image-to-video models. It's wild that you can make short animated films on a home computer now.
But even there I'm seeing no signs of "exponential improvement".
Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.
It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little else than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.
I'm paying for 3 different AI services, and our company and most of my team are also paying money for various AI stuff. Sounds like a real industry to me. There are just going to be VC losers as always, where usually "losing" is getting bought by a bigger company or acquihired instead of 100xing or going public.
My team is doing the same, and yet all of us still aren't sure that we're actually more productive overall.
If anything, it seems to me like we've just swapped coding for what is effectively a lot more code review (of whatever the LLM spits out), at the cost of also losing the long-term understanding of a block of code that comes from writing it yourself (let's not pretend that a reviewer has the same depth of understanding of a piece of code as its author).
If you work in a team then you are likely already not writing most of the code yourself.
There will be a point where AI consistently writes better PRs - you can already start to see it here and there. Finding and fixing bugs in existing code, refactoring, writing tests, writing and updating documentation, and prototyping are some examples of areas where it often surpasses human contribution.
Yes there is a very real trade off between labour and capital.
In the past the tradeoff has been very straightforward. But this is a unique situation, because it involves knowledge, and not just the physicality of the human, in regards to productivity.
> Why three? Will you ever be in a position where one will do it for you?
I believe LLMs will be niche tools like databases, you pay for the product not 'gpt' vs 'claude'. You choose the right tool for the job.
I have a feeling coding tools will be a separate niche, like Cursor, where which LLM it uses doesn't matter. It's the integration, guard rails, prompt seeding, and general software stuff like autocomplete and managing long todos.
Then I pay for ChatGPT because that's my "personal" chat LLM that knows me and knows how I like curt short responses for my dumb questions.
Finally, I pay for https://www.warp.dev/terminal as a terminal, which replaced Kitty on macOS (I don't use it for coding), which is another niche. Cursor could enter that arena, but the VSCode terminal is kinda limited for day-to-day stuff given it's hidden in a larger IDE. Maybe a pure CLI tool will do both better.
There's some lock-in for both: ChatGPT (the history and natural chat personalization feature is super useful), and with Cursor I'm fully invested in the IDE experience.
The lie is that LLMs are the product itself rather than the endless integration opportunities via APIs and online services.
Seems like a measured approach - my read is he's saying it's probably a bubble in that bad ideas are being funded too, but there are a lot of really good ideas doing well.
Also, a nit: there's a typo right in the digest, assuming "suring" is "during". Does CNBC proofread their content?
A market for lemons means there is an information asymmetry. Sellers know what they have and try to offload their lemons on clueless buyers. I don't think that's the case here.
Do I have this right that there have been no, or at least very few, pure AI IPOs during this cycle (I can't actually think of a single one)? So it's dissimilar to dotcom in that regard, because during that time countless dotcoms went public with sky-high valuations and then failed. A bunch of reputable companies also went or were already public during that time, and those saw huge valuation drops, so that's more analogous to what could happen in the public market (NVDA, for instance, could pull a Cisco and drop "catastrophically" but survive just fine).
That would cause a lot of pain for those shareholders, but would it be somewhat contained, given the public "AI" companies for the most part have strong businesses outside of AI? Or are the market caps of some of these public AI companies now so large that anything that happens to them will cause some kind of contagion? And then the follow-up: if the private AI companies collapse en masse, is that market now also so big that it would cause contagion beyond venture capital and their investors? (I'm fully aware that pensions and the like are material investors in VC funds, but they're diversified, so even though they'd see those losses, maybe the broader exposure would keep them from taking major hits.)
Not giving an opinion here, though my knee jerk is to think we're due for a massive drop, but I've literally been saying that for so long that I'm starting to (stupidly) think this time is different (which typically is when all hell breaks loose of course).
IPOs aren't what they once were. The burden of being a public company has increased (SOX and related public company costs are $5-10M/year), so companies are far more likely to stay private. That has created a positive feedback cycle as the private funding ecosystem has become increasingly robust, which is why you see so many $100B+ private companies.
Also keep in mind that the biggest companies during that bubble had peak market caps of ~500B and then lost ~90%, so 400-500B in losses each and total internet related losses of a couple trillion. If NVDA lost 90%, it would be down 4 trillion dollars, or twice that total just by itself.
AI company valuations collapsing would have meaningful impacts on the broader market. Big pension/mutual funds are important sources of capital across every sector, and if they're taking big losses on NVDA, GOOG, and a portfolio of privates, it will have a chilling effect on their other activity.
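That scale comparison is easy to sanity-check with the round numbers a couple of paragraphs up (all of them rough):

    # Rough scale check using the comment's round numbers (all approximate).
    dotcom_giant_loss = 500e9 * 0.90   # ~$450B: a ~$500B name losing ~90%
    dotcom_total_losses = 2e12         # "a couple trillion" across the bubble
    nvda_market_cap = 4.5e12           # assumed order of magnitude for NVDA

    nvda_90pct = nvda_market_cap * 0.90
    print(f"NVDA -90% would erase ~${nvda_90pct / 1e12:.1f}T, "
          f"about {nvda_90pct / dotcom_total_losses:.0f}x the whole dotcom wipeout")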
The costs are a weak argument. The stronger argument for why they aren't going public any time soon is that OAI in particular is a corporate governance nightmare, in which the way they transmit information about their firm and financials would have to completely change.
There's also plenty of money washing around in private markets, so there's no need to go public. Staying private is an advantage.
There aren't very many IPOs in general. There were about 8000 publicly traded companies in the US in Jan 2000. Today there are about 3950. A lot of the AI related IPOs have been the infra like CoreWeave and Nebius.
However, it is different from the internet bubble partially for the reason you describe.
There have been a few IPOs, but they perhaps happened earlier in the cycle, or companies are pivoting into AI. I'm thinking companies like Palantir, which was always AI, or Salesforce which is making a big AI pivot.
Most of the funding is not coming from public markets. There is so much private capital available that it isn't necessary. I believe the bubble is in VC, which some would think is fine because it protects public markets from the crash, but I'm not sure that is correct.
When the VC money stops flowing into AI, I think it will send a shockwave through the public markets. The huge valuations of companies like OpenAI, Anthropic, etc will be repriced, which will probably force a re-pricing of public darlings like Palantir, Microsoft, NVIDIA.
If VC funds aren't buying NVIDIA chips and building data centers, everyone will feel the need to re-price.
It may be true that OAI et al are raising money in private markets, but does that matter? They are still just raising money, and ultimately returns need to show up. You cannot escape that. If you cannot do that, eventually nobody will supply the funds to keep operating.
The big advantage of staying private is controlling the narrative.
Historically, the worst busts following the bursting of an asset price bubble, in terms of real economic impact, have been from debt fueled bubbles (Great Depression, Global Financial Crisis). You can read Hyman Minsky and Irving Fisher for a detailed analysis of why, but it mainly comes down to the fact that the financial obligations remain once prices and expectations have reset.
Then you have the busts that follow public equity fueled bubbles (Dotcom crash). Nowhere near as bad as the former, but still a moderate impact on the economy due to the widely dispersed nature of the equity holdings and the resulting wealth effect.
What we have now is more of a narrowly held private equity bubble (acknowledging that there's still an impact through the SP500 given widespread index investing). If OpenAI, Anthropic, Perplexity, and a bunch of AI startups go bust, who loses money and what impact does it have on the rest of the economy?
there is no way they could raise that much money from public markets.
Also, there's no need to invent new tech names anymore. Marketing can add "AI" to the company name, or (as they say) change the wording from "Loading..." to "Thinking...".
My understanding is that the cost of training each next model is very very large, and a half trained model is worthless.
Thus when it is realised that this investment cannot produce the necessary returns, there will simply be no next model. People will continue using the old models, but they will become more and more out of date, and less and less useful, until they are not much more than historical artifacts.
My point is that the threshold for continuing this process (new models) is very big (getting bigger each time?), so the 'pop' will be a step function to zero.
Why do you think the models will become out of date and less useful? Like, compared to what? What external factor makes the models less useful?
If it's just to catch up with newly discovered knowledge or information, then that's not a problem with the model itself; they can just train again with an updated dataset, and probably don't need to train from scratch.
> What external factor makes the models less useful?
Life. A great example can be seen in AI-generated baseball news articles involving the Athletics organization. AI-generated articles this year have incorrectly stated that the Atlanta Braves played in games that were actually played by the Athletics, and the reason is outdated training data. For the 60 years before 2025, the Athletics played in Oakland, and during that time their acronym was OAK. In 2025, they left Oakland for Sacramento and changed their acronym to ATH. The problem is that AI models are trained on 60 years of data where 1. team acronyms are always based on the city rather than the team's mascot, and 2. OAK = Athletics, ATL = Atlanta Braves, and ATH = nothing. As a result, an AI model that doesn't have the context "OAK == ATH in the 2025 season" will see ATH in the input data, associate ATH with nothing in its model, and then erroneously assume ATH is a typo for ATL.
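The failure mode boils down to a stale lookup plus a fuzzy fallback. A toy sketch (the team names come from the example above; the character-overlap "fallback" is just a stand-in for the model's statistical guessing):

    # Toy model of the stale-mapping failure described above.
    pre_2025 = {"OAK": "Oakland Athletics", "ATL": "Atlanta Braves"}

    def resolve(acronym, known):
        if acronym in known:
            return known[acronym]
        # Stand-in for the statistical fallback: pick the most similar known
        # acronym, which maps the unseen "ATH" onto "ATL".
        closest = max(known, key=lambda k: len(set(k) & set(acronym)))
        return f"{known[closest]} (guessed {acronym} was a typo for {closest})"

    print(resolve("ATH", pre_2025))   # wrongly resolves to the Braves
    post_2025 = {**pre_2025, "ATH": "Athletics (Sacramento)"}
    print(resolve("ATH", post_2025))  # an updated mapping fixes it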
If they stop getting returns in intelligence, they will switch to returns in efficiency and focus on integration, with new models being trained for new data cutoffs if nothing else. Even today there is at least a 5-10 year integration/adoption period if everything halts tomorrow.
There is no reality in which LLMs go away (shy of being replaced).
> If they stop getting returns in intelligence, they will switch to returns in efficiency
I don't think we can assume that people producing what appear to be addictive services are going to do that, especially when they seem to be addicted themselves.
Many argue the current batch of models provides a large capability overhang. That is, we are still learning how to get the most out of these models in various applications.
So with every prompt you are expected to wait that long?
I highly doubt ordinary people will be willing to wait, and it also doesn't seem entirely viable if you want to do it locally, with less bandwidth and no caching akin to what a literal search engine can do.
What you mean is those with some form of ownership of the technology. If development eventually results in full automation, with the expense of production reduced to zero, money will be irrelevant.
The wealth gap widening is quite independent from AI being involved. It's a natural progression which was always happening and continues to happen, until some sort of catastrophe reshuffles the cards. Usually a war or revolution: the poor simply rising up, or a lazy and corrupt ruling class depriving their country of so much of its resources and will to defend itself that some outside power can take it.
Whether the average person is actually better off after the late-90s internet is probably a harder question than it might seem.
The long tail may be closer to what I want, but the quality is also generally lower. YouTube just doesn’t support a team of talented writers, Amazon is mostly filled with junk, etc.
Social media and gig work is a mixed bag. Junk e-mail etc may not be a big deal, but those kinds of downsides do erode the net benefit.
Would you rather be a 22 year old starting in life in 2025 or 1995? Unless you pick one of the few countries that underwent a drastic change of regime in that time, the answer’s pretty clear to me.
Bezos didn't define "society", but knowing Devil is what Devil does, we can infer:
1. Amazon files the most petitions for H1-B work visas after Indian IT shops.
2. Amazon opposed a minimum wage increase to $15/hr until 2018!
3. Amazon not only fires union organizers, it's claiming National Labor Relations Board is unconstitutional!
It is all society as long as they have access, and they do. Even if the big labs get more closed off, open source is right there and won’t die.
AI increases everyone's knowledge and ultimately productivity. It's on every person to learn to leverage it. The dynamics don't need to change; we just move faster and smarter.
> AI increases everyone's knowledge and ultimately productivity. It's on every person to learn to leverage it. The dynamics don't need to change; we just move faster and smarter
This is incomplete in key ways: it only increases knowledge if people practice information literacy and validate AI claims, which we know is an unevenly-distributed skill. Similarly, by making it easier to create disinformation and pollute public sources of information, it can make people less knowledgeable at the same time they believe they are more informed. Neither of those problems are new, of course, but they’re moving from artisanal to industrial scale.
Another area where this is begging questions is around resource allocation. The best AI models and integrations cost money and the ability to leverage them requires you to have an opportunity to acquire skills and use them to make a living. The more successfully businesses are able to remove or deprofessionalize jobs, the smaller the pool will be of people who can afford to build skills, compete with those businesses, or contribute to open source software. Twenty years ago, professional translators made a modest white collar income; when AI ate those jobs, the workers didn’t “learn to leverage” AI, they had to find new jobs in different fields and anyone who didn’t have the financial reserves to do that might’ve ended up in a retail job questioning whether it’s even possible to re-enter the professional class. That’s great for people like Bezos until nobody can afford to buy things, but it’s worse for society since it accelerates the process of centralizing money and power.
Open source in particular seems likely to struggle here: with programmers facing financial downturns, fewer people have time to contribute and if AI is being trained on your code, you’re increasingly going to ask whether it’s in your best interests to literally train your replacement.
Once upon a time, society was all of us, but Society were the folks that held coming-out parties and gossiped about whose J-class yacht was likely to defend the America's Cup.
Society with a capital S are the beneficiaries of the bubble.
Counter prediction. AI is going to reduce the (relative) wealth of the tech companies.
AWS and Facebook have extremely low running costs per VPS or Ad sold. That IMO is one of the major reasons tech has received its enormously high valuation.
There is nuance to that, but average investors are dumb and don't care.
Add a relatively high fixed-cost commodity into the accounting, and intuitively the pitch of "global market domination at ever lower costs" becomes a much harder sell. Especially if there is a bubble pop that hurts them.
The fact that Bezos is saying this is precisely why the commenter is asking this. He clearly stands to benefit massively from the bubble. Statements like this are meant to encourage buy-in from others to maximize his exit. Presumably "rich" refers to those, like Bezos, who already have incredibly disproportionate wealth and power compared to the majority of people in the US. I'm honestly not sure what the thrust of your comment even is.
That's a very relevant question. And as your question implies, we all know which society the billionaires talk about. But AI is just a technology like any other. It does have the potential to bring great benefits to humanity if developed with that intent. It's the corruptive influence of billionaire and autocrat greed that turns all technologies against us.
When I say benefits to humanity, I don't mean the AI slop, deepfakes, and laziness enablers that we have today. There are niche applications of AI that already show great potential: developing new medicines, devising new treatments for dangerous diseases, solving long-standing mathematical problems, creating new physics theories. And who knows? Perhaps even creating viable solutions for the climate crisis that we are in. They don't receive as much attention as they deserve, because that's not where the profit lies in AI. Solving real problems requires us to forgo profits in the short term. That's why we can't leave this completely up to the billionaires. They will just use it to transfer even more wealth from the poor and middle classes to themselves.
What are the actual benefits? Where are all these medicines that humans couldn’t develop on their own? Have we not been able to develop medicine? What theorems are meaningful and impactful that humans can’t prove without AI? I don’t know what a solution to the climate crisis is but what would it even say that humans wouldn’t have realistically thought of?
I have a phd in mathematics and I assure you I am not happy that AI is going to make doing mathematics a waste of time. Go read Gower's essay on it from the 90s. He is spot on.
It's fast because there are already gobs of people on the internet thanks to all the products that came before. Facebook didn't grow as fast because there weren't as many people online then. Gmail didn't grow as fast because there weren't as many people online then.
I don't understand this argument. Speaking as a kid who grew up middle-class as an 80's teen obsessed with (the then still new) computers, a non-rich person has access to more salient power today than ever in history, and largely for no or low cost. There are free AI's available that can diagnose illnesses for people in remote areas, non-Western nations, etc. and which can translate (and/or summarize) anything to anything with high quality. Etc. etc. AI will help anyone with an idea execute on it.
The only thing you have to worry about are not non-rich people, but people without any motivation. The difference of course is that the framing you're using makes it easy to blame "The System", while a motivation-based framing at least leaves people somewhat responsible.
Wealth may get you a seat closer to the table, but everyone is already invited into the room.
The problem is if the system demotivates people more than it motivates them on average, which risks a negative feedback loop where people demotivate each other further, and so on.
You have an incorrect reading of history and economy. Basically none of the wealth and comfort we (regular people) enjoy were "gifted" or "left over" willingly by the owner class. Everything had to be fought for: minimum wage, reasonable weekly hours, safe workplaces, child labor, retirement, healthcare...
Now, ask yourself, what happens when workers lose the only leverage they have against the owner class: their labor? A capitalist economy can only function if workers are able to sell their labor for wages to the owner class, creating a sort of equilibrium between capital and work.
Once AI is able to replace a significant part of workers, 99% of humans on Earth become redundant in the eyes of the owner class, and even a threat to their future prosperity. And they own everything, the police and army included.
All the smartest people I know in finance are preparing for AI to be a huge bust that wipes out startups. To them, everyone in tech is just a pawn in their money-moving finance games. Right now they're eyeing Silicon Valley, salivating over what their plays will be to make money off the coming hype-cycle implosion. The best finance folks make the most money when the tide goes out... and they're making moves now to be ready.
Especially startups in AI are at risk, because their specific niche projects can easily be outcompeted when BigTech comes with more generalized AI models.
The impact will be broader, but while the big players will take a hit, the new wave of startups stands to take the brunt of it. VCs and startups will suffer the most.
At the end of the day, though, it's how the system is designed. It's the needed forest fire that wipes out overgrowth and destroys all but the strongest trees.
My experience has gone the other way than OOP: Anecdotally, I have had VCs ask me to review AI companies to tell them what they do so they can invest. The VC said VCs don't really understand what they're investing in and just want to get in on anything AI due to FOMO.
The company I reviewed didn't seem like a great investment, but I don't even think that matters right now.
Adding to that, I love how his analysis is completely detached from the consequences this burst will impose on the working man. As it did in the dotcom bubble.
Of course they will, the ultra wealthy are too big to fail. In a bubble like this, they just invest in pretty much everything and take the losses on the 99% of failures to get the x1000 multiples on the 1% of successes. While the rest of us take the hit.
"During bubbles, every experiment or idea gets funded, the good ideas and the bad ideas. And investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. ... But that doesn't mean anything that is happening isn't real."
Remind me again why we need investors to fund bad ideas? The whole premise of western capitalism is that investors can better align with the needs of the society and the current technological reality.
If the investors aren't the gurus we make them out to be, we might as well make do with a planning committee. We could actually end up with more diversified research.
"Under socialism, a lot of experimental ideas get funded, the good ideas and the bad ideas. And the planning committee have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. ... But that doesn't mean anything that is happening isn't real."
Committees' investments tend to be not very diversified and often too risk-averse, because of how blame placing works. E.g., most of the Soviets' cloning of chips (and many other products) wasn't due to a lack of engineering skill - it was just less risk for the bureaucrats running the show. R&D for an original chip is risky, and timing it to the next November the seventh is likely not to work out. Cloning is a guaranteed success.
The whole point of capitalism is that one is entitled to the consequences of their own stupidity, or the lack thereof. Investors are more willing to take risks because their losses are bounded - they are risking only as much as they are willing to, rather than their status in an organization. Of course, once all investors end up investing in the same bubble, there is no real advantage over a committee.
> Remind me again why we need investors to fund bad ideas?
Early-stage investors generally fund a portfolio of multiple ideas, where each idea faces great uncertainty - some investments will do tremendously well, some won't. Investors don't need every investment to do well due to the asymmetry of outcomes (a bad investment can at worst go down 100%, while a good investment can go up 10,000%, paying for many bad investments); the toy numbers after this comment make this concrete.
> The whole premise of western capitalism is that investors can better align with the needs of the society and the current technological reality.
This is not the premise of capitalism, it's the justification for it - it's generally believed that capitalism leads to better outcomes over time than communism, but that doesn't mean capitalism has zero wastage or results in zero bad decisions.
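A toy expected-value calculation makes the asymmetry above concrete (the odds and multiples are invented purely for illustration):

    # Expected return per $1 across a seed portfolio, with invented odds.
    outcomes = [(0.70, 0.0),    # 70% of bets: total loss (capped at -100%)
                (0.25, 2.0),    # 25%: modest 2x exit
                (0.05, 100.0)]  # 5%: outlier 100x

    ev = sum(p * multiple for p, multiple in outcomes)
    print(f"expected return per $1 invested: ${ev:.2f}")
    # $5.50 per $1, and $5.00 of it comes from the 5% of outliers - which is
    # why funding plenty of ideas that turn out "bad" can still pay off.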
There is a very important difference: real investors risk their own money, money they saved over their entire lives, when making decisions.
Under socialism, bureaucrats risk someone else's money.
We are not in a purely capitalistic society; we also have states and central banks, with central planning directing over half the money in Europe and the USA, and more than half in Asia.
As a European myself, I see public money being wasted by incompetent people and filling the pockets of politicians, especially Marxist ones. For example, the money Spain received after COVID filled so many socialist pockets, and Spain still has not reported back to Europe on how it was spent (it was spent on companies of friends and family).
I'd love to see those "gigantic benefits". Currently, the only positive thing about LLMs I hear regularly is that it made people more productive. They don't get paid more (of course), they're just more productive.
People in the comments seem to have forgotten about, or never lived through, the dot com bubble. Amazon’s shares fell from $113 to $6 in 2000.
So yes, the internet survived and some companies did well after, but a lot of people were hurt badly.
I see that my Echos want me to enable the new Alexa. I confess a great deal of trepidation on these. Fairly confident that this is not going to go smoothly.
And again I'm baffled on how they would light such good will and functionality on fire.
This shouldn't be a surprise - capitalism always overshoots. Anything worth investing in will generally receive too much investment, because very few people can tell where the line is.
And that's what causes bubbles, but at this point it should be clear that AI will make a substantial impact - at least as great as the internet, likely larger.
Gotta say I'm pessimistic about the future of AI, at least until it's adopted by the public sector, schools, and societies. Right now it's just enhancing what we already have. More "efficient" meetings with auto-note-taking so we can have more meetings. More productivity so we can make fewer people do more, increase the workload, and make them burn out faster. Better and more sophisticated scammers. More sophisticated propaganda by various legal and illegal actors. Cheaper and more grotesque-looking vids and music. More software slop. Not to mention all the "cool" improvements coming our way in the form of smart rockets, bombs, and drones, and of course I drool for all the smart improvements in surveillance capitalism 2.0. They call it a revolution, but for now it's just a catalyst for more of all the things we "love".
If you're asking yourself why the bubble hasn't burst when everyone is calling it a bubble, it's because no one wants to stop dancing until the music stops. If you told an investor the market will collapse tomorrow with 100% certainty, they would invest today like there is a 0% chance of it happening.
Yesterday someone's uncle tried the same thing and now his bank account's drained because the 2% that wasn't working was the 2% that prevented his password store from being posted online.
See how that works? A few nerds think it's great while everyone else gets screwed by it.
It's very hard to find anything other than one half saying AI is in a bubble that will pop any day now, and the other half declaring AGI by 2029, when a new revolution will begin. If you follow the hard science and not the money, you can see we're somewhere in between these two takes. AI datacenters requiring new power plants are unsustainable for long-term growth. Meanwhile, we have LLMs accepting bullshit tasks and completing them. That is very hard to ignore.
Bullshit tasks are the modern TPS reports. Tasks that create no real value to anyone, but are necessary because management likes to think it is progress.
> “That is what is going to happen here too. This is real, the benefits to society from AI are going to be gigantic.”
As an owner of a web host that probably sees advantage in increased bot traffic, this statement is just more "just wait, AI will be gigantic any minute now, keep investing in it for me so my investments stay valuable".
I think most level-headed people can see this is a giant bubble which will eventually burst like the dot-com crash. And AI is technology that's hard for non-technical (and even some technical) investors to understand.
But of course, every company needs to slap AI on their product now just to be seen as a viable product.
Personally, I look forward to seeing the bubble burst and being left with a more rational view of AI and what it can (and can not) do.
I too am waiting for the bubble to burst. Particularly because I think it's doing real harm to the industry.
Every company seems to be putting all their eggs in the AI basket. And that is causing basic usability and feature work to be neglected. Nobody cares because they are betting that AI agents will replace all that. But it won't and meanwhile everything else about these products will stagnate.
It's a disastrous strategy, and when it comes crashing down and the layoffs start, every CEO will get a pass on leading this failure because they were just doing what everyone else was doing.
OpenAI has a reasonable path to reduce their costs by 10-100x over the next 5 years if they stop improving the models. That would make them an extremely profitable company, with their only real risk being "local AI". However, customers have wanted their data in the cloud for years; local inference would likely just mean zero-cost tokens for OpenAI.
The challenge is the rest of the industry funding dead companies with billions of dollars on the off chance they replicate OpenAI’s success.
I don't see how this works though. OpenAI doesn't exist in a vacuum, it has competitors, and the first company to stop improving their model will get obliterated by the others. It seems like they are doomed to keep retraining right up until the VC funding runs out, at which point they go bankrupt.
Some other company, that doesn't have a giant pile of debt will then pick up the pieces and make some money though. Once we dig out of the resulting market crash.
The problem is OAI has very fierce competition - folks who are willing to absorb losses to put them out of business.
Uber and Amazon are really bad examples. Who was Amazon's competition? Nobody. By the time anyone woke up and took them seriously, it was too late.
Uber only had to contend with Lyft and a few other less-funded firms - less funded being a really important thing to consider. Not to mention the easy access Uber had to immense amounts of funding.
OpenAI is trying to launch a hardware product with Jony Ive, an ads company, an AI slop-native version of TikTok, and several other "businesses". They look well on their way to turning into a Yahoo! rather than a Cisco or VMware.
The part that's hard to understand is how the sunk costs of hundreds of billions in capex get repaid by all those people in cafes paying hundreds of dollars per month to use those LLMs.
People have no idea how much concern there was around whether FB would ever be able to monetize social media. That company went public at $38, and nearly closed below that on IPO day.
AI is more useful than social media. This is not financial advice, but I lean more toward not a bubble.
Not hard to use. I meant it's hard for non-tech users to understand the limitations. E.g., people who think AGI is just around the corner because we now have stochastic parrots.
Every bubble looks obvious in hindsight. The dot-com crash left behind Amazon and Google. The crypto crash left behind Coinbase and a few real revenue generating companies. If this is the AI bubble, then the survivors are going to look very obvious in a decade, we just don’t know which ones yet.
Crypto isn't bullshit - well, most of it is - but the utility is still used by millions of people around the world, specifically USDT and USDC, which have proven a good way to move your assets without too much regulation.
What is going on with AI right now could be a bubble just like there was the dotcom bubble. But it isn't like the internet went away after the dotcom burst. The largest companies in existence today are internet companies or have products that wouldn't make sense without the internet.
Sure, many of these "thin prompt wrapper around the OpenAI API" product "businesses" will all be gone within a few years. But AI? That is going to be here indefinitely.
The technology - what it is being used for versus what is being invested - does not match up at all. This is what happened in the dot-com bubble. There was a whole bunch of innovation needed to bring a delightful UX and bring swathes of people onto the internet.
So far this is true of LLMs. Could this change? Sure. Will it change meaningfully? Personally, I don't believe so.
The internet at its core was all about hooking up computers so that they could transform from mere computational beasts into communication tools. There was a tremendous amount of potential that was very, very real. It just so happens that if computers can communicate, we can do a whole bunch of stuff - as is going on today.
What are LLMs? Can someone please explain in a succinct way...? I'm yet to see something super crystal clear.
Things like recommendations, ads, and search will always be around because they were money printers before VCs found out about AI and they will continue to be long after.
The dotcom bubble was not about "the internet" itself. The Internet was fine and pretty much already proven as a very useful communication tool. It was about business that made absolutely no sense getting extremely high valuations just because they operated - however vaguely - over the internet.
Generative AI has never reached the level of usability of the Internet itself, and likely never will.
By society, he means the oligarchy will get more power and control.
Yeah, sure, some side benefits to people. But AI is still a nuclear weapon against labor in the capital-labor (haves vs. have-nots) struggle, and will start pushing wealth inequality toward Egyptian-pharaoh levels.
The only good news for plebians is that virtual reality entertainment means you just need a little closet to live in.
Overall, this just leads to further demographic decline, which as an environmental Malthusian I would welcome in the initial stages to get us down from our current level, but I also suspect it would turn into an economic downward spiral, especially with AI, where the oligarchs have such total authoritarian control and monopoly on resources that humanity basically stops having kids at all.
"AI is in a bubble but billionaires will get 'gigantic' benefits"
I see no benefit to anyone unless you can live off your stock portfolio and can easily ride through periods where your portfolio can suffer a 50% loss.
Honestly, during the dotcom bubble at least workers were getting paid and jobs were abundant. Things didn't start getting bad for workers until it popped. We're supposed to be in the 'positive' part of the AI bubble and people already seem desperate and out of hope.
Everyone not directly involved seems to want AI to pop. I'm not sure if that says anything about its longevity. Not very fun to have a bubble that feels bad on both sides.
He may be right, everything points to that conclusion. My main issue is - why the fuck do we care what Bezos thinks on this matter? His ML efforts all lag behind the competition and he’s certainly not an expert in the field of deep learning. Why?
There are probably a lot of cool and useful things you could do with a bunch of data centers full of GPUs.
- better weather forecasts
- modeling intermittent generation on the grid to get more solar online
- drug discovery
- economic modeling
- low cost streaming games
- simulation of all types
> This is going to maim the economy in a generational way.
Just as I'm getting to the point where I can see retirement coming from off in the distance. Ugh.
One thing to note about modern social media is that the most negative comment tends to become the most upvoted.
You can see that all across this discussion.
We don't have Moore's law anymore. Why are the chips becoming obsolete so quickly?
Intel chips from 2008, as there have been no real improvements.
I am not still using the same 1Mbps token ring from 1998 or the same dial up connecting to some 10Mbps backbone.
I am using x86 chips though.
A lot of the infrastructure built during the Dotcom boom was discarded shortly afterwards. How many dial-up modems were sold in the 90s?
The current AI bubble is leading to trained models that won't be feasible to retrain for a decade or longer after the bubble bursts.
my 486sx with math co-processor is long gone.
Video game crash followed by video games taking off and eclipsing most other forms of digital entertainment.
Dot com crash followed by the web getting pretty popular and a bit central to business.
To all those betting big on AI before the crash:
Careful, Icarus.
Bad comparison.
> Careful, Icarus.
What does that even mean?
pets.com was a fat loser, only telling people that they were going to fly.
Amazon was Icarus, they did something.
Vs. weak commentators going on about the wax melting from their parents' root cellar while Icarus was soaring.
Most of Y Combinator is not using AI, they just say that - and you're worried about the people who do things?
Dotcom mania companies were not Internet providers. They tried making money on the internet, something people already saw as worth paying for.
This is not really true; e.g. Cogent was basically created by buying bankrupt dotcom-bubble network providers for cents on the dollar.
And what were the societal benefits of the internet?
That everybody all over the world can instantly connect with each other?
The analogy to the dot com bubble is leaky at best. AI will hit a point of exponential improvement, we are already in the outer parts of this loop.
It will become so valuable so fast we struggle to comprehend it.
Then why has my experience with AI started to see such dramatically diminishing returns?
2022-2023: AI changed enough to convert me from a skeptic to a believer. I started working as an AI engineer and wanted to be on the front lines.
2023-2024: Again, major changes, especially as far as coding goes. I started building very promising prototypes for companies and was able to knock out a laundry list of projects that were just boring to write by hand.
2024-2025: My day-to-day usage has decreased. The models seem better at fact finding but worse for code. None of those "cool" prototypes from myself or anyone else I knew seemed able to become more than just that. Many of the cool companies I started learning about in 2022 started to reduce staff and are running into financial troubles.
The only area where I've been impressed is the relatively niche improvements in open source text/image-to-video models. It's wild that you can make short animated films on a home computer now.
But even there I'm seeing no signs of "exponential improvement".
4 replies →
Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.
It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little else than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.
2 replies →
While this remains possible my main impression now is that progress seems to be slowing down rather than accelerating.
6 replies →
I'm paying for 3 different AI services, and at our company most of my team is also paying money for various AI stuff. Sounds like a real industry to me. There are just going to be VC losers as always, where usually "losing" is getting bought by a bigger company or acquihired instead of 100xing or going public.
My team is doing the same, and yet all of us still aren't sure that we're actually more productive overall.
If anything it seems to me like we've just swapped coding with what is effectively a lot more code review (of whatever the LLM spits out), at the cost of also losing that long term understanding of a block of code that actually comes from writing it yourself (let's not pretend that a reviewer has the same depth of understanding of a piece of code as an author).
If you work in a team then you are likely already not writing most of the code yourself.
There will be a point where AI consistently writes better PRs; you can already start to see it here and there. Finding and fixing bugs in existing code, refactoring, writing tests, writing and updating documentation, and prototyping are some examples of areas where it often surpasses human contribution.
1 reply →
Yes, there is a very real trade-off between labour and capital.
In the past the trade-off has been very straightforward. But this is a unique situation, because it involves knowledge and not just the physicality of the human in regards to productivity.
Those companies are also likely still in the red. They're banking on the hope that one day they will be profitable. I'm sure one of them will be.
HN said the same thing about Uber forever.
19 replies →
> 3 different AI services
Why three? Will you ever be in a position where one will do it for you?
> and most of my team is also paying money for various AI stuff.
And what are they using it for?
> Sounds like a real industry to me.
Sounds like early adopter syndrome to me. We'd have to know more about your business to take this out of the realm of hazy anecdotes.
> Why three? Will you ever be in a position where one will do it for you?
I believe LLMs will be niche tools like databases: you pay for the product, not 'gpt' vs 'claude'. You choose the right tool for the job.
I have a feeling coding tools will be a separate niche, like Cursor, where which LLM it uses doesn't matter. It's the integration, guard rails, prompt seeding, and general software stuff like autocomplete and managing long todos.
Then I pay for ChatGPT because that's my "personal" chat LLM that knows me and knows how I like curt short responses for my dumb questions.
Finally, I pay for https://www.warp.dev/terminal as a terminal, which replaced Kitty on macOS (I don't use it for coding); that's another niche. Cursor could enter that arena, but the VSCode terminal is kinda limited for day-to-day stuff given it's hidden in a larger IDE. Maybe a pure CLI tool will do both better.
The problem is the hype machine that is claiming it will replace human labor which justifies the insane losses.
These companies are burning cash to support the current formulation of AI services.
These services will survive because they are useful, but probably not at their current cost.
Are you paying them enough to have profitable unit economics? How hard would it be for you to switch from one provider to another?
There's some lock-in for both ChatGPT (the history and natural chat personalization feature is super useful) and Cursor, where I'm fully invested in the IDE experience.
The lie is that LLMs are the product itself rather than the endless integration opportunities via APIs and online services.
A real industry can be a bubble too. Not every bubble looks like tulips.
For example:
- Dotcom bubble. Of course making website was and is a real industry.
- Japanese real estate bubble. Of course building houses was and still is a real industry. It's so real people call it real estate, right.
Seems like a measured approach. My read is he's saying it's probably a bubble in that bad ideas are being funded too, but there are a lot of really good ideas doing well.
Also, nit: a typo right in the digest, I assume, with "suring" meaning "during". Does CNBC proofread their content?
Typos are proof something wasn't Ai written. Everyone should make at least one tyop in their writing.
"Claude, make sure to sprinkle at least one typo in here. And no em dashes you hear?"
11 replies →
Only works until the typos get into the training data. You need to say at least one thing no one has ever said before in every post.
1 reply →
I see what ya did tbere :)
Then it’s a market for lemons. Buyers can’t (or choose not to) tell a good AI idea from a bad one.
A market for lemons means there is an information asymmetry. Sellers know what they have and try to offload their lemons on clueless buyers. I don't think that's the case here.
1 reply →
irony level IV
Apparently not. An LLM proofread would've definitely caught that
Do I have this right that there have been no, or at least very few, pure AI IPOs during this cycle (I can't actually think of a single one)? So it's dissimilar to dotcom in that regard, because during that time countless dotcoms went public with sky-high valuations and then failed. A bunch of reputable companies also went or were already public during that time, and those saw huge valuation drops, so that's more analogous to what could happen in the public market (NVDA, for instance, could pull a Cisco and drop "catastrophically," but survive just fine).
That would cause a lot of pain for those shareholders, but would that be somewhat contained given the public "AI" companies for the most part have strong businesses outside of AI? Or are these market caps at this point so large for some of these AI public companies that anything that happens to them will cause some kind of contagion? And then the follow up is if the private AI companies collapse en masse is that market now also so big that it would cause contagion beyond venture capital and their investors (fully aware that pensions and the such are material investors in VC funds, but they're diversified so even though they'd see those losses maybe the broader market would keep them from taking major hits).
Not giving an opinion here, though my knee jerk is to think we're due for a massive drop, but I've literally been saying that for so long that I'm starting to (stupidly) think this time is different (which typically is when all hell breaks loose of course).
IPOs aren't what they once were. The burden of being a public company has increased (SOX and related public company costs are $5-10M/year), so companies are far more likely to stay private. That has created a positive feedback cycle as the private funding ecosystem has become increasingly robust, which is why you see so many $100B+ private companies.
Also keep in mind that the biggest companies during that bubble had peak market caps of ~500B and then lost ~90%, so 400-500B in losses each and total internet related losses of a couple trillion. If NVDA lost 90%, it would be down 4 trillion dollars, or twice that total just by itself.
AI company valuations collapsing would have meaningful impacts on the broader market. Big pension/mutual funds are important sources of capital across every sector, and if they're taking big losses on NVDA, GOOG, and a portfolio of privates, it will have a chilling effect on their other activity.
The costs are a weak argument. The stronger argument for why they aren't going public any time soon is that OAI in particular is a corporate governance nightmare, in which the way they transmit information about their firm and financials would have to completely change.
There's also plenty of money washing around in private markets, so there's no need to go public. Staying private is an advantage.
There aren't very many IPOs in general. There were about 8000 publicly traded companies in the US in Jan 2000. Today there are about 3950. A lot of the AI related IPOs have been the infra like CoreWeave and Nebius.
This time is always different, until it isn't.
However, it is different from the internet bubble partially for the reason you describe.
There have been a few IPOs, but they perhaps happened earlier in the cycle, or companies are pivoting into AI. I'm thinking companies like Palantir, which was always AI, or Salesforce which is making a big AI pivot.
Most of the funding is not coming from public markets. There is so much private capital available that it isn't necessary. I believe the bubble is in VC, which some would think is fine because it protects public markets from the crash, but I'm not sure that is correct.
When the VC money stops flowing into AI, I think it will send a shockwave through the public markets. The huge valuations of companies like OpenAI, Anthropic, etc will be repriced, which will probably force a re-pricing of public darlings like Palantir, Microsoft, NVIDIA.
If VC funds aren't buying NVIDIA chips and building data centers, everyone will feel the need to re-price.
It's emotional, not logical.
It may be true that OAI et al are raising money in private markets, but does that matter? They are still just raising money, and ultimately returns need to show up. You cannot escape that. If you cannot do that, nobody will eventually supply the funds to keep operating.
The big advantage of staying private is controlling the narrative.
1 reply →
Historically, the worst busts following the bursting of an asset price bubble, in terms of real economic impact, have been from debt fueled bubbles (Great Depression, Global Financial Crisis). You can read Hyman Minsky and Irving Fisher for a detailed analysis of why, but it mainly comes down to the fact that the financial obligations remain once prices and expectations have reset.
Then you have the busts that follow public equity fueled bubbles (Dotcom crash). Nowhere near as bad as the former, but still a moderate impact on the economy due to the widely dispersed nature of the equity holdings and the resulting wealth effect.
What we have now is more of a narrowly held private equity bubble (acknowledging that there's still an impact through the SP500 given widespread index investing). If OpenAI, Anthropic, Perplexity, and a bunch of AI startups go bust, who loses money and what impact does it have on the rest of the economy?
How was Palantir always AI?
There is no way they could raise that much money from public markets.
Also, there's no need to invent new tech names anymore. Marketing can add "AI" to the company name, or (as they say) change the wording from "Loading..." to "Thinking..."
Here's a link to the full interview Bezos had yesterday
https://www.youtube.com/watch?v=E0x3UZDKSNo
https://www.billjaneway.com/productive-bubbles this is a good essay on this topic
My understanding is that the cost of training each next model is very very large, and a half trained model is worthless.
Thus when it is realised that this investment cannot produce the necessary returns, there will simply be no next model. People will continue using the old models, but they will become more and more out of date, and less and less useful, until they are not much more than historical artifacts.
My point is that the threshold for continuing this process (new models) is very big (getting bigger each time?), so the 'pop' will be a step function to zero.
Why do you think the models will become out of date and less useful? Like, compared to what? What external factor makes the models less useful?
If it's just to catch up with newly discovered knowledge or information, then that's not a flaw of the model; they can just train again with an updated dataset and probably don't need to train from scratch.
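For what it's worth, the cheap path here is usually continued pretraining of the existing checkpoint on only the new data, rather than a from-scratch run. A minimal sketch with the Hugging Face stack (the model name and data file are placeholders; real lab pipelines are obviously far more involved):

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    # Start from the existing checkpoint instead of random weights
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Only the *new* text, e.g. the past year's data (placeholder file name)
    new_data = load_dataset("text", data_files="news_2025.txt")["train"]
    tokenized = new_data.map(
        lambda b: tokenizer(b["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    # One short, low-learning-rate pass over the new data: far cheaper than
    # from-scratch pretraining, at the cost of some forgetting
    Trainer(
        model=model,
        args=TrainingArguments(output_dir="ckpt-updated", num_train_epochs=1,
                               learning_rate=1e-5,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

Catastrophic forgetting and data curation are the hard parts; the point is just that the marginal update is nowhere near the original training bill.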
> What external factor makes the models less useful?
Life. A great example can be seen in AI-generated baseball news articles involving the Athletics organization. AI systems this year have been generating articles that incorrectly state that the Atlanta Braves played in games that were actually played by the Athletics, and the reason is outdated training data. For the 60 years before 2025, the Athletics played in Oakland, and during that time their acronym was OAK. In 2025 they left Oakland for Sacramento and changed their acronym to ATH. The problem is that the models are trained on 60 years of data where 1. team acronyms are always based on the city rather than the team's mascot, and 2. OAK = Athletics, ATL = Atlanta Braves, and ATH = nothing. As a result, a model that doesn't have the context "OAK == ATH in the 2025 season" will see ATH in the input data, associate ATH with nothing in its model, and then erroneously assume ATH is a typo for ATL.
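As a toy illustration of that "nearest known label wins" failure mode (an analogy only, not how a transformer actually works internally):

    import difflib

    # Acronyms well represented in decades of training data (toy set)
    known = ["OAK", "ATL", "NYY", "BOS", "LAD"]

    # A system with no concept of ATH falls back to the closest label it knows
    print(difflib.get_close_matches("ATH", known, n=1))  # ['ATL'], the Braves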
If they stop getting returns in intelligence, they will switch to returns in efficiency and focus on integration, with new models being trained for new data cutoffs if nothing else. Even today there is at least a 5-10 year integration/adoption period if everything halts tomorrow.
There is no reality in which LLMs go away (shy of being replaced).
> If they stop getting returns in intelligence, they will switch to returns in efficiency
I don't think we can assume that people producing what appear to be addictive services are going to do that, especially when they seem to be addicted themselves.
Is adding new data to a model a full retraining from scratch? Or can it be added on top of an existing model?
If it costs $10B to add 1 year of data to an existing model, every year, that doesn’t sound too good.
They probably are getting a bonanza of intelligence from people using them.
Many argue that the current batch of models provides a large capability overhang. That is, we are still learning how to get the most out of these models in various applications.
Models don't become out of date now that deep research exists.
So with every prompt you are expected to wait that long? I highly doubt ordinary people will be willing to wait, and it also doesn't seem entirely viable if you want to do it locally: less bandwidth, and no caching akin to what a literal search engine can do.
Until costs of utilities, food, medical, insurance are dropped by technology, there are no societal gains, just loss of human jobs.
The big question is what "society" he is talking about? Is it the "society" that includes all people, or the "society" that includes only rich people?
It will benefit all people, but it will disproportionately benefit rich people.
> It will benefit all people, but it will disproportionately benefit rich people.
Yes: you'll be homeless and living under a bridge, but you'll have an LLM therapist on your phone to console you. That's a benefit!
58 replies →
Not necessarily. The benefits may barely trickle down, and the conditions for the majority could degrade overall.
19 replies →
What you mean is those with some form of ownership of the technology. If development eventually results in full automation, with the expense of production reduced to zero, money will be irrelevant.
12 replies →
> It will benefit all people, but it will disproportionately benefit rich people
The rich already have a diminishing returns situation with money. Everyone else has much more upswing.
Rich people will enjoy additional monetary benefits, but everyone will still enjoy the same standard benefits.
1 reply →
The widening of the wealth gap is quite independent of AI being involved. It's a natural progression that was always happening and continues to happen, until some sort of catastrophe reshuffles the cards, usually a war or revolution: the poor simply rising up, or a lazy and corrupt ruling class depriving their country of enough resources and will to defend itself that some outside power can take it.
If by benefiting you mean displacing all reason to live, then yes, it will solve that problem I face. Now I will be certain.
It will not benefit the rich as disproportionately as the covid pandemic did.
1 reply →
And we're not talking a 51-49 disproportion; we're talking 99.9999 percent of the benefit going to the richest people.
86 replies →
Manna ponders the same question. I don't think we're headed towards the "good" ending.
https://marshallbrain.com/manna1
What an incredible read - thanks for the share
Which of those did the Internet benefit?
Whether the average person is actually better off after the late-90s internet is probably a harder question than it might seem.
The long tail may be closer to what I want, but the quality is also generally lower. YouTube just doesn’t support a team of talented writers, Amazon is mostly filled with junk, etc.
Social media and gig work is a mixed bag. Junk e-mail etc may not be a big deal, but those kinds of downsides do erode the net benefit.
39 replies →
Would you rather be a 22 year old starting in life in 2025 or 1995? Unless you pick one of the few countries that underwent a drastic change of regime in that time, the answer’s pretty clear to me.
4 replies →
Is AI at the same level as global instant communication and gigabits of bandwidth?
3 replies →
Bezos didn't define "society", but the Devil is known by what the Devil does, so we can infer:
1. Amazon files the most petitions for H-1B work visas after the Indian IT shops.
2. Amazon opposed the minimum wage increase to $15/hr until 2018!
3. Amazon not only fires union organizers, it's claiming the National Labor Relations Board is unconstitutional!
It is all society as long as they have access, and they do. Even if the big labs get more closed off, open source is right there and won’t die.
AI increases everyone's knowledge and ultimately productivity. It's on every person to learn to leverage it. The dynamics don't need to change; we just move faster and smarter.
> AI increases everyone's knowledge and ultimately productivity. It's on every person to learn to leverage it. The dynamics don't need to change; we just move faster and smarter
This is incomplete in key ways: it only increases knowledge if people practice information literacy and validate AI claims, which we know is an unevenly-distributed skill. Similarly, by making it easier to create disinformation and pollute public sources of information, it can make people less knowledgeable at the same time they believe they are more informed. Neither of those problems are new, of course, but they’re moving from artisanal to industrial scale.
Another area where this is begging questions is around resource allocation. The best AI models and integrations cost money and the ability to leverage them requires you to have an opportunity to acquire skills and use them to make a living. The more successfully businesses are able to remove or deprofessionalize jobs, the smaller the pool will be of people who can afford to build skills, compete with those businesses, or contribute to open source software. Twenty years ago, professional translators made a modest white collar income; when AI ate those jobs, the workers didn’t “learn to leverage” AI, they had to find new jobs in different fields and anyone who didn’t have the financial reserves to do that might’ve ended up in a retail job questioning whether it’s even possible to re-enter the professional class. That’s great for people like Bezos until nobody can afford to buy things, but it’s worse for society since it accelerates the process of centralizing money and power.
Open source in particular seems likely to struggle here: with programmers facing financial downturns, fewer people have time to contribute and if AI is being trained on your code, you’re increasingly going to ask whether it’s in your best interests to literally train your replacement.
1 reply →
Once upon a time, society was all of us, but Society were the folks who held coming-out parties and gossiped about whose J-class yacht was likely to defend the America's Cup.
Society with a capital S are the beneficiaries of the bubble.
Counter prediction. AI is going to reduce the (relative) wealth of the tech companies.
AWS and Facebook have extremely low running costs per VPS or Ad sold. That IMO is one of the major reasons tech has received its enormously high valuation.
There is nuance to that, but average investors are dumb and don't care.
Add a relatively high fixed-cost commodity into the accounting, and intuitively the pitch of "global market domination at ever lower costs" becomes a much harder sell. Especially if there is a bubble pop that hurts them.
History would indicate that if you make a bubble big enough and then it pops, you don't have to clean up the mess. 2008 wasn't that long ago.
When billionaires talk about “benefits”, I’m getting very suspicious.
What value does your question add, if it wasn't Bezos saying this would you have the same question?
How are you defining rich: billionaires? It's sad that your comment is the top post.
The fact that Bezos is saying this is precisely why the commenter is asking this. He clearly stands to benefit massively from the bubble. Statements like this are meant to encourage buy-in from others to maximize his exit. Presumably "rich" refers to those, like Bezos, who already have incredibly disproportionate wealth and power compared to the majority of people in the US. I'm honestly not sure what the thrust of your comment even is.
Cheaper cloud costs
That's a very relevant question. And as your question implies, we all know which society the billionaires talk about. But AI is just a technology like any other. It does have the potential to bring great benefits to humanity if developed with that intent. It's the corruptive influence of the billionaire and autocrat greed that turns all technologies against us.
When I say benefits to humanity, I don't mean the AI slop, deepfakes and laziness enablers that we have today. There are niche applications of AI that already show great potential: developing new medicines, devising new treatments for dangerous diseases, solving long-standing mathematical problems, creating new physics theories. And who knows? Perhaps even creating viable solutions for the climate crisis that we are in. They don't receive as much attention as they deserve, because that's not where the profit lies in AI. Solving real problems requires us to forgo profits in the short term. That's why we can't leave this completely up to the billionaires. They will just use it to transfer even more wealth from the poor and middle classes to themselves.
What are the actual benefits? Where are all these medicines that humans couldn’t develop on their own? Have we not been able to develop medicine? What theorems are meaningful and impactful that humans can’t prove without AI? I don’t know what a solution to the climate crisis is but what would it even say that humans wouldn’t have realistically thought of?
4 replies →
I have a phd in mathematics and I assure you I am not happy that AI is going to make doing mathematics a waste of time. Go read Gower's essay on it from the 90s. He is spot on.
2 replies →
Business. People in the human centipede who won't fight fascism.
[dead]
ChatGPT currently has:
~ 120 to 190 million daily active users
~ 800 million weekly active users (estimates range from ~450 million to over 800 million depending on the data source and methodology)
Get a grip. Hundreds of millions of people are using it, most of them for free. I would say "society" has benefited.
This has to be peak HN.
Create the fastest growing consumer product in history.
HN anon: yes, but who will benefit?
Facebook has more. Was it a benefit? Does the benefit outweigh the harms?
22 replies →
It's fast because there were already gobs of people on the internet thanks to all the other products that came before. Facebook didn't grow as fast because there weren't as many people online then. Gmail didn't grow as fast because there weren't as many people online then.
I use it for free, and when it stops being free, I will stop using it.
I don't understand this argument. Speaking as a kid who grew up middle-class as an 80's teen obsessed with (the then still new) computers, a non-rich person has access to more salient power today than ever in history, and largely for no or low cost. There are free AIs available that can diagnose illnesses for people in remote areas, non-Western nations, etc., and which can translate (and/or summarize) anything to anything with high quality. Etc. etc. AI will help anyone with an idea execute on it.
The only people you have to worry about are not non-rich people but people without any motivation. The difference, of course, is that the framing you're using makes it easy to blame "The System", while a motivation-based framing at least leaves people somewhat responsible.
Wealth may get you a seat closer to the table, but everyone is already invited into the room.
The problem is if the system demotivates people more than it motivates them on average, which risks a negative feedback loop where people demotivate each other further, and so on.
3 replies →
This is a simplistic, individualistic view of the impact of AI.
You’re imagining the world we have today, but with AI.
In reality it’ll be a world that’s completely different, and most likely in a worse way, and AI is the tool used to make it worse.
1 reply →
You have an incorrect reading of history and economy. Basically none of the wealth and comfort we (regular people) enjoy were "gifted" or "left over" willingly by the owner class. Everything had to be fought for: minimum wage, reasonable weekly hours, safe workplaces, child labor, retirement, healthcare...
Now, ask yourself, what happens when workers lose the only leverage they have against the owner class: their labor? A capitalist economy can only function if workers are able to sell their labor for wages to the owner class, creating a sort of equilibrium between capital and work.
Once AI is able to replace a significant part of workers, 99% of humans on Earth become redundant in the eyes of the owner class, and even a threat to their future prosperity. And they own everything, the police and army included.
1 reply →
“Free ai” lol
1 reply →
Onion Headline 2026: “Unemployed data scientist glad society is benefiting from AI without him after layoffs.”
All the smartest people I know in finance are preparing for AI to be a huge bust that wipes out startups. To them, everyone in tech is just a pawn in their money-moving finance games. Right now they're eyeing Silicon Valley and salivating over what their plays will be to make money off the coming hype-cycle implosion. The best finance folks make the most money when the tide goes out... and they're making moves now to be ready.
Take that for what it is.
Especially startups in AI are at risk, because their specific niche projects can easily be outcompeted when BigTech comes with more generalized AI models.
> To them, everyone in tech is just a pawn in their money moving finance games.
Don't they think everyone is a pawn in their money moving games?
Just startups? Really?
Broader impact, but while the big players will take a hit the new wave of startups stands to take the brunt of the impact. VCs and startups will suffer the most.
At the end of the day, though, it's how the system is designed. It's the needed forest fire that wipes out overgrowth and destroys all but the strongest trees.
I'll take it for finance people being totally clueless, right? Have you talked to these people about AI?
My experience has gone the other way from OOP's: anecdotally, I have had VCs ask me to review AI companies to tell them what they do so they can invest. The VC said VCs don't really understand what they're investing in and just want to get in on anything AI due to FOMO.
The company I reviewed didn't seem like a great investment, but I don't even think that matters right now.
7 replies →
Society as a whole, or one percenters like him?
Bezos is a 0.0001-percenter. Most of people on HN are probably in the top 1% to 5%.
You're in the top 10% in the US if you make $170k/yr.
True. But that's income; to be in the top 10% by net worth you need $1.5M, and $12M for the top 1%.
3 replies →
are most people on HN making 170? I feel a bit less privileged now.
7 replies →
The massive economic productivity gains from AI will improve the lives of very few if some form of fair fiscal redistribution is not put in place.
Seems like society isn't going to benefit much, but the ultrawealthy will benefit enormously.
To those who have everything more will be given; from those who have nothing everything will be taken.
Adding to that, I love how his analysis is completely detached from the consequences this burst will impose on the working man. As it did in the dotcom bubble.
Of course they will, the ultra wealthy are too big to fail. In a bubble like this, they just invest in pretty much everything and take the losses on the 99% of failures to get the x1000 multiples on the 1% of successes. While the rest of us take the hit.
"During bubbles, every experiment or idea gets funded, the good ideas and the bad ideas. And investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. ... But that doesn't mean anything that is happening isn't real."
Remind me again why we need investors to fund bad ideas? The whole premise of western capitalism is that investors can better align with the needs of the society and the current technological reality.
If the investors aren't the gurus we make them out to be, we might as well make do with a planning committee. We could actually end up with more diversified research.
"Under socialism, a lot of experimental ideas get funded, the good ideas and the bad ideas. And the planning committee have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. ... But that doesn't mean anything that is happening isn't real."
We need investors to fund ideas to find out if they are good or bad. It is called testing.
> Remind me again why we need investors to fund bad ideas?
A lot of good ideas only look bad in hindsight. It costs time and money to determine goodness, and that deserves funding.
> A lot of good ideas only look bad in hindsight. It costs time and money to determine goodness, and that deserves funding.
Governments can also do that funding. The most pivotal technologies in recent history have been a result of government investment.
Private capital has a role, but it's mostly at the productization phase, not fundamental research.
Committees' investments tend to be not very diversified and often too risk-averse because of how blame placing works. E.g., most of the Soviets' chip cloning (and cloning of many other products) wasn't due to a lack of engineering skill; it was just less risk for the bureaucrats running the show. R&D for an original chip is risky, and timing it to the next November the Seventh is likely not to work out. Cloning is a guaranteed success.
The whole point of capitalism is that one is entitled to the consequences of their own stupidity, or the lack thereof. The investors are more willing to take risks because their losses are bounded: they are risking only as much as they are willing to, rather than their status in an organization. Of course, once all investors end up investing in the same bubble, there is no real advantage over a committee.
> Remind me again why we need investors to fund bad ideas?
Early stage investors generally fund a portfolio of multiple ideas, where each idea faces great uncertainty - some investments will do tremendously well, some won't. Investors don't need every investment to do well due to the asymmetry of outcomes (a bad investment can at worst go down 100%, a good investment can go up 10,000%, paying off many bad investments).
> The whole premise of western capitalism is that investors can better align with the needs of the society and the current technological reality.
This is not the premise of capitalism, it's the justification for it: it's generally believed that capitalism leads to better outcomes over time than communism, but that doesn't mean capitalism has zero wastage or results in zero bad decisions.
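To put made-up numbers on the asymmetry mentioned above (toy figures, not real fund data):

    # Toy venture portfolio: ten $1M bets, nine wipeouts, one 100x winner
    stakes  = [1_000_000] * 10
    payouts = [0] * 9 + [100_000_000]
    print(sum(payouts) - sum(stakes))  # 90000000: one hit pays for the rest

Which is why funding ideas that turn out to be bad is a feature of the model, not a bug.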
There is a very important difference: real investors risk their own money, the money they saved over their entire life making decisions.
Under socialism bureaucrats risk someone else's money.
We are not in a purely capitalistic society; we also have states and central banks, with central planning spending over half the money in Europe and the USA, and more than half in Asia.
As a European myself, I see public money being wasted by incompetent people and filling the pockets of politicians, especially Marxist ones. For example, the money Spain received after COVID filled many socialist pockets, and Spain still has not reported back to Europe on how it was spent (it was spent on companies of friends and family).
>> There is a very important difference: real investors risk their own money, the money they saved over their entire life making decisions.
Except for the "institutional investors".
1 reply →
I'd love to see those "gigantic benefits". Currently, the only positive thing about LLMs I hear regularly is that it made people more productive. They don't get paid more (of course), they're just more productive.
more productivity at same cost = more stuff, cheaper
More productivity at same cost = More profit for companies.
Also, the cost of most of the stuff I (have to) buy (i.e. rent, groceries, ...) is not dominated by the wage of knowledge workers.
Or to put it differently: If AI makes me lose my job but doesn't decrease my rent, I'm in a really bad position.
2 replies →
So where is that deflation? Because from what I see, all of that extra productivity is landing in the pockets of shareholders.
1 reply →
I don't think I need email marketing spam or useless apps to be cheaper.
The hard trades and manual labor services we consume for everything that matters daily? That's not going to be made cheaper by AI.
*To sell, not to buy.
This lie is still being spread in economics classes?
2 replies →
[dead]
If you listen to the speech it's a bit more nuanced than that headline.
People in the comments seem to have forgotten about, or never lived through, the dot com bubble. Amazon’s shares fell from $113 to $6 in 2000. So yes, the internet survived and some companies did well after, but a lot of people were hurt badly.
Just like how sending Katy Perry to space hugely benefitted society.
Could someone let Bezos know that he doesn't truly represent society in any way?
Please, Mister Bezos: fix the Amazon shop to work again with noscript/basic (x)html browsers (like lynx).
(Oh, and keep an eye on all those 'internet scanners' and 'script kiddies' using AWS for scans/attacks. Honeypot time?)
I see that my Echos want me to enable the new Alexa. I confess a great deal of trepidation about these. Fairly confident that this is not going to go smoothly.
And again I'm baffled at how they would light such goodwill and functionality on fire.
This shouldn't be a surprise: capitalism always overshoots. Anything worth investing in will generally receive too much investment, because very few people can tell where the line is.
And that's what causes bubbles. But at this point it should be clear that AI will make a substantial impact, at least as great as the internet, likely larger.
As FB IPO'd at $100B and marched to over $1T, when would you have considered it a bubble? You can ask the LLM if you need to... catch my drift?
I'm not sure I see the point. The march to $1T took 10 years with available financial statements.
1 reply →
Gotta say I'm pessimistic about the future of AI, at least until it's adopted by the public sector, schools and societies. Right now it's just enhancing what we already have. More "efficient" meetings with auto-note-taking so we can have more meetings. More productivity so we can make fewer people do more, increase the workload and make them burn out faster. Better and more sophisticated scammers. More sophisticated propaganda by various legal and illegal actors. Cheaper and more grotesque-looking vids and music. More software slop. Not to mention all the "cool" improvements coming our way in the form of smart rockets, bombs and drones, and of course I drool over all the smart improvements in surveillance capitalism 2.0. They call it a revolution, but for now it's just a catalyst for more of all the things we "love".
If you're asking yourself why the bubble hasn't burst when everyone is calling it a bubble, it's because no one wants to stop dancing until the music stops. If you told an investor the market will collapse tomorrow with 100% certainty, they would still invest today as if there were a 0% chance of it happening.
Yesterday, I asked an LLM to create a browser plugin and it got 98% correct in its first try.
How many people can recognize that 2% is wrong? Of those, how many can fix the 2% without introducing more errors?
Yesterday someone's uncle tried the same thing and now his bank account's drained because the 2% that wasn't working was the 2% that prevented his password store from being posted online.
See how that works? A few nerds think it's great while everyone else gets screwed by it.
You are talking as if no company ever lost passwords to hackers before now.
I would rather use AI to create extensions for my own use than trust someone else's.
I feel better knowing that someone that is as in touch with society as he is, is saying this.
> Jeff Bezos says AI is in a bubble but society will get 'gigantic' benefits
Yes. Cheap, second hand, datacenters.
It's very hard to find anything other than one half saying AI is in a bubble that will pop any day now, and the other half declaring AGI by 2029, when a new revolution will begin. If you follow the hard science and not the money, you can see we're somewhere in between these two takes. AI datacenters requiring new power plants is unsustainable for long-term growth. Meanwhile, we have LLMs accepting bullshit tasks and completing them. That is very hard to ignore.
> Meanwhile, we have LLMs accepting bullshit tasks and completing them.
Would you mind elaborating on that? I’m not quite sure what you mean.
Bullshit tasks are the modern TPS reports. Tasks that create no real value to anyone, but are necessary because management likes to think it is progress.
> “That is what is going to happen here too. This is real, the benefits to society from AI are going to be gigantic.”
Coming from the owner of a web host that probably sees an advantage in increased bot traffic, this statement is just more "just wait, AI will be gigantic any minute now, keep investing in it for me so my investments stay valuable".
I think most level-headed people can see this is a giant bubble which will eventually burst like the dot-com crash. And AI is technology that's hard to understand to non-technical (and even some technical) investors.
But of course, every company needs to slap AI on their product now just to be seen as a viable product.
Personally, I look forward to seeing the bubble burst and being left with a more rational view of AI and what it can (and can not) do.
I too am waiting for the bubble to burst. Particularly because I think it's doing real harm to the industry.
Every company seems to be putting all their eggs in the AI basket. And that is causing basic usability and feature work to be neglected. Nobody cares because they are betting that AI agents will replace all that. But it won't and meanwhile everything else about these products will stagnate.
It's a disastrous strategy, and when it comes crashing down and the layoffs start, every CEO will get a pass on leading this failure because they were just doing what everyone else was doing.
OpenAI has a reasonable path to reduce their costs by 10-100x over the next 5 years if they stop improving the models. That would make them an extremely profitable company with their only real risk being “local ai”. However customers have wanted their data in the cloud for years, local inference would likely just mean 0 cost tokens for OpenAI.
The challenge is the rest of the industry funding dead companies with billions of dollars on the off chance they replicate OpenAI’s success.
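For a concrete example of the kind of lever that cuts serving costs, consider quantized inference. A sketch of the idea with Hugging Face and bitsandbytes (the model name is a placeholder, and nobody outside the labs knows OpenAI's actual serving stack; this is just one publicly known technique):

    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # 8-bit weights roughly halve memory vs fp16, so one GPU can hold a
    # bigger model or serve more concurrent requests; 4-bit goes further
    model = AutoModelForCausalLM.from_pretrained(
        "gpt2",  # placeholder model id
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto")

Stack a few of these (quantization, distillation to smaller models, better batching) and order-of-magnitude cost reductions stop looking implausible.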
I don't see how this works though. OpenAI doesn't exist in a vacuum, it has competitors, and the first company to stop improving their model will get obliterated by the others. It seems like they are doomed to keep retraining right up until the VC funding runs out, at which point they go bankrupt.
Some other company that doesn't have a giant pile of debt will then pick up the pieces and make some money, though. Once we dig out of the resulting market crash.
2 replies →
The problem is OAI has very fierce competition: folks who are willing to absorb losses to put them out of business.
Uber and Amazon are really bad examples. Who was Amazon's competition? Nobody. By the time anyone woke up and took them seriously, it was too late.
Uber only had to contend with Lyft and a few other less funded firms. Less funded being a really important thing to consider. Not to mention the easy access to immense amounts of funding Uber had.
The problem is that their competitor is Google and they are much better at most of the things that OpenAI needs to be good at.
OpenAI is trying to launch a hardware product with Jony Ive, an ads company, an AI-slop-native version of TikTok, and several other "businesses". They look well on their way to turning into a Yahoo! rather than a Cisco or VMware.
1 reply →
How is AI hard to understand when you enter a cafe and everyone is chatting with their LLM?
The dotcom bubble had tons of activity, too.
It was making money off those ideas at the expected valuations that was the problem.
The internet really did revolutionize things, in substantial ways, but not to the tune of millions of dollars for pets.com.
3 replies →
The part that’s hard to understand is how the sunk costs of hundreds of billions in capex gets repaid by all those people in cafes paying hundreds of dollars per month to use those LLMs.
1 reply →
Where do you see that?
1 reply →
Chat room websites were sold for millions in the dotcom era.
And everybody used them.
Nowadays everybody sees them as useless.
7 replies →
People have no idea how much concern there was around whether FB would ever be able to monetize social media. That company went public at $38, and nearly closed below that on IPO day.
AI is more useful than social media. This is not financial advice, but I lean more toward not a bubble.
1 reply →
Not hard to use. I meant hard to understand what the limitations are for non tech users. E.g people who think AGI is just around the corner because we now have stochastic parrots.
This seems obviously and trivially correct to me...
Every bubble looks obvious in hindsight. The dot-com crash left behind Amazon and Google. The crypto crash left behind Coinbase and a few real revenue generating companies. If this is the AI bubble, then the survivors are going to look very obvious in a decade, we just don’t know which ones yet.
> The crypto crash left behind Coinbase and a few real revenue generating companies.
A lot of us clocked the crypto bullshit waaaay before the crash.
> A lot of us clocked the crypto bullshit waaaay before the crash.
I'm sorry what crash are you talking about?
Crypto isn't bullshit (well, most of it is), but the utility is still there for millions of people around the world, specifically USDT and USDC, which have proven a good way to move your assets without too much regulation.
1 reply →
But Google and Amazon were obvious to those in the know, in terms of what they delivered.
Steve Jobs called it back in 1995-97; he referred to it as shopping for information and shopping for goods and services.
Nobody has this crystal clear, tangible vision re: LLMs. Nobody at all. That is a big problem.
I found the interview: https://www.youtube.com/watch?v=MqSfFcaluHc&t=1700s
Personally I never thought of Google as a dotcom-era company. Most of those were landing pages with news and chat rooms.
It was more of a web 2.0 company.
6 replies →
Because LLMs are a technology, not a business.
Ultimately it doesn't matter who survives the AI bubble, because they are all more or less equivalent, proposing the same technical solution.
Poor Jeff gonna miss that sweet AGI gravy
What is going on with AI right now could be a bubble just like there was the dotcom bubble. But it isn't like the internet went away after the dotcom burst. The largest companies in existence today are internet companies or have products that wouldn't make sense without the internet.
Sure, many of these "thin prompt wrapper around the OpenAI API" product "businesses" will all be gone within a few years. But AI? That is going to be here indefinitely.
The bubble is in the valuations on the stock market, not in the technology.
And their promises.
The "it'll make all your devs 6x as productive by the end of the year" types of promises. But those probably explain the valuations
9 replies →
That's not true, actually.
The technology, in terms of what it is being used for vs. what is invested, does not match up at all. The same was true in the dot-com bubble: a whole bunch of innovation still needed to happen to bring a delightful UX that would bring swathes of people onto the internet.
So far this is true of LLMs too. Could this change? Sure. Will it change meaningfully? Personally I don't believe so.
The internet at its core was all about hooking up computers so that they could transform from mere computational beasts into communication tools. There was a tremendous amount of potential that was very, very real. It just so happens that if computers can communicate, we can do a whole bunch of stuff, as is going on today.
What are LLMs? Can someone please explain in a succinct way...? I'm yet to see something super crystal clear.
AI has been here well before transformers. Please, let's not pretend that AI started with LLM chatbots.
Things like recommendations, ads, and search will always be around because they were money printers before VCs found out about AI and they will continue to be long after.
This is always a bad take.
The dotcom bubble was not about "the internet" itself. The internet was fine and pretty much already proven as a very useful communication tool. It was about businesses that made absolutely no sense getting extremely high valuations just because they operated, however vaguely, over the internet.
Generative AI has never reached the level of usability of the internet itself, and likely never will.
There’s a typo. “Society” should be written as “five billionaires”
By society, he means the oligarchy will get more power and control.
Yeah, sure, some side benefits to people. But AI is still a nuclear weapon against labor in the capital vs. labor (haves vs. have-nots) struggle, and will start pushing wealth inequality to Egyptian-pharaoh levels.
The only good news for plebeians is that virtual reality entertainment means you just need a little closet to live in.
Overall, this just leads to further demographic decline, which as an environmental Malthusian I would welcome in the initial stages to get us down from our current level, but I also suspect it would turn into an economic downward spiral, especially with AI, where the oligarchs have such total authoritarian control and monopoly on resources that humanity basically stops having kids at all.
AI is in a bubble the same way we were never going to need more than 16kb of memory.
My translation of what Jeff says:
"AI is in a bubble but billionaires will get 'gigantic' benefits"
I see no benefit to anyone unless you can live off your stock portfolio and can easily ride through periods where it suffers a 50% loss.
Honestly, during the dotcom bubble at least workers were getting paid and jobs were abundant. Things didn't start getting bad for workers until it popped. We're supposed to be in the 'positive' part of the AI bubble and people already seem desperate and out of hope.
Everyone not directly involved seems to want AI to pop. I'm not sure if that says anything about its longevity. Not very fun to have a bubble that feels bad on both sides.
"Commodity Tokens"
“Billionaire known for exploiting workers says ‘trust me, you all will benefit from this’”
[dead]
[stub for offtopicness]
(title fixed now)
Totally wrong title
@dang
Title needs to be changed to something like
"Bezos says AI is in industrial bubble yet promises huge benefits"
Tagging doesn't work on HN.
Very misleading clipping of the headline imo
[dead]
[dead]
[dead]
[flagged]
He may be right; everything points to that conclusion. My main issue is: why the fuck do we care what Bezos thinks on this matter? His ML efforts all lag behind the competition, and he's certainly not an expert in the field of deep learning. Why?
Well I say nuh-huh
Oh if Jeff Bezos says it then it must be true
AI is a solution searching for a problem.
This is what he acknowledges.