Comment by blast
10 days ago
He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct. There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.
The biggest problem is that the infrastructure left behind by the Dotcom boom, the high-speed fiber that laid the path for the current world, doesn't have an analogue in computer chips. Are you still using Intel chips from 1998? And the chips are a huge cost, backed by debt, yet they depreciate in value exponentially. It's not the same, because so much of the current debt-fueled spending is on an asset with a very short shelf life. I think AI will be huge; I don't doubt the endgame once it matures. But the bubble now, spending huge amounts on these data centers using debt without a path to profitability (and inordinate spending on these chips), is dangerous. You can think AI will be huge and still see how dangerous the current manifestation of the bubble is. A lot of people will get hurt very, very badly. This is going to maim the economy in a generational way.
And a lot of the gains from the Dotcom boom are being paid back in negative value for the average person at this point. We have automated systems that waste our time when we need support, product features that should have a one-time cost being turned into subscriptions, a complete usurping of the ability to distribute software or build compatible replacements, etc.
The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
If you've ever been to a third-world country, you'd see how this is completely untrue. The dotcom boom revolutionized the way of life for people in countries like India.
Even for the average person in America, consider the ability to do so many activities online that would have taken hours otherwise (e.g. shopping, research, DMV/government activities, etc.). The fact that we see negative consequences of this, like social-network polarization or brainrot, doesn't negate the positives that have been brought about.
20 replies →
AI itself is a manifestation of that too: a huge time waster for a lot of people. Getting randomly generated information that's wrong but sounds right is very frustrating. Start asking AI questions you already know the answers to and the issues become very obvious.
I know HN, and most younger people or people with certain political leanings, always push "rich people bad" narratives, but I feel a lot of tech has made our lives easier and better. It's also made them more complicated and worse in some ways. That effect has applied to everyone.
In poor countries, people may not have access to clean running water, but it's almost guaranteed they have cell phones. We saw that in a documentary recently. What's good about that? They use cell phones not only to stay in touch but to carry out small business and personal sales, something that wouldn't have been possible before the Internet age.
Don’t say that, you’ll hasten the Antichrist!
1 reply →
> The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
You are describing platform capture. Be it Google Search, YouTube, TikTok, Meta, X, the App Store, the Play Store, Amazon, or Uber: they have all made themselves intermediaries between the public and services, extracting a huge fee. I see it like rent going up in a region until it reaches the maximum bearable level, making it almost not worth it to live and work there. They extract value in both directions, up and down, like ISPs without net neutrality.
But AI has a different dynamic: it is not easy to centrally control ranking, filtering, and UI with AI agents. You can download an LLM; you can't download a Google or a Meta. Now it is AI agents that have the "ear" of the user base.
It's not as if things were good before: we had a generation of people writing slop to grab attention on the web and social networks, from the lowest porn site to CNN. We all got prompted by the Algorithm. Now that Algorithm is being replaced by many AI agents that serve users more directly than before.
1 reply →
That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.
This is a good point, and it would be interesting to see the relative value of this building-and-housing "plumbing" overhead vs. the chips themselves.
I guess another example of the same thing is power generation capacity, although this comes online so much more slowly that I'm not sure the dynamics would work in the same way.
5 replies →
How much more centralized data center capacity do we actually need outside of AI? And how much more would we need if we spent slightly more time doing things efficiently?
This is true. The depreciation timeline is probably 2-3 times as long as a GPU's, but it's still probably half or a quarter of that of a carrier fiber line.
2 replies →
There are probably a lot of cool and useful things you could do with a bunch of data centers full of GPUs.
- better weather forecasts
- modeling intermittent generation on the grid to get more solar online
- drug discovery
- economic modeling
- low cost streaming games
- simulation of all types
- cloud gaming service? :D
2 replies →
If you look at year-over-year chip improvements in 2025 vs. 1998, it's clear that modern hardware simply has a longer shelf life than it used to. The difficulties in getting more performance for the same power expenditure are very different from back in the day.
There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.
In 1998, transferring a megabyte over telephone lines was expensive; 5 years later it was almost free.
I have not seen the prices of GPUs, CPUs, or RAM going down; on the contrary, they get more expensive every day.
2 replies →
> This is going to maim the economy in a generational way.
Just as I'm getting to the point where I can see retirement coming from off in the distance. Ugh.
One thing to note about modern social media is that the most negative comment tends to become the most upvoted.
You can see that all across this discussion.
> the current debt fueled spending
Honestly I think the most surprising thing about this latest investment boom has been how little debt there is. VC spending and big tech's deep pockets keep banks from being too tangled in all of this, so the fallout will be much more gentle imo.
We don't have Moore's law anymore. Why are the chips becoming obsolete so quickly?
FLOP/s/$ is still increasing exponentially, even if the specific components don't match Moore's original phrasing.
Markets for electronics have momentum, and estimating that momentum is how chip producers plan for investment in manufacturing capacity, and how chip consumers plan for deprecation.
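A minimal sketch of what that exponential trend implies for an aging chip (the doubling time below is my assumption for illustration, not a sourced figure):

    # Toy model: FLOP/s/$ doubling every ~2.5 years (assumed, not measured).
    doubling_years = 2.5

    for age in (0, 2.5, 5, 7.5, 10):
        remaining = 0.5 ** (age / doubling_years)
        print(f"{age:4.1f}-year-old chip: {remaining:.0%} of current FLOP/s/$")

Under that assumption, a five-year-old chip delivers about a quarter of today's price-performance, which is roughly the horizon buyers plan deprecation around.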
They kind of aren't. If you actually look at "how many dollars am I spending per month on electricity", there's a good chance it's not worth upgrading even if your computer is 10 years old.
Of course this makes some moderate assumptions: it was a solid build in the first place, not a flimsy laptop, not artificially made obsolete or slowed down, etc. Even then, "install an SSD" and "install more RAM" covers most of what's needed.
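A rough back-of-envelope, where every number is an assumption for illustration:

    # Does the power bill alone justify an upgrade?
    extra_watts = 100      # assumed extra draw of a 10-year-old desktop
    hours_per_day = 8
    usd_per_kwh = 0.15     # assumed electricity price

    monthly_kwh = extra_watts / 1000 * hours_per_day * 30
    print(f"~{monthly_kwh:.0f} kWh, ~${monthly_kwh * usd_per_kwh:.2f} extra per month")
    # -> ~24 kWh, ~$3.60 extra per month

At a few dollars a month, electricity alone rarely pays for a new machine.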
Of course, if you are a developer you should avoid doing these things so you won't get encouraged to write crappy programs.
Companies want GW-scale data centers, which are a new thing and will last decades, even if the GPUs inside are consumable and have high failure rates. Also, depending on how far it takes us, the buildout could upgrade the electric grid and make electricity cheaper.
And there will also be software infrastructure, which could be durable. There will be improvements to software tooling and the ecosystem. We will have enormous pre-trained foundation models. These model-weight artifacts can be copied for free, distilled, or fine-tuned for a fraction of the cost.
About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.
That 40% has a very long shelf life.
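A rough blended write-off sketch under that split (the asset lifetimes below are my assumptions, not sourced figures):

    # Blended straight-line depreciation for a 60/40 chips-vs-shell split.
    chip_share, chip_life = 0.60, 5     # GPUs: assumed ~5-year useful life
    shell_share, shell_life = 0.40, 25  # building + power: assumed ~25 years

    blended = chip_share / chip_life + shell_share / shell_life
    print(f"blended depreciation: {blended:.1%} per year")
    # -> 13.6% per year, dominated almost entirely by the chips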
Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.
At this point, geoengineering is the only thing that can earn us a bit of time to figure...idk, something out, and we can only hope the oceans don't acidify too much in the meantime.
Interesting. Do you have any sources for this 60/40 split? And while I agree that the infrastructure has a long shelf life, it seems to me like an AI bubble burst would greatly depreciate the value of this infrastructure as the demand for it plummets, no?
Intel chips from 2008, as there have been no real improvements.
While yes, I sure look forward to the flood of cheap graphics cards we will see 5-10 years from now. I don't need the newest card, and I don't mind five-year-old top-of-the-line at discount prices.
They're only replacing GPUs because investors will give "free" money to do so. Once the bubble pops people will realize that GPUs actually last a while.
I think you partially answer your own question, though. Is the value in the depreciating chips, or in the huge datacenters, with their cooling and energy supply at such scale, etc.?
I am not still using the same 1 Mbps token ring from 1998 or the same dial-up connecting to some 10 Mbps backbone.
I am using x86 chips though.
A lot of the infrastructure built during the Dotcom boom was discarded shortly afterward. How many dial-up modems were sold in the 90s?
The current AI bubble is leading to trained models that won't be feasible to retrain for a decade or longer after the bubble bursts.
The wealth the Dotcom boom left behind wasn't in dial-up modems or internet over the telephone; it was in the huge amount of high-speed fiber-optic network that was laid down. I think a lot of that infrastructure is still in use today; fiber-optic cables can last 30 years or more.
3 replies →
Honestly, not that many people had modems.
7 replies →
My 486SX with math co-processor is long gone.
Personally I think people should stop trying to reason from the past.
As tempting as it is, it leads to false conclusions, because you are not thinking about how this particular situation is going to impact society and the economy.
It's much harder to reason this way, but isn't that the point? Personally, I don't want to hear or read analogies based on the past; I want to see and read stuff that comes from original thinking.
Doesn't that line of reasoning leave you in danger of being largely ignorant? There's a wonderful quote from Twain: "History doesn't repeat itself, but it often rhymes." There are two critical things I'd highlight in that quote. First, the contrast between repetition and rhyming draws attention to the fact that things are never exactly the same; there's just a gist of similarities. Second, history often rhymes but doesn't always: this sure looks like a bubble, but it might not be, and it might be something entirely new. That all said, it's important to learn from history, because there are clear echoes of it in current events; we, people in general, don't change that fundamentally.
IME the number of times where people have said "this time it's different" and been wrong is a lot higher than the number of times they've said "this time is the same as the last" and been wrong. In fact, it is the increasing prevalence of the idea that "this time it's different" that makes me batten down the hatches and invest somewhere with more stability.
With all due respect, I'm not here to teach people how to think.
This guy gets it - https://www.youtube.com/watch?v=kxLCTA5wQow
Instead of plainly jumping on the bubble bandwagon, he actually goes through a thorough analysis.
This won’t even come close to maiming the economy, that’s one of the more extreme takes I’ve heard.
AI is already making us wildly more productive. I vibe coded 5 deep ML libraries over the last month or so. This would have taken me maybe years before, when I was manually coding as an MLE.
We have clearly hit the stage of exponential improvement, and to not invest basically everything we have in it would be crazy. Anyone who doesn’t see that is missing the bigger picture.
Go ahead and post what you've done, fella, so we can critique it.
Why is it that these kinds of posts never come with any attachments? We are all interested to see it, m8.
6 replies →
Video game crash followed by video games taking off and eclipsing most other forms of digital entertainment.
Dot com crash followed by the web getting pretty popular and a bit central to business.
To all those betting big on AI before the crash:
Careful, Icarus.
Bad comparison.
The leap of faith necessary for LLMs to achieve the same feat is so large it's very difficult to imagine it happening, particularly given the well-known constraints on what the technology is capable of.
The whole investment thesis of LLMs is that they will be able to a) be intelligent and b) produce new knowledge. If those two things don't happen, what has been delivered is not commensurate with the risk relative to the money invested.
Given they're referencing Icarus, they seem to agree with you.
Past bubbles leaving behind something of value is indeed no guarantee the current bubble will do so. For as many times as people post "but dotcom produced Amazon" to HN, people had posted that exact argument about the Blockchain, the NFT, or the "Metaverse" bubbles.
Many AI startups around LLMs are going to crash and burn.
This is because many people have mistaken LLMs for AI, when they're just a small subset of the technology. This has driven myopic focus in a lot of development and has led to naive investors placing bets on golden dog turds.
I disagree on AI as a whole, however, as unlike previous technologies this one can self-ratchet and bootstrap: ML-designed chips, ML-designed models, and around you go until god pops out the exit chute.
> Careful, Icarus.
What does that even mean?
pets.com was a fat loser, only telling people they were going to fly.
Amazon was Icarus; they did something.
Versus weak commentators going on about the wax melting from their parents' root cellar while Icarus was soaring.
Most of Y Combinator are not using AI, they just say that, and you're worried about the people who do things?
> commentators going on about the wax melting from their parents' root cellar while Icarus was soaring.
Icarus drowned in the sea.
Even if you want to put the world into only two lumps, cellar dwellers and Icaruses, it is still a group of living people on one side, and on the other a floating, semi-submerged pile of dead bodies that are literally remembered only for how stupid their deaths were.
Dotcom mania companies were not Internet providers. They tried making money on the internet, something people already saw as worth paying for.
Cisco, Level3 and WorldCom all saw astronomical valuation spikes during the dotcom bubble and all three saw their stock prices and actual business prospects collapse in the aftermath of it.
Perhaps the most famous implosion of all was AOL, which merged (sort of) with TimeWarner, gaining the lion's share of control through market-cap balancing. AOL fell so destructively that it nearly wiped out all the value of the actual hard assets that TW controlled pre-merger.
This is not really true; e.g., Cogent was basically created by buying bankrupt dotcom-bubble network providers for cents on the dollar.
Also, AOL was a mix of dial-up provider and Dotcom service. There were many other popular examples of the same.
2 replies →
I would add more metrics to think about. For example, very few people used the Internet in the dotcom era, while AI use today is spread across the entire Internet-using population, which will probably not grow much more. If the Internet population is the driver and it won't grow significantly, we are just redistributing attention. Assuming "all" of society becomes more productive, we will all be on the same train at relatively the same speed.
And what were the societal benefits of the internet?
That everybody all over the world can instantly connect with each other?
The 90s bubble also had massive financial fraud, and it laid down capital that wasn't being used at 100% utilization when it hit the ground, just like what we are seeing now.
It’s different enough that it probably isn’t relevant.
> [At dotcom time] There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.
It did, but not for the better. Quality of life and standard of living both declined while income inequality skyrocketed; that period of time is now known as The Great Divergence.
> He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct.
He's got no downside if he's wrong or doesn't deliver; what he's promising is analogous to selling you a brand-new bridge in exchange for half of your money... and you're ecstatic about it.
Thank you for acknowledging this. The internet was created around a lot of lofty idealism, and none of that has been realized other than opening up the world's information to a great many. It made society and the global economy worse (in the occidental West; the Chinese middle class might disagree) and has paralleled the destabilization of geopolitics. I am no luddite, but until we can "get our moral shit together", new technologies are nothing but fuel on the proverbial fire.
Glad to be in agreement. The higher message here is that technology is no substitute for politics; cue the crypto hype, which produced little more than crime and money laundering. Without proper policies, corruption invades every stratum of society.
The analogy to the dot com bubble is leaky at best. AI will hit a point of exponential improvement; we are already in the outer parts of this loop.
It will become so valuable so fast we struggle to comprehend it.
Then why has my experience with AI started to see such dramatically diminishing returns?
2022-2023: AI changed enough to convert me from a skeptic to a believer. I started working as an AI Engineer and wanted to be on the front lines.
2023-2024: Again, major changes, especially as far as coding goes. I started building very promising prototypes for companies and was able to build a laundry list of projects that were just boring to write.
2024-2025: My day-to-day usage has decreased. The models seem better at fact-finding but worse for code. None of those "cool" prototypes, from me or anyone else I knew, seemed able to become more than just that. Many of the cool companies I started learning about in 2022 have started to reduce staff and are running into financial troubles.
The only area where I've been impressed is the relatively niche improvements in open-source text/image-to-video models. It's wild that you can make short animated films on a home computer now.
But even there I'm seeing no signs of "exponential improvement".
I vibe coded 5 deep ML libraries this month. I'm an MLE by trade, and it would have taken me ages without AI. This wasn't possible even a year ago. I have no idea how anyone thinks the models haven't improved.
3 replies →
Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.
It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little else than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.
If those statistical models are helping you do better research, or basically doing most of it better than you can, does it matter? People act like models are implicitly bad because they are statistical, which makes no sense at all.
If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self improvement. And yes it will progress because thats the nature of it doing meaningful work.
1 reply →
While this remains possible my main impression now is that progress seems to be slowing down rather than accelerating.
Not even remotely. In LLM land, the progress seems slow the past few years, but a lot has happened under the hood.
Elsewhere in AI however progress has been enormous, and many projects are only now reaching the point where they are starting to have valuable outputs. Take video gen for instance - it simply did not exist outside of research labs a few years ago, and now it’s getting to the point where it’s actually useful - and that’s just a very visible example, never mind the models being applied to everything from plasma physics to kidney disease.
2 replies →
If you keep up with the research, this isn't the case; ML timelines have always been slower than anyone would like.
I'm not so sure about this.
First were the models. Then the APIs. Then the cost efficiencies. Right now the tooling and automated workflows. Next will be a frantic effort to "AI-Everything". A lot of things won't make the cut, but absolutely many tasks, whole jobs, and perhaps entire subsets of industries will flip over.
For example, you might say no AI can write a completely tested, secure, fully functional mobile app from one prompt (yet). But look at the advancements in Cline, Claude Code, MCPs, code execution environments, and other tooling in just the last 6 months.
The whole monkeys-with-typewriters-producing-Shakespeare thing starts to become viable.
Maybe once we get another architecture breakthrough, it won't feel so slow.