
Comment by zerosizedweasle

10 days ago

The biggest problem is that the infrastructure left behind from the Dotcom boom that laid the path for the current world (the high-speed fiber) doesn't translate to computer chips. Are you still using Intel chips from 1998? The chips are a huge cost, and they're being backed by debt even though they depreciate in value exponentially. It's not the same, because so much of the current debt-fueled spending is on an asset with a very short shelf life. I think AI will be huge; I don't doubt the endgame once it matures. But the bubble now, spending huge amounts on these data centers using debt without a path to profitability (and inordinate spending on these chips), is dangerous. You can think AI will be huge and still see how dangerous the current manifestation of the bubble is. A lot of people will get hurt very, very badly. This is going to maim the economy in a generational way.

And a lot of the gains from the Dotcom boom are being paid back in negative value for the average person at this point. We have automated systems that waste our time when we need support, product features that should have a one-time cost being turned into subscriptions, a complete usurping of the ability to distribute software or build compatible replacements, etc.

The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people who have gained control of everything.

  • If you've ever been to a third world country, then you'd see how this is completely untrue. The dotcom boom has revolutionized the way of life for people in countries like India.

    Even for the average person in America, the ability to do so many activities online that would otherwise have taken hours (e.g. shopping, research, DMV/government activities) is a real gain. The fact that we see negative consequences of this, like social network polarization or brainrot, doesn't negate the positives that have been brought about.

    • I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.

      For sure, we can shop faster, and (attempt to do) research and admin faster. But…

      Shopping: used to be fun. You’d go with friends or family, discuss the goods together, gossip, bump into people you knew, stop for a sandwich, maybe mix shopping and a cinema or dinner trip. All the while, you’d be aware of other people’s personal space, see their family dynamics. Queuing for event tickets brought you shoulder to shoulder with the crowd before the event began… Today, we do all this at home; strangers (and communities) are separated from us by glass, cables and satellites, rather than by air and shouting distance. I argue that this time saving is reducing our ability to socialise.

      Research: this is definitely accelerated, and probably mostly for the better. But… some kinds of research were mingled with the “shopping” socialisation described above.

      Admin: the happy path is now faster, and functioning bureaucracy is smoother in the digital realm. But it’s the edge cases which are now more painful. Elderly people struggle with digital tech and prefer face to face. Everyone is more exposed to subtle and challenging threats (identity theft, fraud); we all have to learn complex and layered mitigation strategies. Also: digital systems are very fragile: they leak private data, they’re open to wider attack surfaces, they need more training and are harder to intuit without that training; they’re ripe for capture by monopolists (Google, Palantir).

      The time and cost savings of all these are not felt by the users, or even the admins of these systems. The savings are felt only by the owners of the systems.

      Technology has saved billions of person-hours in individual costs, in travel, in physical work. Yet we’re working longer, using a narrower range of motion, are less fit, less able to tolerate others’ differences, and the wealth gap is widening.

      8 replies →

    • It seems the crux is that we needed X people to produce goods, and we had Y demand.

      Now we need X*0.75 people to meet Y demand.

      However, those savings are partially piped to consumers, and partially piped to owners.

      There is only so much marginal propensity to spend that rich people have, so that additional wealth is not resulting in an increase in demand, at least not one commensurate enough to absorb the 25% who are unemployed or underemployed.

      Ideally that money would be getting ploughed back into making new firms, or creating new work, but the work being created requires people with PhDs and a few specific skills, which means that entire fields of people are not in the work force.

      However, all that money has to go somewhere, and so asset classes are rising in value, because there is nowhere else for it to go.

      7 replies →

    • With telecom, we benefited from skipping generations. I got into a telecom management program because, in 2001-ish, I was passed on a village street by a farmer bicycling while talking on his cellphone. Mind you, my family could not afford cellphone call rates at the time.

      In fact, the technology was introduced out here assuming corporate / elite users. The market reality became such that telcos were forced kicking and screaming to open up networks to everybody. The Telecom Regulatory Authority of India (back then) mandated rural <> urban parity of sorts. This eventually forced telcos to share infrastructure costs (share towers, etc.). The total call and data volumes are eye-watering, but low-yield (low ARPU). I could go on and on, but it's just batshit crazy.

      Now UPI is layered on top of that---once again benefiting from the Reserve Bank of India's mandate for zero-fee transactions, and participating via a formal data interchange protocol and format.

      Speaking from India, having lived here all my life, and occasionally travelled abroad (USAmerica, S.E. Asia).

      We, as a society and democracy, are also feeling the harsh, harsh hand of "Code is Law", and increasingly centralised control of communication utilities (which the telecoms are). The left hand of darkness comes with a lot of darkness, sadly.

      Which brings me to the moniker of "third world".

      This place is insane, my friend --- first, second, third, and fourth worlds all smashing into each others' faces all the time. In so many ways, we are more first world here than many western countries. I first visited USAmerica in 2015, and I could almost smell an empire in decline. Walking past Twitter headquarters in downtown SF, of all places, avoiding needles and syringes strewn on the sidewalk, and avoiding the completely smashed guy just barely standing there, right in the middle of it all.

      That was insane.

      2 replies →

  • AI itself is a manifestation of that too, a huge time waster for a lot of people. Getting randomly generated information that is wrong but sounds right is very frustrating. Start asking AI questions you already know the answer to and the issues become very obvious.

  • I know HN, and most younger people or people with certain political leanings, always push "rich people bad" narratives, but I feel a lot of tech has made our lives easier and better. It's also made them more complicated and worse in some ways. That effect has applied to everyone.

    In poor countries, they may not have access to clean running water, but it's almost guaranteed they have cell phones. We saw that in a documentary recently. What's good about that? They use cell phones not only to stay in touch but to carry out small business and personal sales, something that wouldn't have been possible before the Internet age.

  • > The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.

    You are describing platform capture. Be it Google Search, YouTube, TikTok, Meta, X, App Store, Play Store, Amazon, Uber - they have all made themselves intermediaries between the public and services, extracting a huge fee. I see it like rent going up in a region until it reaches the maximum bearable level, making it almost not worth it to live and work there. They extract value in both directions, up and down, like ISPs without net neutrality.

    But AI has a different dynamic: it is not easy to centrally control ranking, filtering and UI with AI agents. You can download an LLM; you can't download a Google or Meta. Now it is AI agents that have the "ear" of the user base.

    It's not like it was good before - we had a generation of people writing slop to grab attention on the web and social networks, from the lowest porn site to CNN. We all got prompted by the Algorithm. Now that Algorithm is replaced by many AI agents that serve users more directly than before.

    • >You can download a LLM, can't download a Google or Meta.

      You can download a model. That doesn't necessarily mean you can download the best model and all the ancillary systems attached to it by whatever service. Just like you can download a web index, but you probably cannot download Google's index and certainly can't download their system of crawlers for keeping it up to date.

That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.

  • This is a good point, and it would be interesting to see the relative value of this building and housing 'plumbing' overhead vs. the chips themselves.

    I guess another example of the same thing is power generation capacity, although this comes online so much more slowly that I'm not sure the dynamics would work in the same way.

    • The data centers built in 1998 don't have nearly enough power or cooling capacity to run today's infrastructure. I'd be surprised if very many of them are even still in use. Cheaper to build new than upgrade.

      4 replies →

  • How much more centralized data center capacity do we actually need outside AI? And how much more would we need if we spent slightly more time doing things more efficiently?

  • This is true. The depreciation timeline is probably 2-3 times as long as a GPU's. But it's still probably half or a quarter of that of a carrier fiber line.
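    As a rough illustration of how far apart those horizons are, here is a tiny straight-line depreciation sketch; the 5/12/30-year useful lives are assumptions for illustration, not figures from this thread:

    ```python
    # Hypothetical useful lives (assumed for illustration, not from the thread).
    lifetimes_years = {"GPU": 5, "data center shell & power/cooling": 12, "carrier fiber": 30}

    for asset, years in lifetimes_years.items():
        # Straight-line depreciation: the same fraction of cost written off each year.
        print(f"{asset}: ~{100 / years:.0f}% of original cost per year")
    ```

    Under those assumed lifetimes the chips shed roughly 20% of their value a year versus about 3% for fiber, which is the gap being pointed at here.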

    • Even if the building itself is condemnable, what it took to build it out is still valuable.

      To give a different example, right now, some of the most prized sites for renewable energy are former coal plant sites, because they already have big fat transmission lines ready to go. Yesterday's industrial parks are now today's gentrifying urban districts, and so on.

      1 reply →

There are probably a lot of cool and useful things you could do with a bunch of data centers full of GPUs.

- better weather forecasts

- modeling intermittent generation on the grid to get more solar online

- drug discovery

- economic modeling

- low cost streaming games

- simulation of all types

  • - cloud gaming service? :D

    • Eh, not really. Maybe retro cloud gaming services. But games haven't stopped getting more demanding every year. AI GPUs are focused on achieving clusters with great compute performance per watt and per dollar rather than on making singular GPUs with great raster performance; and even the GPUs which are powerful enough for current games won't be powerful enough for games in 5 years.

      Not to mention that we're still nowhere near close to solving the broadband coverage problem, especially in less developed countries like the US and most of the third world. If anything, it seems like we're moving towards satellite internet and cellular for areas outside of the urban centers, and those are terrible for latency-sensitive applications like game streaming.

      1 reply →

If you look at year-over-year chip improvements in 2025 vs. 1998, it's clear that modern hardware just has a longer shelf life than it used to. The difficulties in getting more performance for the same power expenditure are just very different from back in the day.

There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.

  • In 1998, transferring a megabyte over telephone lines was expensive, and 5 years later it was almost free.

    I have not seen the prices of GPUs, CPUs or RAM going down; on the contrary, each day they get more expensive.

    • In 1998, 16 MiB of RAM was ~$200; in 2025, 16 GiB of RAM is about $50. A Pentium II in 1998 at 450 MHz was $600. Today, an AMD Ryzen 7 9800X can be had for $500. That Ryzen is maybe 100 times as powerful as the Pentium II. What's available at what price point has changed, but it's ridiculous how much computing I can get for $150 at Best Buy, and it's also ridiculous how little I can do with that much computing power. Wirth's law still holds: software is getting slower more rapidly than hardware is getting faster.
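      To put a number on the RAM part, a quick sketch using the ballpark prices quoted above (the figures are this comment's rough numbers, not exact market data):

      ```python
      # $/MiB of RAM, using the ballpark prices quoted above.
      price_1998, mib_1998 = 200, 16          # ~$200 for 16 MiB in 1998
      price_2025, mib_2025 = 50, 16 * 1024    # ~$50 for 16 GiB in 2025

      per_mib_1998 = price_1998 / mib_1998    # ~$12.50 per MiB
      per_mib_2025 = price_2025 / mib_2025    # ~$0.003 per MiB
      print(f"RAM is ~{per_mib_1998 / per_mib_2025:,.0f}x cheaper per MiB")  # ~4,096x
      ```

      The same arithmetic applied to the CPU figures above gives roughly a 100x improvement per dollar, consistent with the comment's estimate.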

    • But why not measure $ per unit of intelligence? In 2020 you'd have needed a billion dollars to get your computer to write good code; now it is practically free.

> This is going to maim the economy in a generational way.

Just as I'm getting to the point where I can see retirement approaching off in the distance. Ugh.

One thing to note about modern social media is that the most negative comment tends to become the most upvoted.

You can see that all across this discussion.

> the current debt fueled spending

Honestly I think the most surprising thing about this latest investment boom has been how little debt there is. VC spending and big tech's deep pockets keep banks from being too tangled in all of this, so the fallout will be much more gentle imo.

We don't have Moore's law anymore. Why are the chips becoming obsolete so quickly?

  • FLOP/s/$ is still increasing exponentially, even if the specific components don't match Moore's original phrasing.

    Markets for electronics have momentum, and estimating that momentum is how chip producers plan for investment in manufacturing capacity, and how chip consumers plan for depreciation.
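    As a back-of-the-envelope way to turn that exponential into a depreciation curve (the 30% and 50% annual FLOP/s/$ improvement rates below are assumed for illustration, not measured figures):

    ```python
    # If FLOP/s/$ improves by r per year, a chip bought today delivers roughly
    # 1/(1+r)^t of the compute-per-dollar of a new part after t years, which is
    # one crude way to plan a depreciation schedule.
    def relative_value(annual_improvement: float, years: int) -> float:
        return 1.0 / (1.0 + annual_improvement) ** years

    for r in (0.30, 0.50):  # hypothetical yearly FLOP/s/$ gains
        curve = [round(relative_value(r, t), 2) for t in range(1, 6)]
        print(f"r={r:.0%}: {curve}")
    ```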

  • They kind of aren't. If you actually look at "how many dollars am I spending per month on electricity", there's a good chance it's not worth upgrading even if your computer is 10 years old.

    Of course this does make some moderate assumptions: that it was a solid build in the first place, not a flimsy laptop, not artificially made obsolete or slow, etc. Even then, "install an SSD" and "install more RAM" cover most of it.

    Of course, if you are a developer you should avoid doing these things so you won't get encouraged to write crappy programs.

Companies want GW-scale data centers, which are a new thing that will last decades, even if GPUs are consumable and have high failure rates. Also, depending on how far it takes us, this could upgrade the electric grid and make electricity cheaper.

And there will also be software infrastructure, which could be durable. There will be improvements to software tooling and the ecosystem. We will have enormous pre-trained foundation models. These model weight artifacts could be copied for free, distilled, or fine-tuned for a fraction of the cost.
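To illustrate the "fraction of the cost" point, here is a minimal, hypothetical PyTorch-style sketch of reusing an expensive pre-trained artifact: the backbone stands in for a foundation model, the checkpoint path is made up, and only a small new head gets trained.

```python
import torch
import torch.nn as nn

# Sketch: reuse a pre-trained backbone by freezing its weights and training
# only a small task-specific head, a tiny fraction of the original training cost.
backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU())   # stand-in for a foundation model
# backbone.load_state_dict(torch.load("pretrained.pt"))    # hypothetical checkpoint path

for p in backbone.parameters():
    p.requires_grad = False                                 # keep the expensive part fixed

head = nn.Linear(512, 10)                                   # new task-specific layer
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)   # only the head is updated

x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))    # toy batch
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
optimizer.step()
```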

About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.

That 40% has a very long shelf life.

Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.

At this point, geoengineering is the only thing that can earn us a bit of time to figure...idk, something out, and we can only hope the oceans don't acidify too much in the meantime.

  • Interesting. Do you have any sources for this 60/40 split? And while I agree that the infrastructure has a long shelf life, it seems to me like an AI bubble burst would greatly depreciate the value of this infrastructure as the demand for it plummets, no?

Well, yes. I sure look forward to the flood of cheap graphics cards we will see 5-10 years from now. I don't need the newest card, but I don't mind a five-year-old top-of-the-line card at discount prices.

They're only replacing GPUs because investors will give "free" money to do so. Once the bubble pops people will realize that GPUs actually last a while.

I think you partially answer your own question though. Is the value in the depreciating chips, or in the huge datacenters, with cooling and energy supply at such scale, etc.?

I am not still using the same 1 Mbps token ring from 1998 or the same dial-up connecting to some 10 Mbps backbone.

I am using x86 chips though.

A lot of the infrastructure made during the Dotcom boom was discarded shortly afterward. How many dial-up modems were sold in the 90s?

The current AI bubble is leading to trained models that won't be feasible to retrain for a decade or longer after the bubble bursts.

  • The wealth the Dotcom boom left behind wasn't in dial-up modems or internet over the telephone; it was in the huge amounts of high-speed fiber optic networks that were laid down. I think a lot of that infrastructure is still in use today; fiber optic cables can last 30 years or more.

Personally I think people should stop trying to reason from the past.

As tempting as it is, it leads to false outcomes because you are not thinking about how this particular situation is going to impact society and the economy.

It's much harder to reason this way, but isn't that the point? Personally I don't want to hear or read analogies based on the past - I want to see and read stuff that comes from original thinking.

  • Doesn't that line of reasoning leave you in danger of being largely ignorant? There's a wonderful quote from Twain: "History doesn't repeat itself, but it often rhymes." There are two critical things I'd highlight in that quote. First, the contrast between repetition and rhyming draws attention to the fact that things are never exactly the same; there's just a gist of similarities. Second, it often rhymes but doesn't always; this sure looks like a bubble, but it might not be, and it might be something entirely new. _That all said_, it's important to learn from history, because there are clear echoes of history in events: we, people in general, don't change that fundamentally.

  • IME the number of times where people have said "this time it's different" and been wrong is a lot higher than the number of times they've said "this time is the same as the last" and been wrong. In fact, it is the increasing prevalence of the idea that "this time it's different" that makes me batten down the hatches and invest somewhere with more stability.

This won’t even come close to maiming the economy, that’s one of the more extreme takes I’ve heard.

AI is already making us wildly more productive. I vibe-coded 5 deep ML libraries over the last month or so. This would have taken me maybe years before, when I was manually coding as an MLE.

We have clearly hit the stage of exponential improvement, and to not invest basically everything we have in it would be crazy. Anyone who doesn’t see that is missing the bigger picture.

  • Go ahead and post what you've done, fella, so we can critique it.

    Why is it that all these kinds of posts never come with any attachments? We are all interested to see it, m8.

    • Because they are private and I can’t make them public. I have no self-interest here; I’m an MLE by trade and this technology threatens my job.

      I had been mostly opposed to vibe coding for a long time, autocomplete was fine but full agentic coding just made a mess.

      That’s changed now, this stuff is genuinely working on really hard problems.

      5 replies →