After the Bubble

4 months ago (tbray.org)

> Nobody who is doing this is willing to come clean with hard numbers but there are data points, for example from Meta and (very unofficially) Google.

The Meta link does not support the point. It actually implies an MTBF of over 5 years at 90% utilization, even if you assume there's no bathtub curve. Pretty sure that lines up with the depreciation period.
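For what it's worth, the arithmetic behind that kind of MTBF figure is easy to reproduce. A back-of-the-envelope sketch using the numbers widely cited from Meta's Llama 3 training report (roughly 16,384 H100s over a 54-day window, with 419 unexpected interruptions; treat these as illustrative rather than exact):

```python
# Back-of-the-envelope MTBF from the widely cited Llama 3 training figures.
# Numbers are approximate and for illustration only.
gpus = 16_384       # H100s in the training cluster
days = 54           # length of the reported pre-training window
failures = 419      # unexpected interruptions over that window

gpu_days_per_failure = gpus * days / failures   # MTBF in GPU-days
mtbf_years = gpu_days_per_failure / 365

print(f"MTBF ≈ {mtbf_years:.1f} years per GPU")  # ≈ 5.8 years
```

Which is indeed comfortably over 5 years, before accounting for any bathtub-curve effects.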

The Google link is even worse. It links to https://www.tomshardware.com/pc-components/gpus/datacenter-g...

That article makes a big claim but does not link to any source. It vaguely describes the source, but nobody who was actually in that role would describe themselves as the "GenAI principal architect at Alphabet". Like, those are not the words they would use. It would also be pointless to try to stay anonymous if that really were your title.

It looks like the ultimate source of the quote is this Twitter screenshot of an unnamed article (whose text can't be found with search engines): https://x.com/techfund1/status/1849031571421983140

That is not merely an unofficial source. That is made-up trash that the blog author lapped up, despite its obviously unreliable nature, because it confirmed his beliefs.

  • Besides, if the claim about GPU wear-and-tear were true, it would show up consistently in GPUs sourced from cryptomining (which was generally done in makeshift compute centers with terrible cooling and other environmental problems), and it just doesn't.

  • > It's actually implying a MTBF of over 5 years [...] Pretty sure that lines up with the depreciation period.

    You're assuming this is normal, for the MTBF to line up with the depreciation schedule. But the MTBF of data center hardware is usually quite a bit longer than the depreciation schedule, right? If I recall correctly, for servers it's typically double or triple, roughly. Maybe less for GPUs; I'm not directly familiar, but a quick web search suggests these periods shouldn't line up for GPUs either.

  • On top of that, Google isn't using Nvidia GPUs; they have their own TPUs.

    • Google is using Nvidia GPUs. More than that, I'd expect Google to still be something like 90% on Nvidia GPUs. You can't really check, of course. Maybe I'm an idiot and it's 50%.

      But you can see how that works: go to colab.research.google.com. Type in some code, "!nvidia-smi" for instance. Click the down arrow next to "Connect" and select "Change runtime type". 3 out of 5 GPU options are Nvidia GPUs.

      Frankly, unless you rewrite your models you don't really have a choice but to use Nvidia GPUs, thanks, ironically, to Facebook (the authors of PyTorch). There is PyTorch/XLA automatic translation to TPU, but it doesn't work for "big" models. And as a point of advice: you want stuff to work on TPUs? Do what Googlers do: use JAX ( https://github.com/jax-ml/jax ). Oh, and look at the commit logs of that repository to get your mind blown, btw.

      In other words, Google rents out Nvidia GPUs to their cloud customers (with the hardware physically present in Google datacenters).
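To illustrate the JAX suggestion above: the appeal is that the same jitted function compiles via XLA for whatever backend is present (CPU, GPU, or TPU) with no per-device code. A minimal sketch; the function and shapes here are invented purely for illustration:

```python
import jax
import jax.numpy as jnp

# The same jitted function runs unchanged on CPU, GPU, or TPU;
# JAX traces it once and compiles via XLA for the available backend.
@jax.jit
def predict(w, x):
    return jnp.tanh(x @ w)

x = jnp.ones((4, 8))
w = jnp.ones((8, 2))
print(jax.devices())        # e.g. [CpuDevice(id=0)], or GPU/TPU devices
print(predict(w, x).shape)  # (4, 2)
```

The same script on a Colab TPU runtime would list TPU devices and run there, which is exactly the portability PyTorch/XLA struggles to deliver for large models.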

There are many open questions about the overall economics of AI (its value, whether it's overvalued, and so on), but this is a very poor article, which I suspect was written by someone with little to no financial or accounting knowledge and a strong "uh, big tech bad" bias.

> When companies buy expensive stuff, for accounting purposes they pretend they haven’t spent the money; instead they “depreciate” it over a few years.

There's no pretending; it's accounting. When you buy an asset, you own it, and it is now part of your balance sheet. You incur a cost when the value of the asset falls, i.e. as it depreciates. If you spend 20k on a car, you are not pretending you haven't spent 20k by treating the car as an asset: you spent money, but now you have something of similar value as an asset. Your cost is the depreciation as the years go by and the car becomes less valuable. That's a very misleading way to put it.
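The car example can be made concrete with a quick straight-line depreciation sketch (numbers from the example above; zero salvage value and a 5-year useful life are assumptions for illustration):

```python
# Straight-line depreciation of the $20k car example: the cash leaves
# immediately, but the *expense* shows up spread over the useful life,
# matching the decline in the asset's book value.
cost = 20_000
salvage = 0
useful_life_years = 5

annual_expense = (cost - salvage) / useful_life_years
book_values = [cost - annual_expense * y for y in range(useful_life_years + 1)]

print(annual_expense)  # 4000.0 hits the income statement each year
print(book_values)     # [20000.0, 16000.0, 12000.0, 8000.0, 4000.0, 0.0]
```

At every point, cash spent plus remaining book value accounts for the full 20k; nothing is being hidden, just matched to the asset's life.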

> Management gets to pick your depreciation period, (...)

They don't. GAAP, IFRS, or whatever other accounting rules apply to the company do. There's some degree of freedom in certain situations, but it's not "whatever management wants". And it's funny that the author thinks companies are generally interested in defining longer useful lives, when in most cases (depending on other tax considerations) it's the opposite: depreciation is a non-cash expense, but you get real cash by reducing your taxable income, and the sooner you get that money the better. There's more nuance to this (tax vs. accounting treatment, how much freedom management actually has vs. what industry practice and auditors will allow), but my point, again, is that "management gets to pick" is not an accurate representation of what goes on.
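The point about getting the cash sooner can be shown with a toy present-value calculation: the total deduction is identical either way, but a shorter depreciation period front-loads the tax savings, which are worth more today. All numbers here (cost, tax rate, discount rate) are made up for illustration:

```python
# Why faster depreciation is usually *better* for taxes: same total
# deduction, higher present value when taken sooner.
# Illustrative inputs: $100M of hardware, 21% tax rate, 8% discount rate.
cost, tax_rate, r = 100e6, 0.21, 0.08

def tax_shield_npv(years):
    """Present value of the tax savings from straight-line depreciation."""
    annual_saving = (cost / years) * tax_rate
    return sum(annual_saving / (1 + r) ** (t + 1) for t in range(years))

print(f"3-year life: ${tax_shield_npv(3)/1e6:.1f}M of tax savings (PV)")
print(f"6-year life: ${tax_shield_npv(6)/1e6:.1f}M of tax savings (PV)")
```

Front-loading wins: roughly $18M vs. $16M of present-value tax savings in this toy example, which is why companies rarely lobby themselves into longer useful lives for tax purposes.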

> It’s like this. The Big-Tech giants are insanely profitable but they don’t have enough money lying around to build the hundreds of billions of dollars worth of data centers the AI prophets say we’re going to need.

Actually, they do. Meta has the least, but it could still easily raise that money. Meta just thinks it's a better deal to share the risk with investors who currently have a very strong appetite to own these assets; Meta is actually paying a higher rate through these SPVs than it would by funding them outright. Now, personally I don't know how I would feel about that particular deal as an investor, because you need to dig a little deeper into the balance sheet to get a good snapshot of what's going on, but it's not a trick; arguably it can make economic sense.

  • > I suspect made by someone with little to no financial or accounting knowledge with a strong "uh big tech bad" bias.

    Actually the author has worked for Google, Amazon (VP-level), Sun, and DEC; and was a co-creator of XML.

    • 1. Being a VP at these companies does not imply an understanding of financing, accounting, or data-center economics unless their purview covered, or was very close to, the teams procuring and running the infrastructure.

      2. That level of seniority does, on the other hand, expose them to a lot of the shenanigans going on in those companies, which could credibly lead them to develop a "big tech bad" mindset.

    • There are many legitimate concerns about the financial implications of these huge investments in AI. In fact, the podcast he references provides _informed_ and _nuanced_ observations about all of this; Paul Kedrosky is great.

      BUT (and this is my point)

      the article is terrible at reflecting all of that, and it makes wrong and misleading claims about it.

      The idea that companies depreciating assets is them "pretending they haven't spent the money" or that "management gets to pick your depreciation period" is simply wrong.

      Do you think any of those two statements are accurate?

      P.S. Maybe you make a good point. I said that I suspected, based on those statements, that he had little financial knowledge. To be honest, I didn't know the author, hence the "suspect". But given what you say, it might be that he is so biased on this particular topic that he can't represent it fairly. Irrespective of that, I'll say it again: statements like the ones I've commented on are absurd.

I have been thinking. The reality is that, in general, employees are not paid for the value/revenue/profit they generate; that only sets the floor. They are paid whatever rate the market sets for their kind of work. Look at the people assembling high-cost electronics: clearly there's a lot of value there, given the margins, but not a lot of pay.

Wouldn't AI largely be a race to the bottom? If so, even if expensive employees get replaced, the price of replacing them might not be that big; it might only barely cover the cost of inference, for example. So might it be that the profits end up a lot lower than the cost of the employees being replaced?
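The race-to-the-bottom worry can be sketched numerically: under competition, the price of AI-done work tends toward the cost of inference plus a thin margin, not toward the salary it replaces. Every number below is invented purely for illustration:

```python
# Toy sketch of the commenter's point: what matters for AI-vendor profit
# is the competitive price of the work, not the salary it displaces.
salary = 150_000             # annual cost of the replaced employee
inference_cost = 20_000      # annual compute cost to do the same work

# Monopoly-ish pricing: capture 80% of the displaced salary as revenue.
price_near_salary = salary * 4 // 5
# Commodity pricing: competition pushes price to cost plus a 20% margin.
price_near_cost = inference_cost * 6 // 5

print(price_near_salary - inference_cost)  # 100000 profit per seat
print(price_near_cost - inference_cost)    # 4000 profit per seat
```

Same job displaced in both scenarios, but the profit pool differs by 25x; which scenario wins depends entirely on how differentiated the AI vendors manage to stay.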

  • To your first point: yes, we're slowly moving towards a more general awareness that most employees are paid the market (replacement) rate, not their share of the value generated. As the replacement rate drops, so will wages, even if the generated value skyrockets. Unsurprisingly, business owners and upper management love this.

    To the second point, the race to the bottom won't be evenly distributed across all markets or market segments. A lot of AI-economy predictions focus on the idea that nothing else will change or be affected by second and third order dynamics, which is never the case with large disruptions. When something that was rare becomes common, something else that was common becomes rare.

"The financial voodoo runs deep here."

"Special Purpose Vehicles" reminds me of the "Special Purpose Entities" of the 90s and 00s, e.g., for synthetic leases.

It's the whole layer of feature companies built on top of the core models that will pop (resellers of OpenAI, Anthropic, Google). Existing services will add AI bit by bit as features and survive. The owners of the core models will, I think, survive (the exception might be OpenAI, maybe). As for the hardware, there really aren't that many TSMC wafers to go around, so maximum sales are capped for everyone at the newest nodes.

>When companies buy expensive stuff, for accounting purposes they pretend they haven’t spent the money; instead they “depreciate” it over a few years

I thought there was a US tax-law change sometime in the past 10-15 years that made companies depreciate computer hardware over 1 year. Am I misremembering?

I thought that law was the reason many companies increased the lifetime of employee laptops from 3 to 5 years.

I am not affected by this version of the bubble, except that I want RAM prices to come down for my new PC build.

  • All you have to do is wait. Seriously, just wait. If the tech deflationists are right, you'll get more cost-effective memory every several years, or decades at least.

    Somewhere around 1999, my high-school buddy worked overtime shifts to afford a CPU he had waited forever to buy. Wait for it: it was a 1 GHz CPU!

> When the early-electrification bubble built, we were left with the grid. And when the dot-com bubble burst, we were left with a lot of valuable infrastructure whose cost was sunk, in particular dark fibre. The AI bubble? Not so much.

Except for the physical buildings, permitting, and power grid build-out.

  • Thinking about data center capacity: how much new capacity do we actually need, ignoring AI? Will there be demand for it, and will that demand cover the maintenance in the interim? Power-grid capacity, as long as it's not too expensive, could see reasonable reuse. But the buildings, permits, and so on might be worth less.

  • I was also wondering whether the GPUs that die and need to be replaced actually become inert blocks of fused silicon, or whether they work at half speed or something. A data center full of half-speed GPUs is still a lot of computing power waiting for somebody to use.

  • > Except for the physical buildings, permitting, and power grid build-out.

    Those are extremely localized at a bunch of data centers, and how much of that will see further use? And how much grid work has really happened? (There are a lot of announcements about plans to maybe build nuclear reactors etc., but those projects take a long time, if they happen at all.)

    Nvidia managed to pivot its customer base from crypto mining to AI.

    • > Those are extremely localized at a bunch of data centers and how much of that will see further use?

      As much as there is a market for somewhat-less-expensive data centers. (Data centers where somebody else already paid the cost of construction.)

      And where they are doesn't matter. The internet is good at shipping bits to various places.

The AI bubble is causing large scale resource mis-allocation. That means that resources that could and should have been allocated to useful things have instead gone into data centre build outs and the panoply of marketing, executive hot air, forced/frivolous use of LLMs etc etc - the list goes on.

This means that society as a whole is perhaps significantly poorer than if LLMs had been properly valued (i.e. not a bubble), or had simply never happened at all.

Unfortunately it will likely be the poorest and most vulnerable in our societies that will bear the brunt. 'Twas ever thus.

> The GenAI bubble is going to pop. Everyone knows that.

I think the first part of this is probably true, but I don’t think everyone knows it. A lot of people are acting like they don’t know it.

It feels like a bubble to me, but I don’t think anyone can say to a certainty that it is, or that it will pop.

  • > A lot of people are acting like they don’t know it.

    Or they're acting like they think there's going to be significant stock price growth between now and the bubble popping. Behaviors aren't significantly different between those two scenarios.

    • It's always been an interesting mental exercise for me to try and measure the unknowable gap between what people say they believe and the motivated reasoning that might drive their stated beliefs.

      Putting your statement another way, if you and I can see the bubble, then it's almost a certainty that the average tech CEO also sees a bubble. They're just hoping that when the music stops, they won't be the one left holding the bag.

  • It’s a non sequitur, like saying “Nobody goes there anymore. It's too crowded.”

    I’m guessing the author meant it tongue in cheek but really meant “everyone I know or follow knows it’s a bubble”

    • No, it can still be a bubble when everybody knows it's a bubble. If the price is still going up, I may know it's a bubble, and still not get out, because I'm still making money. But it's a hair-trigger thing, where everybody gets more and more ready to run for the exits at the first sign of trouble.

  • No one can see a bubble. That's what makes it a bubble.

    • That's the conventional wisdom, undercut by the fact that people have guessed (and bet their fortunes) that previous bubbles were bubbles well before they popped.

      It's more accurate to say that bubbles rely on most people being blind to the bubble's nature.

I think the interesting thing to think about is that we _already_ fired people in the name of AI (because AI was supposed to be this huge efficiency gain).

When the bubble pops, do you fire _even more_ people? What does that look like given the decimation in the job market already?

  • When the bubble pops, AI companies fire people (up to everyone). Non-AI companies hire people back, because it becomes clear that AI won't do their job for them.

    • That is a lossy process, though. The total number of people fired is never the total number rehired, and by the time the bubble pops, we will have gone through multiple rounds of firings.

> I think the people telling us that genAI is the future and we must pay it fealty richly deserve their impending financial wipe-out.

I think people need to realize that if the bubble gets bad enough, there will absolutely, positively, 100% be a bailout. Trump doesn't care who you are or what you did; as long as you pay enough (both money and praise), you get whatever you want, and Big Tech has already made many down payments. I mean, they ask him "Why did you pardon CZ after he defrauded people? Why did you pardon Hernandez after he smuggled tons of cocaine?" and he plainly says he doesn't know who they are. And why should he? They paid; there's no need to know your customers personally, there are too many of them.

> If the genAI fanpholks are right, all the debt-only-don’t-call-it-that will be covered by profits and everyone can sleep sound. Only it won’t.

[citation needed]

> I wonder who, after the loss of those tens of millions of high-paid jobs, are going to be the consumers who’ll buy the goods that’ll drive the profits that’ll pay back the investors. But that problem is kind of intrinsic to Late-stage Capitalism.

> Anyhow, there will be a crash and a hangover. I think the people telling us that genAI is the future and we must pay it fealty richly deserve their impending financial wipe-out. But still, I hope the hangover is less terrible than I think it will be.

Yup. We really seem to be at a point where everyone has their guns drawn under the table and we're just waiting for the first shot—like we're living in a real-world, global version of Uncut Gems.

There is no AI bubble. The housing bubble was about artificially inflated housing assets.

People have been calling Bitcoin a bubble since it was introduced. Has it popped? No. Has it reached the popularity and usability crypto shills said it would? Also no.

AI, on the other hand, has the potential to put literally millions of individuals out of work. At a minimum, it is already augmenting the value of highly skilled intellectual workers. This is the final capitalism cheat code: a worker who does not sleep or take time off.

There will be layoffs and there will be bankruptcies. Yes. But AI is never going to be rolled back. We are never going to see a pre-AI world ever again, just like Bitcoin never really went away.

  • Being a bubble does not mean there is no value in the thing, only that investment is outpacing its intrinsic value, which is inherently unsustainable and will cause a collapse at some point. Nor does it imply that the post-collapse value will be zero.

    • I mean, at this point we are arguing semantics and speculating about future events. My opinion is that there's no bubble; yours is that there is one. That's fine; we will see, probably sooner rather than later (I expect the bubble / no-bubble question to resolve within the next 10 years, probably even the next 3).

  • Bitcoin has had the fortune of being one of the settlement methods for sanctions evasion. Its price has almost certainly profited from the Ukraine war.

    Renewed interest by the Trump clan, with Lutnick's Cantor Fitzgerald handling Tether collateral in Nayib Bukele's paradise, wasn't easy to predict either.

    Neither was the recent selloff. It would be hilarious if it was for a slush fund for Venezuelan rebels or army generals (bribing the military was the method of choice in Syria before the fall of Assad).

    • I am aware of crypto being used for money laundering, sanctions evasion, and even buying drugs online. I wonder what would have happened without Bitcoin. There would never have been a Silk Road, and ransomware would have had to use vouchers and so on, right? Everything would be so different.

  • But the days when every HN submission was about some blockchain thing did end, and hopefully that'll happen for AI too. I'm sick of reading about it at this point, but it sure dominates most news stories (at the outlets I'm usually interested in).

  • I'd argue the crypto bubble did pop.

    Bitcoin/crypto doesn't have earnings reports, but many crypto-adjacent companies have crashed down to earth. It would have been worse, but regulation (or sometimes the lack thereof) stopped them from going public, so the bleeding was limited.

    • I would consider a "popping" event to be a dramatic one that sends unemployment to record highs, with regular people panicking and losing their savings or their homes.

      The Bitcoin bubble, if anything, deflated. But I'd still disagree with this characterisation because the market capitalisation of Bitcoin only seems to be going up.

      Going by the logic of supply and demand, as more and more Bitcoin is mined, the price should drop because there's more supply. But what I've observed is that the value has been climbing over the past few years and has remained relatively stable.

      In any case, it's hard to deny that more people are using Bitcoin and crypto now compared to 5 years ago. Sure, NFTs ended up fizzling out, but, to be honest, they were a stupid idea from the beginning anyway.
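One wrinkle in the supply-and-demand reasoning above: Bitcoin's new supply is not constant. By protocol, the per-block subsidy halves every 210,000 blocks, so issuance keeps shrinking even as the total mined grows. A quick sketch of the schedule:

```python
# Bitcoin's per-block subsidy halves every 210,000 blocks
# (roughly every four years), so the flow of new coins shrinks over time.
subsidy = 50.0            # BTC per block at launch in 2009
halving_interval = 210_000

for era in range(5):
    print(f"era {era} (blocks {era * halving_interval:>9,}+): {subsidy} BTC/block")
    subsidy /= 2
```

So "more mined means more availability" overstates the supply pressure: the marginal flow of new coins in recent years is a small fraction of what it was early on.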

  • There is an AI bubble, just like there was a dotcom bubble: the fact that it is a real technology with real uses and world-changing long-term impacts does not mean the recent investment hype will not soon be recognized as excessively exuberant, given the actual payoffs to investors and the timelines on which they will be realized.

    (And putting masses of people out of work, thereby radically destabilizing capitalist societies, is, to the extent it is a payoff, a payoff with a bomb attached.)

    • The dotcom bubble was a bunch of Silicon Valley types buying fancy domain names and getting showered in money before they even released anything remotely useful.

      AI companies are releasing useful things right this second; even if those things still require human oversight, they can significantly accelerate many tasks.

  • > AI on the other hand has the potential to put literally millions of individuals out of work. At a minimum, it is already augmenting the value of highly-skilled intellectual workers

    This has been true since, say, 1955.

    > This is the final capitalism cheat code. A worker who does not sleep or take time off.

    That's the hope driving the current AI bubble. It has never been true, nor will it be with the current state of the art in AI. This realization is what is deflating the bubble.

  • >> We are never going to see a pre-AI world ever again

    I mean, to one degree or another, this is correct. Some things are not going back into the genie's bottle.

  • Even if all that were true, where is the path to profitability for OpenAI, and all the rest? If they can't pivot to making a profit, then they are going to pop, and their investors will lose all their money. That's the AI bubble... investors losing all their money.

    The technology will remain, of course, just like we still have railways, and houses.

    • That is a valid and interesting discussion point. I do concede that OpenAI will probably not survive in the long term. Google and Microsoft on the other hand have dozens of independent verticals and revenue sources. Google can afford to be a loss leader with Gemini AI, and when every competitor goes out of business, they will jack up prices and enshittify a tool everyone depends on. Like what happened with YouTube once Google Video, DailyMotion, Vimeo, and the other sites essentially stopped being relevant.

      But, and this is key, AI is not going away for as long as the potential to replace human labour remains there.