Comment by darth_avocado

4 months ago

Isn’t that what Michael Burry is complaining about? That five years is actually too generous when it comes to depreciation of these assets and that companies are being too relaxed with that estimate. The real depreciation is more like 2-3 years for these GPUs that cost tens of thousands of dollars apiece.

https://x.com/michaeljburry/status/1987918650104283372

That's exactly the thing. It's only about bookkeeping.

The big AI corps keep pushing depreciation for GPUs into the future, no matter how long the hardware is actually useful. Some of them are now at 6 years. But GPUs are advancing fast, and new hardware brings more flops per watt, so there's a strong incentive to switch to the latest chips. Also, they run 24/7 at 100% capacity, so after only 1.5 years, a fair share of the chips is already toast. How much hardware do they have on their books that's actually not useful anymore? No one knows!

Slower depreciation means more profit right now (for those companies that actually make a profit, like MS or Meta), but it's just kicking the can down the road. Eventually, all these investments have to come off the books, and that's when they will eat into profits. In 2024, the big AI corps invested about $1 trillion in AI hardware; next year is expected to be $2 trillion. The interest payments alone on that are crazy.

And all of this comes on top of the fact that none of these companies actually makes any profit with AI at all (except Nvidia, of course). There's just no way this will pan out.
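To make the bookkeeping effect concrete, here's a minimal sketch in Python; the fleet cost, revenue, and useful lives are all invented numbers, not anyone's real books:

```python
# Straight-line depreciation: spread the asset's cost evenly over its
# assumed useful life. All figures below are hypothetical.
FLEET_COST = 10_000_000_000     # USD spent on GPUs (invented)
ANNUAL_REVENUE = 4_000_000_000  # USD/year of operating revenue (invented)

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Yearly depreciation expense under a straight-line schedule."""
    return cost / useful_life_years

for life_years in (3, 6):
    expense = annual_depreciation(FLEET_COST, life_years)
    profit = ANNUAL_REVENUE - expense
    print(f"{life_years}-year schedule: "
          f"expense ${expense / 1e9:.2f}B/yr, "
          f"reported profit ${profit / 1e9:+.2f}B/yr")
```

Same cash leaves the building either way; the 6-year schedule just books less of it as expense each year, so near-term profit looks better and the pain is deferred.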

  • > It's only about bookkeeping.

    > Some of them are now at 6 years.

    There are three distinct but related topics here; it's not "just about bookkeeping" (though Michael Burry may be specifically pointing to the bookkeeping being misstated):

    1. Financial depreciation - accounting principles typically follow the useful life of the capital asset (simply put, if an airplane typically gets used for 30 years, the cost of purchasing it is split equally across 30 years on the books). Getting this right matters mostly for how future purchases get financed, since it drives how the bookkeepers show profitability, balance sheets, etc. Cash flow is ultimately what can make a company insolvent.

    2. Useful life - per number 1 above, this is the estimated and actual life of the asset. So if the airplane actually gets used for 35 years, not 30, its actual useful life is 35 years. This is to your point of "some of them are now at 6 years". Here is where this gets super tricky with GPUs: (a) we don't actually know what the useful life is or is going to be for these GPUs (hence Michael Burry's question), and (b) the cost side is going to get complicated fast. Let's say (I'm making these numbers up) GPU X2000 has 2x the performance of GPU X1000 and your whole data center is full of GPU X1000s. Do you replace all of those GPUs to increase throughput?

    3. Support & maintenance - this is what actually gets supported by the vendor. There doesn't seem to be any public info for the Nvidia data-center GPUs, but typically these contracts run 3-5 years (usually tied to the useful life) and can often be extended. Again, this is going to get super complicated financially, because we don't know what future improvements in GPU performance might happen (which would necessitate replacing old ones and therefore renewing maintenance contracts).
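    The X1000/X2000 question in point 2 becomes concrete once you frame it around a fixed power envelope. A toy sketch (every number here is invented, like the GPU names above):

    ```python
    # Hypothetical: a data center with a fixed power budget, choosing between
    # keeping its paid-for X1000s or swapping in X2000s at 2x perf per watt.
    POWER_BUDGET_KW = 10_000                                 # invented envelope
    X1000 = {"tflops": 100, "watts": 700, "price": 0}        # already paid for
    X2000 = {"tflops": 200, "watts": 700, "price": 30_000}   # invented specs

    def facility_throughput(gpu: dict, budget_kw: float) -> float:
        """Total TFLOPS that fit inside the facility's power envelope."""
        n_gpus = (budget_kw * 1000) // gpu["watts"]
        return n_gpus * gpu["tflops"]

    keep = facility_throughput(X1000, POWER_BUDGET_KW)
    swap = facility_throughput(X2000, POWER_BUDGET_KW)
    capex = (POWER_BUDGET_KW * 1000 // X2000["watts"]) * X2000["price"]
    print(f"keep X1000s: {keep:,.0f} TFLOPS")
    print(f"swap to X2000s: {swap:,.0f} TFLOPS for ${capex / 1e6:.0f}M capex")
    ```

    The swap doubles throughput without touching the power bill, so the real question is whether the extra throughput earns back the capex before the next generation lands.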

  • > Also, they run 24/7 at 100% capacity, so after only 1.5 years

    How does OpenAI keep this load? I would expect the load at 2pm Eastern to be WAY bigger than the load after California goes to bed.

    • Typical load management that’s existed for 70 years: when interactive workloads are off-peak, you do batch processing. For OpenAI that’s anything from LLM evaluation of the day’s conversations to user profile updates.

  • Flops per watt is relevant for a new data center build-out where you're bottlenecked on electricity, but I'm not sure it matters so much for existing data centers. Electricity is such a small percentage of total cost of ownership. The marginal cost of running a 5 year old GPU for 2 more years is small. The husk of a data center is cheap. It's the cooling, power delivery equipment, networking, GPUs etc. that cost money, and when you retrofit data centers for the latest and greatest GPUs you have to throw away a lot of good equipment. It makes more sense to build new data centers as long as inference demand doesn't level off.

How different is this from rental car companies changing over their fleets? I don't know, this is a genuine question. The cars cost 3-4x as much and last about 2x as long, as far as I know, and the secondary market is still alive.

  • > How different is this from rental car companies changing over their fleets?

    New generations of GPUs leapfrog in efficiency (performance per watt), and vehicles don't. Cars don't get exponentially better every 2–3 years, so the second-hand market is alive and well. Some of us are quite happy driving older cars (two parked outside our home right now, both with well over 100,000 km driven).

    If you have a datacentre with older hardware, and your competitor has the latest hardware, you face the same physical space constraints and the same cooling and power bills as they do, except they are "doing more" than you are...

    Could we call it "revenue per watt"?

    • The traditional framing would be cost per flop. At some point your total cost per flop over the next 5 years will be lower if you throw out the old hardware and replace it with newer, more efficient models. With traditional servers that's typically after 3-5 years; with GPUs, 2-3 years sounds about right.

      The major reason companies now keep their old GPUs around much longer is the supply constraints.
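      A back-of-envelope version of that comparison, with invented purchase prices, power draws, and electricity rate:

      ```python
      # Hypothetical 5-year cost per TFLOP-hour: keep the old GPU (capex
      # already sunk) vs. buy a new one with 2x the flops at the same power.
      HOURS = 5 * 365 * 24   # 5-year horizon, running 24/7
      KWH_PRICE = 0.10       # USD per kWh (invented)

      def cost_per_tflop_hour(price_usd: float, watts: float,
                              tflops: float, hours: int = HOURS) -> float:
          """Capex plus electricity, divided by total TFLOP-hours delivered."""
          electricity = watts / 1000 * hours * KWH_PRICE
          return (price_usd + electricity) / (tflops * hours)

      old = cost_per_tflop_hour(price_usd=0, watts=700, tflops=100)      # sunk
      new = cost_per_tflop_hour(price_usd=30_000, watts=700, tflops=200)
      print(f"old: ${old:.6f}/TFLOP-h  new: ${new:.6f}/TFLOP-h")
      ```

      With these invented numbers the already-paid-for card is still cheaper per flop, which matches the sibling point that electricity is a small share of total cost of ownership; the case for replacement comes from power- and space-constrained capacity, not the power bill.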

    • The used market is going to be absolutely flooded with millions of old cards. I imagine shipping will be the biggest cost for them. The supply side will be insane.

      Think a ratio of 100 cards to 1 buyer. Profit for eBay sellers will be in the "handling", i.e. inflated shipping-and-handling charges.


  • Rental car companies aren’t offering rentals at deep discount to try to kickstart a market.

    It would be much less of a deal if these companies were profitable and could cover the costs of renewing hardware, like car rental companies can.

  • I think it's a bit different because a rental car generates direct revenue that covers its cost. These GPU data centers are being used to train models (which themselves quickly become obsolete) and provide inference at a loss. Nothing in the current chain is profitable except selling the GPUs.

    • > and provide inference at a loss

      You say this like it's some sort of established fact. My understanding is the exact opposite and that inference is plenty profitable - the reason the companies are perpetually in the red is that they're always heavily investing in the next, larger generation.

      I'm not Anthropic's CFO so I can't really prove who's right one way or the other, but I will note that your version relies on everyone involved being really, really stupid.


  • > the secondary market is still alive.

    This is the crux. Will these data center cards have a secondary market to sell into once a newer, more efficient model comes out?

    It could be that second-hand AI hardware going into consumers' hands is how they offload it without huge losses.

    • The GPUs going into data centers aren't the kind that can just be reused by putting them into a consumer PC and playing some video games; most don't even have video output ports, and they put out FPS similar to cheap integrated GPUs.


    • I would presume that some tiered market will arise where the newest cards are used for the most expensive compute tasks like training new models, slightly used cards for inference, and older cards for inference of older models, or applied to other markets with less compute demand (or that spend less $ per flop, as someone else mentioned).

      It would be surprising to me that all this capital investment just evaporates when a new data center gets built or refitted with new servers. The old gear works, so sell it and price it accordingly.