
Comment by syntaxing

3 months ago

Matt Levine talked about this tangentially on his podcast this past Friday (or was it the one before?). It was presented as a way to value these companies by their compute capacity, since those chips are very valuable. At a minimum, the chips are an asset that can act as collateral.

I hear this a lot, but what the hell. They're still computer chips. They depreciate. The short supply won't last forever. Hell, GPUs burn out. It's like using ice sculptures as collateral, and then spring comes.

  • If so, wouldn't this be the first time in history that available processing power went unused?

    In my experience, CPU/GPU power gets used up as much as possible. Increased efficiency just leads to more demand.

    • I think you're missing the point: the H100 isn't going to remain useful for long. Would you consider Tesla- or Pascal-generation graphics cards collateral today? That's what those H100s will look like in just a few years.


  • That is the wrong take. Depreciated and burned-out chips get replaced, and the fleet's total compute value typically increases over time. Efficiency gains are also calculated and projected over time. Seasons are inevitable and cyclical: spring might be here, but winter is coming.

  • Year-over-year gains in computing continue to slow. I think we keep forgetting that when talking about these chips as assets. The thing controlling their value is the supply, which is tightly controlled, like diamonds.

    • Honestly, I don't fully understand the reason for this shortage.

      Isn't it because we insist on only using the latest nodes from a single company for manufacture?

      I don't understand why we can't use older process nodes to boost overall GPU making capacity.

      Can't we have tiers of GPU availability?

      Why is Nvidia not diversifying aggressively to Samsung and Intel, no matter the process node?

      Can someone explain?

      I've heard packaging is also a concern, but can't you get Intel to figure that out with a large enough commitment?


    • > Year over year gains in computing continue to slow.

      This isn't true in the AI chip space (yet). And so much of this isn't just about compute but about the memory.

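The depreciation-versus-replacement argument above can be put in rough numbers. Below is a toy Python sketch with entirely made-up figures (the price, decay rate, failure rate, and performance gain are all assumptions for illustration) of a fleet whose units lose resale value yearly, burn out and get replaced, and whose replacements are faster each generation:

```python
def simulate_fleet(years, units=1000, unit_price=30_000.0,
                   value_decay=0.35, burnout_rate=0.08, perf_gain=0.30):
    """Yearly (collateral_value, total_compute) for a maintained fleet.

    All parameters are illustrative assumptions, not real data:
      value_decay  - fraction of resale value lost per year
      burnout_rate - fraction of units failing and replaced per year
      perf_gain    - how much faster each new hardware generation is
    """
    value = units * unit_price   # resale/collateral value of the fleet
    compute = float(units)       # compute, in "generation-0 unit" terms
    new_unit_perf = 1.0          # performance of a current-gen replacement
    history = []
    for _ in range(years):
        value *= 1.0 - value_decay            # surviving units lose value
        replaced = int(units * burnout_rate)  # burned-out units swapped out
        new_unit_perf *= 1.0 + perf_gain      # replacements are faster
        value += replaced * unit_price        # new units bought at full price
        compute += replaced * (new_unit_perf - 1.0)  # net compute gained
        history.append((value, compute))
    return history


result = simulate_fleet(5)
# Under these assumptions, total compute rises every year while the
# fleet's collateral value falls well below the original purchase cost.
```

Both sides of the thread show up in the output: the fleet's compute keeps growing (the replacement argument), while its value as collateral sinks toward a steady state far below what was originally paid (the ice-sculpture argument).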

> It was a good way to value these companies according to their compute size since those chips are very valuable.

Are they actually, though? Presently, yes, but are they actually driving ROI? Or are they just an asset nobody is meaningfully utilizing, one that merely helps juice the stock?

I asked this elsewhere, but I don't fully understand the reason for the critical GPU shortage.

Isn't it because NVIDIA insists on only using the latest nodes from a single company (TSMC) for manufacture?

I don't understand why we can't use older process nodes to boost overall GPU making capacity.

Can't we have tiers of GPU availability: some on cutting-edge nodes, others built on older Intel and Samsung nodes?

Why is Nvidia not diversifying aggressively to Samsung and Intel, no matter the process node?

Can someone explain?

I've heard packaging is also a concern, but can't you get Intel to figure that out with a large enough commitment?

(Also, I know NVIDIA has some capacity at Samsung. But why not go all out, even using GlobalFoundries?)

That's a great way to value a company that is going bankrupt.

But I'm not going to value an operating construction company based on how many shovels or excavators it owns. I want to see it putting those assets to productive use.

If you are a cloud provider renting them out, fine.

Otherwise, you'd better keep them humming while you find a business model, because those chips certainly aren't getting any newer.
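The point about keeping rented-out chips humming can be made concrete with a back-of-envelope payback calculation. All the figures below (purchase price, rental rate, utilization, operating cost) are invented assumptions for illustration, not market data:

```python
def payback_months(purchase_price=30_000.0, hourly_rate=2.50,
                   utilization=0.60, opex_per_hour=0.40):
    """Months of rental income needed to recoup a GPU's purchase price.

    All defaults are made-up illustrative figures:
      hourly_rate   - what a renter pays per GPU-hour
      utilization   - fraction of hours the GPU is actually rented
      opex_per_hour - power, cooling, and hosting cost per hour
    """
    hours_per_month = 730  # average hours in a month
    net_per_month = hours_per_month * (hourly_rate * utilization - opex_per_hour)
    return purchase_price / net_per_month


# With these assumed numbers, payback takes roughly three years, which is
# uncomfortably close to how long the hardware stays state of the art.
```

The sketch also shows why utilization dominates the economics: at full utilization the assumed payback period roughly halves, while an idle fleet never pays itself back at all.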