Comment by moffkalast
6 hours ago
If there's really such a bottleneck around ASML, why not design some extra chips for legacy processes that presumably already have well known design workflows?
I mean we're not talking AMD FX and Core 2 Duo here, it's Raptor Lake and Zen 3, it's perfectly viable and still being sold in droves right now.
That’s what the likes of AMD with their chiplet design have been doing.
There’s also the issue of older process nodes no longer being profitable enough, which explains why, at the height of the chip supply crunch, older ARM chips were in short supply while there was ample stock of the 40nm RP2040.
This is gonna sound super dumb, but I'm not sure how they aren't profitable if there are shortages — can't they just price things above break-even? The average person can't even tell the difference between a Core 5 and a Core 5 Ultra; you could practically sell them at the same price and I'm not even sure they'd notice when actually using them. The performance jump is relatively minor and the bottlenecks are elsewhere.
Part of those costs aren't something the manufacturer can adjust. Whether you're building 60nm or 20nm chips, you need pretty much the same silicon wafers, the same ultra-pure water, the same chemicals and the same personnel. And as a bonus, on the older node you're not going to fit as many of those chips on a wafer.
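To put rough numbers on the dies-per-wafer point, here's a back-of-the-envelope sketch. The die sizes, wafer diameter, and the assumption that a node shrink roughly halves die area are mine, not figures from the thread:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough dies-per-wafer estimate with a simple edge-loss correction."""
    r = wafer_diameter_mm / 2
    # Usable dies = wafer area / die area, minus a correction for
    # partial dies lost along the circular edge of the wafer.
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Same logical chip on two nodes (illustrative numbers):
old_node_die = 100.0   # mm^2 on the older process (assumed)
new_node_die = 50.0    # mm^2 after a shrink (assumed)

print(dies_per_wafer(300, old_node_die))  # dies per 300 mm wafer, old node
print(dies_per_wafer(300, new_node_die))  # roughly twice as many on new node
```

Since wafer, chemicals, and labor cost about the same either way, the older node spreads that fixed cost over roughly half as many chips.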
And sure, a chip layout can be shrunk; but that requires a whole new recertification cycle.
It mostly comes down to the consumer market not being significant enough by itself. A consumer may not notice a 10% increase in performance per watt or dollar. A large office building probably will, and a datacenter definitely will.
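The scale argument is easy to make concrete. A toy calculation (the fleet size, per-server draw, and electricity price are my own assumptions) shows why a 10% efficiency gain a single consumer would never notice turns into real money for a datacenter:

```python
# Annual electricity cost for a server fleet (all figures assumed).
servers = 10_000          # servers in the fleet
watts_per_server = 500    # average draw per server, in watts
price_per_kwh = 0.10      # USD per kWh

hours_per_year = 24 * 365
annual_kwh = servers * watts_per_server / 1000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh
savings_10pct = annual_cost * 0.10

print(f"annual power bill: ${annual_cost:,.0f}")
print(f"saved by a 10% efficiency gain: ${savings_10pct:,.0f}")
```

With these assumed numbers the 10% gain is worth a few hundred thousand dollars a year, before even counting the cooling load it removes.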
I don't think I'm being entirely hyperbolic when I say the consumer market only exists to put devices that can connect to and feed the datacenter loads into the general population's hands.
Isn't this exactly what China is doing, apart from poaching ex-ASML employees? Now reaching 7nm, and just throwing more energy at the problem to catch up in FLOPS, like Jensen said?
Because a very large share of the market now is datacenters. The difference from desktop is dramatic: for desktop, fairly simple chips with poor energy efficiency are perfectly acceptable, but DCs already deal with extremely high power consumption, since they typically "compress" so much consumption into one rack that they're constantly working near physical constraints.
That's the AI hype narrative, but aren't server CPUs only like 25% of the total market? That's tiny compared to consumer volume, though revenue is likely on par given the higher cost per unit.
> aren't server CPUs only like 25% of the total market?
Yes and no. If you just calculate it formally, then yes, servers are a small market by volume. But they are much less financially constrained than a private person, so from the same fab you can earn much more money selling to the server market than to the consumer market.
2 replies →
You can't make a desktop computer 4 times larger, but there's very little preventing you from putting 4 racks where you had 1 before. If floor space is the expensive part of a data center, then probably some incentives are misaligned.
For roughly the price of land and connectivity: in a large city, land prices start at a few million dollars per square kilometer, and using cable ducts can cost from $50 per meter (easily $200/m).
Plus, arranging the space can take years.
Heat dissipation in the megawatt range may simply be prohibited by local regulations.
So space in large cities is a very serious problem, and for a business it is usually easier to "compress" as much computing power as possible into one rack.
1 reply →
Bigger chips = more distance for your electrons to cover = more power required = more heat generated = slower throughput for your data.
Surely you don't believe that the entire chip industry had not thought of "wait what if we just make the chips bigger".
2 replies →
You cannot place a DC just anywhere; in large cities space is extremely constrained and land is extremely expensive.
Connectivity is another big problem: you cannot place a DC where it can't be connected to the power grid and to a very high-capacity network.
So yes, DC floor space is severely limited.
And the third issue: in recent decades, rack servers have come to dissipate extremely large amounts of heat. I've heard numbers up to tens of kilowatts per rack, which is just hard to dissipate with air cooling (as an example, all IBM Power servers have a liquid cooling option, but that's a totally different price range).
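The "tens of kilowatts per rack" point can be sanity-checked with the standard sensible-heat relation Q = m·cp·ΔT. The rack power and allowed air temperature rise below are my assumptions, just to show the order of magnitude of airflow involved:

```python
# Airflow needed to remove rack heat with air cooling (assumed figures).
rack_power_w = 30_000   # a 30 kW rack
delta_t = 15.0          # allowed air temperature rise, in K (assumed)
cp_air = 1005.0         # specific heat of air, J/(kg*K)
rho_air = 1.2           # air density, kg/m^3

mass_flow = rack_power_w / (cp_air * delta_t)  # kg/s of air required
vol_flow = mass_flow / rho_air                 # m^3/s
cfm = vol_flow * 2118.88                       # cubic feet per minute

print(f"{mass_flow:.2f} kg/s of air, ~{cfm:.0f} CFM")
```

That works out to thousands of CFM through a single rack, which is why liquid cooling starts to look attractive at these densities.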