Comment by m4r1k
1 day ago
Check the specs again. Per chip, TPU 7x has 192GB of HBM3e, whereas the NVIDIA B200 has 186GB.
While the B200 wins on raw FP8 throughput (~9000 vs 4614 TFLOPS), that makes sense given NVIDIA has optimized for the single-chip game for over 20 years. But the bottleneck here isn't the chip; it's the domain size.
NVIDIA's top-tier NVL72 tops out at an NVLink domain of 72 Blackwell GPUs. Meanwhile, Google is connecting 9,216 chips at 9.6 Tbps to deliver nearly 43 ExaFLOPS. NVIDIA has the ecosystem (CUDA, community, etc.), but until they can match that interconnect scale, they simply don't compete in this weight class.
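(Quick sanity check that the quoted per-chip and pod numbers line up; back-of-the-envelope only, using just the figures above:)

    chips_per_pod = 9216
    tflops_per_chip = 4614                    # dense FP8 TFLOPS per chip (from above)
    pod_exaflops = chips_per_pod * tflops_per_chip / 1e6   # 1 ExaFLOPS = 1e6 TFLOPS
    print(f"{pod_exaflops:.1f} ExaFLOPS")     # -> 42.5, i.e. "nearly 43"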
Isn’t the 9000 TFLOP/s number Nvidia’s relatively useless sparse FLOP count that is 2x the actual dense FLOP count?
Correct. I found a remark on Twitter calling this "Jensen Math".
Same logic as when NVIDIA quotes the "bidirectional bandwidth" of its high-speed interconnects to make the numbers look big, instead of the more common bandwidth per direction, forcing everyone else to adopt the same metric in their marketing materials.
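(So to compare spec sheets apples-to-apples you have to undo both tricks; a trivial sketch:)

    # Undoing two common marketing conventions to get comparable numbers.
    # Sparse FLOPS assume 2:4 structured sparsity, i.e. 2x the dense rate;
    # "bidirectional bandwidth" is 2x the per-direction figure.
    def dense_tflops(sparse_tflops):
        return sparse_tflops / 2

    def per_direction(bidirectional_bw):
        return bidirectional_bw / 2

    print(dense_tflops(9000))   # ~4500 dense FP8 TFLOPS for the B200
    print(per_direction(1.8))   # NVLink's "1.8 TB/s" -> 0.9 TB/s each way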
I guess “this weight class” is some theoretical class divorced from any application? Almost all players other than Google are running Nvidia. The other players are certainly doing more than just competing with Google.
> Almost all players are running Nvidia other than Google.
No surprises there, Google is not the greatest company at productizing their tech for external consumption.
> The other players are certainly more than just competing with Google.
TBF, it's easy to stay in the game when you're flush with cash, and for the past N quarters investors have been throwing money at AI companies; Nvidia's margins have greatly benefited from this largesse. There will be blood on the floor once investors start demanding returns on their investments.
Ok? The person I was replying to was saying that Google’s compute offering is substantially superior to Nvidia’s. What do your comments about market positioning have to do with that?
If Google’s TPUs were really substantially superior, don’t you think that would result in at least short-term market advantages for Gemini? Where are they?
Wow, no, not at all. It’s better to have a set of smaller, faster cliques connected by a slow network than a slower-than-clique flat network that connects everything. The cliques connected by a slow DCN can scale to arbitrary size. Even Google has had to resort to that for its biggest clusters.
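(A toy bandwidth model of that intuition; every number below is made up for illustration, and the cost model is the usual large-ring all-reduce approximation:)

    GB = 1e9

    # Large-ring all-reduce moves ~2x the payload over each chip's link,
    # so time ~ 2 * bytes / per-chip bandwidth.
    grad_bytes = 100 * GB   # hypothetical gradient payload per step
    k = 72                  # chips per fast clique (e.g. one NVL72 domain)
    b_fast = 900 * GB       # intra-clique B/s per chip (hypothetical)
    b_flat = 200 * GB       # a flat network slower than the clique (hypothetical)
    b_slow = 50 * GB        # DCN B/s per chip (hypothetical)

    flat = 2 * grad_bytes / b_flat
    # Hierarchical: reduce-scatter inside the clique, all-reduce the 1/k
    # shard across cliques over the slow DCN, then all-gather locally.
    hier = 2 * grad_bytes / b_fast + 2 * (grad_bytes / k) / b_slow
    print(f"flat: {flat:.2f}s  cliques+DCN: {hier:.2f}s")   # 1.00s vs 0.28s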
Is this claim based on observed comm patterns in some particular AI architecture?
Yet everyone uses NVIDIA, and Google is playing catch-up.
The ecosystem is a MASSIVE factor and will remain one for all but the biggest models.
Catch-up in what exactly? Google isn't building hardware to sell; they aren't in the same market.
Also, I feel you completely misunderstand: the problem isn't how fast ONE GPU is vs ONE TPU; what matters is the cost for the same output. If I can fill a datacenter at half the cost for the same output, does it matter that I've used twice as many TPUs and that a single Nvidia Blackwell was faster? No...
And hardware cost isn't even the biggest problem; operational costs, mostly power and cooling, are another huge one.
So if you design a solution that fits your stack (which was designed for it) and optimize for your operational costs, you're light-years ahead of the competition using the more powerful solution that costs 5 times more in hardware and twice as much in operational costs.
All I'm saying is more or less true for inference economics; I have no clue about training.
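(To make the point concrete, here's a toy tokens-per-dollar comparison; every number below is invented purely for illustration:)

    # All figures below are made up; the point is the arithmetic, not the data.
    # What matters is output per total dollar, not per-chip peak FLOPS.
    def tokens_per_dollar(tokens_per_sec, hw_cost, power_kw,
                          kwh_price=0.10, lifetime_hours=3 * 365 * 24):
        energy_cost = power_kw * lifetime_hours * kwh_price
        return tokens_per_sec * lifetime_hours * 3600 / (hw_cost + energy_cost)

    # One fast, pricey GPU vs. two slower, cheaper in-house chips:
    gpu  = tokens_per_dollar(100, hw_cost=40_000, power_kw=1.0)
    tpus = tokens_per_dollar(2 * 60, hw_cost=2 * 8_000, power_kw=2 * 0.6)
    print(f"GPU: {gpu:,.0f} tok/$   2x TPU: {tpus:,.0f} tok/$")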
Also, isn't memory a bit moot? At scale I thought that the ASICs frequently sat idle waiting for memory.
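(That intuition is just the roofline model; a quick sketch, where the bandwidth figure is my assumption rather than an official spec:)

    # Roofline: a chip is memory-bound whenever the workload's arithmetic
    # intensity (FLOPs per byte moved from HBM) is below peak_flops / mem_bw.
    peak_flops = 4614e12    # dense FP8 FLOP/s, per-chip figure from upthread
    mem_bw = 7.4e12         # HBM bytes/s (assumed, for illustration)

    ridge = peak_flops / mem_bw   # FLOPs/byte needed to saturate the ALUs
    print(f"compute-bound only above ~{ridge:.0f} FLOPs per byte")

    # Low-batch decode reads every weight byte for only ~2 FLOPs, so the
    # chip really does sit mostly idle waiting on memory in that regime.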