Comment by m4r1k

1 day ago

Google's real moat isn't the TPU silicon itself—it's not about cooling, individual performance, or hyper-specialization—but rather the massive parallel scale enabled by their OCS interconnects.

To quote The Next Platform: "An Ironwood cluster linked with Google’s absolutely unique optical circuit switch interconnect can bring to bear 9,216 Ironwood TPUs with a combined 1.77 PB of HBM memory... This makes a rackscale Nvidia system based on 144 “Blackwell” GPU chiplets with an aggregate of 20.7 TB of HBM memory look like a joke."
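
The memory totals in that quote are easy to sanity-check from per-chip capacities: 192 GB of HBM3e per Ironwood chip (cited further down this thread), and roughly 144 GB per Blackwell chiplet, which is my assumption for the rack-level number.

    # Back-of-the-envelope check of the quoted HBM totals.
    # Assumed per-chip capacities: 192 GB per Ironwood TPU (cited below in the
    # thread), ~144 GB per Blackwell chiplet (144 chiplets per NVL72 rack).
    pod_hbm_gb  = 9216 * 192      # 1,769,472 GB ~= 1.77 PB
    rack_hbm_gb = 144 * 144       # 20,736 GB    ~= 20.7 TB
    print(pod_hbm_gb / 1e6, "PB |", rack_hbm_gb / 1e3, "TB")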

Nvidia may have the superior architecture at the single-chip level, but for large-scale distributed training (and inference) they currently have nothing that rivals Google's optical switching scalability.

Also, Google owns the entire vertical stack, which is what most people need. It can provide an entire spectrum of AI services far cheaper, at scale (and still profitable) via its cloud. Not every company needs to buy the hardware and build models, etc., etc.; what most companies need is an app store of AI offerings they can leverage. Google can offer this with a healthy profit margin, while others will eventually run out of money.

  • They just need to actually make and market a good product though, and they seem to really struggle with this. Maybe on a long enough timeline their advantages will make this one inevitable.

  • With all this vertical integration, it's no wonder Apple and Google have such a tight relationship.

That is comparing an all-to-all switched NVLink fabric to a 3D torus for the TPUs. Those are completely different network topologies with different tradeoffs.

For example, the currently very popular Mixture of Experts architectures require a lot of all-to-all traffic (for expert parallelism), which works much better on the switched NVLink fabric, where it doesn't have to traverse multiple links the way it does in a torus.
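
As a concrete illustration, here is a minimal JAX sketch of the dispatch step in expert parallelism. The names and toy shapes are mine, but the core primitive is the same all-to-all collective that a switched fabric handles in a single exchange and a torus has to route over several links.

    # Minimal sketch of the expert-parallel dispatch exchange (toy shapes,
    # hypothetical names). Each device buckets its tokens by destination
    # expert, then an all-to-all swaps bucket j on device i with bucket i
    # on device j -- the traffic pattern described above.
    import jax
    import jax.numpy as jnp

    def dispatch(tokens):
        # tokens: [num_experts, tokens_per_expert, d_model], bucketed by
        # destination expert; all_to_all re-maps the expert axis across devices.
        return jax.lax.all_to_all(tokens, axis_name="expert",
                                  split_axis=0, concat_axis=0)

    n = jax.local_device_count()          # one expert per device in this toy
    x = jnp.zeros((n, n, 16, 8))          # [device, expert, token, d_model]
    routed = jax.pmap(dispatch, axis_name="expert")(x)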

  • This is an underrated point. Comparing just the peak bandwidth is like saying Bulldozer was the far superior CPU of the era because it had a really high frequency ceiling.

  • Really? Fully-connected hardware isn't buildable (at scale), which we already know from the HPC world. Fat trees and dragonfly networks are pretty scalable, but a 3D torus is a very good tradeoff, and respects the dimensionality of reality.

    Bisection bandwidth is a useful metric, but is hop count? Per-hop cost tends to be pretty small.

    • Latency (of different types), jitter, and guaranteed bandwidth are the real underlying metrics. Hop count is just one potential driver of those, but different approaches may or may not tackle each of these parts differently.

NVFP4 is the thing no one saw coming. I wasn't watching the MX process really, so I cast no judgements, but it's exactly what it sounds like: a serious compromise for resource-constrained settings. And it's in the silicon pipeline.

NVFP4 is, to put it mildly, a masterpiece: the UTF-8 of its domain, and in strikingly similar ways it is (1) general, (2) robust to gross misuse, and (3) not optional if success and cost both matter.
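
For readers who haven't looked at the format: the core idea, as I understand it (treat the details as a sketch rather than a spec), is 4-bit E2M1 elements sharing one scale per small block, which is what keeps such a narrow format usable. A toy round-trip:

    # Toy sketch of block-scaled FP4 quantization in the spirit of NVFP4.
    # Assumptions (mine): E2M1 element grid, one scale per 16-element block;
    # real hardware packs the 4-bit values and stores the scales in FP8.
    import jax.numpy as jnp

    GRID = jnp.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
    GRID = jnp.concatenate([-GRID[::-1], GRID])          # signed E2M1 values

    def fp4_roundtrip(x, block=16):
        b = x.reshape(-1, block)
        # Per-block scale so the largest magnitude maps to the FP4 max (6.0).
        scale = jnp.maximum(jnp.abs(b).max(axis=1, keepdims=True) / 6.0, 1e-12)
        # Round each scaled element to the nearest representable E2M1 value.
        idx = jnp.argmin(jnp.abs(b[..., None] / scale[..., None] - GRID), axis=-1)
        return (GRID[idx] * scale).reshape(x.shape)      # dequantized values

    x = jnp.linspace(-5.0, 5.0, 32)
    print(jnp.max(jnp.abs(x - fp4_roundtrip(x))))        # worst-case error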

It's not a gap that can be closed by a process node or an architecture tweak: it's an order of magnitude where the polynomials that were killing you on the way up are now working for you.

sm_120 (what NVIDIA's quiet repos call CTA1) consumer gear does softmax attention and projection/MLP block-scaled GEMM at a bit over a petaflop at 300W and close to two petaflops (dense) at 600W.

This changes the whole game, and it's not clear anyone outside the lab even knows the new equilibrium points. It's nothing like Flash3 on Hopper: a lot of stuff looks FLOPs-bound, and GDDR7 looks like a better deal than HBM3e. The DGX Spark is in no way deficient; it has ample memory bandwidth.
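
Whether something is FLOPs-bound or bandwidth-bound falls out of a simple roofline comparison. The numbers below are illustrative placeholders only (roughly the petaflop-class block-scaled GEMM rate claimed above against an assumed ~1.5 TB/s of GDDR7-class bandwidth), not measurements of any specific part.

    # Roofline back-of-the-envelope: a kernel is compute-bound when its
    # arithmetic intensity (FLOPs per byte moved) exceeds peak_flops / peak_bw.
    # Illustrative numbers only -- not measured figures.
    peak_flops = 1.0e15          # ~1 PFLOP/s block-scaled GEMM (claimed above)
    peak_bw    = 1.5e12          # ~1.5 TB/s GDDR7-class bandwidth (assumed)
    balance    = peak_flops / peak_bw       # ~667 FLOPs/byte to stay fed

    # A square FP4 GEMM (N x N x N) moves ~3*N*N*0.5 bytes and does 2*N^3
    # FLOPs, so intensity ~ 4N/3: past the balance point for N >= ~500.
    N = 4096
    intensity = 2 * N**3 / (3 * N * N * 0.5)
    print(balance, intensity, intensity > balance)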

This has been in the pipe for something like five years and even if everyone else started at the beginning of the year when this was knowable, it would still be 12-18 months until tape out. And they haven't started.

Years Until Anyone Can Compete With NVIDIA is back up to the 2-5 it was 2-5 years ago.

This was supposed to be the year ROCm and the new Intel stuff became viable.

They had a plan.

It's fun when you then read the latest Nvidia tweet [1] suggesting that their tech is still better, based on pure vibes, like everything else in the (Gen)AI era.

[1] https://x.com/nvidianewsroom/status/1993364210948936055

  • Not vibes. TPUs have fallen behind or had to be redesigned from scratch many times as neural architectures and workloads evolved, whereas the more general purpose GPUs kept on trucking and building on their prior investments. There's a good reason so much research is done on Nvidia clusters and not TPU clusters. TPU has often turned out to be over-specialized and Nvidia are pointing that out.

    • You say that like it's a bad thing. Nvidia architectures keep changing and getting more advanced as well, with specialized tensor operations, different accumulators and caches, etc. I see no issue with progress.

  • > based on pure vibes

    The tweet gives their justification: CUDA isn't an ASIC. Nvidia GPUs were popular for crypto mining, protein folding, and now AI inference too. TPUs are tensor ASICs.

    FWIW I'm inclined to agree with Nvidia here. Scaling up a systolic array is impressive but nothing new.
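
    (For anyone unfamiliar with the term: a systolic array is a grid of multiply-accumulate cells that operands are pumped through in lockstep. A purely illustrative emulation of the output-stationary idea follows; nothing about the sizes or names is TPU-specific.)

        # Minimal, illustrative emulation of an output-stationary systolic
        # matmul: each (i, j) cell accumulates a_ik * b_kj as operands stream
        # past on each beat. Toy code, nothing vendor-specific.
        import numpy as np

        def systolic_matmul(A, B):
            n, k = A.shape
            _, m = B.shape
            acc = np.zeros((n, m))
            for step in range(k):             # one "beat" of the array
                for i in range(n):
                    for j in range(m):
                        acc[i, j] += A[i, step] * B[step, j]
            return acc

        A, B = np.random.rand(4, 3), np.random.rand(3, 5)
        assert np.allclose(systolic_matmul(A, B), A @ B)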

For all the excitement surrounding this, I fail to comprehend how Google can't even meet the current demand for Gemini 3^. Moreover, they are unwilling to invest in expansion directly (apparently they have a mandate to double their compute every 6 months without spending more than their current budget). So, pardon me if I can't see how they will scale operations as demand grows while simultaneously selling their chips to competitors?! This situation doesn't make any sense.

^Even now I get capacity related error messages, so many days after the Gemini 3 launch. Also, Jules is basically unusable. Maybe Gemini 3 is a bigger resource hog than anyone outside of Google realizes.

  • I also suspect Google is launching models it can’t really sustain in volume or that are operating at a loss. Nothing preventing them from like doubling model size compared to the rest or allocating an insane amount of compute just to make the headlines on model performance (clearly it’s good for the stock). These things are opaque anyway, buried deep into the P&L.

OCS is indeed an engineering marvel, but look at NVIDIA's NVL72. They took a different path: instead of flexible optics, they used the brute force of copper, turning an entire rack into one giant GPU with unified memory. Google is solving the scale-out problem, while NVIDIA is solving the scale-up problem. For LLM training tasks, where communication is the bottleneck, NVIDIA's approach with NVLink might actually prove even more efficient than Google's optical routing.

No, not at all. If this were true Google would be killing it in MLPerf benchmarks, but they are not.

It’s better to have a faster, smaller network for model parallelism and a larger, slower one for data parallelism than a very large, but slower, network for everything. This is why NVIDIA wins.
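
That hierarchy maps directly onto how frameworks express it: a small, fast axis for model/tensor parallelism inside the NVLink (or ICI) domain and a large, slower axis for data parallelism across domains. A minimal JAX sketch; the device counts and axis names here are arbitrary placeholders.

    # Minimal sketch of the two-level layout: a fast 'model' axis inside the
    # high-bandwidth domain, a slower 'data' axis across domains. Axis sizes
    # are arbitrary placeholders, not a recommendation.
    import numpy as np
    import jax
    from jax.sharding import Mesh, PartitionSpec, NamedSharding

    devices = np.array(jax.devices())
    model = 8 if devices.size % 8 == 0 else 1    # e.g. one NVLink/ICI domain
    mesh = Mesh(devices.reshape(-1, model), axis_names=("data", "model"))

    # Weights sharded over the fast axis; the batch sharded over the slow one.
    w_sharding = NamedSharding(mesh, PartitionSpec(None, "model"))
    x_sharding = NamedSharding(mesh, PartitionSpec("data", None))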

100 times more chips for equivalent memory, sure.

  • Check the specs again. Per chip, TPU 7x has 192GB of HBM3e, whereas the NVIDIA B200 has 186GB.

    While the B200 wins on raw FP8 throughput (~9000 vs 4614 TFLOPs), that makes sense given NVIDIA has optimized for the single-chip game for over 20 years. But the bottleneck here isn't the chip—it's the domain size.

    NVIDIA's top-tier NVL72 tops out at an NVLink domain of 72 Blackwell GPUs. Meanwhile, Google is connecting 9216 chips at 9.6Tbps to deliver nearly 43 ExaFlops. NVIDIA has the ecosystem (CUDA, community, etc.), but until they can match that interconnect scale, they simply don't compete in this weight class.

    • I guess “this weight class” is some theoretical class divorced from any application? Almost all players are running Nvidia other than Google. The other players are certainly more than just competing with Google.

    • Wow, no, not at all. It’s better to have a set of smaller, faster cliques connected by a slow network than a slower-than-clique flat network that connects everything. The cliques connected by a slow DCN can scale to arbitrary size. Even Google has had to resort to that for its biggest clusters.
