
Comment by wtallis

13 hours ago

Whether or not SLI remained viable for gaming, Broadcom was going to jack up the prices on PCIe switches to the enterprise-only range. That's the real reason why consumer motherboards don't have more GPU slots. Mainstream consumer CPU sockets never had a wealth of PCIe lanes; there was just a brief span of years when PCIe switches were cheap enough that high-end consumer boards could offer several x8 or x16 slots (sharing bandwidth in ways that make diagrams like these important).
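A back-of-the-envelope sketch of what that sharing means (the per-lane rates are the usual approximate figures; the two-slot layout is just a hypothetical example, not any particular board):

    # Approximate one-direction PCIe bandwidth per lane, in GB/s
    # (gen2 uses 8b/10b encoding, gen3 and later use 128b/130b).
    GB_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.97, 5: 3.94}

    def link_bw(gen, lanes):
        return GB_PER_LANE[gen] * lanes

    # Hypothetical gen3-era board: a switch fans one x16 CPU uplink
    # out to two x16 slots. Each slot can burst at full x16 speed,
    # but when both GPUs talk to the CPU at once they split the uplink.
    uplink = link_bw(3, 16)              # ~15.8 GB/s
    print(uplink / 2)                    # ~7.9 GB/s each, i.e. x8-equivalent

Both slots still enumerate as x16; the sharing only bites when both cards need the CPU link at the same time.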

In previous decades, non-mainstream CPU sockets were also more accessible to consumer budgets; first-gen Threadripper started at only 8 cores, so it was possible to pay extra for more memory channels and IO lanes without also buying an excess of CPU cores. But that had little to do with the popularity or viability of multi-GPU consumer systems.

But PCIe switches are now more common than ever. How else do you think those high-end consumer boards are able to provide six M.2 slots?

  • PCIe switches with current-generation link speeds and high lane counts are prohibitively expensive and have been absent from consumer motherboards since PCIe gen3 showed up.

    The chipsets on consumer motherboards are pretty much PCIe switches plus some SATA and USB controllers, but they're clearly in a different league from anything that's relevant to connecting GPUs. The host interfaces are x4 or occasionally x8, the downstream side doesn't offer links wider than x4, and there are at most a few of those. The link speeds are often a generation (sometimes two) behind what the CPU's PCIe lanes support. The high-end motherboards for AMD's consumer platform add more SSD slots by daisy-chaining a second chipset off the first; you get more M.2 slots, but it's all still sharing a single PCIe gen4 x4 link to the CPU (see the back-of-the-envelope sketch at the end of this comment).

    In the PCIe gen2 era, it was common to see high-end consumer motherboards include a 48-lane PCIe switch to take x16 from the processor and fan it out to two x16 slots or some combination of x16 and x8 slots. That kind of connectivity has vanished from consumer motherboards, and isn't really common in the server or workstation markets, either. 48-lane and larger PCIe switches exist, but they're mostly used in servers to connect lots of NVMe SSDs.

  • Not all PCIe switches are created equal. IIRC, there are two kinds, and only the expensive kind works without adding excessive latency, which matters for GPUs.

    NVMe traffic is very tame when compared to GPU traffic.
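The same arithmetic for the chipset/M.2 case mentioned above (assuming, purely for illustration, six gen4 x4 M.2 slots hung off daisy-chained chipsets behind one gen4 x4 uplink):

    GB_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.97, 5: 3.94}  # approx GB/s per lane, one direction

    def link_bw(gen, lanes):
        return GB_PER_LANE[gen] * lanes

    # Six gen4 x4 M.2 slots, all funneled through a single gen4 x4
    # link to the CPU by the daisy-chained chipsets.
    uplink = link_bw(4, 4)               # ~7.9 GB/s
    devices = 6 * link_bw(4, 4)          # ~47 GB/s of SSDs on the far side
    print(devices / uplink)              # 6.0 -> 6:1 oversubscription if every drive is busy

That ratio is tolerable for bursty, mostly-idle SSD traffic, which is exactly why NVMe counts as tame here; a GPU that wants sustained bandwidth would have a much worse time.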