
Comment by crote

8 hours ago

But PCIe switches are now more common than ever. How else do you think those high-end consumer boards are able to provide six M.2 slots?

PCIe switches with current-generation link speeds and high lane counts are prohibitively expensive and have been absent from consumer motherboards since PCIe gen3 showed up.

The chipsets on consumer motherboards are pretty much PCIe switches plus some SATA and USB controllers, but they're clearly in a different league from anything that's relevant to connecting GPUs. The host interface is x4 or occasionally x8, the downstream ports don't support links wider than x4, and there are only a few of those. The link speeds are often a generation (sometimes two) behind what the CPU's PCIe lanes support. The high-end motherboards for AMD's consumer platform add more SSD slots by daisy-chaining a second chipset off the first; you get more M.2 slots, but it's all still sharing a single PCIe gen4 x4 link to the CPU.
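
As a rough back-of-the-envelope (a sketch in Python; the per-lane figures are the standard 128b/130b line rates with protocol overhead ignored, and the six-SSD scenario is just an illustration), here's what that shared x4 uplink works out to:

    PCIE_GT_PER_LANE = {3: 8, 4: 16, 5: 32}  # transfer rate per lane, GT/s

    def link_bandwidth_gbs(gen, lanes):
        # 128b/130b encoding: 130 bits on the wire carry 128 bits of payload.
        return PCIE_GT_PER_LANE[gen] * lanes * (128 / 130) / 8

    uplink = link_bandwidth_gbs(gen=4, lanes=4)   # the chipset's link to the CPU
    one_ssd = link_bandwidth_gbs(gen=4, lanes=4)  # a single full-speed M.2 slot
    print(f"chipset uplink: ~{uplink:.1f} GB/s")                    # ~7.9 GB/s
    print(f"one gen4 x4 SSD can saturate it alone: ~{one_ssd:.1f} GB/s")
    print(f"six SSDs reading at once: ~{uplink / 6:.1f} GB/s each")

A single gen4 x4 SSD can already saturate that uplink on its own, so a second daisy-chained chipset adds ports, not bandwidth.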

In the PCIe gen2 era, it was common to see high-end consumer motherboards include a 48-lane PCIe switch to take x16 from the processor and fan it out to two x16 slots or some combination of x16 and x8 slots. That kind of connectivity has vanished from consumer motherboards, and isn't really common in the server or workstation markets, either. 48-lane and larger PCIe switches exist, but are mostly just used for servers to connect lots of NVMe SSDs.
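
For contrast, a quick sketch of the fan-out difference (the port widths are illustrative, not taken from any specific board): the old 48-lane switch gave every downstream slot a full-width link, while a chipset tops out at x4 per downstream port:

    old_switch = {"upstream": 16, "downstream": [16, 16]}      # 48 lanes total
    chipset    = {"upstream": 4,  "downstream": [4, 4, 4, 4]}  # x4 cap per port
    for name, sw in (("gen2-era 48-lane switch", old_switch),
                     ("modern chipset", chipset)):
        print(f"{name}: x{sw['upstream']} upstream, "
              f"widest downstream port x{max(sw['downstream'])}")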

Not all PCIe switches are created equal. IIRC there are two kinds, and only the expensive kind works without adding excessive latency, which matters for GPUs.

NVMe traffic is very tame when compared to GPU traffic.