Comment by wpm
4 months ago
As someone with those "special" needs (if 10Gb/25Gb Ethernet and HDMI capture are really that special), it's incredibly frustrating.
The CPUs all come with enough PCIe lanes for a single dGPU at x16, x4 for the PCH/chipset, and maybe another x4 for a single M.2 SSD. If you aren't building a bog-standard gaming PC with one SSD, one huge GPU, and nothing else, you get a configuration that doesn't match what you need. Bifurcation support is hit-or-miss, assuming you can even physically reach the second PCIe slot and that it's big enough in the first place. Random M.2 slots hang off the PCH with modes and bandwidths that change based on other configuration options.
All of this comes down to the stingy lane count on consumer platforms, which again targets the lowest common denominator. It was even worse before Ryzen came out and offered a generous 24 lanes (16 for a GPU, 4 for the PCH, and 4 for an SSD) vs Intel's 20.
Of course, PCIe lanes aren't free, but somehow having I/O-heavy workloads means you must also spend 2-5x as much on "workstation" or server-class motherboards, which are themselves engineered to a common "usual needs" spec that adds in a bunch of shit I don't need, and which usually require sacrificing single-core speed unless you buy top-of-the-line $10K+ server CPUs that draw 5x the power.
What I'd really like is for all of the CPU's lanes to go to the chipset instead of just 4 of them. Or at least, have them all go from the CPU to some switch chip that lets me pick which lanes go to which slots, with a software-configured lane/bandwidth allocation. 24 lanes of PCIe 5.0 is 48 lanes of PCIe 4.0 is 96 lanes of PCIe 3.0, which is more than enough, but actually unlocking all of that bandwidth is still limited by the hardware configuration of the motherboard, with no way to reallocate what goes unspent. Instead of everything being hardwired for specific configurations, to the CPU directly or to the chipset, I wish every slot were wired at x16 (or x4 for the M.2 slots) directly to that switch chip, which is then fully wired to the CPU's remaining lanes after the PCH/chipset connection. If I need to stuff 4 slots with x16 cards that only run at 3.0 speeds, that would still leave 8 lanes of PCIe 5.0 I could allocate elsewhere.
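To make that arithmetic concrete, here's a toy Python sketch of the lane budget behind that hypothetical switch. It only encodes the numbers from this comment (per-lane bandwidth doubling each PCIe generation, 24 Gen5 lanes from the CPU) plus a made-up slot layout, and it ignores switch overhead entirely:

    # Toy budget calculator for the hypothetical "everything behind a switch" setup.
    # Per-lane bandwidth roughly doubles each PCIe generation, so express every
    # link in "Gen5-lane equivalents": a Gen3 lane is worth a quarter of a Gen5 lane.
    GEN5_EQUIV = {5: 1.0, 4: 0.5, 3: 0.25}

    def gen5_lanes(lanes, gen):
        """Bandwidth of an x`lanes` link at PCIe gen `gen`, in Gen5-lane units."""
        return lanes * GEN5_EQUIV[gen]

    cpu_budget = 24                               # Gen5 lanes feeding the switch
    slots = [(16, 3), (16, 3), (16, 3), (16, 3)]  # four x16 slots, cards link at Gen3

    used = sum(gen5_lanes(lanes, gen) for lanes, gen in slots)
    print(f"consumed:  {used:g} Gen5-lane equivalents")               # 16
    print(f"left over: {cpu_budget - used:g} Gen5-lane equivalents")  # 8

Four x16 Gen3 cards only soak up 16 Gen5 lanes' worth of bandwidth, which is how you end up with 8 to spare.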
I'm sure this is probably technically impossible, or would be incredibly expensive, but a man can dream.