
Comment by rkagerer

15 hours ago

Sky's the limit. (Short of hiring a team of engineers to design and fab a one-off board, anyway).

I appreciate your advice. I use the machine for a variety of different tasks, and am looking to accommodate at least two high-end GPUs (one for passthrough to virtual machines for running things like Solidworks), a number of SSDs, and as many PCIe expansion cards as possible. Many of the cards are older-gen, so they could be consolidated onto just a few modern lanes if I could find an external expander with sufficiently generous capacity. Here's a quick inventory of what's in the existing box:

- Mellanox Infiniband. For high-speed, low-latency networking... these days, probably replaceable with integrated NICs, particularly if they come with RDMA.

- High-performance RAID. I've found dedicated cards offer better features, performance, capacity, resilience and reliability than any of the mobo-integrated garbage I've tried over the years: things like BBUs/SuperCaps, seamless migration and capacity upgrades, out-of-band monitoring, etc. For example, I've taken my existing mass storage array, created on a modest ARC-1231ML 15+ years ago, through several newer generations to an ARC-1883, with many disk and capacity upgrades along the way, and it's still the same array without ever having had to reformat and restore from scratch. Incidentally, I've been particularly happy with Areca's hardware, and they've even implemented some features I requested over the years (like the ability to hot-clone a replacement disk for one expected to fail soon and then swap in the new one, without having to degrade the array and wait for a lengthy rebuild that reduces your fault tolerance while hammering all member disks; plus some other tweaks for better compatibility with tools like Hard Disk Sentinel). I notice they're finally starting to come out with controllers oriented to SSDs, like a PCIe 5.0 product (https://www.areca.com.tw/products/nvme-1689-8N.html) for up to 8 x4 M.2 SSDs that boasts up to 60 GB/s, which is interesting (though the high-queue-depth random performance still doesn't match directly-plugged drives); there's some quick math on that figure after this list. I know software RAID for the solid-state stuff is also an option (as is just living without redundancy), but it's been convenient outsourcing the complexity.

- Slim, low-performance accessory GPU for more displays.

- A few others this crowd would just laugh at me for (e.g. a PCI I/O card with a true parallel port, because nothing is more fun™ for hobbyist stuff and the USB-based alternatives I tried had too much abstraction or latency; a SCSI adapter for an archaic piece of vintage hardware I'd love to keep installed permanently but there ain't space; and occasional one-off stuff like a high-bandwidth digitizer).
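On that Areca card's 60 GB/s claim, here's the quick back-of-the-envelope check I mentioned (just a sketch; the drive and uplink configuration is my assumption, not taken from the spec sheet):

```python
# Rough PCIe throughput in GB/s, counting only 128b/130b line-coding overhead.
def pcie_gb_s(gt_per_s, lanes):
    return gt_per_s * lanes * (128 / 130) / 8

uplink = pcie_gb_s(32, 16)        # PCIe 5.0 x16 host slot
gen4_x4_drive = pcie_gb_s(16, 4)  # one Gen4 x4 M.2 drive (assumed)

print(f"host uplink : {uplink:.1f} GB/s")             # ~63 GB/s
print(f"8x Gen4 x4  : {8 * gen4_x4_drive:.1f} GB/s")  # ~63 GB/s
# Either way, the Gen5 x16 uplink (~63 GB/s) is the ceiling, which lines up
# with the ~60 GB/s the card advertises once real-world overhead is counted.
```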

The motherboard has 6 PCIe slots, and I've got two more provided by an external PCIe expander (after accounting for the one lost to its own connection). If I could find some kind of expander that took a single PCIe 5.0 slot and turned it into half a dozen PCIe 3.0 slots (some full-width) I'd be set.

I know I'm at the crazy end of how-much-crap-can-you-jam-in-one-PC, but it still seems bizarre to me that newer boards have so many fewer slots yet still feel lane-constrained, when between leading-edge SSDs and high-bandwidth GPUs the demand for lanes is skyrocketing. When I built the previous PC it felt tight but doable... these days it feels like I can barely accommodate the level of graphics and storage I'd like, and by the time I do, there's nothing left for anything else. Granted, it's been a few years since I got my hands dirty with this stuff, so maybe I'm just doing it wrong?
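To put numbers on that, here's a rough lane tally for a kit list like mine against what different platforms expose (the platform lane counts are approximate CPU-lane figures I'm assuming, before any chipset sharing, so treat this as a sketch rather than a spec):

```python
# Rough lane budget: a wishlist like the one above vs. approximate CPU lane
# counts per platform (assumed round numbers).
wishlist = {
    "GPU #1 (VM passthrough)": 16,
    "GPU #2": 16,
    "RAID/HBA card": 8,
    "Infiniband / RDMA NIC": 8,
    "accessory display GPU": 4,
    "legacy cards behind a switch": 4,
    "3x NVMe M.2 (x4 each)": 12,
}
platforms = {
    "typical consumer desktop": 24,
    "Threadripper PRO / single-socket Epyc": 128,
}

need = sum(wishlist.values())
print(f"wishlist total: {need} lanes")
for name, lanes in platforms.items():
    verdict = "fits" if lanes >= need else f"short by {need - lanes}"
    print(f"  {name}: {lanes} lanes -> {verdict}")
```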

And yes, I've heard of USB... and have a bazillion devices plugged in (including some exotic ones like an LCD display, a logic analyzer, and a legit floppy drive that does get used once in a blue moon, like when I need to make a memtest86 boot disk for a vintage PC). I've actually found some motherboards have issues where the USB stack gets flaky once you have too many devices connected (even when using powered hubs to mitigate power constraints).

Ok... go ahead and have at me; tell me I'm old and dusty and I should take my one GPU and one SSD and be happy with them ;-).

I just put together a 4th-gen Epyc build with 22 NVMe drives and a dual-port 40G NIC. It was a FAR superior experience to trying to use prosumer parts, PCIe splitters, etc., and it didn't end up costing as much as I thought it would (though DDR5 RAM has gone up 2x in the month since I bought it, haha).

Side note on RAID: motherboard-integrated garbage is still meh, but the dedicated hardware cards are also pretty meh. I just use software options like ZFS on Linux (or mdraid if I just need a fat RAID 0 with no protection) and get fantastic speeds, portability, no artificial drive-topology restrictions, and no extra controller latency since the drives are accessed directly over PCIe. On Windows the equivalents would be Storage Spaces (with ReFS) or striped volumes in Disk Management.
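For anyone curious what that looks like on the Linux side, here's a minimal sketch of the two software routes (device names and pool layout are made up, it needs root, and both commands wipe whatever is on those disks):

```python
#!/usr/bin/env python3
"""Minimal sketch of the software-RAID route (ZFS or mdraid) on Linux."""
import subprocess

# Hypothetical device names -- substitute your own NVMe drives.
NVME = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def zfs_striped_mirrors(devs):
    # ZFS: two mirrored pairs striped together (RAID 10-ish), with end-to-end
    # checksums; the pool can later be imported on any box with ZFS installed.
    run(["zpool", "create", "tank",
         "mirror", devs[0], devs[1],
         "mirror", devs[2], devs[3]])

def md_stripe(devs):
    # mdraid: a plain RAID 0 stripe -- maximum speed, zero redundancy.
    run(["mdadm", "--create", "/dev/md0", "--level=0",
         f"--raid-devices={len(devs)}", *devs])

if __name__ == "__main__":
    zfs_striped_mirrors(NVME)   # or: md_stripe(NVME)
```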

If you're looking for cheaper and "not the latest gen", then search for tugm4770 on eBay and filter to the CPU you want. They have motherboard bundles for standard EATX chassis, as well as bundles with used Supermicro servers + PSUs. I went the EATX route and put it in a standard PC case; it's crazy how quiet it runs (it's in my living room, even)! I used to buy from this seller for the lab at my previous job, but decided to go all in at home this time. I've never had trouble with them, and it's by far the cheapest way to get these kinds of setups (just avoid the really old Epyc generations).

If the sky is truly the limit and you want the absolute best of the best, get a current-gen, high-frequency-optimized Epyc new (I don't know the easiest place to do that). You can also go Threadripper PRO; it just depends.

Both of these Epyc options have a "how sky-high?" escalation: a dual-socket motherboard, which can give you up to 192 PCIe lanes instead of up to 128. You have to populate both sockets, and the physical footprint starts to become monstrous (especially if you use all the lanes).