Comment by angoragoats
2 months ago
> Also, as the OP noted, this setup can support up to 4 Mac devices because each Mac must be connected to every other Mac!! All the more reason for Apple to invest in something like QSFP.
This isn’t any different with QSFP unless you’re suggesting that one adds a 200GbE switch to the mix (see the quick port-count sketch after this list), which:
* Adds thousands of dollars of cost,
* Adds 150W or more of power draw, along with the loud fan noise that comes with it,
* And, perhaps most importantly, adds measurable latency to a networking stack that is already higher-latency than the RDMA approach used by the TB5 setup in the OP.
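For concreteness, here's a tiny back-of-the-envelope sketch of the port/cable arithmetic behind the mesh-vs-switch tradeoff (my own illustration, not anything from the OP): a full mesh of n machines needs n-1 ports on every box and n*(n-1)/2 cables, while a switch needs one port per box, n cables, and n switch ports.

```c
// Back-of-the-envelope: full-mesh vs. switched star topology port/cable counts.
// Illustration only; the numbers for the OP's 4-Mac TB5 mesh fall out at n=4.
#include <stdio.h>

int main(void) {
    for (int n = 2; n <= 8; n++) {
        int mesh_ports_per_machine = n - 1;     // every box links to every other box
        int mesh_cables = n * (n - 1) / 2;      // one cable per pair of boxes
        printf("n=%d: mesh needs %d ports/box and %d cables; "
               "a switch needs 1 port/box, %d cables, %d switch ports\n",
               n, mesh_ports_per_machine, mesh_cables, n, n);
    }
    return 0;
}
```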
MikroTik has a switch that can do 6x 200G for ~$1300 and <150W.
https://www.bhphotovideo.com/c/product/1926851-REG/mikrotik_...
Wow, this switch (MikroTik CRS812) is scary good for the price point. A quick Google search fails to find any online vendors with stock. I guess it is very popular! Retail price will be <= 1300 USD.
I did some digging to find the switching chip: Marvell 98DX7335
Seems confirmed here: https://cdn.mikrotik.com/web-assets/product_files/CRS812-8DS...
And here: https://cdn.mikrotik.com/web-assets/product_files/CRS812-8DS...
From Marvell's specs: https://www.marvell.com/content/dam/marvell/en/public-collat...
Again, those are some wild numbers if I have the correct model. Normally, MikroTik includes switching bandwidth in its own specs, but not in this case.
They are very popular and make quite good products, but as you noticed, it can be tricky to find them in stock.
Besides bigger gear like this switch, they've also made some pretty cool little micro-switches that you can power over PoE and run as WLAN hotspots, e.g. to keep your mobile device at arm's length from a network you don't really trust, or to (more or less maliciously) bridge a wired network through a wall when your access to the building is limited.
That switch appears to have 2x 400G ports, 2x 200G ports, 8x 50G ports, and a pair of 10G ports. So unless it allows bonding together the 50G ports (which the switch silicon probably supports at some level), it's not going to get you more than four machines connected at 200+ Gbps.
As with most 40GbE-and-faster ports, the 400Gbit ports can be split into 2x 200Gbit ports using breakout cables. So you can connect a total of 6 machines at 200Gbit.
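To make the port budget explicit, here's a rough sketch of the arithmetic (my own, assuming the 2x 400G / 2x 200G / 8x 50G layout described above and 2x200G breakout on the 400G ports; the 4x50G-bonding case is pure speculation about what the silicon might allow):

```c
// Rough 200G endpoint budget for the port layout described in the thread.
// Assumptions: each 400G port breaks out into 2x 200G via a breakout cable;
// bonding 4x 50G into one 200G link is speculative and may not be exposed.
#include <stdio.h>

int main(void) {
    int native_200g = 2;             // 2x 200G ports
    int breakout_200g = 2 * 2;       // 2x 400G ports, each split into 2x 200G
    int bonded_200g = 8 / 4;         // 8x 50G ports bonded 4-at-a-time (speculative)

    printf("200G endpoints with breakout only: %d\n", native_200g + breakout_200g);
    printf("200G endpoints if 50G bonding worked: %d\n",
           native_200g + breakout_200g + bonded_200g);
    return 0;
}
```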
Cool! So for marginally less in cost and power usage than the numbers I quoted, you can get 2 more machines than with the RDMA setup. And you’ve still not solved the thing that I called out as the most important drawback.
How significant is the latency hit?
For RDMA you'd want InfiniBand, not Ethernet.
RDMA for new AI/HPC clusters is moving toward Ethernet (the keyword to look for is RoCE). Ethernet gear is so much cheaper that you can massively over-provision to make up for some of the disadvantages of asynchronous networking, and it lets you run jobs on hyperscalers (only Azure ever supported actual IB). Most HPC is not latency-sensitive enough to need InfiniBand’s lower jitter and median latency, and vendors have mostly caught up on the hardware acceleration front.
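One practical upside of RoCE is that applications don't have to change: it's the same verbs API as InfiniBand, and the transport a given port runs on shows up as its link layer. Here's a minimal sketch using libibverbs to enumerate RDMA devices and report IB vs. Ethernet (RoCE) per port; it assumes rdma-core is installed and is only meant to illustrate the point, not any particular cluster setup.

```c
// Enumerate RDMA devices and report whether each port runs over InfiniBand
// or Ethernet (RoCE). Build with: gcc roce_check.c -o roce_check -libverbs
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                const char *link =
                    port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                        ? "Ethernet (RoCE)"
                        : "InfiniBand";
                printf("%s port %d: %s\n",
                       ibv_get_device_name(devices[i]), port, link);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```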