
Comment by aDfbrtVt

7 months ago

To get a ballpark power usage, we can look at comparable (for some definition thereof) commercial offerings. Take a public datasheet from Arista[1]: they quote 16W typical for a 400Gbps module with 120km of reach. You would need 2500 modems at 16W (38kW) jointly decoding (i.e. very close together) to process this data rate. GPU compute has really pushed the boundaries on thermal management, but this would be far more thermally dense.

[1] https://www.arista.com/assets/data/pdf/Datasheets/400ZR_DCI_...
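
To make the arithmetic explicit, here's the back-of-envelope as a sketch; the ~1 Pb/s aggregate rate is the assumption, the per-module figures are from the datasheet:

    # Ballpark: how many 400G modules to carry ~1 Pb/s, and their combined power.
    aggregate_rate_gbps = 1_000_000   # assumed ~1 Pb/s aggregate (the petabit figure discussed here)
    module_rate_gbps = 400            # 400ZR-class module, per the Arista datasheet [1]
    module_power_w = 16               # typical power per module, per [1]

    modules = aggregate_rate_gbps / module_rate_gbps     # 2500 modules
    total_power_kw = modules * module_power_w / 1000     # ~40 kW, same ballpark as the ~38 kW above
    print(modules, total_power_kw)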

It's important to note that wavelength channels are not coupled, so modems on different wavelengths don't need to be terribly close together (in fact, one could theoretically do wavelength switching, so they could be 100s of km apart). So the scaling we need to consider is the scaling of the MIMO, which in current modems is 2x2. The difficulty is not necessarily just power consumption (also, the power envelope of long-haul modems is higher than the DCI modem you link, up to 70W IIRC), but also resourcing on the ASIC: the MIMO part (which needs to be highly parallel) will take up significant floorspace, and you need to balance the delays.

The 38kW is not a very high number, btw; the switches at the endpoints of submarine links are already quite a bit more power-hungry.

  • Depending on the phase-matching criteria of the lambdas on a given core, I would mostly agree that the various wavelengths are not significantly coupled. I also agree there is a different power budget for LH modems vs. DCI, but power on LH modems is not something that often gets publicly disclosed. I am not too concerned with the overall power, more the power density (and component density) that 19-channel MIMO would require.

    The main point I was trying to make is the impracticality of MIMO SDM. The topic has been discussed to death (see the endless papers from Nokia) and has yet to be deployed because the spatial gain is never worth the real-world implementation issues.

I think the scaling parameters are a bit different here, since the primary concern is the DSP power for processing and correlating 19 MIMO signals simultaneously. But the 16W figure for a 120km 400Gbps module includes a high-powered¹ transmitter amplifier & laser, as well as receive amplifiers, on top of the DSP. My estimate is based on O(n²) scaling for 19×19 MIMO (=361) and then assuming 2-3W of DSP power per unit factor.

[but now that I think about it… I think my estimate is indeed too low; I was assuming commonplace transceivers for the unit factor, i.e. ≤1Tb; but a petabit on 19 cores is still 53Tb per core…]

¹ note the setup in this paper has separate amplifiers at 86.1km intervals, so the transmitter doesn't need to be particularly high-powered.
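
For concreteness, the back-of-envelope behind that estimate, as a sketch; the 2-3W unit factor and the O(n²) scaling are assumptions, not measured numbers:

    # O(n^2) sketch for jointly decoding 19x19 MIMO, per wavelength channel.
    n_cores = 19
    mimo_terms = n_cores ** 2              # 361 cross-terms for joint 19x19 MIMO
    dsp_power_per_term_w = 2.5             # assumed midpoint of the 2-3W unit factor above
    dsp_power_w = mimo_terms * dsp_power_per_term_w   # ~900 W of DSP per jointly decoded channel

    # Why the unit factor is probably too small: it was sized for a <=1 Tb/s transceiver,
    # but a petabit spread over 19 cores is ~53 Tb/s per core.
    per_core_tbps = 1000 / n_cores         # ~52.6 Tb/s per core
    print(mimo_terms, dsp_power_w, per_core_tbps)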

38kW ~= 50 HP ~= 45A at 480V three-phase, which is a relatively light load handled by 3#6 AWG conductors and a #10 equipment ground.
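
That current figure is just the standard three-phase power formula, assuming unity power factor:

    # 38 kW at 480 V three-phase: I = P / (sqrt(3) * V * PF)
    import math

    p_w = 38_000                     # the ~38 kW figure from upthread
    v_line = 480                     # line-to-line voltage, three-phase
    pf = 1.0                         # unity power factor assumed for simplicity
    amps = p_w / (math.sqrt(3) * v_line * pf)   # ~45.7 A
    hp = p_w / 745.7                            # ~51 HP
    print(amps, hp)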

I mean, it’s a shitload more power than a simple media converter that takes in fiber and outputs to an RJ-45, but not all that much compared to other commercial electrical loads. This Eaton/Tripplite unit draws ~40W at 120V - https://tripplite.eaton.com/gigabit-multimode-fiber-to-ether...

A smallish commercial heat pump/CRAC unit (~12kW of electrical input) can handle the cooling requirements (assuming a COP of 3).
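
Reading the ~12kW as electrical input at an assumed COP of 3, the sizing works out roughly as:

    # Essentially all of the 38 kW ends up as heat that the CRAC has to reject.
    heat_load_kw = 38                    # heat load, from the ~38 kW figure above
    cop = 3                              # assumed coefficient of performance
    crac_input_kw = heat_load_kw / cop   # ~12.7 kW of electrical input to reject it
    print(crac_input_kw)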