Comment by DoctorOetker
10 hours ago
eldenring is slightly wrong: for reasonable temperatures the area of the radiating panels would have to be a bit more than 3 times the area of the solar panel; otherwise there's nothing wrong with the idea.
No need to apply at NASA; on the contrary, if you don't believe in the Stefan-Boltzmann law, feel free to apply for a Nobel Prize with your favorite crank theory of physics.
What's your definition of a reasonable temp? My envelope math tells me that at 82 °C (right before H100s start to throttle) you'd need about 1.5x the surface area for radiators. Not exactly back to back, but even 3x the surface area is reasonable.
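For concreteness, here's that envelope math spelled out (a sketch in Python, assuming black-body emission with emissivity 1, counting total radiating surface area, and taking the full solar constant of ~1361 W/m^2 as the heat to be rejected per square meter of panel; those are my assumptions, not measured figures for any real panel):

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    SOLAR_FLUX = 1361  # solar constant at Earth orbit, W/m^2

    def area_ratio(t_kelvin):
        # Radiating surface area needed per unit of solar panel area,
        # if every watt of incident sunlight must be re-radiated at T.
        return SOLAR_FLUX / (SIGMA * t_kelvin**4)

    print(area_ratio(355))  # ~1.5x at 82 C (355 K)
    print(area_ratio(300))  # ~3.0x at 26 C (300 K)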
Also, this assumes a flat surface on both sides. Another commenter in this thread brought up a pyramid shape, which could work.
Finally, these GPUs are designed for Earth data centers, where power is limited and heat sinks are abundant. For space data centers you can imagine we'd get better radiators or silicon that runs hotter. Crypto miners often run ASICs very hot.
I just don't understand why, every time this topic comes up, everyone on HN wants to die on the hill that cooling is not possible. It is! The primary issue, if you do the math, is clearly the cost of launch.
I am the person who gave the pyramid shape as a didactic example (convexity means we can ignore self-obscuration, and giving up 2 of the 4 triangular side surfaces of the pyramid lets me ignore the presence of the lukewarm Earth).
My example is optimized not for minimal radiator surface area, but for minimal mathematical and physical knowledge required to understand feasibility.
Your numbers differ because you chose 82 °C (355 K) instead of my 26 °C (300 K).
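Radiated flux goes as T^4, so the gap between our numbers is exactly that fourth-power factor:

    (355 / 300) ** 4  # ~1.96

A radiator at 355 K sheds roughly twice the heat per unit area as one at 300 K, which is why your area requirement comes out at about half of mine.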
Near normal operating temperatures, hardware lifetime roughly doubles for every 10 K (or °C) decrease in temperature (this does not hold indefinitely, of course).
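Taking that rule of thumb at face value purely for illustration (it certainly breaks down over a span this large), the lifetime factor for running at 300 K instead of 355 K would be:

    2 ** ((355 - 300) / 10)  # ~45x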
You still need to move the heat from the GPU to the radiator, so my example of 26 °C at the radiator just leaves a lot of margin against criticism ;)