Comment by dijit
2 years ago
I did some data processing at Ubisoft.
Each node in our Hadoop cluster had 64GiB of RAM (which is the max you should have for a single-node Java application, with 32G of it allocated to heap, FWIW; there's a sketch of the relevant JVM check below). We had, I think, 6 of these nodes, for a total of 384GiB of memory.
Our storage was something like 18TiB across all nodes.
It would be a big machine, but our entire cluster could easily fit. The largest machine on the market right now is something like 128 CPUs and 20TiB of memory.
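About that 32G heap figure: on HotSpot, keeping -Xmx at roughly 32GiB or below lets the JVM use compressed oops (32-bit object references), which is where the rule of thumb comes from. A minimal sketch of how you'd check it on your own JVM; the class name and the 31g heap size are just illustrative:

    // OopsCheck.java (hypothetical name); run e.g.: java -Xmx31g OopsCheck
    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class OopsCheck {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hs = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // HotSpot enables UseCompressedOops automatically for max heaps up
            // to roughly 32GiB; above that, object pointers widen to 64 bits.
            System.out.println("UseCompressedOops = "
                + hs.getVMOption("UseCompressedOops").getValue());
            System.out.println("max heap (GiB)    = "
                + Runtime.getRuntime().maxMemory() / (1L << 30));
        }
    }

Run it with -Xmx40g and you'd see the flag flip to false, which is why people cap heaps just under 32GiB; presumably the rest of each 64GiB node went to the OS page cache and off-heap buffers.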
384GiB was available in a single 1U rackmount server at least as early as 2014.
Storage is basically unlimited with direct-attached-storage controllers and rackmount units.
I had an HP from 2010 that supported 1.5TB of RAM with 40 cores, but it was 4U. I'm not sure what the height has to do with memory, other than that a 1U doesn't have the luxury of vertical backplanes or anything mounted above the motherboard, so maybe it's simply limited space?
There are different classes of servers; the 4U ones are pretty much as powerful as it gets: many sockets (usually 4) and a huge fabric.
1Us are extreme commodity hardware, basically as “low end” as it gets, so I like to use them as a baseline.
A 1U that can take 1.5TiB of RAM might be part of the same series as a 4U machine that can do 10TiB, but those are hugely expensive, both to buy and to run.