Comment by Aerroon

2 days ago

We have faster memory, it's just all used in data center cards you can't buy (and can't afford to buy).

AMD actually used HBM2 memory in their Radeon VII card back in 2019 (!!) for $700. It had 16 GB of HBM2 memory with 1 TB/s throughput.

The RTX 5080, in comparison, also has 16 GB of VRAM, but was released in 2025 and has 960 GB/s throughput. The RTX 5090 does have an edge at 1.8 TB/s bandwidth and 32 GB of VRAM, but it also costs several times more. Imagine if GPUs had gone down the path of the Radeon VII.

That being said, the data center cards from both are monstrous.

The Nvidia B200 has 180 GB of VRAM (2x 90 GB) offering 8.2 TB/s bandwidth (2x 4.1 TB/s), released in 2024. It just costs as much as a car, but that doesn't matter, because afaik you can't even buy them individually. I think you need to buy a server system from Nvidia or Dell that will come with like 8 of these and cost you like $600k.

AMD has the MI series, e.g. the AMD MI325X: 288 GB of VRAM doing 10 TB/s of bandwidth, released in 2024. Same story as Nvidia: buy from an OEM that will sell you a full system with 8x of these (and if you do get your hands on one of these, you need a special motherboard for them, since they don't do PCIe). Supposedly a lot cheaper than Nvidia, but still probably $250k.

These are not even the latest and greatest for either company. The B300 and MI355X are even better.

It's a shame about the socket for the MI series GPUs (and the Nvidia ones too). The MI200 and MI250X would be pretty cool to get second-hand. They are 64 GB and 128 GB VRAM GPUs, but since they use the OAM socket you need the special motherboard to run them. They're from 2021, so in a few years' time they will likely be replaced, but as a regular joe you likely can't use them.

The systems exist; you just can't have them. You can, however, rent them in the cloud at about $2-4 per hour per GPU.
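For a rough sense of rent-vs-buy, here's a back-of-the-envelope sketch using the ballpark figures above. The system prices and the $3/hr rental midpoint are the guesses from this comment, and splitting the system price evenly per GPU is my simplification (it ignores the host, networking, power, and resale value):

```python
# Back-of-the-envelope: how many rental hours equal one GPU's share
# of an 8-GPU system's (guessed) purchase price.
systems = {
    "Nvidia B200 (8x system)": 600_000,  # assumed price from the comment
    "AMD MI325X (8x system)": 250_000,   # assumed price from the comment
}
rent_per_gpu_hour = 3.0  # midpoint of the $2-4/hr range

for name, system_price in systems.items():
    per_gpu = system_price / 8
    breakeven_hours = per_gpu / rent_per_gpu_hour
    years = breakeven_hours / 8760  # hours in a year, if running 24/7
    print(f"{name}: ~${per_gpu:,.0f}/GPU, "
          f"break-even ~{breakeven_hours:,.0f} hours (~{years:.1f} years)")
```

So even at these made-up numbers, you'd need a couple of years of round-the-clock use before buying beats renting, which is part of why only the hyperscalers bother.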