Comment by lm28469
15 hours ago
> NVIDIA RTX 5090 already offers 1,792 GB/s
You can buy two M5 Pro base models for the price of a single 5090...
That's a fun comparison, but can you run those two M5 Pros in parallel to accomplish 2x the work? Otherwise, you've just told me you can buy two Toyota Corollas for the price of one F-150 while trying to convince me you can haul your boat behind both Corollas at the same time.
Maybe not 2x (scaling is never linear), but you can absolutely chain them, and macOS supports RDMA over Thunderbolt 5 for even better performance: https://news.ycombinator.com/item?id=46248644
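For intuition on why chaining is less than 2x, here is a toy tensor-parallel estimate for two boxes over a TB5 link: each box streams half the weights in parallel, but every layer needs an activation exchange, and link latency makes the scaling sub-linear. All figures (546 GB/s memory bandwidth, a 35 GB model, 80 layers, a 100 µs hop, 10 GB/s link) are illustrative assumptions, not measurements.

```python
# Toy model of bandwidth-bound decode across one or two boxes.
# Two-box case: weights split in half (parallel reads), but each
# of n_layers requires an activation hop over the interconnect.
def tok_per_sec(mem_bw_gb_s, model_gb, n_boxes=1, n_layers=80,
                hop_s=100e-6, act_bytes=32_768, link_gb_s=10.0):
    weight_time = (model_gb / n_boxes) / mem_bw_gb_s       # per-token weight streaming
    comm_time = 0.0 if n_boxes == 1 else n_layers * (
        hop_s + act_bytes / (link_gb_s * 1e9))             # per-token link traffic
    return 1.0 / (weight_time + comm_time)

one = tok_per_sec(546, 35)
two = tok_per_sec(546, 35, n_boxes=2)
print(f"1 box: {one:.1f} tok/s, 2 boxes: {two:.1f} tok/s "
      f"({two / one:.2f}x, not 2x)")
```

Under these assumptions you get roughly a 1.6x speedup, not 2x: the per-layer hop latency eats a chunk of the win from halving the weight reads.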
Maybe hold back on the attitude
Their point stands. People are just not going to daisy-chain these together for datacenter use. Apple does not take this workload seriously, and macOS is not a suitable OS for mass deployment.
RDMA is the bare minimum we should expect from a system that doesn't support eGPUs and treats PCIe like a foreign language. It's not a long-term solution, and even Apple can't deny this: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...
You can also buy a 64 GB Mac mini, save $1k, and do more work than you could with a single 5090.
In Europe I can get a 128 GB Mac Studio M4 Max for 300 euros more than a 5090 (for which you still need to buy a power supply, motherboard, CPU, etc.)
But inference on the Mac Studio M4 Max will be slower than on the 5090, even though you can load larger models.
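A rough sanity check on the speed gap: single-stream decode is typically memory-bandwidth-bound, so tokens/sec is roughly bandwidth divided by model size in bytes. The 1,792 GB/s figure for the 5090 is quoted upthread; the 546 GB/s M4 Max bandwidth and the 35 GB model (≈70B params at 4-bit) are assumptions for illustration.

```python
# Back-of-envelope decode throughput when generation is
# memory-bandwidth-bound: every token requires streaming the
# full set of weights from memory once.
def tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

rtx_5090 = tokens_per_sec(1792, 35)  # 5090 bandwidth quoted upthread
m4_max = tokens_per_sec(546, 35)     # assumed M4 Max bandwidth
print(f"5090: {rtx_5090:.0f} tok/s, M4 Max: {m4_max:.0f} tok/s")
```

So the 5090 comes out roughly 3x faster on a model that fits in its VRAM, while the Mac's advantage is being able to load models the 5090 simply can't hold.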