Comment by dylan604

13 hours ago

That's a fun comparison, but can you run those 2 M5 Pros in parallel to accomplish 2x the work? Otherwise, you've just told me you can buy two Toyota Corollas for the price of one F-150 while trying to convince me you can haul your boat behind both Corollas at the same time.

Maybe not 2x (scaling is never linear), but you can absolutely chain them, and macOS supports RDMA over TB5 for even better performance: https://news.ycombinator.com/item?id=46248644

Maybe hold back on the attitude
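
The "scaling is never linear" caveat above can be made concrete with Amdahl's law, a standard model for parallel speedup (nothing Apple- or TB5-specific; the parallel fractions below are illustrative assumptions, not measurements of any real workload):

```python
# Amdahl's law: speedup on n machines when a fraction p of the
# workload parallelizes and the rest stays serial. Shows why
# chaining two machines rarely yields a clean 2x.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative parallel fractions (assumptions, not benchmarks):
for p in (0.5, 0.9, 0.95):
    print(f"p={p}: 2 machines -> {amdahl_speedup(p, 2):.2f}x")
# p=0.5  -> 1.33x
# p=0.9  -> 1.82x
# p=0.95 -> 1.90x
```

So even a heavily parallel workload (p = 0.95) gets about 1.9x from two machines, which is why "can you run them in parallel" and "is it exactly 2x" are separate questions.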

  • Their point stands. People are just not going to daisy-chain these together for datacenter use. Apple does not take the workload seriously and macOS is not a suitable OS for mass deployment.

    RDMA is the bare minimum we should expect from a system that doesn't support eGPUs and treats PCI like a foreign language. It's not a long-term solution and even Apple themselves cannot deny this: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...

    • No, their point doesn't stand, because they questioned whether you can use them together. And yes, you can. Don't move the goalposts just because you don't like the products. Nowhere in their comment does your interpretation even come into the mix.

You can also buy a 64GB Mac mini, save $1k, and do more work than you could with a single 5090.

In Europe I can get a 128GB Mac Studio M4 Max for 300 euros more than a 5090 (for which you still need to buy a power supply, motherboard, CPU, etc.).

  • But inference on the Mac Studio M4 Max will be slower than on the 5090, even though you can load larger models.

    • All I'm saying is that the comparison doesn't make sense. The 5090 is faster on a small subset of tasks, but only when attached to a computer that ends up costing 3x the price of an M5 machine that fits the same model, or the same price as a machine that fits models 5x bigger.