Comment by simonw
7 hours ago
Models of this size can usually be run using MLX on a pair of 512GB Mac Studio M3 Ultras, which are about $10,000 each so $20,000 for the pair.
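For anyone curious what "run using MLX" means in practice, here is a minimal single-machine sketch using the mlx-lm package; the model repo name is a placeholder, not a real checkpoint, and spanning two Mac Studios would additionally require MLX's distributed/pipeline features:

    # Minimal mlx-lm sketch (pip install mlx-lm).
    # The model name below is a hypothetical placeholder.
    from mlx_lm import load, generate

    # Load a 4-bit quantized checkpoint; weights sit in unified memory,
    # accessible to the GPU.
    model, tokenizer = load("mlx-community/Some-Big-MoE-4bit")

    # Generate a short completion; max_tokens keeps the demo quick.
    text = generate(model, tokenizer, prompt="1+1=", max_tokens=32, verbose=True)
    print(text)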
You might want to clarify that this is more of a "look, it technically works" than an "I actually use this."
The difference between waiting 20 minutes for an answer to the prompt '1+1=' and actually using it for something useful is massive here. I wonder where this idea of running AI on CPU comes from. Was it Apple astroturfing? Apple fanboys? I don't see people wasting time on non-Apple CPUs. (Although I did do this for a 7B model.)
MLX uses the GPU.
That said, I wouldn't necessarily recommend spending $20,000 on a pair of Mac Studios to run models like this. The performance won't be nearly as good as the server-class GPU hardware that hosted models run on.
The reason Macs get recommended is the unified memory, which is usable as VRAM by the GPU. People are similarly using the AMD Strix Halo for AI, which has a comparable memory architecture. Time to first token for something like '1+1=' would be seconds, and then you'd get ~20 tokens per second, which is plenty fast for regular use. Throughput slows down at the high end of the context window, but it's still practical for a lot of use cases. Though I agree that agentic coding, especially over large projects, would likely get too slow to be practical.
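As a rough sanity check on that ~20 tokens/s figure, here is a back-of-envelope estimate for a memory-bandwidth-bound decode; the bandwidth, active-parameter count, and quantization numbers are assumptions for illustration, not measurements:

    # Back-of-envelope decode speed for a bandwidth-bound model.
    # All numbers are illustrative assumptions, not measurements.
    bandwidth_bytes_per_s = 800e9   # assumed M3 Ultra unified memory bandwidth (~800 GB/s)
    active_params = 37e9            # assumed active parameters per token for a large MoE model
    bytes_per_param = 0.5           # 4-bit quantization

    bytes_read_per_token = active_params * bytes_per_param
    tokens_per_s = bandwidth_bytes_per_s / bytes_read_per_token
    print(f"theoretical ceiling: ~{tokens_per_s:.0f} tokens/s")  # ~43 under these assumptions

Real-world throughput lands well below this theoretical ceiling, which is roughly consistent with the ~20 tokens/s figure above.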
Not too slow if you just let it run overnight/in the background. But the biggest draw would be no rate limits whatsoever compared to the big proprietary APIs, especially Claude's. No risk of sudden rugpulls either, and the model will have very consistent performance.
We are getting into a debate between particulars and universals. Calling the 'unified memory' VRAM is quite a generalization. Whatever the case, we can tell from stock prices that whatever this VRAM is, it's nothing compared to NVIDIA.
Anyway, we were trying to run a 70B model on a MacBook (can't remember which M model) at a Fortune 20 company, and it never became practical. We were trying to compare strings of character length ~200, so roughly 400 characters of input plus a pre-prompt.
I can't imagine this being reasonable on a 1T model, let alone the 400B-class models from DeepSeek and Llama.
The Mac Studio route is not "AI on CPU"; the M2/M4 chips are complex SoCs that include a GPU with unified memory access.
If it worked IRL for anything useful, I'd be more interested in the technical differences. But it was a mere toy for a few tests at my Fortune 20 company.
Language is full of issues of particulars vs. universals, and you could debate whether it's just an integrated GPU with different marketing.
Whatever the case, we couldn't use it in production, and NVIDIA's stock price reflects the reality on the ground.