Comment by ActorNightly
1 month ago
Are you aware that having RAM doesn't matter when your tokens/second is slow as shit?
You don't need to run large models, Gemma QAT 27B fits on one GPU and is quite good. Other models like Qwen3 are great for coding.
A 3090 gets 100+ tokens/second for Qwen, very close to what you would see with a cloud-based model.
An M3 Ultra gets ~30.
Congrats, you played yourself.
Did I? Not only are you comparing apples to oranges, you even provide misleading numbers.
A 3090 gets 20-30 tokens/second on dense ~30B models (QwQ 32B, Gemma 3 27B at Q4), similar to an M3 Ultra. If you are talking about Qwen3-Coder 30B (MoE), then both the 3090 and the M3 Ultra land around ~70 tok/s.
But even if you were right about the speed - which you are not - speed is pointless if you need a large model that won't fit into your VRAM.
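The numbers above make sense once you notice that decoding on both machines is mostly memory-bandwidth-bound: each generated token has to stream (roughly) all active weights from memory, so tok/s is capped at bandwidth divided by active-weight bytes. Here is a back-of-envelope sketch; the bandwidth figures are public specs and the model sizes are my own approximations, not measurements:

```python
def est_tokens_per_sec(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    """Rough upper bound on decode speed for a bandwidth-bound model:
    one full pass over the active weights per generated token."""
    return bandwidth_gb_s / active_weights_gb

# Approximate specs: RTX 3090 ~936 GB/s GDDR6X, M3 Ultra ~800 GB/s unified memory.
# Dense ~30B at Q4 is ~18 GB of weights; an MoE like Qwen3-Coder 30B only
# activates ~3B params per token, roughly ~1.8 GB at Q4 (assumption).
for machine, bw in [("RTX 3090", 936.0), ("M3 Ultra", 800.0)]:
    dense = est_tokens_per_sec(bw, 18.0)
    moe = est_tokens_per_sec(bw, 1.8)
    print(f"{machine}: dense ceiling ~{dense:.0f} tok/s, MoE ceiling ~{moe:.0f} tok/s")
```

Real-world throughput comes in well under these ceilings (compute overhead, KV cache reads), but the ratio is the point: the two machines have similar bandwidth, so neither one is several times faster than the other for the same model.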