
Comment by naikrovek

5 hours ago

Because, like every other modern Mac, the GPU in my Mac uses the same API (Metal) as the GPU in your Mac.

Also, on a Mac with 32GB of RAM, 24GB of that (75%) is available to the GPU, and that makes the models run much faster. On my 64GB MacBook Pro, 48GB is available to the GPU. Have you priced an Nvidia GPU with 48GB of VRAM? It’s simply cheaper to do this on Macs.
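For concreteness, the arithmetic above can be sketched as follows. This is just the commenter's 75% rule of thumb, not an official Apple constant; on recent macOS the actual ceiling is governed by the `iogpu.wired_limit_mb` sysctl, which can be raised beyond the default:

```python
# Sketch of the quoted rule of thumb: roughly 75% of an Apple Silicon
# Mac's unified memory is available to the GPU by default.
# The 0.75 factor comes from the comment above, not from Apple docs.

def default_gpu_memory_gb(total_ram_gb: float, fraction: float = 0.75) -> float:
    """Estimate GPU-usable unified memory under the quoted 75% rule."""
    return total_ram_gb * fraction

print(default_gpu_memory_gb(32))  # 24.0 GB on a 32GB Mac
print(default_gpu_memory_gb(64))  # 48.0 GB on a 64GB MacBook Pro
```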

Macs are just better for getting started with this kind of thing.

Fair enough for GPU-intensive stuff like running Qwen locally. But do you really need a GPU for decent local TTS? I run Parakeet on the CPU alone.