Windows has native (cloud-based) dictation software built in[1], so there's likely less demand for it. Nonetheless, there are still a handful of community options to choose from.
[1] https://support.microsoft.com/en-us/windows/use-voice-typing...
I've been using Chirp which uses parakeet on Windows. Learned about it here:
https://news.ycombinator.com/item?id=45930659
Works great for me!
Handy has Windows support. https://handy.computer/
Because, like all other modern Macs, the GPU in my Mac uses the same API (Metal) as the GPU in yours.
Also, on a Mac with 32GB of RAM, 24GB of that (75%) is available to the GPU, which makes the models run much faster. On my 64GB MacBook Pro, 48GB is available to the GPU. Have you priced an Nvidia GPU with 48GB of VRAM? It's simply cheaper to do this on a Mac.
Macs are just better for getting started with this kind of thing.
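If you want to check the limit on your own machine, Metal exposes it directly. A minimal sketch (assumes a Mac with a Metal-capable GPU; `recommendedMaxWorkingSetSize` is the property that reports how much memory Metal expects the GPU can use, which on Apple Silicon is typically around 75% of system RAM):

```swift
import Metal

// Query how much unified memory Metal will let the GPU address.
// On a 32GB Apple Silicon Mac this typically reports roughly 24 GiB.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device found")
}
let gib = Double(device.recommendedMaxWorkingSetSize) / Double(1 << 30)
print(String(format: "GPU working set limit: %.1f GiB", gib))
```

The exact fraction varies by machine and OS version, but it's a quick way to see how much of your RAM a local model can actually use on the GPU.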
Fair enough for GPU-intensive stuff like running Qwen locally. But do you really need a GPU for decent local STT? I run parakeet on CPU alone.