Comment by reactordev

1 day ago

I run local models on Mac Studios and they are more than capable. Don't spread FUD.

You're spreading FUD. There's nothing you can run locally that's on par with the speed and intelligence of a SOTA model.

  • You may be correct about the level of models you can actually run on consumer hardware, but it's not FUD, and you're being needlessly aggressive here.

  • Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT-5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works well after quantization.