Comment by jki275

1 day ago

I bought my M1 Max w/ 64 GB of RAM used. It's not that expensive.

Yes, the models it can run do not perform like ChatGPT or Claude 4.5, but they're still very useful.

I’m curious to hear more about how you get useful performance out of your local setup. How would you characterize the difference in “intelligence” between local models on your hardware and something like ChatGPT? I imagine speed is also a factor. Please share your experiences in as much detail as you’re willing!