Comment by AussieWog93

1 day ago

An even easier way to get into this is simply to download a program called LM Studio. You can mount a model and start chatting with it within 10-15 minutes, with no experience whatsoever and no configuration at all.
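For anyone who wants to go one step past the chat window: LM Studio can also run a local server that speaks an OpenAI-compatible API, so you can script against the loaded model. A minimal sketch (the default port 1234, the model name, and the exact payload shape are assumptions; check the app's server/developer settings):

```python
import json
import urllib.request

# Default address LM Studio's local server typically listens on
# (an assumption; verify in the app's server settings).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model"):
    """Build a single-turn, OpenAI-style chat completion payload."""
    return {
        "model": model,  # LM Studio generally answers with whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt):
    """Send the prompt to the local server; requires LM Studio to be running."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # OpenAI-compatible response shape: first choice's message content
    return body["choices"][0]["message"]["content"]
```

Because the API mimics OpenAI's, most existing client code can be pointed at the local server just by changing the base URL.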

That said, the last time I tried local LLMs (around when gpt-oss came out), they still seemed super gimmicky, or at least niche (I can imagine privacy concerns being a big deal for some people). There are very few use cases where you want an LLM but wouldn't benefit immensely from using a SOTA model like Claude Opus.