
Comment by xnzakg

4 hours ago

Local/open LLMs are a thing though. You can build a server for hosting decent sized (100-200B) models at home for a few k$. They may not be Opus-level, but hopefully we can get something matching current SOTA that we can run locally, before the megacorps get too greedy.

Alternatively you could find some other people to share the HW cost and run some larger models (like Kimi-K2.5 at 1.1T params).
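For a rough sense of what "hosting at home" means in memory terms, here is a back-of-the-envelope sketch (the model sizes come from the comment above; the quantization levels and the simplification of ignoring KV-cache and activation overhead are my own assumptions):

```python
# Rough weight-memory estimate for running a model locally.
# memory ≈ param_count × bytes_per_param; real deployments also need
# KV cache and activation memory on top of this, which is ignored here.

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# Sizes taken from the comment; 4-bit/8-bit quantization is a common
# choice for local inference but is an assumption here, not a quote.
for name, size_b in [("200B-class", 200), ("Kimi-K2.5 (1.1T)", 1100)]:
    for bits in (4, 8):
        print(f"{name}: ~{weight_gb(size_b, bits):.0f} GB at {bits}-bit")
```

So a 200B model at 4-bit already needs on the order of 100 GB just for weights, and a 1.1T model around 550 GB, which is why sharing hardware costs for the larger models makes sense.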

> You can build a server for hosting decent sized (100-200B) models at home for a few k$.

That's definitely not an option for me :-D

True open LLMs could be a viable solution in the future, but only if they can be operated and sustained on a community basis. I have too little insight into the actual costs of running such models to judge whether this would be feasible. And then there is always the problem of how to deal with bad actors. This is anything but trivial.

At the moment, I'd rather spend time working on sharpening my actual programming and thinking skills :) I actually enjoy the act of programming and see it as part of my creative expression. Fortunately, I don't code for a living (at least not directly), so nobody can tell me how to write my software.