Comment by spacechild1

8 hours ago

That's one of several reasons why I'm trying not to rely too much on LLMs. The prospect of only being able to code with a working internet connection and a subscription to some megacorp service is not particularly appealing to me.

Local/open LLMs are a thing, though. You can build a server for hosting decent-sized (100-200B parameter) models at home for a few thousand dollars. They may not be Opus-level, but hopefully we'll get something matching current SOTA that we can run locally before the megacorps get too greedy.
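
And the tooling side is already pretty painless. Here's a minimal sketch of talking to a locally hosted model, assuming something like llama.cpp's llama-server (or vLLM) is running on port 8080 and exposing its usual OpenAI-compatible endpoint; the port and the "local" model name are just placeholders for whatever your server uses:

    # Query a locally hosted model through an OpenAI-compatible
    # endpoint (e.g. llama.cpp's llama-server, vLLM, etc.).
    from openai import OpenAI

    # Local servers don't check the API key, but the client requires one.
    # Assumption: the server is listening on localhost:8080.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    response = client.chat.completions.create(
        model="local",  # local servers typically ignore or alias this name
        messages=[{"role": "user", "content": "Write a quicksort in C."}],
    )
    print(response.choices[0].message.content)

The same client works unchanged against a shared box, too; you'd just swap localhost for that machine's address.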

Alternatively, you could find other people to share the hardware cost with and run larger models (like Kimi-K2.5 at 1.1T params).