Comment by zamadatix
13 days ago
Local LLMs are just LLMs people run locally. It's not a definition of size, feature set, or popularity. What the "real" value of local LLMs is depends on whom you ask: the person who runs small local LLMs will tell you the real value is in small models, the person who runs large local LLMs will tell you it's in large ones, those who use the cloud will say the value is in shared compute, and those who don't like AI will say there's no value in any of them.
LLMs whose weights aren't available are an example of when it's not a local LLM, not when the model happens to be large.
> LLMs whose weights aren't available are an example of when it's not a local LLM, not when the model happens to be large.
I agree. My point was that most people aren't thinking of models this large when they talk about local LLMs. That's what I said, right? This is supported by the download counts on Hugging Face: the most downloaded local models are significantly smaller than 1T parameters, typically 1-12B.
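For what it's worth, this is easy to eyeball yourself. A minimal sketch, assuming the `huggingface_hub` Python package; the `text-generation` filter and the limit of 15 are just illustrative choices, not the only way to slice it:

```python
# Rough sketch: list the most-downloaded text-generation models on the
# Hugging Face Hub with their download counts. Assumes `huggingface_hub`
# is installed (pip install huggingface_hub); no auth token is needed for
# public listings.
from huggingface_hub import list_models

# Sort by downloads; "text-generation" narrows results to LLM-style models.
# The top entries are typically 1-12B-parameter models, not ~1T ones.
for model in list_models(filter="text-generation", sort="downloads", limit=15):
    downloads = model.downloads or 0  # may be None for some entries
    print(f"{model.id}: {downloads:,} downloads")
```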
I'm not sure I understand what point you're trying to make here?
Mostly a "we know local LLMs as being this, and all of the mentioned variants can provide real value regardless of which is most commonly referenced" point. I.e. large local LLMs aren't just something people mess with; they often provide a lot of value to relatively few people, whereas small local LLMs provide a little value to relatively many. Which modality and type brings the most value is largely a matter of opinion for the user getting the value, not just a question of which option runs on consumer hardware.
You're of course right that smaller LLMs are more commonly deployed; it's just not the part I was really responding to.