Comment by 1dom
10 hours ago
> LLMs which the weights aren't available are an example of when it's not local LLMs, not when the model happens to be large.
I agree. My point was that most people aren't thinking of models this large when they talk about local LLMs. That's what I said, right? This is supported by the download counts on hf: the most downloaded local models are significantly smaller than 1T parameters, typically in the 1–12B range.
I'm not sure I understand what point you're trying to make here?
Mostly a "we know local LLMs as being this, and every variant of this can provide real value regardless of which is most commonly referenced" point. I.e. large local LLMs aren't just something people mess around with; they often provide a lot of value to relatively few people rather than a little value to relatively many people. Which modality and type brings the most value is largely a matter of opinion for the user getting that value, not decided solely by which option runs on consumer hardware.
You're of course correct that smaller LLMs are more commonly deployed; that just wasn't the part I was responding to.