Comment by nsingh2
1 day ago
What are some use cases for these small local models for individuals? It seems like for programming-related work the proprietary models are significantly better, and that's all I really use LLMs for personally.
Though I can imagine a few commercial applications where something like this would be useful. Maybe in some sort of document processing pipeline.
For me? Handling data like private voice memos, pictures, videos, calendar information, emails, some code, etc. Stuff I wouldn't want to share on the internet / have a model potentially slurp up and regurgitate as part of its memory when the data is invariably used in some future training process.
I think speech to text is the highlight use case for local models, because they are now really good at it and there's no network latency.
How does it compare to Whisper? Does it hallucinate less, or is it more capable?
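For reference, running Whisper itself entirely on-device is only a few lines with the openai-whisper package; the model size and file name below are placeholders, and this is just a baseline to compare newer local models against:

    # Minimal local transcription with openai-whisper; no audio leaves the machine.
    # "base" and the file name are placeholder choices.
    import whisper

    model = whisper.load_model("base")            # downloaded once, then runs offline
    result = model.transcribe("voice_memo.m4a")   # CPU works; a GPU is just faster
    print(result["text"])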
I just like having quick access to a reasonable model that runs comfortably on my phone, even if I'm in a place without connectivity.
I’m thinking about building a pipeline to mass generate descriptions for the images in my photo collection, to facilitate search. Object recognition in local models is already pretty good, and perhaps I can pair it with models to recognize specific people by name as well.
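A rough sketch of what that pipeline could look like, assuming a local vision model (llava here) served by Ollama on its default endpoint; the model name, prompt, and photo directory are all assumptions:

    # Sketch: caption every photo with a local vision model via Ollama's HTTP API.
    import base64
    import pathlib
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's default endpoint

    def describe(image_path: pathlib.Path) -> str:
        b64 = base64.b64encode(image_path.read_bytes()).decode()
        resp = requests.post(OLLAMA_URL, json={
            "model": "llava",   # any locally pulled vision model
            "prompt": "Describe this photo in one sentence for search indexing.",
            "images": [b64],
            "stream": False,
        })
        resp.raise_for_status()
        return resp.json()["response"].strip()

    for path in sorted(pathlib.Path("photos").glob("*.jpg")):
        print(f"{path.name}: {describe(path)}")

The one-sentence captions could then go into a local full-text index (SQLite FTS, for example) to make the collection searchable.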
Hoping to try it out with Home Assistant.
Filtering out spam SMS messages without sending all of them to the cloud.
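One way that could work, as a sketch: prompt a small local model through Ollama's chat endpoint to label each incoming message. The model name and the SPAM/HAM prompt are assumptions:

    # Sketch: on-device spam check by asking a small local model to label the text.
    import requests

    OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"   # Ollama's default endpoint

    def is_spam(sms_text: str) -> bool:
        resp = requests.post(OLLAMA_CHAT_URL, json={
            "model": "llama3.2:1b",   # any small locally pulled instruct model
            "messages": [{
                "role": "user",
                "content": "Answer with exactly one word, SPAM or HAM. "
                           "Is this text message spam?\n\n" + sms_text,
            }],
            "stream": False,
        })
        resp.raise_for_status()
        answer = resp.json()["message"]["content"].strip().upper()
        return answer.startswith("SPAM")

    print(is_spam("You won a free cruise! Reply YES to claim your prize."))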