Comment by jasonjmcghee
13 hours ago
Throwing a few things out - HN has changed over the years, but people make stuff to make stuff. There don't need to be product use cases. The tone of the comment goes against the spirit of HN - likely the reason for downvotes.
That aside - a very small model that takes text and outputs structured JSON according to a spec is nice. It lets you turn natural language into a user action. For example, command palettes could benefit from this.
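A minimal sketch of that idea, with the small model stubbed out (the action names, schema, and `fake_small_model` are all hypothetical, just to show the shape of the loop):

```python
import json

# Hypothetical action spec for a command palette; a tiny local model would be
# prompted to emit JSON matching exactly one of these actions.
ACTIONS = {
    "open_file": {"path": str},
    "toggle_theme": {"mode": str},
}

def fake_small_model(prompt: str) -> str:
    # Stand-in for the small model; a real one would be constrained/fine-tuned
    # to emit JSON conforming to the spec above.
    return '{"action": "open_file", "args": {"path": "notes.md"}}'

def parse_action(raw: str) -> dict:
    """Validate model output against the spec; reject anything off-spec."""
    data = json.loads(raw)
    name, args = data["action"], data["args"]
    spec = ACTIONS[name]  # KeyError here means an unknown action: caller can retry
    for key, typ in spec.items():
        if not isinstance(args.get(key), typ):
            raise ValueError(f"bad argument {key!r} for action {name!r}")
    return data

cmd = parse_action(fake_small_model("open my notes file"))
```

The validation step is the point: because the output is structured, anything the model hallucinates outside the spec is rejected cheaply instead of executed.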
If you can do a tiny bit of planning (todo) and chain actions, it seems reasonable that you could traverse a rich state space to achieve some goal on behalf of a user.
Games could use something like it for free-form dialog while still enforcing predefined narrative graphs, etc.
I'm sure you could come up with more. It's a fuzzy function.
> people make stuff to make stuff. There don't need to be product use cases.
OK. Great! So it doesn't need to be a commercial product. But does it do something (anything?) interesting? I'm interested in your games example, I'd love to see it done in real life. IIUC, game AIs are actually much more constrained and predictable for playability reasons. If you let it go fully free-form, a plurality of players have a "WTF??!?" experience, which is super Not Good.
It doesn't have to do anything interesting - it's completely fascinating all on its own. If you understand anything about the math and science behind LLMs, you'll understand that this is an achievement worthy of sharing to a community like HN.
That being said, small models like these have plenty of use cases. They allow for extra "slack" to be introduced into a programmatic workflow in a compute constrained environment. Something like this could help enable the "ever present" phone assistant, without scraping all your personal data and sending it off to Google/OpenAI/etc. Imagine if keywords in a chat would then trigger searches on your local data to bring up relevant notes/emails/documents into a cache, and then this cache directly powers your autocomplete (or just a sidebar that pops up with the most relevant information). Having flexible function calling in that loop is key for fault tolerance and adaptability to new content and contexts.
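A toy sketch of that local-retrieval loop (all document names and contents here are made up for illustration; a real version would use a proper local index and the small model for keyword extraction):

```python
# Keywords in a chat message trigger a search over local documents; matches
# land in a cache that a sidebar or autocomplete could read. Nothing leaves
# the device.
LOCAL_DOCS = {
    "flight-itinerary.txt": "flight to Denver departs 9am gate B12",
    "grocery-list.txt": "eggs milk coffee",
    "meeting-notes.txt": "Denver offsite planning, book rooms by Friday",
}

def search_local(keywords):
    """Return every local document whose text contains any keyword."""
    return {name: text for name, text in LOCAL_DOCS.items()
            if any(k in text.lower() for k in keywords)}

def update_cache(chat_message, cache):
    # Crude keyword extraction; this is where a small model could do far
    # better than splitting on whitespace.
    keywords = {w.lower() for w in chat_message.split() if len(w) > 3}
    cache.update(search_local(keywords))
    return cache

cache = update_cache("When is my Denver trip again", {})
```

Here a mention of "Denver" in chat would surface both the itinerary and the offsite notes without any explicit search by the user.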
It's cool. Enjoy it.
> Something like this could help enable the "ever present" phone assistant, without scraping all your personal data and sending it off to Google/OpenAI/etc
OK so show me what that's for. Show me something useful you can do with that ability.
> Imagine if keywords in a chat would then trigger searches on your local data to bring up relevant notes/emails/documents into a cache, and then this cache directly powers your autocomplete (or just a sidebar that pops up with the most relevant information).
I'm really trying but... idgi? I truly cannot imagine how this would improve my life in any way...
> It's cool. Enjoy it.
No. It sounds like a useless complication on my watch. I don't fucking care if it can tell me the phase of the moon. I can look up at the sky and see the moon and know what phase it is.
EDIT: You say:
> If you understand anything about the math and science behind LLMs, you'll understand that this is an achievement worthy of sharing to a community like HN.
OK. So educate me. Tell me what I'm missing.