Comment by z3ugma

14 hours ago

I don't really understand what this is for... there is a lot of ML-researcher talk on the GH page about the model architecture, but how should I use it?

Is it a replacement for Kimi 2.7, Claude Haiku, or Gemini Flash 3.1 lite? A conversational LLM for situations that are mostly tool-calling, like coding and conversational AI?

It is for building agentic capabilities into very small devices like phones, glasses, watches and more. Does that make sense?

  • [flagged]

• A local model that can do better than Siri or Alexa as a personal or home assistant is, in my eyes, very useful. To me, being able to run on a phone, watch, or glasses means low-powered AI, not necessarily that I want my phone, watch, or glasses to run things for me.

      My Siri use has narrowed down to just setting timers. And even then, it still has my phone call people in the middle of the night. Siri is pretty dumb and does not do what I want it to. I'd rather be able to customize an assistant to myself.

      I am also thinking of automation in my day-to-day workflow for work.

      1 reply →

    • Throwing a few things out - HN has changed over the years, but people make stuff to make stuff. There don't need to be product use cases. The tone of the comment goes against the spirit of HN - likely the reason for downvotes.

      That aside - a very small model that takes text and outputs structured JSON according to a spec is nice. It lets you turn natural language into a user action. For example, command palettes could benefit from this.

      If you can do a tiny bit of planning (todo) and chain actions, it seems reasonable that you could traverse a rich state space to achieve some goal on behalf of a user.

      Games could use something like it for free-form dialog while still enforcing predefined narrative graphs, etc.

      I'm sure you could come up with more. It's a fuzzy function.
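      To make the "text in, structured JSON out" idea concrete, here is a minimal sketch of the application side: a hypothetical command-palette action spec and a validator that checks the model's raw output against it before anything executes. The model call itself is stubbed with a fixed string, and all action names here are illustrative, not from any real API.

```python
import json

# Hypothetical command-palette action spec. In practice the same spec would
# be rendered into the model's prompt so it emits only matching JSON.
ACTIONS = {
    "open_file": {"path": str},
    "set_theme": {"name": str},
    "create_timer": {"minutes": int},
}

def parse_action(model_output: str) -> dict:
    """Validate raw model text against the spec.

    Returns the parsed action dict, or raises ValueError so the caller
    can re-prompt the model instead of executing malformed output.
    """
    action = json.loads(model_output)
    name = action.get("action")
    spec = ACTIONS.get(name)
    if spec is None:
        raise ValueError(f"unknown action: {name!r}")
    args = action.get("args", {})
    for key, typ in spec.items():
        if not isinstance(args.get(key), typ):
            raise ValueError(f"bad or missing argument: {key}")
    return action

# Stub standing in for the small model's constrained JSON output.
raw = '{"action": "create_timer", "args": {"minutes": 10}}'
print(parse_action(raw))
```

      Chaining actions, as suggested above, would then just be looping this parse step over a short model-generated plan, rejecting any step that fails validation.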

      3 replies →