Comment by pavel_lishin

2 months ago

> …all things that on-device LLMs can already do, for example my MacBook can run Llama 4 (albeit slowly) and it can generate recipes for me.

I've run a local LLM, and while I probably didn't do a great job optimizing things, it was crawling. I would absolutely not stand there for 20 minutes while my fridge stutters out a recipe for kotleti, probably getting some of it wrong and requiring a re-prompt.

Not everything needs to be a genie.