Comment by pavel_lishin
2 months ago
> …all things that on-device LLMs can already do, for example my MacBook can run Llama 4 (albeit slowly) and it can generate recipes for me.
I've run a local LLM, and while I probably didn't do a great job optimizing things, it was crawling. I would absolutely not stand there for 20 minutes while my fridge stutters out a recipe for kotleti, probably getting some of it wrong and requiring a re-prompt. (A rough sketch of how I'd measure that below.)
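For the curious, here's roughly how you could time that kind of local generation with llama-cpp-python; the model file, thread count, and prompt are illustrative assumptions, not my actual setup:

    import time
    from llama_cpp import Llama

    # Illustrative model file and thread count, not my actual setup.
    llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_threads=8)

    start = time.time()
    out = llm("Write a recipe for kotleti.", max_tokens=256)
    elapsed = time.time() - start

    tokens = out["usage"]["completion_tokens"]
    print(f"{tokens} tokens in {elapsed:.1f}s ({tokens / elapsed:.1f} tok/s)")

On modest hardware with an unquantized or poorly configured model, numbers in the low single digits of tokens per second are what make it feel like "crawling."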
Not everything needs to be a genie.
I guess I was thinking about a smart fridge of the type you’d find in the year, say, 2031.
Why not daydream about a Starfleet replicator?
How many GPUs were you running?
I'm not sure, but how many GPUs do we expect a refrigerator to have?