Comment by anonzzzies
2 days ago
I have been using 4.6 on Cerebras (or Groq with other models) since it dropped and it is a glimpse of the future. If AGI never happens but we manage to optimise things so I can run that on my handheld/tablet/laptop device, I'll be beyond happy. And I guess that might happen. Maybe with custom inference hardware like Cerebras. But seeing this generate at that speed is just jaw-dropping.
Apple's M5 Max will probably be able to run it decently, as it should fix the biggest issue with the current lineup (prompt processing) in addition to bringing a bandwidth bump.
That should easily run an 8-bit (~360GB) quant of the model. It's probably going to be the first genuinely portable machine that can run it. Strix Halo does not come with enough memory (or bandwidth) to run it (you'd need almost 180GB for weights + context even at 4 bits), and there are no laptops available with the top-end (Max+ 395) chip, only mini PCs and a tablet.
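For concreteness, here is the napkin math behind those numbers, as a rough sketch: I'm assuming a GLM-4.6-class MoE model (~355B total parameters, ~32B active per token) and that decode speed is roughly bounded by streaming the active weights from memory once per generated token. The M5 Max bandwidth figure is a placeholder guess, since the chip is unannounced.

    # Napkin math, assuming a GLM-4.6-class MoE model: ~355B total
    # parameters, ~32B active per token (my assumptions). Decode speed
    # is approximated as memory bandwidth divided by the bytes of
    # active weights streamed per token; KV cache and overheads ignored.

    TOTAL_PARAMS = 355e9   # assumed total parameter count
    ACTIVE_PARAMS = 32e9   # assumed active parameters per token

    def weights_gb(params: float, bits: int) -> float:
        """Approximate weight footprint in GB at a given quantization."""
        return params * bits / 8 / 1e9

    def decode_ceiling_tok_s(bandwidth_gb_s: float, bits: int) -> float:
        """Rough upper bound on decode speed: each generated token has
        to stream the active weights from memory once."""
        return bandwidth_gb_s / (ACTIVE_PARAMS * bits / 8 / 1e9)

    # Strix Halo is ~256 GB/s; the M5 Max number is a guess.
    machines = [("Strix Halo (~256 GB/s)", 256),
                ("M5 Max (guess, ~550 GB/s)", 550)]

    for bits in (8, 4):
        print(f"{bits}-bit weights: ~{weights_gb(TOTAL_PARAMS, bits):.0f} GB")
        for name, bw in machines:
            print(f"  {name}: ~{decode_ceiling_tok_s(bw, bits):.0f} tok/s ceiling")

At 4 bits the weights alone come to ~178GB, which is where the "almost 180GB" figure above comes from, before you add any KV cache for context.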
Right now you only get the performance you want out of a multi-GPU setup.
Cerebras and Groq both have their own novel chip designs. If they can scale and create a consumer-friendly product, that would be great, but I believe their speeds come from having all of their chips networked together, in addition to hardware designed specifically for LLM workloads. AGI will likely happen at the data center level before we can get on-device performance equivalent to what we have access to today (affordably), but I would love to be wrong about that.