Comment by cyanydeez
10 hours ago
have you seen this: https://chatjimmy.ai/
It's quite impressive what purpose-built inference can/will do once everyone stops trying to build the single best general-purpose model.
Wow impressive. What's the story with this?
It's a tech demonstrator for a company that turns models into custom silicon for fast inference — in this case, Llama 3.1 8B. https://taalas.com/products/
Is this an ASIC? Or FPGA? Or something even more exotic?
I’m guessing it’s some form of ASIC, because I can’t imagine crafting Llama’s logic on an FPGA is a quick or easy job. Not that doing it on an ASIC is a piece of cake either.
Taalas' hardware implementation of Llama 3.1 8B. They claim 16k tok/s vs Cerebras at 2k. https://taalas.com/products/
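To put those throughput claims in perspective, here's a quick back-of-envelope sketch. The 16k and 2k tok/s figures come from the comment above; the response length is a hypothetical example, not something either vendor has published:

```python
# Back-of-envelope: what the claimed decode rates mean for a single reply.
# 16k tok/s (Taalas claim) vs 2k tok/s (Cerebras figure cited above).
# response_tokens is an assumed example value.

def generation_time_s(num_tokens: int, tokens_per_s: float) -> float:
    """Time to stream num_tokens at a steady decode rate."""
    return num_tokens / tokens_per_s

response_tokens = 1_000  # a fairly long chat reply (assumption)

taalas_s = generation_time_s(response_tokens, 16_000)
cerebras_s = generation_time_s(response_tokens, 2_000)

print(f"Taalas:   {taalas_s:.4f} s")   # 0.0625 s
print(f"Cerebras: {cerebras_s:.4f} s") # 0.5000 s
```

So at the claimed rate a 1,000-token reply streams out in well under a tenth of a second — fast enough that the bottleneck becomes the network, not the model.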