Comment by owenpalmer
4 days ago
> Kinda like a CD-ROM/Game cartridge, or a printed book, it only holds one model and cannot be rewritten.
Imagine a slot on your computer where you physically pop out and replace the chip with different models, sort of like a Nintendo DS.
That slot is called USB-C. I can fully imagine inference ASICs coming in powerbank form factor that you'd just plug and play.
Like the chip software in Gibson's Sprawl, from the microsoft to the ROM cowboy to the Aleph, the endgame of computer-tool distribution is single-use chunks of quasi-biological computronium.
Michael Bay just read "computronium" and spawned an 8 movie franchise in his head.
This would be a hell of a hot power bank. It uses about as much power as my oven. So probably more like inside a huge cooling device outside the house. Or integrated into the heating system of the house.
(Still compelling!)
*the whole server uses 2.2 kW or whatever, not a single board. I think that was for 8 boards or something.
2 replies →
Not if you need 200 W of power to run inference.
USB-C can do up to 240 W. These days I power all my devices with a USB hub, even my LiPo charger.
2 replies →
Pretty sure it'd just be a thumbdrive. Are the Taalas chips particularly large in surface area?
The only product they've announced at the moment [0] is a PCI-e card. It's more like a small power bank than a big thumb drive.
But sure, the next generation could be much smaller. It doesn't require battery cells, (much) heat management, or ruggedization, all of which put hard limits on how much you can miniaturise power banks.
[0] https://taalas.com/the-path-to-ubiquitous-ai/
6 replies →
800 mm², about 28 mm per side (√800 ≈ 28.3), if imagined as a square. Also, 250 W of power consumption.
The form factor should be anything but thumbdrive.
5 replies →
Yes, bigger than a 5090's GB202 ASIC! :)
> USB-C
With these speeds you can run it over USB2, though maybe power is limiting.
You would likely need external power anyway.
USB-C is just a form factor and has nothing to do with which protocol you run at which speeds.
1 reply →
That's the kind of hardware I'm rooting for, since it'll encourage open-weights models, and it would be much more private.
In fact, I was thinking: if robots of the future had such slots, they could swap in different models depending on the task they're given. Like a hardware MoE.
> Since it'll encourage open-weights models
Is this accurate? I don't know enough about hardware, but perhaps someone could clarify: how hard would it be to reverse engineer this to "leak" the model weights? Is it even possible?
There are some labs that sell access to their models (mistral, cohere, etc) without having their models open. I could see a world where more companies can do this if this turns out to be a viable way. Even to end customers, if reverse engineering is deemed impossible. You could have a device that does most of the inference locally and only "call home" when stumped (think alexa with local processing for intent detection and cloud processing for the rest, but better).
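The "call home when stumped" pattern above can be sketched roughly as follows. This is a hypothetical illustration, not any real API: the names (`run_local`, `call_cloud`, `CONF_THRESHOLD`) and the confidence-gated handoff are all assumptions for the sake of the example.

```python
# Hypothetical sketch of local-first inference with a cloud fallback.
# All names and the confidence gate are illustrative assumptions.
CONF_THRESHOLD = 0.8

def run_local(prompt: str) -> tuple[str, float]:
    # Stand-in for on-device inference; returns (answer, confidence).
    return "local answer", 0.9

def call_cloud(prompt: str) -> str:
    # Stand-in for a remote API call, used only when the local model is unsure.
    return "cloud answer"

def answer(prompt: str) -> str:
    reply, confidence = run_local(prompt)
    if confidence >= CONF_THRESHOLD:
        return reply           # handled fully on-device, fully private
    return call_cloud(prompt)  # escalate only when stumped
```

The privacy win is that only the escalated minority of prompts ever leaves the device.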
It's likely possible to extract model weights from the chip's design, but you'd need tooling at the level of an Intel R&D lab, not something any hobbyist could afford.
I doubt anyone would have the skills, wallet, and tools to RE one of these and extract model weights to run them on other hardware. Maybe state actors like the Chinese government or similar could pull that off.
This is what I've been wanting! Just like those eGPUs you would plug into your Mac. You would have a big model or device capable of running a top-tier model under your desk. All local, completely private.
A cartridge slot for models is a fun idea. Instead of one chip running any model, you get one model or maybe a family of models per chip at (I assume) much better perf/watt. Curious whether the economics work out for consumer use or if this stays in the embedded/edge space.
Plug it into the skull bone. Neuralink + a slot for a model that you can buy in a grocery store like a prepaid Netflix card.
We better solve the energy usage and cooling first otherwise that will be a very spicy body mod.
Would somewhat work except for the power usage.
I doubt it would scale linearly, but for home use 170 tokens/s at 2.5 W would be cool; 17 tokens/s at 0.25 W would be awesome.
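The figures in that comment follow from a naive linear extrapolation, assuming the implied baseline of 17,000 tokens/s at the 250 W mentioned upthread. Both the baseline and the linearity are assumptions; real silicon won't scale like this, which is exactly the commenter's caveat.

```python
# Naive linear power-scaling sketch. The 17,000 tokens/s at 250 W
# baseline is the figure implied by the comment, not a confirmed spec,
# and real chips do not scale throughput linearly with power.
def tokens_per_sec(power_w: float,
                   base_tps: float = 17_000,
                   base_power_w: float = 250) -> float:
    """Linearly extrapolate throughput from a (throughput, power) baseline."""
    return base_tps * power_w / base_power_w

for p in (250, 2.5, 0.25):
    print(f"{p:>6} W -> {tokens_per_sec(p):,.0f} tokens/s")
```

Under these assumptions, 2.5 W gives 170 tokens/s and 0.25 W gives 17 tokens/s, matching the comment's numbers.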
On the other hand, this may be a step towards positronic brains (https://en.wikipedia.org/wiki/Positronic_brain)
Yeah maybe you can call it PCIe.