Comment by tigershark

1 month ago

I’m not an expert, but I would say extremely small.

For comparison, Hunyuan Video encodes a shit-ton of videos and a rudimentary understanding of real-world physics, at very high quality, in only 13B parameters. Llama 3.3 encodes a good chunk of all the knowledge available to humanity in only 70B parameters. And this is only considering open-source models; the closed-source ones may be even more efficient.

Maybe we have different understandings of what extremely small is (including that emphasis), but an LLM is not that by definition (the first L). I'm no expert either, but the smaller figure mentioned is 13e9 parameters. If each parameter is an 8-bit integer, that's 13 GB of data (more for a regular integer or a float). That's a significant share of the long-term storage on a phone (especially Apple models), and it wouldn't even fit in RAM on most desktops, which afaik is required for useful speeds. Taking that as the upper bound and reasoning that a net which only encodes landmarks must therefore be extremely small... idk. I'd be impressed if it's down to a few dozen megabytes, but with potentially hundreds of such mildly useful neural nets, it adds up and isn't so small that you'd include it as a no-brainer either.
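
To make the back-of-the-envelope math explicit, here's a quick sketch (plain Python, my own illustration, not from either comment) of parameter count times bytes per parameter at common storage precisions:

```python
# Back-of-the-envelope model size: parameters * bytes per parameter.
# Illustrative numbers only; real checkpoint files add some overhead
# (metadata, tokenizer, non-quantized layers, etc.).

PARAMS = {
    "Hunyuan Video": 13e9,  # 13B parameters
    "Llama 3.3": 70e9,      # 70B parameters
}

# bits per parameter at common precisions
PRECISIONS = {"fp32": 32, "fp16": 16, "int8": 8, "int4": 4}

for name, n_params in PARAMS.items():
    for prec, bits in PRECISIONS.items():
        gb = n_params * bits / 8 / 1e9  # bits -> bytes -> decimal GB
        print(f"{name:>13} @ {prec}: {gb:6.1f} GB")

# 13B @ int8 -> 13.0 GB, matching the estimate above;
# fp16 doubles that to 26 GB, fp32 to 52 GB.
```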