Comment by SubiculumCode
1 day ago
That seems like a lot of rationalization to me. China is pursuing these because they cannot compete on the frontier. Yes, there is a possibility that all that compute is not needed, but it is a rather remote one, and there is no doubt that, given the choice, China would be pursuing frontier model building with closed, proprietary-only offerings.
All that compute is not needed. We have an existence proof from biology in the form of natural intelligence that much greater efficiency is possible. However, achieving dramatic improvements in compute efficiency will depend on unpredictable scientific breakthroughs. Personally I suspect that an entirely new hardware architecture will be needed, although I don't have any hard evidence to back that up.
>We have an existence proof from biology in the form of natural intelligence that much greater efficiency is possible.
It's only a proof that it's possible with 18+ years of training.
In certain ways my dog has more generalized intelligence than any LLM, and I trained her in only a few months with a modest investment in dog treats.
>from biology ... much greater efficiency is possible
Those are much more specialized models with pretty mediocre tokens per second.
Perhaps tokens are a dead end?
1 reply →
[dead]
I dunno, DeepSeek v4 Pro is roughly on par as far as I can tell; maybe not with 5.5 Pro in all areas quite yet, but close.
I think China sees the application layer on top of models as mattering more than the models themselves, so they don't need to gatekeep the models as much.
China is competing in value AI because they cannot work at the frontier, but how is this bad at all? It’s like how the USA has the best drones but they are a few million dollars apiece while China has DJI.
If China could work at the frontier, I don’t know, I kind of think they would still be dumping a lot of resources into exploring the value side since they have that culture already in place.
I did not imply it was bad. I implied that competing in value AI is the only option that China-based AI companies have due to limitations in compute.
This is true, but I don't think they would all be rushing to the frontier even if that option were available. Chinese companies are used to turning constraints to their benefit; they would see the price of working at the frontier and make hard choices that maybe we can ignore in the States.
Forgive me if this is a naive assumption, but wouldn’t large language models be fundamentally different for a language that is largely symbols? Again, my understanding of Mandarin is limited if it exists at all.
All tokens are symbols. All of the frontier models speak Mandarin.
This is why misspellings and homophones are tells of human writing. LLMs strongly prefer word-level tokens, and their word substitutions follow semantic similarity rather than the more human auditory similarity.
5 replies →
"飞机" and "airplane" aren't fundamentally different in terms of how they're represented to a computer. Especially for an LLM, where tokenization likely turns each of those into a single token.
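A minimal sketch of the point above: to a computer both strings are just byte sequences, and a byte-level BPE tokenizer builds its vocabulary on top of those bytes the same way for either script. (The exact token counts depend on the specific tokenizer, so those aren't claimed here.)

```python
# Both "飞机" (Mandarin for "airplane") and "airplane" reach the model
# as UTF-8 bytes; neither representation is fundamentally different.
chinese = "飞机"
english = "airplane"

chinese_bytes = chinese.encode("utf-8")
english_bytes = english.encode("utf-8")

print(chinese_bytes)        # b'\xe9\xa3\x9e\xe6\x9c\xba' -- 6 bytes
print(english_bytes)        # b'airplane'                 -- 8 bytes

# A frequency-trained BPE vocabulary will often merge each of these
# common words into a single token, regardless of script.
```

So from the model's perspective, a "language that is largely symbols" isn't a special case: every language is symbols all the way down.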
> China is pursuing these because they cannot compete on the frontier.
? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao. Local + efficient is clearly the future
> ? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao
Unfortunately, local inference is inefficient: hundreds of times less efficient than cloud. When you answer one request at a time, you still have to fetch all active weights into the compute units once per token. When you run a batch of 300, you load the weights once and compute 300 tokens at a time.
Compared to cloud, local inference is also less flexible. You can't scale up 5x or 20x, can't absorb spikes, and you pay for the hardware whether you use it or not, even though your utilization factor is very low, maybe 5%. And to run a decent model, your system costs $2,000 or more.
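The batching argument above can be put into back-of-envelope numbers. This is a rough sketch under stated assumptions (a hypothetical 70B-parameter model in 16-bit weights on hardware with ~1 TB/s memory bandwidth, ignoring KV-cache traffic and compute limits), not a measurement:

```python
# Why batch-1 local decoding wastes bandwidth compared to batched cloud
# serving. Decode is roughly memory-bandwidth bound: every forward pass
# must stream all active weights once, no matter the batch size, and
# yields one token per sequence in the batch.

WEIGHT_BYTES = 70e9 * 2    # assumed: 70B params at 2 bytes each (bf16)
BANDWIDTH = 1.0e12         # assumed: ~1 TB/s memory bandwidth

def tokens_per_sec(batch_size: int) -> float:
    """Aggregate decode throughput when weights dominate memory traffic."""
    passes_per_sec = BANDWIDTH / WEIGHT_BYTES   # full weight reads per second
    return passes_per_sec * batch_size          # one token per sequence per pass

local = tokens_per_sec(1)     # one user, one request at a time
cloud = tokens_per_sec(300)   # 300 concurrent requests batched together

print(f"batch=1:   {local:.1f} tok/s total")
print(f"batch=300: {cloud:.1f} tok/s total ({cloud / local:.0f}x throughput)")
```

Under these assumptions the batched server gets 300x the aggregate throughput from the same weight traffic, which is where the "hundreds of times" gap comes from, before even counting the ~5% utilization of an always-on home box.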
AI boosters cling to this notion because it's the only way the massive data center buildouts make any sense at all. I guess you could say the US is winning the frontier AI race. Okay. But I'm never going to grant a cloud service access to all the contents of my hard drive; that's just never going to happen. So if you expect me, and a lot of people who feel similarly, to get on this train, you'd better have a local, lightweight model too, or we're not even having a discussion. The answer is just no.
The thing is, frontier model providers don’t take your feelings into account even a little bit. It’s totally irrelevant to the discussion about the service they can provide, because that service is predicated on access to high power GPU slices that local models can’t touch. Those providers won’t be in an existential crisis because some people choose the privacy route, it’s a cost of doing business.
4 replies →
> ? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao. Local + efficient is clearly the future
Corporate America is where the money is, and corporate America will dictate what products are successful by virtue of spend. Individuals aren't going to be paying $100s or $1000s/month en masse for these models but businesses will be. Being local and efficient isn't that important at this stage but even so as American companies continue to scale and invest they'll be able to make those models more local and efficient if the market wants it. Sort of like how you had a big, giant desktop computer and now you've got a super computer in your phone which is in your pocket. Going straight to "local and efficient" means going straight to being behind because at some point, perhaps now even, the local and efficient model won't be able to keep up.
For some reason people think that they somehow know something that Google or Nvidia or whoever, with hundreds of billions of dollars of real money at stake don't already know and it's both amusing and bizarre to see this play out again and again in off-hand comments like "lol tiny benefits".
You buy an iPhone even though the cheap-o Wal-Mart Android phone for $100 "does the same thing". Except that in this case the Android phone just puts you out of business while those spending big money for "tiny benefits" beat you in the market.
Corporate America is the past. Momentum is carrying capital out of the country. Pay attention to the rate of change.
5 replies →
> You buy an iPhone even though the cheap-o Wal-Mart Android phone for $100 "does the same thing".
People buy iPhones because of status signalling and network effects, neither of which appears to apply to AI model choice. LLMs are already rapidly on the way to being interchangeable commodities.
13 replies →
Well, China is consistently 6 months behind the frontier labs (possibly because they can harvest data from released frontier models). If the scaling continues, the US will win; if not, China will, as the models converge.
The non-release of Mythos tells you the future of that, so long as they can keep the weights from being exfiltrated. Once models become true national security threats, they won't be released in their full form, and the hitch-a-ride approach becomes less capable of keeping up.
How would they prevent distillation? That would seem pretty tough to block for any LLM available for commercial use.
4 replies →
We were talking about winning commercially, not on model quality.