Comment by sho

10 hours ago

I am nowhere near as concerned by this as I was a year ago, when I was expecting the axe to fall at any moment before the Chinese labs achieved some sort of escape velocity. I now think it's too late: all the cats are out of all the bags, there's no moat except maybe a temporal one of a few months, the genie is out of the bottle.

There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough. DeepSeek 4 and Kimi 2.5 are not quite Claude 4.5/GPT 5.5, but there's no fundamental principle missing - they are strong evidence that there's no real advantage the "frontier" labs possess that isn't related to scale, which the Chinese labs will gain in time (if they even need to). The RL post-training techniques that work are widely known and easily copied. All DeepSeek is really lacking is data, which they're getting - and the harder Anthropic/the USG make it to access Claude in China, the more of that precious data they'll get!

I used to sort of entertain the "fast take-off breakaway" scenario as plausible, but not really anymore. The only genuine moat the frontier labs have is their product take-up, which isn't nothing, far from it, but it's not some unbreakable technological wall. Too late, guys - it might have been too late for quite some time.

I wish it were true. I would gladly use a GPT 5.2 high model equivalent for coding (6 months old) if it were offered cheaper by DeepSeek or Kimi. And I'm sure that's an extremely prevalent opinion among the millions of Claude and Codex users who are bothered by the costs.

However, they just don't perform that well in practice. That's the real issue, and you can actually see it when you move away from open benchmarks. DeepSeek 3.2 scores 4% on ARC-AGI 2 [1], while GPT 5.2 high is at 52% and GPT 5.5 pro high at 84.6%. That's the real reason nobody is using these models for serious work. It's incredibly frustrating.

In addition, I already feel the pain of the model restrictions myself. I'll ask my Codex 5.5 agent to crawl a website - BOOM, cybersecurity warning on my account. I'll ask it to fix SSH on my local network - another warning. I'm worried about the day my account gets randomly banned and I can't create a new one. OpenAI already asks you to complete full identification in order to eliminate these warnings - probably exactly for that reason: so that if they ban you, it's permanent.

[1] https://arcprize.org/leaderboard

  • I worked extensively on ARC AGI before, and one thing is SURE as hell: OpenAI and Gemini in particular use this as marketing material. You can correlate benchmark releases with stock price increases. They feed synthetic ARC datasets into their models to boost the numbers. There is no doubt in my mind that Gemini is no better than DeepSeek beyond being specifically fine-tuned for ARC AGI. Heck, they even say so - they say they have paid annotations for ARC. Again, economic incentives. As to whether these models are actually better beyond the benchmarks: likely not. See ARC 3, where the gap is vanishingly small.

    • I've also worked extensively on ARC AGI 1/2, and I mainly agree: marketing and training. LLM performance on ARC is primarily a function of training on grid/table-like data. It doesn't have to be specifically synthetic ARC data, though. Training an LLM to perceive grid-like arrangements of data spatially, like an image, rather than just as tables, is hugely useful for things outside of ARC benchmarks, though it's a narrow skill. Hence, I'm sure they do it - I want them to do it. I believe the labs when they say they didn't train specifically for ARC-AGI 1/2 (where did Google say otherwise? I don't see it). But that does not mean the models are getting better at general-purpose reasoning. They were already plenty good at that. You can describe ARC images in words and reason about them using a level of intelligence LLMs have had for years - the tasks are designed to be easy! LLMs just couldn't reason about image-like grids very well.

    • ARC-AGI isn't perfect, but it helps demonstrate the gap. I'm sure all companies optimize their models for this benchmark, given its dominance.


    • Why do you think DeepSeek isn't also fine-tuned on ARC AGI? Maybe they're even more fine-tuned on ARC AGI and still get worse scores. There's no way to know.


  • > DeepSeek 3.2 is 4% on ARC-AGI 2

    Why are you bringing up an outdated Chinese model from 6 months ago to compare to a US model from 6 months ago? The outdated Chinese model will obviously have performance from ~12 months ago. But today's Chinese model, DeepSeek 4, has performance not far from the US model of 6 months ago: 46% compared to 52% for 5.2.

  • I 100% agree with you, but I've been convinced over the last year that it's a time and scale issue, not anything fundamental.

    The Chinese models right now are in a weird spot. Compared to the frontier labs, both their pre- and post-training are woeful: tiny, slow, resource-constrained in every dimension, including the human one. I'd compare it to OpenAI 5 years ago except I think even then OpenAI had way more!

    But they "cheat" quite a lot in distillation and very benchmark-focussed RL and that's where you get this superficial quality in the leaderboards that doesn't match up when you go off-script. Arc is a great example in that it really belies an "inferior soul" at the heart of it all.

    What gives me great hope though is that those same scaling laws that Altman and others have been hyping forever will absolutely kick in for the Chinese labs just as they did for the US ones, and I don't think anything can stop that process now. So they will catch up. It won't be tomorrow, but it's not going to be 10 years either. 3-5 would be my reasonably educated guess.

    And the final risk - that China itself might try to restrict availability of the tsunami of GPUs and other AI hardware it will inevitably produce - well, I just can't imagine a country that has spent the last 40 years configuring itself as a single-purpose export machine deciding that actually, no, it doesn't want to export something.

    About the model restrictions - absolutely. I've been trying to do security research on my own software, and the frontier models immediately get suspicious. I've been playing with local models much more this year basically because of this. They have deficiencies, for sure - they feel very "hollow" compared to the major labs' models. But I've talked to a lot of people, and the consensus is pretty clear: it's just a matter of time.

    • > I'd compare it to OpenAI 5 years ago except I think even then OpenAI had way more!

      Say what? 5 years ago OpenAI had received around $139 million in funding, and they'd just come out with GPT-3: 175B parameters, a 2,048-token context window, trained on 300B tokens on a 10,000-V100 cluster that would have cost maybe $4-13 million at the time for the training run.

      Meanwhile, DeepSeek V3's famously frugal training run was $5M, and Chinese AI companies are raising billions in funding. Sure, American AI companies are raising tens of billions (maybe hundreds, in OpenAI's case, if you count their circular funding rounds), but they're grossly inefficient, and we've already hit the limits of the scaling laws, where there's little point in increasing a model's parameter count.


    • Just an observation: constraints often result in creative solutions. I wouldn't be surprised if a smaller lab makes a big breakthrough because they have to.

  • Have you tried the latest DeepSeek V4 Pro inside the Claude Code harness? It's not listed on that site.

    It definitely 'feels' as good as Claude for many regular web-app coding tasks (though I don't have real benchmarks). And it is comically cheap.

    I'm not suggesting it is better than the latest Claude or codex models, but it seems 'good enough' for a lot of use cases in my limited real world testing.

    • I'm starting to feel like a parrot, but people seem to forget that software engineering is actually a very narrow slice of the white-collar pie. You don't need a mega-model that can reason about 100,000 lines of code when you want to create a nice PPT (which used to consume literally hours of your life) to impress your boss. SOTA models will probably be used for frontier research, complex coding tasks, large-scale data analysis, etc. And the average Joe shall be able to buy a pre-configured box with a plug-and-play harness and run medium models air-gapped, or use such models through cloud APIs dirt cheap if privacy is not a concern.


    • Also, so many developers I know use LLMs for one-shotting isolated problems, explainers, discussions, and planning. For these, even Kimi is pretty great.

      I don't think every dev will be comfortable just releasing Claude on their project.

  • They're not even that much cheaper (1/2 the price per task, according to Artificial Analysis) once you account for the lower token usage of GPT-5.5. I can't justify it when factoring in the extra time wasted and the cheap Codex usage I get through the monthly plan. Frontier intelligence is not a commodity product ... yet.

    • The price per task already factors in token usage, so you're double counting if you also tack token efficiency on as another argument on top.
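      To make the double-counting point concrete, here's a toy calculation. All the prices and token counts below are hypothetical, not taken from Artificial Analysis:

```python
# Toy illustration: "cost per task" already folds in token usage.
# All numbers here are hypothetical.

def cost_per_task(price_per_mtok: float, tokens_per_task: int) -> float:
    """Cost of one task in dollars, given $/1M tokens and tokens used per task."""
    return price_per_mtok * tokens_per_task / 1_000_000

# Hypothetical: the cheaper model charges 1/5 the price per token,
# but burns 2.5x the tokens to finish the same task.
frontier = cost_per_task(price_per_mtok=10.0, tokens_per_task=200_000)  # $2.00
cheaper  = cost_per_task(price_per_mtok=2.0,  tokens_per_task=500_000)  # $1.00

# Per task it's only 2x cheaper: token efficiency is already priced in,
# so citing token usage again on top of the per-task price counts it twice.
print(frontier / cheaper)  # 2.0
```

      The per-token price ratio (5x) and the per-task cost ratio (2x) differ precisely because the task cost already absorbs the efficiency difference.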

  • ARC has no predictive power whatsoever. I always use the best models available, and so far I haven't found a task that Chinese models cannot solve quickly and reasonably well. Do you have any examples where they failed for you?

  • If you want something close to Claude, use GLM 5.1 with Claude Code. Their subscription price is no longer 10x cheaper now, though (at best 2x cheaper).

  • And yet Claude six months ago was amazing and good enough for you.

    This shows that cloud AI consumption is just conspicuous consumption, a status symbol: nobody knows why they need cloud AI or what problem they are even solving.

    • Ah, AI is running on the highway model: induced demand. That actually makes a lot of sense now that I think about it.

Which is why, I believe, the big AI companies are starting to focus on rolling out more vertical products. They know the models themselves aren't sticky; people can easily switch between different models without much hassle.

I think the big AI companies are trying to transform into the next Microsoft. Completely capture both enterprise and consumers.

  • "I think the big AI companies are trying to transform into the next Microsoft. Completely capture both enterprise and consumers."

    That is going to be a failing strategy, though. Whatever OpenAI or Anthropic implement, Microsoft and Google can trivially copy and provide to their existing customers, who are already deeply invested in their platforms.

Harness engineering is a moat. There's user loyalty to, and reliance on, the chassis that Claude runs on, for example, just like there's more market share for macOS+Windows over open-source Linux.

  • I regularly switch between codex and Claude in the same sessions. I’d throw in other models if I could.

    Data governance and enterprise sales is a moat. The harnesses aren’t.

  • I thought so too.

    But 1) people use other models with that same harness, and 2) I moved on from Claude Code and had all the features I cared about up and running in less than a couple of days - without even looking for available plugins or extensions.

  • It's absolutely NOT a moat. Making a harness is the EASY part.

    If you had said "marketing is a moat," then yes, I would say you were right. But creating a harness equal to or better than Claude Code is trivial. The CC harness is actually shit. There are tons of open-source harnesses that work better than CC while using Opus via OpenRouter.

  • > Harness engineering is a moat.

    I mean, if that’s the case, then Anthropic themselves are currently actively filling in that moat with nice, solid, walkable dirt. Claude Code may have been a moat 6 months ago but these days you’ll want to replace the “m” with a “bl”.

  • Tooling across the industry has been moving in the "plug in the AI of your choosing" direction for a while now, and given how hard Anthropic fights third-party tools, they are definitely afraid of being left in the dust.

    > just like there’s more market share by MacOS+WindowsOS over Linux Open Source.

    It's hard to change OS. It's not hard to jump from one AI tool to another.
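    To make the "jumping" point concrete: many providers expose OpenAI-compatible chat endpoints, so switching often amounts to a different base URL and model name while the rest of your code stays put. The base URLs and model names below are illustrative placeholders, not verified values:

```python
# Sketch: swapping AI providers is often just swapping a base URL + model name.
# Provider URLs and model names here are illustrative placeholders only.

PROVIDERS = {
    "frontier-lab": {"base_url": "https://api.frontier.example/v1", "model": "frontier-large"},
    "deepseek":     {"base_url": "https://api.deepseek.example/v1", "model": "deepseek-chat-example"},
    "kimi":         {"base_url": "https://api.kimi.example/v1",     "model": "kimi-example"},
}

def build_request(provider: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for any configured provider."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The application logic above this request layer doesn't change at all:
for name in PROVIDERS:
    print(name, build_request(name, "Fix this failing test")["url"])
```

    That's the whole switching cost in this style of setup - which is quite unlike migrating an operating system.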

All of the reasons in the article also apply to Chinese companies. If a Chinese model becomes good enough to make it significantly easier to hack Chinese government servers, do you think they'll allow random people unfettered access to it?

The economic pressures are the same, too. Currently, Chinese models are offered cheap, or in some cases the weights are released for free, because that's the only way to gain traction. (That closed-weight releases by Baidu, ByteDance, iFlyTek, etc. hardly generate any buzz bears that out, as does the fact that when Alibaba does a closed-weight release, someone always gets confused because they associate the Qwen brand with open models.) At some point, their investors are going to want profits, not just user counts. That means higher prices, or no more new models.

If there's no secret sauce and all you need is scale, that would actually be kind of the worst-case scenario for catching up to the frontier, since scaling is expensive and the frontier model companies have easier access to capital as well as higher revenues.

  • > If a Chinese model becomes good enough to make it significantly easier to hack Chinese government servers, do you think they'll allow random people unfettered access to it?

    They aren't trying to become that good, nor do they need to in order to have real positive impact. A model like Mythos is estimated to be humongous even at datacenter scale, which is actually a big factor in its limited availability at present. It's mostly useful as a one-of-a-kind proof of concept: to answer whether AI can still plausibly scale by growing capabilities, and what happens to alignment concerns when you do.

    • I expect every company to try to make a model as good as they possibly can, especially now that Mythos has served as a proof of concept to demonstrate that there's lots of interest in AI for cybersecurity. But if they don't try, that hardly assuages concerns about not being able to access the very best models, does it?

I agree the genie is out of the bottle technologically. I'm less convinced that means access stops being politically and economically important. The bottle may be gone, but the best lamps are still expensive.

  • But a "good enough" lamp just got a lot cheaper. The cost of tokens on DeepSeek V4 Pro is so low I don't even think about it, and I'm currently trying to find useful things for as many simultaneously running agents as I can. What would have cost $150 less than a year ago now costs 35¢.

    Likewise, Qwen 3.6 absolutely blows me away, and that's a 35B 6-bit model on a local 5090. Same thing: I'm busy trying to find stuff to keep it busy 24/7.

    I can still find some niches for Opus 4.7, but being able to attack problems without worrying about consumption is a game changer.
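    For what it's worth, the local setup described above is plausible on paper. A rough back-of-the-envelope for quantized model weights (this deliberately ignores KV cache, activations, and framework overhead, which all add more):

```python
# Rough memory estimate for the weights of a quantized local model.
# Ignores KV cache, activations, and framework overhead.

def weights_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GiB needed just to hold the weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# A 35B-parameter model at 6 bits per weight:
print(round(weights_gib(35, 6), 1))  # 24.4 (GiB)
```

    So roughly 24-25 GiB for the weights alone, which leaves headroom for context on a 32 GB card.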

  • Virtually no one is going to pay for the best-performing lamp if the next best performs 90% as well for an order of magnitude less.

    I will say, as others have pointed out, that DeepSeek and other Chinese providers still lag a bit in the tooling Claude has, but they'll get there.

I would agree - the only things Kimi is really missing are stability and harness training. For general chat tasks I consider it mostly on par. Occasionally I'll give the same problem to Kimi, Claude, GPT, and Gemini, and it's not unusual to see Kimi correctly figure out some weird extra thing the others missed, like some kind of mentally unstable savant.

> The only genuine moat the frontier labs have is their product take-up

And even then, there is no stickiness. For most use cases there isn’t much value in one frontier model over the other.

Just have to look at the people flocking from one to the other for whatever reason.

  • I’ve been flocking from GPT to Opus every week for the past 3 months, and I always come back.

    The point isn’t just that GPT is better - it’s that it is so much better for my work that “sticky” undersells it; it’s reinforced concrete. I use Opus 1% of the time because it writes better, and it’s sticky there.

    Yes, I’ll switch approximately immediately if Opus or Gemini (which I use more than Opus!) becomes better for what I do, but at this point frontier model tokens are not fungible.

    • There will always be dataset and training quirks, and the provider’s own biases and focus, granting one model an edge over the others in some specific domain.


  • The large AI houses arguably ensure that model switching be a natural action for their clients, by switching the default model of their flagship offerings every few months. Such is the price of progress.

What about access to GPUs and memory? This is becoming a pretty major bottleneck.

  • Today's tech echoes the 1960s-70s mainframe era: very centralized around a handful of companies controlling "massive cloud compute" in bespoke, mainframe-like topologies.

    All of that will be legacy in a couple of years. Today's B200 clusters are tomorrow's e-waste. Decentralization might happen gradually or abruptly, but to me it's obvious that we'll come to think of today's high-tech tensor processors and GPUs the way we thought of individual transistors and tube amplifiers in the 1980s.

    If AI turns out to be the revolution it purports to be, then the underlying hardware will change much more rapidly than ICs and microprocessors did in the late 1970s. Today's hot item is tomorrow's junk.

    • > Today's B200 clusters are tomorrow's e-waste.

      Hardware depreciation timescales are actually getting longer, not shorter, because frontier hardware like B200 clusters is severely supply-bottlenecked. It's not just a RAMpocalypse out there; we're seeing early signs of production bottlenecks with GPUs and maybe even CPUs.


    • One thing that is potentially different this time is that Moore's Law has stopped scaling. Computers aren't getting exponentially smaller anymore; they're getting bigger, with multiple chips glued together to make up for the end of Moore's Law.


  • It's basically converted sand, and most of that conversion currently happens in Taiwan - which China considers one of its provinces, and which the USA treats as a protectorate. Hence the interest in that region....

> There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough.

Over the last year, it seems the only thing US labs are ahead on is money spent. At least half of the technical innovations, if not more, came from Chinese labs and were published openly.

> There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough

This is not just about mainland China, though. The current US government is extremely selfish and self-centered. Other countries really need to consider their own long-term situation here.