Comment by Alifatisk

7 hours ago

Have you all noticed that the latest releases from Chinese companies (Qwen3 Max Thinking, and now Kimi K2.5) are benchmarking against Claude Opus now, not Sonnet? They are truly catching up, almost at the same pace.

They distill the major Western models, so any time a new SOTA model drops, you can expect the Chinese labs to update their models within a few months.

  • This is just a conspiracy theory/urban legend. How do you "distill" a proprietary model with no access to the original weights? Just doing the equivalent of training on chat/API logs has terrible effectiveness (you're trying to drink from a giant firehose through a tiny straw) and gives you no underlying improvements. (See the sketch below this thread for the distinction.)

  • Yes, they do distill. But saying all they do is distill is not correct, and actually kind of unfair. These Chinese labs have done lots of research in this field and publish it openly, and some, if not the majority, contribute open-weight models, making a future of local LLMs possible: DeepSeek, Moonshot, MiniMax, Z.ai, Alibaba (Qwen).

    They are not just leeching here; they took this innovation, refined it, and improved it further. This is what these Chinese labs are good at.
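
For context on that sub-thread: classic knowledge distillation trains the student to match the teacher's full per-token probability distribution, which requires running the teacher yourself, i.e. having its weights. A proprietary API returns only sampled tokens (plus, at best, a few top logprobs), so training on chat/API logs reduces to ordinary supervised fine-tuning on transcripts. A minimal PyTorch sketch of the difference, with random tensors standing in for real models (all shapes and names are illustrative):

```python
import torch
import torch.nn.functional as F

vocab_size = 32_000
batch, seq = 4, 128

# Stand-ins for real model outputs; in practice these come from forward passes.
teacher_logits = torch.randn(batch, seq, vocab_size)  # needs the teacher's weights
student_logits = torch.randn(batch, seq, vocab_size, requires_grad=True)

# --- White-box distillation: match the teacher's full distribution. ---
# Every position supervises the student with probabilities over the entire
# vocabulary. Impossible against a closed API, which never exposes these logits.
T = 2.0  # temperature; softens both distributions (Hinton et al.-style)
distill_loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)

# --- Black-box "distillation": all an API log gives you is sampled tokens. ---
# One hard token per position instead of a distribution over 32k tokens --
# the "firehose through a tiny straw" from the comment above. This is just
# plain supervised fine-tuning on scraped transcripts.
logged_tokens = torch.randint(0, vocab_size, (batch, seq))
sft_loss = F.cross_entropy(
    student_logits.reshape(-1, vocab_size),
    logged_tokens.reshape(-1),
)
```

The asymmetry is the whole argument: per position, the white-box loss sees the teacher's entire distribution, while a transcript gives you a single sample drawn from it.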

They are, in benchmarks. In practice, Anthropic's models are ahead of what their benchmark scores suggest.

  • Bear in mind that the lead may come, in large part, from the tooling rather than the model itself.