Comment by rvz
1 day ago
This is how Anthropic markets its AI releases, and the reality is they are terrified of local AI models competing against them.
Almost everyone on this thread is falling for the same trick and not asking why the benchmarks and research behind their newly trained models are never independently verified, only reported internally by the company.
So it is just marketing wrapped in fearmongering, with the goal of getting local AI models banned.
The disbelief in this thread is wild. Most of y'all are cooked if you think this is actually the case.
The only people who are "cooked" are those who rely on SOTA models to function in their jobs, and companies who are desperate to regulate open / local models to maintain their marketshare.
If you aren't relying on a SOTA model to do your job, you aren't doing your job right (and are cooked.)
7 replies →
Yep, this is exactly it. Open source models, especially ones that run locally, are catching up, and that is literally an existential threat to these companies. Local models are now quite useful (Qwen, Gemma), and open-weight models running on cheaper clouds are perfectly sufficient for responsible software engineers building software. You can take your pick of Kimi 2.5, GLM 5.1, and the soon-to-be-released Deepseek 4, which might end up above Opus level at a fifth of the cost.

Anthropic is particularly vulnerable here, since their entire market share rests on the developer market. There is a reason Google, for example, is not so concerned and is perfectly happy releasing open models that cut into its own market share; the same goes for OpenAI, to a lesser extent. Anthropic has bet the house on software development, which is why we see increasing desperation both to lobby for regulation of open/local models and to wall off their coding harness and subscription plans.