Comment by hacker_homie
7 hours ago
Easy: they lied to the public, not to investors, and they have more money than you.
Local LLM or nothing at all.
This is a classic example highlighting the upside of local LLMs.
However, the local LLMs I can run on reasonable hardware are so dumb compared to Opus, and even if I shelled out five figures for hardware to run the largest/smartest open model, it would still be noticeably worse.
Right now, the remote models are just so much smarter and more affordable under most usage patterns.
> Local LLM or nothing at all.
I'm not as familiar with LLMs as I am with media models, but there can't seriously be local contenders capable of beating Opus, GPT-5, etc. Right?
At-home hardware isn't good enough.
And nobody "far enough behind" to be unafraid of releasing their model as open weights actually has a competitive model within 70% of the leading models.
Now that the Chinese labs are catching up and even pulling ahead (e.g. in video), they've stopped releasing the weights.
Stragglers release weights. And those weights aren't competitive.
Am I missing something?
GLM and Kimi are still releasing weights for near-SOTA models. DeepSeek, Qwen, and arguably MiniMax are the ones that are perhaps falling behind.