Comment by iLoveOncall
5 days ago
You can write projects with LLMs thanks to tools that can analyze your local project's context, which didn't exist a year ago.
You could use Cursor, Windsurf, Q CLI, Claude Code, whatever else with Claude 3 or even an older model and you'd still get usable results.
It's not the models which have enabled "vibe coding", it's the tools.
Further proof of that: new model releases focus more and more on coding, while other fields haven't benefited at all from the supposed model improvements. That wouldn't be the case if the gains really came from the models rather than the tooling.
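For concreteness, here's a minimal sketch of what "analyzing your local project's context" can boil down to in tools like these: walk the project tree, gather relevant source files, and pack them into the prompt that goes to the model. This isn't any particular tool's implementation; the extension list, the size cap, and the omitted model call are all assumptions.

    import os

    # Sketch of local-context gathering, not a specific tool's code.
    SOURCE_EXTENSIONS = {".py", ".js", ".ts", ".go", ".rs", ".java"}
    MAX_FILE_BYTES = 20_000  # skip huge files to stay inside the context window

    def collect_project_context(root: str) -> str:
        """Concatenate small source files under `root` into one prompt block."""
        chunks = []
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune typical vendored/hidden directories.
            dirnames[:] = [d for d in dirnames if d not in {".git", "node_modules", "venv"}]
            for name in sorted(filenames):
                if os.path.splitext(name)[1] not in SOURCE_EXTENSIONS:
                    continue
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) > MAX_FILE_BYTES:
                    continue
                with open(path, encoding="utf-8", errors="replace") as f:
                    chunks.append(f"### {os.path.relpath(path, root)}\n{f.read()}")
        return "\n\n".join(chunks)

    if __name__ == "__main__":
        context = collect_project_context(".")
        prompt = f"{context}\n\nUser request: add input validation to the CLI parser."
        # A real agent would send `prompt` to a model API here; that call is omitted.
        print(f"Prompt is {len(prompt)} characters of project context.")

The point being: none of this depends on which model sits at the other end, which is the argument above.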
You need a certain quality of model to make 'vibe coding' work. For example, I think even with the best tooling in the world, you'd be hard-pressed to make GPT-2 useful for vibe coding.
I'm not claiming otherwise. I'm just saying that people say "look what we can do with the new models" while completely ignoring the fact that the tooling has improved a hundredfold (or rather, there was no tooling at all and now there is).
That contradicts what you said earlier -- "this has all to do with the tooling and nothing to do with the models".
OK, no objections from me there.
ChatGPT itself has gotten much better at producing and reading code than it was a year ago, in my experience.
They're using a specific model for that, and since they can't access private GitHub repos the way Microsoft can, they rely on code that devs share publicly, which keeps growing every month.