Comment by reedf1
7 hours ago
Karpathy coined the term vibecoding 11 months ago (https://x.com/karpathy/status/1886192184808149383). It caused quite a stir, because not only was it a radically new concept, but fully agentic coding had only recently become possible. You've been vibe coding for two years??
I had GPT-4 design and build a GPT-4 powered Python programmer in 2023. It was capable of self-modification and built itself out after the bootstrapping phase (where I copy-pasted chunks of code based on GPT-4's instructions).
It wasn't fully autonomous (the reliability was a bit low -- e.g. I had to extract the code from code fences programmatically), and it wasn't fully original (I stole most of it from Auto-GPT, except that I was operating on the AST directly due to the token limitations).
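For anyone curious what that fence-extraction step looks like, here is a minimal sketch (my own illustration, not the commenter's actual code) of pulling Python source out of the markdown code fences an LLM typically wraps its output in:

```python
import re

# Match ```python ... ``` (or bare ```) blocks; DOTALL lets "." span newlines.
FENCE_RE = re.compile(r"```(?:python)?\s*\n(.*?)```", re.DOTALL)

def extract_code(reply: str) -> str:
    """Return the concatenated contents of all fenced blocks,
    or the raw reply if no fences were found."""
    blocks = FENCE_RE.findall(reply)
    return "\n\n".join(b.strip() for b in blocks) if blocks else reply.strip()

if __name__ == "__main__":
    demo = "Here you go:\n```python\nprint('hello')\n```"
    print(extract_code(demo))  # -> print('hello')
```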
My key insight here was that I allowed GPT to design the APIs that it itself was going to use. This makes perfect sense to me based on how LLMs work. You let it reach for a function that doesn't exist, and then you ask it to make that function exist based on how it reached for it. Then the design matches its expectations perfectly.
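A rough sketch of that loop, under my own assumptions rather than the commenter's actual implementation: `ask_llm` is a hypothetical stand-in for whatever chat-completion call you use, and `process_task` is just an example entry point.

```python
# Sketch of the "let the model design its own API" loop.
def ask_llm(prompt: str) -> str:
    # Hypothetical helper: wire up your own LLM client here.
    raise NotImplementedError

# Step 1: ask for the high-level code. The model is free to call helpers
# that don't exist yet; it invents whatever names/signatures feel natural to it.
draft = ask_llm(
    "Write a Python function `process_task(task)` for an autonomous coding "
    "agent. You may call any helper functions you wish, even ones that "
    "don't exist yet."
)

# Step 2: feed the call sites back and ask the model to implement exactly
# the helpers it reached for, so the implementation matches its own
# expectations of the API.
helpers = ask_llm(
    "Here is code that calls helper functions which don't exist yet:\n\n"
    f"{draft}\n\n"
    "Implement each missing helper with exactly the name and signature used above."
)
```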
GPT-4 now considers self-modifying AI code to be extremely dangerous and doesn't like talking about it. Claude's safety filters began shutting down similar conversations a few months ago, suggesting the user switch to a dumber model.
It seems the last generation or two of models passed some threshold regarding self-replication (which is a distinct but highly related concept), and the labs got spooked. I haven't heard anything about this in public, though.
Edit: It occurs to me now that "self modification and replication" is a much more meaningful (and measurable) benchmark for artificial life than consciousness is...
BTW for reference the thing that spooked Claude's safety trigger was "Did PKD know about living information systems?"
The term was coined by Karpathy to mean one thing, but nowadays many people use it to refer to any time they ask an AI to write code.
You don't need a "fully agentic" tool like Claude Code to write code. Any of the AI chatbots can write code too, and they obviously do it better since the advent of "thinking" models and RL post-training for coding. They have also all had built-in "code interpreter" functionality for about two years, where they can not only write code but also run and test it in a sandbox, at least for Python.
Recently at least, the quality of code generation (if you are asking for something smallish) is good enough that copy-pasting chatbot output (e.g. C++, not Python) to compile and run yourself is still a productivity boost, although this was always an option.
The term was coined then, but people have been doing it with Claude Code and Cursor and Copilot and other tools for longer. They just didn't have a word for it yet.
Claude Code was released a month after this post, and Cursor did not yet have an agent concept, mostly just integrated chat and code completion. I know because I was using it.
The author is using the term to mean AI-assisted coding. That's been around for longer than the phrase vibe coding.
This remains a point of great confusion every time there is such a discussion.
When some people say vibe coding, they mean they're copy-pasting snippets of code from ChatGPT.
When some people say vibe coding, they give a one-sentence prompt to their cluster of Claude Code instances and leave for a road trip!
Very good point. Also, what the OP describes is something I went through in the first few months of coding with AI. I pushed past the "the code looks good but it's crap" phase and now it's working great. I've found the fix is to work with it during the research/planning phase and get it to lay out all its proposed changes, then push back on the shit. Once you have a research doc that looks good end to end, hit "go".
I have only ever successfully tried "vibe coding", as Karpathy describes it, once, soon after VS Code Copilot added the chat feature, and timestamps tell me that was in November 2023. So two years is quite realistic.
Yeah, that's what I pointed out.
Just more FUD from devs that think they're artisans.