Comment by threethirtytwo
13 days ago
It’s not. There are tons of great programmers, big names in the industry, who now exclusively vibe code. Many of these people are obviously intelligent and great programmers.
This is an extremely false statement.
People use "vibe coding" to mean different things. Some mean the original Karpathy "look ma, no hands!", feel-the-vibez thing, and some (confusingly) use "vibe coding" to refer to any use of AI to write code, including treating it as a tool to write small, well-defined parts that you have specified, as opposed to treating it as a magic genie.
There also seem to be people hearing big names like Karpathy and Linus Torvalds say they are vibe coding on their hobby projects, meaning who knows what, and misunderstanding this as being an endorsement of "magic genie" creation of professional quality software.
Results also vary, of course, according to how well the task you are asking the AI to do matches what it was trained on. Despite sometimes feeling like one, it is not a magic genie; it is a predictor that is essentially trying to best match your input prompt (maybe a program specification) to pieces of what it was trained on. If there is no good match, it'll have a go anyway, and this is where things tend to fall apart.
Funny, the last interview I watched with Karpathy, he highlighted the way the AI/LLM was unable to think in a way that aligned with his codebase. He described vibe-coding a transition from Python to Rust but specifically called out that he hand-coded all of the Python code due to weaknesses in LLMs' ability to handle performant code. I'm pretty sure this was the latest Dwarkesh interview, the one with "LLMs as ghosts".
Right, and he also very recently said that he felt essentially left behind by AI coding advances, thinking that his productivity could be 10x if he knew how to use it better.
It seems clear that Karpathy himself is well aware of the difference between "vibe coding" as he defined it (which he explicitly said was for playing with on hobby projects) and more controlled, productive use of AI for coding. The latter has either eluded him, or maybe his expectations are too high and, surprising as it would be, he has not realized the difference between the types of application where people are finding it useful and use cases like his own that do not play to its strengths.
Karpathy is biased. I wouldn't use his name, as he's behind the whole vibe coding movement.
You have to pick people with nothing to gain. https://x.com/rough__sea/status/2013280952370573666
I don't think he meant to start a movement - it was more of a throw-away tweet that people took way too seriously, although maybe with his bully pulpit he should have realized that would happen.
How many more appeal-to-authority counterarguments are going to be made in this thread?
They are more effective than on-the-ground, in-your-face evidence, largely because people who are so against AI are blind to it.
You can hold a result of AI in front of their faces and they will still proclaim it's garbage and everything else is fraudulent.
Let's be clear: you're arguing against a fantasy. Nobody, not even proponents of AI, claims that AI is as good as humans. Nowhere near it. But it is good enough for pair programming. That is indisputable. Yet we have tons of people like you who stare at reality, deny it, and call it fraudulent.
Examine the lay of the land: if that many people are so divided, it really means both perspectives are correct in a way.
Just to be more pedantic, there is more nuance to all of that.
Nobody smart is going to disagree that LLMs are a huge net positive. The finer argument is whether or not, at this point, you can just hand off coding to an LLM. People who say yes simply haven't used LLMs extensively enough. The amount of time you have to spend prompt-engineering the correct response is often the same amount of time it takes to write the correct code yourself.
And yes, you can put together AGENT.md files, MCP servers, and so on, but then it becomes a game of this: https://xkcd.com/1205/
If you want to be any good at all in this industry, you have to develop enough technical skills to evaluate claims for yourself. You have to. It's essential.
Because the dirty secret is a lot of successful people aren't actually smart or talented, they just got lucky. Or they aren't successful at all, they're just good at pretending they are, either through taking credit for other people's work or flat out lying.
I've run into more than a few startups that are just flat out lying about their capabilities and several that were outright fraud. (See DoNotPay for a recent fraud example lol)
Pointing to anyone and going "well THEY do it, it MUST work" is frankly engineering malpractice. It might work. But unless you have the chops to verify it for yourself, you're just asking to be conned.
I think the author is way understating the uselessness of LLMs in any serious context outside of a demo to an investor. I've had nothing but low IQ nonsense from every SOTA model.
If we're being honest with ourselves, Opus 4.5 / GPT 5.2 etc are maybe 10-20% better than GPT 3.5 at most. It's a total and absolute catastrophic failure that will go down in history as one of humanity's biggest mistakes.
Non sequitur.
You don't have to be bad at coding to use LLMs. The argument was specifically about thinking that LLMs can be great at accomplishing complex tasks (which they are not).
Wtf are you talking about? Great programmers use LLMs for complex tasks. That was the point of my comment.
And my point is that what you think are complex tasks are not really complex.
The simple case is that if you ask an agent to make a whole bunch of modifications across a large number of files, it often loses track due to context window limits.
Now, you can build your own agents with custom MCP servers to improve their ability to handle such tasks, but then you are basically just building automation tools in the first place.
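To make the context-window point concrete, here is a minimal sketch of the kind of automation you end up writing yourself: batching a multi-file change into per-request groups so each prompt fits a token budget. The ~4-characters-per-token heuristic and the function names are my own illustrative assumptions, not any real agent framework's API.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic (assumption): ~4 characters per token.
    return len(text) // 4

def chunk_files(files: dict[str, str], budget: int) -> list[list[str]]:
    """Group file paths into batches whose combined estimated
    token count stays within the per-request budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for path, source in files.items():
        cost = estimate_tokens(source)
        if current and used + cost > budget:
            batches.append(current)  # start a new request batch
            current, used = [], 0
        current.append(path)
        used += cost
    if current:
        batches.append(current)
    return batches

files = {
    "a.py": "x" * 4000,  # ~1000 tokens
    "b.py": "y" * 4000,  # ~1000 tokens
    "c.py": "z" * 2000,  # ~500 tokens
}
print(chunk_files(files, budget=1500))  # → [['a.py'], ['b.py', 'c.py']]
```

Even this toy version has to make real engineering decisions (token estimation, batch boundaries, what to do when a single file exceeds the budget), which is the sense in which "just build better agent tooling" collapses back into ordinary automation work.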
You didn't state any complex tasks though. You only stated programmers who use LLMs.
Yeah, take Ryan Dahl:
His tweets were averaging ~40k views. Then he made his big proclamation about AI and, boom, it went viral with 7 million.
This is happening over, and over, and over again.
I'm not saying he's making shit up, but you're naive if you don't think they're at least slightly tempted by the clear reaction this content gets.
He'd get an equivalent reaction for talking shit about AI. Anyway, it's not just him; plenty of other people say the same.