Comment by vardalab
15 hours ago
We said the same thing when 3D printing came out. Any sort of cool tech, we think everybody’s going to do it. Most people are not capable of doing it. In college everybody was going to be an engineer, and then they’d drop out after the first intro to physics or calculus class. A bunch of my non-tech friends were vibe coding some tools with Replit and Lovable, and I looked at their stuff and yeah, it was neat, but it wasn't gonna go anywhere, and if it did go somewhere, they would need to find somebody who actually knows what they're doing. To actually execute on these things takes a different kind of thinking. Unless we get to the stage where it's just like a magic genie, lol. Maybe then everybody’s going to vibe their own software.
I don't think Claude Code is like 3D printing.
The difference is that 3D printing still requires someone, somewhere, to do the mechanical design work. It democratises printing but it doesn't democratise invention. I can't use words to ask a 3D printer to make something. You can't really do that with Claude Code yet either. But every few months it gets better at this.
The question is: How good will Claude get at turning open-ended problem statements into useful software? Right now a skilled human + computer combo is the most efficient way to write a lot of software. Left on its own, Claude will make mistakes and suffer from a slow accumulation of bad architectural decisions. But will that remain the case indefinitely? I'm not convinced.
This pattern has already played out in chess and Go. For a few years, a skilled Go player working in collaboration with a Go AI could outcompete both computers and humans at Go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.
There are already some companies using fine-tuned AI models for "red team" infosec audits. Apparently they're already pretty good at finding a lot of creative bugs that humans miss. (And apparently they find an extraordinary number of security bugs in code written by AI models.) It seems like a pretty obvious leap to imagine Claude Code implementing something similar before long. Then Claude will be able to do security audits on its own output. Throw that in a reinforcement learning loop, and Claude will probably become better at producing secure code than I am.
> I can't use words to ask a 3D printer to make something
Setting aside any implications for your analogy, this is now possible.
The design work remains.
I’m not a fan of analogies, but here goes: Apple don’t make iPhones. But they employ an enormous number of people working on iPhone hardware, which they do not make.
If you think AI can replace everyone at Apple, then I think you’re arguing for AGI/superintelligence, and that’s the end of capitalism. So far we don’t have that.
> This pattern has already played out in chess and Go. For a few years, a skilled Go player working in collaboration with a Go AI could outcompete both computers and humans at Go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.
Both of those are fixed, unchanging, closed, full-information games. The real world is very much not that.
Though geeks absolutely like raving about Go and especially chess.
> Both of those are fixed, unchanging, closed, full-information games. The real world is very much not that.
Yeah, but does that actually matter? Is that actually a reason to think LLMs won't be able to outpace humans at software development?
LLMs already deal with imperfect information in a stochastic world. They seem to keep getting better every year anyway.
There is verification and validation.
The first part is making sure you built to your specification; the second is making sure the specification you built to was correct.
The second part is going to be the hard part for complex software and systems.
I think validation is already much easier using LLMs. Arguably this is one of the best use cases for coding LLMs right now: you can get Claude to throw together a working demo of whatever wild idea you have without needing to write any code or write a spec. You don't even need to be a developer.
I don't know about you, but I'd much rather be shown a demo made by our end users (with Claude) than get sent a 100-page spec. Especially since most specs, if you build to them, don't solve anyone's real problems.
Demo, don't memo.
> The second part is going to be the hard part for complex software and systems.
Not "going to". It is. Actually, it always has been; it isn't that coding solutions weren't hard before, but verification and validation cannot be made arbitrarily cheap. This is the new moat: if your solutions require QA (in the widest sense) that is time-consuming and expensive in dollar terms, that QA becomes the single barrier to entry.
Amazon Kiro starts by making a detailed specification based on human input in natural language.
> I can't use words to ask a 3D printer to make something.
You can: the words are in the G-code language.
I mean: you learned foreign languages in school, so you're already used to formulating your request in a different language to make yourself understood. In this case, that language is G-code.
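For a concrete sense of what those "words" look like, here's a minimal sketch. It's a hypothetical hand-written example: the commands, coordinates, feed rates, and filename are purely illustrative, not the output of any real slicer.

    # Illustrative only: a few standard G-code "words", written by hand and
    # saved to a file a printer could run. Values and filename are made up.
    commands = [
        "G28",               # home all axes
        "G1 Z0.2 F300",      # lower the nozzle to first-layer height
        "G1 X10 Y10 F1500",  # travel move to the start point
        "G1 X60 Y10 E5",     # extrude a short straight line of plastic
    ]

    with open("line.gcode", "w") as f:
        f.write("\n".join(commands) + "\n")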
This is a strange take; no one is hand-writing the G-code for their 3D print. There are ways to model objects using code (e.g. OpenSCAD), but that still doesn't replace the actual mechanical design work involved in studying a problem and figuring out what sort of part is required to solve it.
It's not our current location, but our trajectory, that is scary.
The walls and plateaus consistently invoked in "comments of reassurance" have not materialized. If this pace holds for another year and a half, things are going to be very different. And the pipeline is absolutely overflowing with specialized compute coming online by the gigawatt for the foreseeable future.
So far the most accurate predictions in the AI space have been from the most optimistic forecasters.
There is a distribution of optimism; some people in 2023 were predicting AGI by 2025.
There's no such thing as a trajectory when it comes to mass behavior, because it can turn on a dime if people find a reason to. That's what makes civilization so fun.
https://xkcd.com/605/
You can basically hand it a design, one that might take an FE engineer anywhere from a day to a week to complete, and Codex/Claude will basically have it coded up in 30 seconds. It might need some tweaks, but it's 80% complete with that first try. I remember stumbling over graphing and charting libraries; it could take weeks to become familiar with all the different components and APIs, but seemingly you can now just tell Codex to use this data and this charting library and it'll make it. All you have to do is look at the code. Things have certainly changed.
It might be 80-95% complete, but the last 5% is either going to take twice the time or be downright impossible.
This is like Tesla's self-driving: 95% complete very early on, still unsuitable for real life many years later.
I'm not saying that adding a few novel ideas (perhaps working world models) to the current AI toolbox won't make a breakthrough, but LLMs have their limits.
It was the same thing with human-built products though.
https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule
Except that either side of it is immensely cheaper now.
I figure it takes me a week to turn the output of AI into acceptable code. Sure, there is a lot of code in 30 seconds, but it shouldn't pass code review (even the AI's own review).
For now. Claude is worse than we are at programming. But it's improving much faster than I am. Opus 4.6 is incredible compared to previous models.
How long before those lines cross? Intuitively it feels like we have about 2-3 years before Claude is better at writing code than most, or all, humans.
> You can basically hand it a design
And, pray tell, how are people going to come up with such a design?
Honestly, you could just come up with a basic wireframe in any design software (MS Paint would work), grab a screenshot of a website with a design you like, and tell it "apply the aesthetic from the website in this screenshot to the wireframe", and it would probably get 80% (probably more) of the way there. Something that would have taken me more than a day in the past.
Not really. What the FE engineer will produce in a week will be vastly different from what the AI will produce. That's like saying restaurants are dead because it takes a minute to heat up a microwave meal.
It does make the lowest common denominator easier to reach though. By which I mean your local takeaway shop can have a professional-looking website for next to nothing, where before they just wouldn't have had one at all.
I think exceptional work, AI tools or not, still takes exceptional people with experience and skill. But I do feel like a certain level of access to technology has been unlocked for people who are smart enough but don't have the time or tools to dive into the industry's real tools (Figma, code, data tools, etc.).
There were some good and some pretty terrible FE devs though, and it's not clear which ones prevailed.
Wouldn't we have more restaurants if there were no microwave ovens? But the microwave oven also gave rise to a whole frozen food industry. Overall, more industrialization.
The number of non-technical people in my orbit who could successfully pull up Claude Code and one-shot a basic todo app is zero. They couldn't do it before and won't be able to now.
They wouldn’t even know where to begin!
You don't need to draw the line between tech experts and the tech-naive. Plenty of people have the capability but not the time or discipline to execute such a thing by hand.
You go to ChatGPT and say "produce a detailed prompt that will create a functioning todo app", then put that output into Claude Code, and you now have a todo app.
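If you wanted to script that hand-off instead of copy-pasting between tabs, a rough sketch might look something like this. The model name, the prompts, and the use of the claude CLI's -p print mode are my own assumptions, so adjust for whatever you actually run.

    # Sketch of the two-step flow described above: ask ChatGPT (via the OpenAI
    # API) to write a detailed build prompt, then hand that prompt to Claude
    # Code non-interactively. Everything here is illustrative, not prescriptive.
    import subprocess
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Produce a detailed prompt that will create a functioning todo app.",
        }],
    )
    detailed_prompt = resp.choices[0].message.content

    # Feed the generated prompt to Claude Code ("-p" runs a single prompt and
    # prints the result instead of opening an interactive session).
    subprocess.run(["claude", "-p", detailed_prompt], check=True)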
Thank you for posting this.
I'm really tired and exhausted of reading simple takes.
Grok is a very capable LLM that can produce decent videos. Why are most of them garbage? Because NOT EVERYONE HAS THE SKILL OR THE WILL TO DO IT WELL!
The answer is taste.
I don't know if they will ever get there, but LLMs are a long way from having decent creative taste.
Which means they are just another tool in the artist's toolbox, not a tool that will replace the artist. Same as every other tool before it: amazing in capable hands, boring in the hands of the average person.
Also, if you are a human with taste, it's very difficult to get an AI to create exactly what you want. You can nudge it, and little by little get closer to what you're imagining, but you're never really in control.
This matters less for text (including code) because you can always directly edit what the AI outputs. I think it's a lot harder for video.
Taste is both driven by tools and independent of them.
It's driven by them in the sense that better tools, and the democratization of them, change people's baseline expectations.
It's independent of them in that merely hitting the baseline will not stand out. Jurassic Park's VFX stood out in 1993. They wouldn't have in 2003. They largely would've looked amateurish and derivative in 2013 (though many aspects of shot framing/tracking and such held up, the effects themselves are noticeably primitive).
Art will survive AI tools for that reason.
But commerce and "productivity" could be quite different because those are rarely about taste.
100% correct. Taste is the right term. I avoid using it as I'm not sure many people here actually get what it truly means.
How can I proclaim what I said in the comment above? Because I've spent the past week producing something very high quality with Grok. Has it been easy? Hell no. Could anyone just pick up and do what I've done? Hell no. It requires things like patience, artistry, taste, etc.
The current tech is soulless in most people's hands, and in this context it should stay limited to a narrow range of uses. The last thing I want to see is low-quality slop infesting the web. But hey, that is not what the model producers want: they want to maximize tokens.
> To actually execute on these things takes a different kind of thinking
Agreed. Honestly, and I hate to use the tired phrase, but some people are literally just built different. Those who'd be entrepreneurs would have been so in any time period with any technology.
This matches what I see with all my non-tech and even tech co-workers. Honestly, the value-generation leverage I have now is 10x or more what it was before, compared to other people.
HN is an echo chamber of a very small subgroup. The majority of people can’t utilize it and need to have this further dumbed down and specialized.
That’s why marketing and conversion rate optimization work; it’s not all about the technical stuff, it’s about knowing what people need.
For VC-funded companies the game was often not much different; software was just part of the expenses, sometimes a large part, sometimes a smaller one. Eventually you could just buy the software you needed, but that didn’t guarantee success. There were dramatic failures and outstanding successes, and I wish it weren’t so, but most of the time the codebase was not the deciding factor. (Sometimes it was: Airtable, Twitch, etc., bless the engineers, but I don’t believe AI would have solved those problems.)
> The majority of people can’t utilize it
Tbh, depending on the field, even this crowd will need further dumbing down. Just look at the slop used as blog illustrations: 99% of it is just terrible, even when the text is actually valuable. That's because people's judgement of value, outside their field of expertise, is typically really bad. A trained cook can look at some ChatGPT recipe and go "this is stupid and it will taste horrible", whereas the average HN techbro/nerd (like yours truly) will think it's great -- until they actually taste it, that is.
The example is a bad one imo, because ChatGPT can be really great for cooking if you utilize it correctly. As with coding, you already need some skill and shouldn't believe everything it says.
Agreed. This place amazes me with how overly confident some people feel stepping outside of their domains. The mistakes I see here when people talk about subject areas like corporate finance, valuation, etc. are hilarious. Truly hilarious.
> whereas the average HN techbro/nerd (like yours truly) will think it's great -- until they actually taste it, that is.
This is the schtick, though: most people wouldn't even be able to tell when they taste it. This is typically how it works: the average person simply lacks the knowledge, so they don't even know what is possible.
3 things
1) I don’t disagree with the spirit of your argument
2) 3D printing has higher startup costs than code (you need to buy the damn printer)
3) YOU are the one making a distinction when it comes to vibe coding by non-tech people. The way these tools are being sold, the way investments are being made, is based on non-domain people developing domain-specific taste.
That last, "reasonable"-sounding part of the argument ends up serving as a bait and switch, shielding these investments. I might be wrong, but your comment doesn’t indicate that you believe the hype.