Comment by pron
2 months ago
The thing is that some imagined AI that can reliably produce reliable software will also likely be smart enough to come up with the requirements on its own. If vibe coding is that capable, then even vibe coding itself is redundant. In other words, vibe coding cannot possibly be "the future", because the moment vibe coding can do all that, vibe coding doesn't need to exist.
The converse is that if vibe coding is the future, then we're assuming there are things the AI cannot do well (such as coming up with requirements), at which point it's also likely that it can't actually vibe code that well.
The general problem is that once we start talking about imagined AI capabilities, both the capabilities and the constraints become arbitrary. If we imagine an AI that does X but not Y, we could just as easily imagine an AI that does both X and Y.
This is the most coherent comment in this thread. People who believe in vibe coding but not in generalizing it to “engineering”... brother, the LLMs speak English. They can even hold conversations with your uncle.
My bet is that it will be good enough to devise the requirements.
They already can brainstorm new features and make roadmaps. If you give them more context about the business strategy/goals, they will make better guesses. If you give them more details about the user personas, feedback, etc., they will prioritize better.
We're still just working our way up the ladder of systematizing that context, building better abstractions, workflows, etc.
If you were to start a new company with an AI assistant and feed it every piece of information (which it structures, summarizes, synthesizes, etc. in a systematic way), even with finite context it's going to be damn good. I mean, just imagine a system that can continuously read and structure all the data from regular news, market reports, competitor press releases, public user forums, sales call transcripts, etc. It's the dream of "big data".
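To make that concrete, here's a rough sketch of the kind of ingestion loop I'm imagining (llm_summarize is a hypothetical stand-in for whatever model call you'd actually use; a real system would be far richer):

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ContextItem:
        source: str    # e.g. "news", "sales_call", "user_forum"
        raw_text: str
        summary: str

    def llm_summarize(text: str) -> str:
        # Hypothetical stand-in: a real system would call a model here
        # to produce a structured summary of the document.
        return text[:200]

    def ingest(source: str, documents: list[str], store: list[ContextItem]) -> None:
        # Summarize each incoming document and file it into the shared
        # context store the assistant reads from.
        for doc in documents:
            store.append(ContextItem(source, doc, llm_summarize(doc)))

    store: list[ContextItem] = []
    ingest("news", ["Competitor X announces ..."], store)
    ingest("sales_call", ["Transcript: customer asked about ..."], store)
    print(json.dumps([asdict(item) for item in store], indent=2))

The point isn't the code; it's that the ladder-climbing is mostly plumbing like this plus better abstractions over the store.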
If it gets to that point, why is the customer even talking to a software company? Just have the AI build whatever. And if an AI assistant can synthesize every piece of business information, why is there a need for a new company? The end user can just ask it to do whatever.
Maybe yes. It takes time for those structures to "compact" and for systems to realign.
I agree with the first part, which is basically that 'being able to do a software engineer's full job' is ASI/AGI-complete.
But I think it is certainly possible that we reach a point/plateau where everything is just 'english -> code' compilation, but that 'vibe coding' compilation step is really, really good.
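As a sketch of what that compilation step could look like mechanically (generate_code is a hypothetical stand-in for the model call; the generate/test/retry loop is the point):

    import os
    import subprocess
    import tempfile

    def generate_code(spec: str, feedback: str = "") -> str:
        # Hypothetical stand-in: a real system would send the English
        # spec (plus any test failures) to a model and get source back.
        return "def add(a, b):\n    return a + b\n"

    def compile_english(spec: str, test_cmd: list[str], max_attempts: int = 3) -> str | None:
        # Treat English as source: generate code, run the tests, and
        # feed failures back in until they pass or we give up.
        feedback = ""
        for _ in range(max_attempts):
            code = generate_code(spec, feedback)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run(test_cmd + [path], capture_output=True, text=True)
            os.unlink(path)
            if result.returncode == 0:
                return code           # tests pass: "compilation" succeeded
            feedback = result.stderr  # next attempt sees the failure output
        return None

(You'd invoke it with something like compile_english("sum two integers", ["python"]), where the test command is whatever harness you trust.)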
It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs. In fact, why not the opposite? I think that currently LLMs are better at writing general prose, offering advice, etc., than they are at writing code. They are better at knowing what people generally want than they are at solving complex logic puzzles that require many deduction steps. Once we're reduced to imagining what AI can and cannot do, we can imagine pretty much any capability or restriction we like. We can imagine something is possible, and we can just as well choose to imagine it's not possible. We're now in the realm of, literally, science fiction.
> It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs.
Ideation at the working PM level, sure. I meant more hard technical ideation - i.e. what gets us from 'not working humanoid robot' to 'humanoid robot', or 'what do we need to do to get a detection of a Higgs boson', etc. I think it is possible to imagine a world where 'english -> code' (for reasonably specific English) is solved but not that level of ideation. If that level of ideation is solved, then we have ASI.
The only reason to imagine that plateau is that it's painful to imagine a near future where humans have zero economic value.
It's not the only reason; technologies do plateau. We're not living in orbiting cities or flying fusion-powered vehicles around, even though we had rockets and nuclear power more than half a century ago.
Why is this desirable?
Following similar thinking, there's no world in which AI becomes exactly capable of replacing all software developers and then stops there, miraculously saving the jobs of everyone else next to and above them in the corporate hierarchy. There may be a human, C-suite-driven cost-cutting effort to pause progress there for a brief time, but if AI can do all dev work, there's no reason it can't do all office work and replace every human in front of a keyboard. Either we're all similarly affected, or else AI still isn't good enough, in which case fleets of programmers are still needed, and among those, the presumed "helpfulness" of AI will vary wildly. Not unlike what we see already.
> if AI can do all dev work, there's no reason it can't do all office work to replace every human in front of a keyboard
There are plenty of reasons.
Radiologists aren’t being replaced by AI because of liability. Same for e.g. civil engineers. Coders don’t have liability for shipping shit code. That makes switching to an AI that’s equally blameless easier.
Also, data: the web is first and foremost a lot of code. AI is getting good at coding first for good reason.
Finally, as OP says, the hard work in engineering is actually scoping requirements and then executing and iterating on that. Some of that is technical know-how. A lot is also political and social skills. Again, customers are okay with a vibe-coded website in a way most people are not with even support chatbots.
> Coders don’t have liability for shipping shit code
Depends on the industry, and shipping shit code is the reason cybersecurity laws are starting to be a thing.
> Coders don’t have liability for shipping shit code
What if you're shipping code for a Therac-25?
> Coders don’t have liability for shipping shit code.
Yet another reason why this needs to change, fast.
What do you mean "come up with the requirements"? Like if self-driving cars got so good that they didn't just drive you somewhere but decided where you should go?
No, I mean that instead of vibe coding - i.e. guiding the AI through features - you'll just tell it what you want in broad strokes, e.g. "create a tax filing system that's convenient enough for the average person to use", or, "I like the following games ... Build a game involving spaceships that I'll enjoy", and it will figure out the rest.