Comment by mattmanser
4 hours ago
But at least you could basically follow their logic.
I think what a lot of us are concerned about is that the vibe-coded stuff bloats fast. It's so verbose and all over the place that picking that thing apart will be a huge job, and relying on an AI to pick apart work that an AI already failed to maintain seems like wishful thinking.
It's literally "The AI is failing! Don't worry I'll just use AI to fix the AI!".
Yes, as long as context sizes increase and LLMs improve, there's at least a way out through using AI, but once the progress stops...
Huh? Even if progress somehow stopped, current models are already good enough to help -- and the quality of a given vibe-coded throwaway codebase will be higher the more recently it was created.
The worst I would ever get was "here's our Access database - can you rewrite it". That was utterly useless to me.
What I needed to do was sit with a user (not a manager/the person buying my services) and ask them to show me the different things they did with the software. Then I could write a spec for the actual _feature_, and would only need to look at the existing codebase if they needed data transferring across[1]. I don't see why our new LLM-based future would be any different.
[1] Of course this meant I would leave out edge-cases and/or weird quirks of the system. Often this was actually a bonus, as they were either no longer relevant or worked that way because that was the only way they knew how to do it.