Comment by fooker
8 hours ago
This might not pan out to be the glorious victory of human craft as you’re imagining it to be.
Here’s a slightly different future - these AI rescue consultants are bots too, just trained for this purpose.
Plausible?
I have already seen Claude 4.7 handle pretty complex refactors without issues. Scale and correctness aren't even 1% of the issue they were last year. You just have to get the high-level design right, or explicitly ask it to critique your design before building it.
> You just have to get the high-level design right, or explicitly ask it to critique your design before building it.
Do you think people are not giving their agents specs and asking for input?
The ones who end up with messes, no
Very often, no.
Maybe the professional devs, but not the vibecoders
A thing I've noticed is that everyone thinks they prompt better than the next guy.
This. I have a buddy who is not an idiot by any stretch of the imagination and is more adventurous than me in some ways (I don't really run agents on my machine), but when I looked at his prompts, I sometimes questioned how he gets anything done at all. They're just vague, angry demands.
And the bots training the bots are just bots that were trained to train bots?
Nothing that sexy, just thirty-odd years of software engineering data from humans.
Commits, design reviews, whitepapers, code reviews, test suites. And, rather concerningly: chat logs and even keystrokes from employees nowadays.
The way we train specialized bots now is incredibly inefficient, but that part is rapidly improving.
One AI can't vibe code out of the mess, so you'd make another AI trained on getting out of vibe coded messes?
That's serious levels of circular thinking right there.
This is literally how training humans has worked for thousands of years.
We train humans to do things untrained humans cannot do.
I think that will happen. I think several things can be true at the same time:
- AI Hype
- AI Psychosis
- AI keeps getting better and better until it can work around big AI slop code bases
> AI keeps getting better and better until it can work around big AI slop code bases
The belief in this is a form of AI psychosis, I think.
Maybe in the future, but there's certainly no evidence of this anytime soon.
> Maybe in the future, but there's certainly no evidence of this anytime soon.
Here's some anecdotal evidence from me: I recently cleaned up multiple GPT-4.x-era vibecoded projects with the latest Claude model and integrated one of those into a fairly large open source codebase.
This is something AI completely failed at last year.
Maybe you should try something like this, or listen to success stories, before claiming 'certainly no evidence' in the future?
There are untold billions of dollars to be had if you can make this future come to pass. You don't need AGI to make it happen either. You just need to keep making the context windows bigger and keep coming up with updated training data. It's not the outcome I want, but it really does feel within reach. The only limiting factor is going to be token count and cost to process/generate those tokens. But if you don't particularly care about quality, costs are going to have to go up by several orders of magnitude before you start to regret firing your software engineers.
I don't know what happens in a decade when there are no junior engineers, skilled senior engineers are becoming rare, and the only data left to train LLMs on is 200th-generation slop. But AI slop being qualitatively slop is not enough of an obstacle to prevent that future from coming to pass. And billions of dollars will be "saved" along the way.
No evidence? ChatGPT came out 3 years ago. You basically just need to put a ruler against the curve.
I have personally had success telling Claude that some AI-written system is too complicated and asking it to rewrite it in a more logical way. This sometimes results in thousands of lines of code being deleted. I give an instruction like that if I see certain red flags, e.g.:
1) same business logic implemented in two different places, with extra code to sync between them
2) fixing apparently simple bugs results in lots of new code being written
It’s a sign I need to at least temporarily dedicate more effort to overseeing work in that area.
I somewhat agree with the AI psychosis framing of the OP. It takes some taste and discipline to avoid letting things dissolve into complete slop.
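To make red flag #1 concrete, here's a toy sketch (Python, hypothetical names, not taken from any real codebase) of what the duplicated-logic-plus-sync-glue pattern tends to look like:

    # Red flag #1 in miniature: the same discount rule lives in two modules,
    # plus extra code whose only job is papering over drift between them.

    def cart_discount(total: float, is_member: bool) -> float:
        # The rule as implemented in the cart code path.
        return total * 0.10 if is_member else 0.0

    def invoice_discount(total: float, is_member: bool) -> float:
        # The same rule, re-implemented slightly differently in invoicing.
        return round(total * 0.10, 2) if is_member else 0.0

    def reconcile_discounts(total: float, is_member: bool) -> float:
        # "Sync" glue: exists only because the rule is duplicated.
        a = cart_discount(total, is_member)
        b = invoice_discount(total, is_member)
        return b if abs(a - b) > 0.005 else a

The fix is usually to delete one copy and the glue entirely, which is where those thousands-of-lines deletions come from.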
It's amusing to me that:
* A belief that AI will keep getting better, presented without evidence, does not yield a lot of skepticism around these parts.
* Your comment saying it is wrong to believe AI will keep getting better, also presented without evidence, is downvoted.