Comment by acjohnson55
12 days ago
I am not having the exact same experience as the author--Opus 4.6 and Codex 5.3 seem more incremental to me than what he is describing--but if we're on an exponential curve, the difference is a rounding error.
4 months ago, I tried to build a mostly vibe-coded application. I got impressively further than I thought possible, but it bogged down. This past weekend, my friend had OpenClaw build an application of similar complexity in a single weekend. The difference is vast.
At work, I wouldn't say I'm one-shotting tasks, but the first shot is doing what used to be a week's work in about an hour, and then the next few hours are polish. Most of the delay in the polish phase is due to the speed of the tooling (e.g. feature branch environment spin up and CI) and the human review at the end of the process.
The side effects people report of lower quality code hitting review are real, but I think that is a matter of training, process and work harness. I see no reason that won't significantly improve.
As I said in another thread a couple days ago, AI is the first technology where everyone is literally having a different experience. Even within my company, there are divergent experiences. But I think we're in a world where, very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like what the author describes. And if they can find people who can actuate that reality, the folks who can't are going to see their options contract precipitously.
> But I think we're in a world where, very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like what the author describes.
I think this part is very real.
If you're in this thread saying "I don't get it," you are in danger much sooner than your coworker who is using it every day and succeeding at working around AI's quirks to stay productive.
We’ve got repos full of 90% complete vibe code.
They’re all 90% there.
The thing is the last 10% is 90% of the effort. The last 1% is 99% of the effort.
For those of us who can consistently finish projects the future is bright.
The sheer amount of vibe code is simply going to overwhelm us (see the current state of open source).
My wife manages 70 software developers. Her boss, the CIO, who has no practical programming experience, is demanding that she and her peers cut 50% of their staff in the next year.
But here's the thing I don't get. I can see the argument for AI endangering our jobs. But why does this also mean that rapid adoption of AI on a personal level is so important? In an AI world, there could be a new bell curve of talent and a fight to stay ahead. So ... adopt early so you're the one not left behind (and implicitly, most other people are left behind?).
If AI tightens down the job market I just don't see why there would need to be this frantic urgency to adopt it. Getting a small head start might not mean very much once the dust has settled. Employers will still be cutting, and there will still be new blood who will adapt to new technology faster than you can.
Be careful here. I have more coworkers contributing slop and causing production issues than 10x’ing themselves.
The real danger is if management sees this as acceptable. If so best of luck to everyone.
> The real danger is if management sees this as acceptable. If so best of luck to everyone.
Already happening. It's just an extension of the "move fast and break stuff" mantra, only faster. I think the jury is still out on whether more or fewer things will break, but it's starting to look like not enough to pump the brakes.
> Be careful here. I have more coworkers contributing slop and causing production issues than 10x’ing themselves.
Sure, many such cases. We'll all have work for a while, if only so that management has someone to yell at when things break in prod. And break they will -- the technology is not perfected and many are now moving faster than they can actually vet the results. There is obvious risk here.
But the curve we're on is also obvious now. I'm seeing massive improvements in reliability with every model drop. And the model drops are happening faster now. There is less of an excuse than ever for not using the tools to improve your productivity.
I think the near future is going to be something like a high-speed drag race. Going slow isn't an option. Everyone will have to go fast. Many will crash. Some won't and they will win.
If a company lets faulty code get to production, that's an issue no matter how it is produced. Agentic coding can produce code at much higher volumes, but I think we're still in the early days of figuring out how to scale quality and the other nonfunctional requirements. (I do believe that we're literally talking about days, though, when it comes to some facets of some of these problems.)
But there's nothing inherent about agentic coding to lead to slop outcomes. If you're steering it as a human, you can tweak the output, by hand or agentically, until it matches your expectations. It's not currently a silver bullet.
That said, my experience is that the compression of the research, initial-draft, and revision phases--all of which used to be the bulk of my job--is radical.
Yes, same for me -- I had moved past AI coding accelerating you to 80-90% and then living in the valley of infinite tweaks. This past month, the right way of thinking while working with, say, Opus 4.6 has moved me past that blocker.
> But I think we're in a world where, very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like what the author describes.
We already live in that world. It's called "Hey Siri", "Hey Google", and "Alexa". It seems that no amount of executive tantrum has caused any of these tools to give a convergent experience.
Voice assistants, which I've used less than 10 times in my life, are hardly related to what I'm talking about.
They are very related: the only thing they can get with 100% reliability is spying.