Comment by simonw
6 hours ago
A bit odd that this talks about AutoGPT and declares it a failure. Gary quotes himself describing it like this:
> With direct access to the Internet, the ability to write source code and increased powers of automation, this may well have drastic and difficult to predict security consequences.
AutoGPT was a failure, but Claude Code / Codex CLI / the whole category of coding agents fit the above description almost exactly; they're effectively AutoGPT done right, and they've been a huge success over the past 12 months.
AutoGPT was way too early - the models weren't ready for it.
>they've been a huge success over the past 12 months
They lose billions of dollars annually.
In what universe is that a business success?
Coding agents are successful products which generate billions of dollars of revenue from millions of paying customers.
The organizations that provide them lose money because of the R&D costs involved in staying competitive in the model training arms race.
Revenue isn't profit.
Checking whether Claude Code by itself is profitable is probably impossible. It doesn't make much sense to divorce R&D costs from the product, and the running costs are obviously not insignificant.
The company as a whole loses money.
Have they actually been a huge success, though? You're one of the most active advocates here, so I want to ask what you make of "the Codex app", and more specifically the fact that it's a shitty Electron app. Is this not a perfect use case for agents? Why can't OpenAI, with unlimited agents, let them loose on the codebase with instructions to replace Electron with an appropriate cross-platform native framework, or even a per-platform native GUI? They said they chose Electron for ease of cross-platform delivery, but they could allocate 1, 10, or 1000 agents to develop native Linux and Windows ports of the macOS codebase they started with.

This is not even a particularly serious endeavour. I have coded a cross-platform chat application myself with more advanced features than what Codex offers, and chat GUIs are among the most basic things you can build; practically every consumer-targeted GUI application eventually shoves a chat box into a significantly more complex interface.
The conclusion that seems readily apparent to me, as it always has been, is that these "agents" are completely incapable of creating production-grade software suitable for shipping, or even of meaningfully modifying existing software for a task like a port. Like the one-shot game they demo'd, they can make impressive proofs of concept, but nothing any user would use, and nothing with a foundation developers could actually build upon.
The bottleneck in development is now human attention and the ability to validate (https://sibylline.dev/articles/2026-01-27-stop-orchestrating...). OpenAI could unleash the Kraken, but to ensure they're releasing good software that works, they still need the eyeball hours and the people who can hold the idea of the thing being built in their heads and validate against that ideal.
Agents default to creating big balls of mud, but it's fairly trivial to use prompting and tooling to keep things growing in a more factored, organized way; a sketch of what I mean is below.
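To make the "tooling" half concrete, here's a minimal sketch of the kind of guardrail I mean, assuming a Python project and a CI step. The 400-line budget, the src/ default, and the script itself are all hypothetical illustrations, not anything the vendors ship. Wired into CI or an agent hook, it fails the build whenever a module gets too big, which pushes the agent to split code rather than pile onto one file.

```python
#!/usr/bin/env python3
"""Hypothetical guardrail: fail the build when any Python module grows
past a line budget, nudging agent-written code toward small, factored
modules. The budget and the src/ default are illustrative choices."""
import sys
from pathlib import Path

LINE_BUDGET = 400  # illustrative threshold; tune per project


def oversized_files(root: Path) -> list[tuple[Path, int]]:
    """Return (path, line_count) for each .py file over the budget."""
    offenders = []
    for path in root.rglob("*.py"):
        lines = len(path.read_text(encoding="utf-8", errors="ignore").splitlines())
        if lines > LINE_BUDGET:
            offenders.append((path, lines))
    return offenders


if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("src")
    offenders = oversized_files(root)
    for path, lines in offenders:
        print(f"{path}: {lines} lines (budget {LINE_BUDGET}), split this module")
    sys.exit(1 if offenders else 0)
```

The prompting half is the same idea in words: a standing instruction in the repo's agent config ("keep modules under N lines, no new top-level directories without asking") that the agent re-reads on every run.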
"Why isn't there better software available?" is the 900 pound gorilla in the LLM room, but I do think there are enough anecdotes now to hypothesize that what agents seem to be good at is writing software that
1. wasn't previously economical to write, and
2. doesn't need to be sold to anyone else or maintained over time
So, Brad in logistics previously had to collate scanned manifests with purchase requests once a month, but now he can tell Claw to do it for him.
Which is interesting given the talk of The End of Software Development or whatever, because "software that nobody was willing to pay for previously" kind of by definition isn't going to displace a lot of people who make software.
I do agree with this fully. I think LLMs have utility in making the creation of bad software extremely accessible. Bad software that happens to perfectly match some person's super specific need is by no means a bad thing to have in the world. A gap has been filled in creating niche software that previously was not worth paying anyone to create. But every single day we have multiple articles here proclaiming the end of software engineering, and I just don't get how the people hyping this up reconcile their hype with the lack of software being produced by agents that is good enough to replace any of the software people actually pay for.
My experience is that coding agents as of November (GPT-5.2/Opus 4.5) produce high-quality, production-worthy code against both small and large projects.
I base this on my own experience with them plus conversations with many other peers who I respect.
You can argue that OpenAI Codex using Electron disproves this if you like. I think it demonstrates a team making the safer choice in a highly competitive race against Anthropic and Google.
If you're wondering why we aren't seeing seismic results from these new tools yet, I'll point out that November was just over 2 months ago and we had the December holiday period in the middle of that.
I'm not sure I buy the safer choice argument. How much of a risk is it to assign a team of "agents" to independently work on porting the code natively? If they fail, it costs a trivial amount of compute relative to OAI's resources. If they succeed, what a PR coup that would be! It seems like they would have nothing to lose by at least trying, but they either did not try, or they did and it failed, neither of which inspires confidence in their supposedly life-changing, world-changing product.
I will note that you specifically said the agents have shown huge success over "the past 12 months", so it feels like the goalposts are growing legs when you now say "actually, only for the last two months with Opus 4.5".