Comment by thwarted
7 days ago
> We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people.
Something Brooks wrote about 50 years ago, and the industry has never fully acknowledged it. The instinct is still to throw more bodies at the problem, be they human bodies or bot-agent bodies.
The point of The Mythical Man-Month is not that more people are necessarily worse for a project; it's that adding them at the last minute doesn't work, because newcomers take a while to get up to speed and existing project members are distracted while helping them.
It's true that a larger team, formed well in advance, is also less efficient per person, but they still can achieve more overall than small teams (sometimes).
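Brooks's underlying arithmetic is worth keeping in mind here: pairwise communication channels grow quadratically with team size, so per-person overhead climbs even as total capacity grows. A minimal sketch of his well-known channel count:

```python
def channels(n: int) -> int:
    """Potential pairwise communication channels in a team of n people
    (Brooks's n*(n-1)/2 intercommunication formula)."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, channels(n))
# A 5-person team has 10 channels; a 50-person team has 1225.
# 10x the people, roughly 120x the coordination surface.
```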
Interesting point. And from the agent's point of view, it's always joining at the last minute, and it doesn't stick around longer than its context window. There's a lesson in there, maybe…
The context window is the onboarding period. Every invocation is a new hire reading the codebase for the first time.
This is why architecture legibility keeps getting more important. Clean interfaces, small modules, good naming. Not because the human needs it (they already know the codebase) but because the agent has to reconstruct understanding from scratch every single time.
Brooks was right that the conceptual structure is the hard part. We just never had to make it this explicit before.
A small difference is that AGENTS.md gets added every time, so the evolution of that is essentially your agent's equivalent of team experience.
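As a concrete illustration (the contents below are hypothetical, not from any real project), an AGENTS.md that accrues conventions over time plays the role of institutional memory for each fresh invocation:

```markdown
# AGENTS.md (illustrative example; every entry here is project-specific)

## Conventions
- Run `make test` before committing; CI mirrors it exactly.
- Database access goes through `internal/store`; never query directly.

## Hard-won lessons (the "team experience" layer)
- The payments module is timezone-sensitive: always use UTC.
- Don't edit `api/schema.json` by hand; run `make codegen`.
```

Each correction a human makes can be folded back into this file, so the next "new hire" starts with the lessons the last one learned.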
But there is an order-of-magnitude difference between coordinating AI agents and coordinating humans: the AIs are so much faster and more consistent that you can (as Steve Yegge [0] and Nicholas Carlini [1] showed) have them build a massive project from scratch in hours or days rather than months or years. The coordination cost is so much lower that it's just a different ball game.
[0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
[1] https://www.anthropic.com/engineering/building-c-compiler
Then why aren’t we seeing orders of magnitude more software being produced?
Didn't we have a post the other day saying that the number of "Show HN" posts is skyrocketing?
https://news.ycombinator.com/item?id=47045804
I think we are. There's definitely been an uptick in "show HN" type posts with quite impressively complex apps that one person developed in a few weeks.
From my own experience, the problem is that AI slows down a lot as the scale grows. It's very quick at adding extra views to a frontend, but it struggles much more with wide-reaching refactors. So it's very easy to start a project, but after a while your progress slows significantly.
But given that I've developed two pretty functional full-stack applications in the last three months, which I definitely wouldn't have done without AI assistance, I think it's a fair assumption that lots of other people are doing the same. So there is almost certainly a lot more software being produced than there was before.
This question remains the 900-pound gorilla of this discussion.
Claude Code was released just over a year ago, and agentic coding came into its own maybe in May or June of last year. Maybe give it a minute?
Why do you assume there isn't?
Enterprise (+API) usage of LLMs has continued to grow exponentially.
It doesn't appear to have improved the quality of the software we have either.
We are. You can check App Store releases year over year; it's skyrocketing.
"The future is already here, it's just not evenly distributed"
> But there is an order-of-magnitude difference between coordinating AI agents and humans
And yet, from https://news.ycombinator.com/item?id=47048599
> One of the tips, especially when using Claude Code, is to explicitly ask it to create "tasks", and also to use subagents. For example, if I want to validate and restructure all my documentation, I would ask it to create a task to research the state of my docs, then create a task per specific detail, then create a task to re-validate quality after it has finished.
Which sounds pretty much the same as how work is broken down and handed out to humans.
Yes, but you can do this at the top level and then have AI agents do it themselves for all the low-level tasks, which is orders of magnitude faster than human coordination.
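The tip quoted above amounts to a two-level plan: one coordinating prompt that decomposes the work, then subagents executing each piece. A hypothetical prompt sequence (the wording and task names are illustrative, not any official Claude Code syntax):

```text
1. "Create a task list for restructuring the documentation.
    First task: survey the current state of docs/ and report gaps."
2. "For each gap found, create one task and hand it to a subagent."
3. "Final task: after all subagent tasks complete, re-validate the
    whole docs/ tree for consistency."
```

The structure mirrors how a lead would brief a human team; the difference is that each "team member" spins up in seconds and works in parallel.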