
Comment by thorum

1 day ago

Am I wrong that this entire approach to agent design patterns is based on the assumption that agents are slow? Which, yeah, is very true in January 2026, but we’ve seen that inference gets faster over time. When an agent can complete most tasks in a minute, or a second, parallel agents seem like the wrong direction. It’s not clear how this would be any better than a single Claude Code session (as “orchestrator”) running subagents (which already exist) one at a time.
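The sequential-orchestrator-vs-parallel-fan-out tradeoff here is mostly a latency question, and can be sketched with a toy model. This is a minimal sketch, not the Claude Code subagent API: `run_subagent` is a hypothetical stand-in for a real agent call, with `latency` modeling inference time.

```python
import asyncio
import time

async def run_subagent(task: str, latency: float) -> str:
    # Hypothetical stand-in for a real agent call; sleep models inference time.
    await asyncio.sleep(latency)
    return f"done: {task}"

async def orchestrate(tasks, latency, parallel: bool):
    start = time.perf_counter()
    if parallel:
        # Fan out: total wall time ~= latency of the slowest subagent.
        results = await asyncio.gather(*(run_subagent(t, latency) for t in tasks))
    else:
        # One at a time: total wall time ~= latency * number of tasks.
        results = [await run_subagent(t, latency) for t in tasks]
    return results, time.perf_counter() - start

tasks = [f"task-{i}" for i in range(5)]
results, elapsed = asyncio.run(orchestrate(tasks, 0.01, parallel=True))
```

With 1-minute subagents, the gap between the two branches is minutes and parallelism pays for its complexity; as per-task latency shrinks toward seconds, the gap shrinks with it, which is the commenter's point.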

It's likely, then, that you are thinking too small. Sure, for one-off tasks and small implementations, a single prompt might save you 20-30 minutes. But when you're building an entire library/service/piece of software in 3 days that would normally have taken you 30 days by hand, the real limitation comes down to how fast you can get your design into a structured format, as this article describes.

  • Agree that planning time is the bottleneck, but

    > 3 days

    still seems slow! I’m saying: what happens in 2028 when your entire project is 5-10 minutes of total agent runtime, i.e. time actually spent writing code and implementing your plan? Trying to parallelize 10 minutes of work with a “town” of agents seems like unnecessary complexity.

    • I think most of the anecdotal and research experience I've seen with AI agents so far tells us that you need at least a couple of pass-throughs to converge on a good solution. So even in your future vision, where models are 5x as good as now, I'll still need a few agents to make sure I arrive at a good solution. By "good" I specifically mean a working implementation of the design, not an incorrect assumption about the design that leads the AI off on the wrong path, which seems to be the main issue I keep hearing over and over. So coming back to your point: assuming we can have the 'perfect' design document which lays out everything, yeah, in a few years we'll probably only need like 5 agents total to actually build it.
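The "couple of pass-throughs" loop described above can be sketched as generate-check-revise. All names here are hypothetical illustrations, not part of any real agent framework; `generate` stands in for one agent run and `check` for a test suite or review agent.

```python
def converge(generate, check, max_passes=5):
    """Regenerate with feedback until the checker accepts, up to max_passes."""
    attempt, feedback = None, None
    for passes in range(1, max_passes + 1):
        attempt = generate(feedback)   # one agent run, steered by prior feedback
        ok, feedback = check(attempt)  # e.g. run tests or a review agent
        if ok:
            return attempt, passes
    return attempt, max_passes

# Toy stand-ins: the first pass carries a wrong design assumption,
# and the feedback fixes it on the second pass.
def generate(feedback):
    return "correct" if feedback else "wrong-assumption"

def check(attempt):
    return (attempt == "correct", "design doc says X, you assumed Y")

result, passes = converge(generate, check)
```

The point of the sketch is that even with much faster models, `passes > 1` is the common case, so some small number of agent runs per task remains necessary.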