Comment by jbergqvist

3 days ago

Doesn't this whole argument fall apart if we consider iteration over time? Sure, the initial implementation might be uncoordinated, but once the subagents have implemented it, what stops the main agent from reviewing the code and sorting out any inconsistencies, ultimately arriving at a solution faster than it could if it wrote it by itself?

I'd wager that a "main agent" is really just a bunch of subagents in a sequential trench coat.

In the end, in both cases, it's a back and forth with an LLM, and every request has its own lifecycle. So unfortunately it's at least a networked-systems problem either way. I think your point works with an infinite context window and one-shotting the whole repo every time... Maybe quantum LLM models will enable that

Subagents working on shared state are primarily a context window hack. They're powerful to the extent that they enable solving problems an agent with global state couldn't solve due to context pollution. I'm sure there are caveats, but to first approximation, a main agent that can comprehend the entire code in enough detail to sort out those inconsistencies could have just written the code itself.

Right, but what you're describing is a consensus protocol. It's called two-phase commit. The point of the article is just that we should really be analysing these high-level plans in distributed-algorithms terms, because there are fundamental limitations you can't overcome.
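For anyone unfamiliar with it, here's a minimal sketch of what two-phase commit looks like (all names here are illustrative, not from any real library): a coordinator asks every participant to vote in a prepare phase, and commits only if every vote is yes, otherwise it aborts everyone.

```python
class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit  # whether this participant's prepare succeeds
        self.state = "init"

    def prepare(self):
        # Phase 1: vote yes/no; voting yes is a promise that commit will succeed.
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: coordinator collects votes from all participants.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only on a unanimous yes; otherwise roll everyone back.
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        if p.state == "prepared":
            p.abort()
    return "aborted"
```

The catch, and the reason the "main agent reviews at the end" plan inherits these limits, is that the coordinator is a single point of failure: if it dies between phases, prepared participants block, and no amount of clever prompting changes that.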