Comment by andai
19 days ago
I wonder if the problem of idle time / waiting / breaking flow is a function of the slowness. That would be simple to test, because there are super fast 1000 tok/s providers now.
(Waiting for Cerebras coding plan to stop being sold out ;)
I've used them for smaller tasks (making small edits), and the "realtime" aspect of it does provide a qualitative difference. It stops being async and becomes interactive.
A sufficient shift in quantity produces a phase shift in quality.
--
That said, the main issue I find with agentic is my mental model getting desynchronized. No matter how fast the models get, it takes a fixed amount of time for me to catch up and understand what they've done.
The most enjoyable way I've found of staying synced is to stay in the driver's seat, and to command many small rapid edits manually. (i.e. I have my own homebrew "agent" that's just a loop of, I prompt it, it proposes edits, I accept or edit, repeat.)
So then the "synchronization" of the mental state is happening continuously, because there is no opportunity for desynchronization. Because you are the one driving. I call that approach semi-auto, or Power Coding (akin to Power Armor, which is wielded manually but greatly enhances speed and strength).
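The semi-auto loop described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual tool; `propose_edit` stands in for a real LLM call and is stubbed here so the example is self-contained:

```python
# A sketch of the "power coding" loop: the human stays in the driver's
# seat and approves every small edit before it is applied, so the mental
# model is never allowed to drift out of sync.

def propose_edit(prompt: str, code: str) -> str:
    # Stub standing in for a model call; a real version would send the
    # prompt and current code to an LLM and return a proposed revision.
    return code.replace("TODO", prompt)

def power_code(code: str, prompts, approve=lambda old, new: True) -> str:
    """Apply many small, human-approved edits in sequence."""
    for prompt in prompts:
        proposed = propose_edit(prompt, code)
        if approve(code, proposed):  # human accepts, edits, or rejects
            code = proposed          # every change was seen before landing
    return code

print(power_code("x = TODO", ["1"]))  # prints: x = 1
```

The key property is that synchronization happens continuously inside the loop: there is no batch of unreviewed changes to catch up on afterward.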
> That said, the main issue I find with agentic is my mental model getting desynchronized. No matter how fast the models get, it takes a fixed amount of time for me to catch up and understand what they've done.
This is why I'm so skeptical of anyone running 6+ Claude sessions at a time. I've gotten up to 5, but really that was 3 active sessions with 2 standing by just to commit stuff. And even with just 3 sessions I constantly lost track of where I was and wasted time re-orienting myself, doing work in the wrong session, etc.
>The most enjoyable way I've found of staying synced is to stay in the driver's seat, and to command many small rapid edits manually.
Same, there's a fantastic flow state/momentum I can get in a single session just knocking off features. I don't mind switching between two sessions in this state, but the experience is better when it's two different projects vs two different features on the same project. The complete context switch lets me re-orient more easily.
Warning: I was in two different projects at once, experimenting with similar forms of db access in each. Don't do that.
same
You still have to synchronize with your code reviewers and teammates, so how well you work together in a team becomes a limiting factor at some point then I guess.
Yes, and that constraint shows up surprisingly early.
Even if you eliminate model latency and keep yourself fully in sync via a tight human-in-the-loop workflow, the shared mental model of the team still advances at human speed. Code review, design discussion, and trust-building are all bandwidth-limited in ways that do not benefit much from faster generation.
There is also an asymmetry: local flow can be optimized aggressively, but collaboration introduces checkpoints. Reviewers need time to reconstruct intent, not just verify correctness. If the rate of change exceeds the team’s ability to form that understanding, friction increases: longer reviews, more rework, or a tendency to rubber-stamp changes.
This suggests a practical ceiling where individual "power coding" outpaces team coherence. Past that point, gains need to come from improving shared artifacts rather than raw output: clearer commit structure, smaller diffs, stronger invariants, better automated tests, and more explicit design notes. In other words, the limiting factor shifts from generation speed to synchronization quality across humans.
I've seen this happen over and over again well before LLMs, when teams are sufficiently "code focused" that they don't care much at all about their teammates. The kind that would throw giant architectural changes over a weekend. You then either freeze that person for days, or end up with a codebase nobody remembers, because the big architectural changes happened in secret.
With a good modern setup, everyone can be that "productive", and the only thing that keeps a project coherent is whether the original design holds, which makes rearchitecture a very rare event. It will also push us toward smaller teams in general, because the idea of anyone managing a project with, say, 8 developers writing a codebase at full speed seems impossible, just as it did when we stacked a project with enough high-performance, talented people. It's just harder to keep coherence.
You can see this risk mentioned in The Mythical Man-Month already: the idea of "The Surgical Team", where in practice only a couple of people truly own a codebase, and most of the work we used to hand to juniors is done via AI instead. It'd be quite funny if the way we have to change our team organization moves back toward old recommendations.
This thread seems to have re-identified Amdahl’s law in the context of software development workflow.
Agentic coding is only speeding up or parallelising a small part of the workflow - the rest is still sequential and human-driven.
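For intuition, here is Amdahl's law applied to that observation, with made-up illustrative numbers (the 30% figure is an assumption, not a measurement):

```python
# Amdahl's law: if only a fraction p of the workflow is sped up by a
# factor s, the overall speedup is bounded by the untouched remainder.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Suppose code generation is 30% of the total workflow and becomes
# effectively instantaneous (1000x). The sequential, human-driven 70%
# (review, design, coordination) caps the total gain well below 2x.
print(round(amdahl_speedup(0.3, 1000), 3))  # prints: 1.428
```

This is why even infinitely fast models would not remove the synchronization bottleneck the thread describes.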
I've mostly done solo work, or very small teams with clear separation of concerns. But this reads as less of a case against power coding, and more of a case against teams!
You can ask the agent to reverse engineer its own design and provide a design document that can inform the code review discussion. Plus, hopefully human code review would only occur after several rounds of the agent refactoring its own one-shot slop into something that's up to near-human standards of surveyability and maintainability.
Waiting on AI is its own category, so I’m not entirely sure what ‘idle time’ means. Of course we could just go and read that study…