
Comment by pron

15 hours ago

> If you’re more or less experienced, you can easily see the “good” and “bad” sides of it. So you kinda plan it out in a way that you can “evolve AI generated software”.

If you're truly "managing fleets of agents," there's no way you can sift through the good and the bad in the output. If your AI-generated code is evolvable (which is hard to tell right now), then you're not writing it with "fleets of agents". And if you are writing it with fleets of agents, I would bet it's not evolvable; you just haven't reached the breaking point yet.

We’re not managing fleets of agents. They’re not productive for our workflows yet. It’s usually a couple of CC CLIs running and going back and forth on specific tasks we closely control.

  • My point is that they're not productive for any workflow, because they don't produce sustainable software, and sustainable software is exactly what Armstrong is calling for. They don't work, and people experienced with AI workflows already know that.

    If you review the code and tell the agent to revert when it gets things wrong (not functionally, but architecturally), you're fine. But that's not what I was responding to.