Comment by SpicyLemonZest

7 hours ago

> The obvious objection is that code produced at that speed becomes unmanageable, a liability in itself. That is a reasonable concern, but it largely applies when agents produce code that humans then maintain. Agentic platforms are being iterated upon quickly, and for established patterns and non-business-critical code, which is the majority of what most engineering organizations actually maintain, detailed human familiarity with the codebase matters less than it once did. A messy codebase is still cheaper to send ten agents through than to staff a team around. And even if the agents need ten days to reason through an unfamiliar system, that is still faster and cheaper than most development teams operating today. The liability argument holds in a human-to-human or agent-to-human world. In an agent-to-agent world, it largely dissolves.

I keep seeing this assumption that "unmanageable" caps out at "kinda hard to reason about", and anyone with experience in large codebases can tell you that's not so. There are software components I own today which require me to routinely explain to junior engineers (and indeed to my own instances of Claude) why their PR is unsound and I won't let them merge it no matter how many tests they add.

Yeah, this really breaks down the moment you put that logic up against any sort of compliance testing. Say you fail compliance: your agents have spent weeks on it and they're only adding more bugs. Now what do you do? You have to go into the code yourself. Uh oh.