Comment by SaberTail

17 days ago

I was on a greenfield project late last year with a team that was very enthusiastic about coding agents. I would personally call it a failure, and the project is quietly being wound down after only a few months. It went in a few stages:

At first, it proceeded very quickly. Using agents, the team generated a lot of code very fast and checked off requirements at an amazing pace. PRs were rubber-stamped, and most of the time I tried to offer feedback, I found myself arguing with answers copy-pasted from an agent.

As the components started to get more integrated, things started breaking. At first these were obvious things with easy fixes, like code calling other code with the wrong arguments, and the coding agents could handle those. But a lot of the code was written in the overly defensive style agents are fond of, so there were many more subtle errors: things like the agent silently substituting a default when it hit an invalid value instead of erroring out, so the failure only showed up far away from where the bad value was introduced.
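
To make that concrete, here's a made-up Python sketch of the pattern; the names and domain are invented, not taken from the actual codebase:

    # Agent-style "defensive" version: a bad input silently becomes a
    # plausible-looking number, and the failure only shows up much later,
    # far from the actual cause.
    def parse_concentration(raw: str) -> float:
        try:
            return float(raw)
        except ValueError:
            return 0.0

    # Failing fast keeps the error next to its cause.
    def parse_concentration_strict(raw: str) -> float:
        value = float(raw)  # raises ValueError right here if raw is garbage
        if value < 0:
            raise ValueError(f"concentration must be non-negative, got {value}")
        return value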

At this point, the agents started making things strictly worse because they couldn't fit that much code in their context. Instead of actually fixing bugs, they'd catch any exception and substitute in more defaults. A few engineers tried to manually strip out the defensive code, but they couldn't keep up with the agents. This is also around when the team discovered that most of the tests were effectively "assert true" because they mocked out so much.
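
For anyone who hasn't seen that failure mode, here's a self-contained Python sketch of what those tests amounted to; the pipeline is a stand-in I made up, not our real code:

    from unittest.mock import patch

    # Stand-in pipeline: the real steps would do I/O and actual computation.
    def load_samples():
        raise RuntimeError("real I/O here")

    def normalize(samples):
        raise RuntimeError("real logic here")

    def run():
        return normalize(load_samples())

    # With both steps mocked out, the test exercises no real code and can
    # never fail for a real reason; it's effectively "assert true".
    @patch(__name__ + ".normalize", return_value=["ok"])
    @patch(__name__ + ".load_samples", return_value=["raw"])
    def test_run(mock_load, mock_normalize):
        assert run() == ["ok"]  # passes even though the real code always raises

    test_run()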

We did ship the project, but it shipped in an incredibly buggy state, and also the performance was terrible. And, as I said, it's now being wound down. That's probably the right thing to do because it would be easier to restart from scratch than try to make sense of the mess we ended up with. Agents were used to write the documentation, and very little of it is comprehensible.

We did screw some things up. People were so enthusiastic about agents, and they produced so much code so fast, that code reviews were essentially non-existent. Instead of taking action on feedback in the reviews, a lot of the time there was some LLM-generated "won't do" response that sounded plausible enough that it could convince managers that the reviewers were slowing things down. We also didn't explicitly figure out things like how error-handling or logging should work ahead of time, and so what the agents did was all over the place depending on what was in their context.

Maybe the whole mess was necessary learning as we figure out these new ways of working. Personally, I'm still using coding agents, but very selectively, to "fill in the blanks" in code where I know what it should look like but don't need to write it all by hand myself.

I'd expect this kind of outcome to be common for anything complex, but there seem to be many more claimed success stories than horror stories.

Various possibilities, I suppose: I could just be overly skeptical, failures might be kept on the down low, or, perhaps most likely, many companies haven't yet reached the point where the hidden tech debt of using these things comes due.

Having been through it, what's your current impression of the success stories when you come across them?

  • I'd speculate we had a few factors working against us that made us hit the "limit" sooner.

    Several different engineering teams from different parts of the company had to come together for this, and the overall architecture was modular, so there was a lot of complexity before we had to start integrating. We have some company-wide standards and conventions, but they don't cover everything. To work on the code, you might need to know module A does something one way and module B does it in a different way because different teams were involved. That was implicit in how human engineers worked on it, and so it wasn't explicitly explained to the coding agents.

    The project was in the life sciences space, and the quality of code in the training data has to be worse than for something like a B2B SaaS app. A lot of code in the domain is written by scientists, not software engineers, and only needs to work long enough to publish the paper. So any code an LLM writes is going to look like that by default unless an engineer is paying attention.

    I don't know that either of those would have been insurmountable if the company were willing to burn more tokens, but I'd guess it would take an order of magnitude more than we'd already spent.

    There are politics as well. There have been other changes in the company, and it seems like the current leadership wants to free up resources to work on completely different things, so there's no will to throw more tokens at untangling the mess.

    I don't disbelieve the success stories, but I think most of them either come from following already-successful patterns rather than doing much that's novel, or from companies with much bigger inference budgets. If Anthropic burns a bunch of money to make a C compiler, they can make it back from increased investor hype, but most companies are not in that position.