Comment by max51

19 days ago

>I would argue that it's going to be the opposite. At re:Invent, one of the popular sessions was in creating a trio of SRE agents, one of which did nothing but read logs and report errors, one of which did analysis of the errors and triaged and proposed fixes, and one to do the work and submit PRs to your repo.

If you manage a code base this way at your company, sooner or later you will hit a wall. What happens when the AI can't fix an important bug or is unable to add a very important feature? Now you are stuck with a big fat dirty pile of code that no human can figure out, because it wasn't written by a human and was never designed to be understood by one in the first place.

I treat code quality and readability as one of the goals. The LLM can help with this and refactor code much quicker than a human. If I think the code is getting too complex, I switch over to architecture review and refactoring until I am happy with it.

What happens when humans can't fix a bug or build an important feature? That is a pretty common scenario, and it doesn't result in the doomsday you imply.

  • There will always be bugs you can't fix, but that doesn't mean we should embrace having orders of magnitude more of them. And it's not just about bugs, it's also about adding new features.

    This is tech debt on steroids. You are building an entire code base that no one can read or understand, and praying that the LLM won't fuck up too much. And when it does, no one in the company knows how to deal with it other than by throwing more LLM tokens at it and praying it works.

    As I said earlier, using pure AI agents will work for a while. But when it doesn't, you are fucked.