Comment by metalliqaz
6 days ago
This example seems to keep coming up. Why do you need an AI to run linters? I have found that linters actually add very little value for an experienced programmer, and they actually get in the way when I'm in the middle of active development. I have to say I'm having a hard time visualizing the amazing revolution the author alludes to.
Static errors are caught by linters before runtime errors are caught by a test suite. When you put an LLM in a feedback loop, otherwise known as an agent, each iterative call to the LLM includes the requests to and responses from the linters and test suites. The user, who typically follows along with the entire process, can see that the agent is writing better code than it otherwise would.
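Concretely, the "check" half of one iteration might look like this. A rough sketch only: ruff and pytest are just stand-ins for whatever linter and test runner the project actually uses.

    import subprocess

    def check(path):
        # Lint first: static errors surface before anything is executed.
        lint = subprocess.run(["ruff", "check", path],
                              capture_output=True, text=True)
        if lint.returncode != 0:
            return False, lint.stdout + lint.stderr
        # Only once it lints clean do we pay for a test run.
        tests = subprocess.run(["pytest", path],
                               capture_output=True, text=True)
        return tests.returncode == 0, tests.stdout + tests.stderr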
You're missing the point. The main thing the AI does is generate code from a natural-language description of a problem. The linters, tests, and so on exist to guide that process.
The initial AI-based workflows were "input a prompt into ChatGPT's web UI, copy the output into your editor of choice, run your normal build processes; if it works, great; if not, copy the output back to ChatGPT, get new code, rinse and repeat".
The "agent" stuff is trying to automate this loop. So as a human, you still write more or less the same prompt, but now the agent code automates that loop of generating code with an LLM and running regular tools on it and sending those tools' output back to the LLM until they succeed for you. So, instead of getting code that may not even be in the right programming language as you do from an LLM, you get code that is 100% guaranteed to run and passes your unit tests and any style constraints you may have imposed in your code base, all without extra manual interaction (or you get some kind of error if the problem is too hard for the LLM).