Comment by tptacek
6 days ago
I cut several paragraphs from this explaining how agents work, which I wrote anticipating this exact comment. I'm very happy to have brought you to this moment of understanding --- it's a big one. The answer is "yes, that's exactly what people are doing": "turning LLMs loose" (really, giving them some fixed number of tool calls, some of which might require human approval) to do stuff on real systems. This is exactly what Cursor is about.
I think it's really hard to overstate how important agents are.
We have an intuition for LLMs as a function blob -> blob (really, token -> token, but whatever), and the limitations of such a function, ping-ponging around in its own state space, like a billion monkeys writing plays.
But you can also go blob -> json, and json -> tool-call -> blob. The json->tool interaction isn't stochastic; it's simple systems code (the LLM could indeed screw up the JSON, since that process is stochastic --- but it doesn't matter, because the agent isn't stochastic, won't accept malformed output, and the LLM will just do it over). The json->tool-call->blob process is entirely fixed system code --- and simple code, at that.
Doing this grounds the code generation process. It has a directed stochastic structure, and a closed loop.
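To make that concrete, the fixed-code side of the loop might look something like this (a minimal sketch; `call_llm` and the two tools are made-up stand-ins, not any particular product's API):

```python
import json
import subprocess

def call_llm(prompt: str) -> str:
    """Stand-in for a call to whatever model you're using."""
    ...

# The only things the agent will ever execute; nothing outside this table runs.
TOOLS = {
    "run_tests": lambda args: subprocess.run(["make", "test"],
                                             capture_output=True, text=True).stdout,
    "read_file": lambda args: open(args["path"]).read(),
}

def agent_step(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        reply = call_llm(prompt)           # stochastic: blob -> (hopefully) json
        try:
            call = json.loads(reply)       # deterministic: malformed JSON is rejected
            tool = TOOLS[call["tool"]]     # deterministic: unknown tools are rejected
        except (json.JSONDecodeError, KeyError, TypeError):
            prompt = "That was not a valid tool call. Try again.\n" + prompt
            continue                       # the LLM just does it over
        return tool(call.get("args", {}))  # fixed, simple systems code
    raise RuntimeError("no valid tool call after retries")
```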
I'm sorry, but this doesn't explain anything. Whatever it is you have in your mind, I'm afraid it's not coming across on the page. There is zero chance that I'm going to let an AI start running arbitrary commands on my PC, let alone anything that resembles a commit.
What is an actual, real world example?
This all works something like this: an "agent" is a small program that takes a prompt as input, say "//fix ISSUE-0451".
The agent code runs a regex that recognizes this prompt as a reference to a JIRA issue, and runs a small curl with predefined credentials to download the bug description.
It then assembles a larger text prompt such as "you will act as a master coder to understand and fix the following issue as faithfully as you can: {JIRA bug description inserted here}. You will do so in the context of the following code: {contents of 20 files retrieved from Github based on metadata in the JIRA ticket}. Your answer must be in the format of a Git patch diff that can be applied to one of these files".
This prompt, with the JIRA bug description and code from your Github filled in, will get sent to some LLM chosen by some heuristic built into the agent - say it sends it to ChatGPT.
Then the agent will take the response from ChatGPT and try to parse it as a Git patch. If it respects git patch syntax, it will apply it to the Git repo and run something like `make build test`. If that runs without errors, it will generate a PR in your Github and finally output the link to that PR for you to review.
If any of the steps fails, the agent will generate a new prompt for the LLM and try again, for some fixed number of iterations. It may also try a different LLM or try to generate various follow-ups to the LLM (say, it will send a new prompt in the same "conversation" like "compilation failed with the following issue: {output from make build}. Please fix this and generate a new patch."). If there is no success after some number of tries, it will give up and output error information.
You can imagine many complications to this workflow - the agent may interrogate the LLM for more intermediate steps, it may ask the LLM to generate test code or even to generate calls to other services that the agent will then execute with whatever credentials it has.
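Stripped of the complications, the skeleton is just ordinary glue code. A rough sketch of the flow above (the `call_llm`, `fetch_issue`, and `gather_context` helpers are stand-ins, not any real product's API):

```python
import re
import subprocess

def call_llm(prompt: str) -> str:
    """Stand-in for whatever LLM the agent's heuristic picks."""
    ...

def fetch_issue(issue_id: str) -> str:
    """Stand-in for the curl-with-credentials call against JIRA."""
    ...

def gather_context(issue_id: str) -> str:
    """Stand-in: pull the ~20 files named in the ticket's metadata from the repo."""
    ...

def fix_issue(command: str, max_attempts: int = 5) -> str:
    issue_id = re.match(r"//fix (\S+)", command).group(1)
    prompt = (
        f"You will act as a master coder and fix this issue: {fetch_issue(issue_id)}\n"
        f"You will do so in the context of the following code:\n{gather_context(issue_id)}\n"
        "Your answer must be a Git patch that can be applied to one of these files."
    )
    for _ in range(max_attempts):
        patch = call_llm(prompt)
        applied = subprocess.run(["git", "apply", "-"], input=patch, text=True)
        if applied.returncode != 0:                  # response wasn't a usable patch
            prompt += "\nThat was not a valid git patch. Please generate a new one."
            continue
        build = subprocess.run(["make", "build", "test"], capture_output=True, text=True)
        if build.returncode != 0:                    # patch applied but broke the build
            subprocess.run(["git", "checkout", "--", "."])
            prompt += f"\nCompilation failed with: {build.stdout}\nPlease fix this and generate a new patch."
            continue
        # Branching, committing, and pushing are omitted for brevity.
        pr = subprocess.run(["gh", "pr", "create", "--fill"], capture_output=True, text=True)
        return pr.stdout                             # link for a human to review
    raise RuntimeError(f"gave up on {issue_id} after {max_attempts} attempts")
```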
It's a byzantine concept with lots of jerry-rigging that apparently actually works for some use cases. To me it has always seemed far too much work to get started before finding out if there is any actual benefit for the codebases I work on, so I can't say I have any experience with how well these things work and how much they end up costing.
The commands aren't arbitrary. They're particular: you write the descriptions of the tools it's allowed to use, and it can only invoke those commands.
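The "descriptions of the tools" part is literal: you hand the model a fixed list like the one below (hypothetical tools, written in the JSON-schema style most chat APIs accept), and the agent refuses to execute anything that isn't on it.

```python
# Hypothetical tool descriptions; the model can only *name* one of these,
# and the agent code is the thing that actually executes it.
TOOL_DESCRIPTIONS = [
    {
        "name": "run_tests",
        "description": "Run the project's test suite and return its output.",
        "parameters": {"type": "object", "properties": {}},
    },
    {
        "name": "read_file",
        "description": "Read a single file from the repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
]
```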
I'm interested in playing with this, since reading the article, but I think I will only have it run things in some dedicated VM. If it seems better than other LLM use, I'll gradually rely on it more, but likely keep its actions confined to the VM.
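One way to do that confinement (a sketch, using a Docker container as a stand-in for the dedicated VM; `agent-sandbox` is a made-up name):

```python
import subprocess

SANDBOX = "agent-sandbox"  # a disposable container/VM the agent is confined to

def run_in_sandbox(cmd: list[str]) -> str:
    """Every command the agent requests runs inside the sandbox, never on the host."""
    result = subprocess.run(["docker", "exec", SANDBOX, *cmd],
                            capture_output=True, text=True)
    return result.stdout + result.stderr
```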
> There is zero chance that I'm going to let an AI start running arbitrary commands on my PC
The interfaces prompt you when it wants to run a command, like "The AI wants to run 'cargo add anyhow', is that ok?"
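Under the hood that gate can be as simple as this sketch (the rest of the agent loop is assumed):

```python
import subprocess

def run_with_approval(cmd: list[str]) -> str:
    """Block on an explicit yes before anything touches the machine."""
    answer = input(f"The AI wants to run '{' '.join(cmd)}', is that ok? [y/N] ")
    if answer.strip().lower() != "y":
        return "user declined; command was not run"
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```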
They're not arbitrary, far from it. You have a very constrained set of tools each agent can use. An agent has a "job," if you will.
Maybe the agent feeds your PR to the LLM to generate some feedback, and posts the text to the PR as a comment. Maybe it can also run the linters and use their output as input to the feedback.
But at the end of the day, all it's really doing is posting text to a Github comment. At worst it's useless feedback. And while I personally don't have much AI in my workflow today, when a bunch of smart people are telling me the feedback can be useful, I can't help but be curious!
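For what it's worth, the "posting text to a Github comment" part is one HTTP call to GitHub's issue-comments endpoint. A sketch (`call_llm`, the `make lint` step, and the token handling are placeholders):

```python
import subprocess
import requests

def call_llm(prompt: str) -> str:
    """Stand-in for the model call."""
    ...

def review_pr(repo: str, pr_number: int, diff: str, token: str) -> None:
    # Optionally fold linter output into the prompt ("make lint" is a placeholder).
    lint = subprocess.run(["make", "lint"], capture_output=True, text=True)
    feedback = call_llm(f"Review this diff:\n{diff}\n\nLinter output:\n{lint.stdout}")
    # PR comments go through GitHub's issue-comments API; at worst this posts useless text.
    requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers={"Authorization": f"Bearer {token}"},
        json={"body": feedback},
        timeout=30,
    )
```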