Comment by tptacek
3 days ago
No, what's happening here is we're talking past each other.
An agent lints and compiles code. The LLM is stochastic and unreliable. The agent is ~200 lines of Python code that checks the exit code of the compiler and relays it back to the LLM. You can easily fool an LLM. You can't fool the compiler.
I didn't say anything about whether code needs to be reviewed line-by-line by humans. I review LLM code line-by-line. Lots of code that compiles clean is nonetheless horrible. But none of it includes hallucinated API calls.
Also, from where did this "you seem to have a fundamental belief" stuff come from? You had like 35 words to go on.
> If the LLM hallucinates, then the code it produces is wrong. That wrong code isn't obviously or programmatically determinable as wrong, the agent has no way to figure out that it's wrong, it's not as if the LLM produces at the same time tests that identify that hallucinated code as being wrong. The only way that this wrong code can be identified as wrong is by the human user "looking closely" and figuring out that it is wrong
The LLM can easily hallucinate code that will satisfy the agent and the compiler but will still fail the actual intent of the user.
> I review LLM code line-by-line. Lots of code that compiles clean is nonetheless horrible.
Indeed most code that LLMs generate compiles clean and is nevertheless horrible! I'm happy that you recognize this truth, but the fact that you review that LLM-generated code line-by-line makes you an extraordinary exception vs. the normal user, who generates LLM code and absolutely does not review it line-by-line.
> But none of [the LLM generated code] includes hallucinated API calls.
Hallucinated API calls are just one of many many possible kinds of hallucinated code that an LLM can generate, by no means does "hallucinated code" describe only "hallucinated API calls" -- !
When you say "the LLM can easily hallucinate code that will satisfy the compiler but still fail the actual intent of the user", all you are saying is that the code will have bugs. My code has bugs. So does yours. You don't get to use the fancy word "hallucination" for reasonable-looking, readable code that compiles and lints but has bugs.
I think at this point our respective points have been made, and we can wrap it up here.
> When you say "the LLM can easily hallucinate code that will satisfy the compiler but still fail the actual intent of the user", all you are saying is that the code will have bugs. My code has bugs. So does yours. You don't get to use the fancy word "hallucination" for reasonable-looking, readable code that compiles and lints but has bugs.
There is an obvious and categorical difference between the "bugs" that an LLM produces as part of its generated code, and the "bugs" that I produce as part of the code that I write. You don't get to conflate these two classes of bugs as though they are equivalent, or even comparable. They aren't.
2 replies →
Hallucination is a fancy word?
The parent seems to be, in part, referring to "reward hacking", which tends to be used as a super category to what many refer to as slop, hallucination, cheating, and so on.
https://courses.physics.illinois.edu/ece448/sp2025/slides/le...
You seem to be using "hallucinate" to mean "makes mistakes".
That's not how I use it. I see hallucination as a very specific kind of mistake: one where the LLM outputs something that is entirely fabricated, like a class method that doesn't exist.
The agent compiler/linter loop can entirely eradicate those. That doesn't mean the LLM won't make plenty of other mistakes that don't qualify as hallucinations by the definition I use!
It's newts and salamanders. Every newt is a salamander, not every salamander is a newt. Every hallucination is a mistake, not every mistake is a hallucination.
https://simonwillison.net/2025/Mar/2/hallucinations-in-code/
I'm not using "hallucinate" to mean "makes mistakes". I'm using it to mean "code that is syntactically correct and passes tests but is semantically incoherent". Which is the same thing that "hallucination" normally means in the context of a typical user LLM chat session.
Linting isn't verification of correctness, and yes, you can fool the compiler, linters, etc. Work with some human interns, they are great at it. Agents will do crazy things to get around linting errors, including removing functionality.
have you no tests?
Irrelevant, really. Tests establish a minimum threshold of acceptability, they don't (and can't) guarantee anything like overall correctness.
4 replies →
My guy didn't you spend like half your life in the field where your job was to sift through code that compiled but nonetheless had bugs that you tried to exploit? How can you possibly have this belief about AI generated code?
I don't understand this question. Yes, I spent about 20 years learning the lesson that code is profoundly knowable; to start with, you just read it. What challenge do you believe AI-generated code presents to me?