Comment by tptacek
10 months ago
There's a thru-line to commentary from experienced programmers on working with LLMs, and it's confusing to me:
> Although pandas is the standard for manipulating tabular data in Python and has been around since 2008, I’ve been using the relatively new polars library exclusively, and I’ve noticed that LLMs tend to hallucinate polars functions as if they were pandas functions which requires documentation deep dives to confirm which became annoying.
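To make the complaint concrete, the failure usually has this shape (an illustrative snippet, not taken from the post): the model reaches for pandas-style chained indexing that polars doesn't have, instead of the polars expression API.

    import polars as pl

    df = pl.DataFrame({"group": ["a", "a", "b"], "value": [1.0, 2.0, 4.0]})

    # Pandas-style call an LLM tends to produce; recent polars has no such
    # chained indexing, so this fails at runtime:
    #   df.groupby("group")["value"].mean()

    # The actual polars expression API:
    out = df.group_by("group").agg(pl.col("value").mean())
    print(out)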
The post does later touch on coding agents (Max doesn't use them because "they're distracting", which, as a person who can't even stand autocomplete, is a position I'm sympathetic to), but still: coding agents solve the core problem he just described. "Raw" LLMs set loose on coding tasks throwing code onto a blank page hallucinate stuff. But agenty LLM configurations aren't just the LLM; they're also code that structures the LLM interactions. When the LLM behind a coding agent hallucinates a function, the program doesn't compile, the agent notices it, and the LLM iterates. You don't even notice it's happening unless you're watching very carefully.
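To be concrete about "code that structures the LLM interactions", here is a minimal sketch of that loop (generate_code is a hypothetical stand-in for whatever model call the agent makes; real agents also run linters, type checkers and tests, not just the compile check shown here):

    def generate_code(prompt: str) -> str:
        """Hypothetical stand-in for the model call behind the agent."""
        raise NotImplementedError

    def agent_loop(task: str, max_attempts: int = 5) -> str:
        feedback = ""
        for _ in range(max_attempts):
            source = generate_code(task + feedback)
            try:
                # Cheap static check: syntax errors surface here. A real agent
                # would also run linters/tests and feed those results back,
                # which is what catches hallucinated APIs.
                compile(source, "<generated>", "exec")
            except SyntaxError as err:
                feedback = f"\nPrevious attempt failed to compile: {err}"
                continue
            return source
        raise RuntimeError("agent gave up after repeated attempts")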
So in my interactions with GPT, o3 and o4-mini, I am the organic middle man who copies and pastes code into the REPL and reports the output back to the model if anything is a problem. And for me, past a certain point, even if you continually report back problems it doesn't get any better in its new suggestions. It will just spin its wheels. So for that reason I'm a little skeptical about the value of automating this process. Maybe the LLMs you are using are better than the ones I tried this with?
Specifically, I was researching a lesser-known Kafka-MQTT connector: https://docs.lenses.io/latest/connectors/kafka-connectors/si..., and o1 was hallucinating the configuration needed to support dynamic topics. The docs said one thing, and I even pointed out to o1 that the docs contradicted it, but it would stick to its guns. If I mentioned that the code wouldn't compile, it would start suggesting very implausible scenarios -- did you spell this correctly? Responses like that indicate you've reached a dead end. I'm curious how/if the "structured LLM interactions" you mention overcome this.
> And for me, past a certain point, even if you continually report back problems it doesn't get any better in its new suggestions. It will just spin its wheels. So for that reason I'm a little skeptical about the value of automating this process.
It sucks, but the trick is to always restart the conversation/chat with a new message. I never go beyond one reply, and I also copy-paste a bunch. I got tired of copy-pasting and wrote something like a prompting manager (https://github.com/victorb/prompta) to make it easier and to avoid having to neatly format code blocks and so on.
Basically, make one message; if the model gets the reply wrong, iterate on the prompt itself and start fresh, always. Don't try to correct it by adding another message; update the initial prompt to make it clearer and steer it more.
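As a sketch of that habit, assuming the OpenAI Python client (model name and prompt here are placeholders), every attempt is a brand-new single-turn request rather than another message appended to a growing chat:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def one_shot(prompt: str, model: str = "gpt-4o") -> str:
        # Fresh conversation every time: one user message, no history.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    draft = "Write a polars snippet that groups by 'user_id' and sums 'amount'."
    answer = one_shot(draft)
    # If the answer is wrong, edit `draft` itself and call one_shot() again,
    # instead of appending a correction to the same conversation.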
But I've noticed that every model degrades really quickly past the initial reply, no matter the length of each individual message. The companies seem to keep increasing the theoretical and practical context limits, but the quality degrades a lot faster, even well within the context limits, and they don't seem to be trying to address that (nor do they have a way of measuring it).
This is my experience as well, as has been for over a year now.
LLMs are so incredibly transformative when they're incredibly transformative. And when they aren't, it's much better to fall back on the years of hard-won experience I have - the sooner the better. For example, I'll switch between projects and languages, and even with explicit instruction to move to a strongly typed language they'll stick to dynamic answers. It's an odd experience to re-find my skills every once in a while. "Oh yeah, I'm pretty good at reading docs myself".
With all the incredible leaps in LLMs being reported (especially here on HN) I really haven't seen much of a difference in quite a while.
In other words, don't use the context window. Treat it as a command line with input/output, where the purpose of each command is to extract an information signal: knowledge manipulation, data mining, and so on.
Also, special care has to be given to the number of tokens. Even with one question/one answer, our artificial overlords can only really focus on about 500 to 1,000 tokens at once. After that they start losing their marbles. Reasoning models are an exception to that rule, but in essence they are not that different.
The difference between using the tool correctly and not might be that instead of 99.9% accuracy the user gets just 98%. That probably doesn't sound like a big difference to some people, but it means a 0.1% error rate instead of 2% - roughly an order of magnitude fewer failures in the first case.
Aider, the tool, does exactly the opposite, in my experience.
It really works, for me. It iterates by itself and fixes the problem.
I refuse to stop being a middle man, because I can often catch a really bad implementation early and course-correct - e.g. a function that solves a problem with a series of nested loops when it could be done several orders of magnitude faster using the vectorised operations offered by common packages like numpy.
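A toy illustration of the kind of rewrite I mean (not from any real session):

    import numpy as np

    a = np.random.rand(1000)
    b = np.random.rand(1000)

    # What a naive generation might look like: explicit Python loops.
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]

    # The vectorised equivalent, orders of magnitude faster on large arrays.
    total_fast = np.dot(a, b)

    assert np.isclose(total, total_fast)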
Even with all the coding agent magik people harp on about, I've never seen something that can write clean, good-quality code reliably. I'd prefer to tell an LLM what a function's purpose is, what kind of information and data structures it can expect and what it should output, see what it produces, provide feedback, and get a rather workable, often perfect, function in return.
If I get it to write the whole thing in one go, I cannot imagine the pain of having to find out where the fuckery is that slows everything down, without diving deep with profilers etc. - all for a problem I could have solved by just playing middle man, keeping a close eye on how things are building up, and staying in charge of ensuring the overarching vision is achieved as required.
> If I mentioned that the code wouldn't compile it would start suggesting very implausible scenarios
I have to chuckle at that because it reminds me of a typical response on technical forums long before LLMs were invented.
Maybe the LLM has actually learned from those responses and is imitating them.
It seems no discussion of LLMs on HN these days is complete without a commenter wryly observing that whatever specific issue someone points to with an LLM is also, funnily enough, an issue they've seen with humans. The implication always seems to be that this somehow bolsters the idea that LLMs are therefore in some sense and to some degree human-like.
Humans not being infallible superintelligences does not mean that the thing that LLMs are doing is the same thing we do when we think, create, reason, etc. I would like to imagine that most serious people who use LLMs know this, but sometimes it's hard to be sure.
Is there a name for the "humans stupid --> LLMs smart" fallacy?
You can have the agent search the web for documentation and then provide it to the LLM. That's why Context7 is currently very popular with the AI user crowd.
I used o4 to generate NixOS config files from pasted module source files. At first it produced outdated config stuff, but with context files it worked very well.
Kagi Assistant can do this too but I find it's mostly useful because the traditional search function can find the pages the LLM loaded into its context before it started to output bullshit.
That's nice to have when the LLM outputs bullshit, which is frequent.
Seriously, Cursor (using Claude 3.5) does this all the time. It ends up with a pile of junk because it will introduce errors while fixing something, then go in a loop trying to fix the errors it created and slap more garbage on top of those.
Because it's directly editing code in the IDE instead of me transferring sections of code from a chat window, the large amount of bad code it writes is much more apparent.
I wonder if LLMs have been seen claiming “THERE’S A BUG IN THE COMPILER!”
A stage every developer goes through early in their development.
Gemini 2.5 got into as close to a heated argument with me as possible about the existence of a function in the Kotlin coroutines library that was never part of the library (but does exist as a 5-year-old PR, still visible on GitHub, that was never merged in).
It initially suggested I use the function as part of a solution, suggesting it was part of the base library and could be imported as such. When I told it that function didn't exist within the library it got obstinate and argued back and forth with me to the point where it told me it couldn't help me with that issue anymore but would love to help me with other things. It was surprisingly insistent that I must be importing the wrong library version or doing something else wrong.
When I got rid of that chat's context and asked it about the existence of that function more directly, without the LLM first suggesting its use to me, it replied correctly that the function doesn't exist in the library but that the concept is easy to implement... the joys(?) of using an LLM and having it go in wildly different directions depending upon the starting point.
I'm used to the opposite situation where an LLM will slide into sycophantic agreeable hallucinations so it was in a way kind of refreshing for Gemini to not do this, but on the other hand for it to be so confidently and provably wrong (while also standing its ground on its wrongness) got me unreasonably pissed off at it in a way that I don't experience when an LLM is wrong in the other direction.
> It will just spin its wheels. So for that reason I'm a little skeptical about the value of automating this process.
The question is whether you'd rather find out it got stuck in a loop after 3 minutes with a coding agent or after 40 minutes of copy-pasting. It can also get out of loops more often by being able to use tools to look up definitions with grep, ctags or language server tools; you can copy-paste commands for that too, but it will be much slower.
For several moments in the article I had to struggle to continue. He is literally saying "as an experienced LLM user I have no experience with the latest tools". He gives a rationale as to why he hasn't used the latest tools, which is basically that he doesn't believe they will help and doesn't want to pay the cost to find out.
I think if you are going to claim you have an opinion based on experience you should probably, at the least, experience the thing you are trying to state your opinion on. It's probably not enough to imagine the experience you would have and then go with that.
He does partially address this elsewhere in the blog post. It seems that he's mostly concerned about surprise costs:
> On paper, coding agents should be able to address my complaints with LLM-generated code reliability since it inherently double-checks itself and it’s able to incorporate the context of an entire code project. However, I have also heard the horror stories of people spending hundreds of dollars by accident and not get anything that solves their coding problems. There’s a fine line between experimenting with code generation and gambling with code generation.
Less surprise costs, more wasting money and not getting proportionate value out of it.
> But agenty LLM configurations aren't just the LLM; they're also code that structures the LLM interactions. When the LLM behind a coding agent hallucinates a function, the program doesn't compile, the agent notices it, and the LLM iterates.
This describes the simplest and most benign case of code assistants messing up. This isn't the problem.
The problem is when the code does compile, but contains logical errors, security f_ckups, performance dragdowns, or missed functionality. Because none of those will be caught by something as obvious as a compiler error.
And no, "let the AI write tests" won't catch them either, because that's not a solution, that's just kicking the can down the road... because if we cannot trust the AI to write correct code, why would we assume that it can write correct tests for that code?
What will ultimately catch those, is the poor sod in the data center, who, at 03:00 AM has to ring the on-call engineer out of his bed, because the production server went SNAFU.
And when the oncall then has to rely on "AI" to fix the mess, because he didn't actually write the code himself, and really doesn't even know the codebase any more (or even worse: Doesn't even understand the libraries and language used at all, because he is completely reliant on the LLM doing that for him), companies, and their customers, will be in real trouble. It will be the IT equivalent of attorneys showing up in court with papers containing case references that were hallucinated by some LLM.
Have you tried it? In my experience they just go off on a hallucination loop, or blow up the code base with terrible re-implementations.
Similarly, Claude 3.5 was stuck on TensorRT 8, and not even pointing it at the documentation for the updated TensorRT 10 APIs for RAG could ever get it to correctly use the new APIs (not that they were very complex: bind tensors, execute, retrieve results). The whole concept of the self-reinforcing agent loop is more of a fantasy. I think someone else likened it to a lawnmower that will rampage over your flower bed at the first hiccup.
Yes, they're part of my daily toolset. And yes, they can spin out. I just hit the "reject" button when they do, and revise my prompt. Or, sometimes, I just take over and fill in some of the structure of the problem I'm trying to solve myself.
I don't know about "self-reinforcing". I'm just saying: coding agents compile and lint the code they're running, and when they hallucinate interfaces, they notice. The same way any developer who has ever used ChatGPT knows that you can paste most errors into the web page and it will often (maybe even usually) come up with an apposite fix. I don't understand how anybody expects to convince LLM users this doesn't work; it obviously does work.
> I don't understand how anybody expects to convince LLM users this doesn't work; it obviously does work.
This is really one of the hugest divides I've seen in the discourse about this: anti-LLM people saying very obviously untrue things, which is uh, kind of hilarious in a meta way.
https://bsky.app/profile/caseynewton.bsky.social/post/3lo4td... is an instance of this from a few days ago.
I am still trying to sort out why experiences are so divergent. I've had much more positive LLM experiences while coding than many other people seem to, even as someone who's deeply skeptical of what's being promised about them. I don't know how to reconcile the two.
> I think someone else likened it to a lawnmower that will run rampage over your flower bed at the first hiccup
This reminds me of a scene from the recent animated movie "Wallace and Gromit: Vengeance Most Fowl" where Wallace actually uses a robot (Norbot) to do gardening tasks, and it rampages over Gromit's flower bed.
https://youtu.be/_Ha3fyDIXnc
I mean, I have. I use them every day. You often see them literally saying "Oh there is a linter error, let me go fix it" and then a new code generation pass happens. In the worst case, it does exactly what you are saying, gets stuck in a loop. It eventually gets to the point where it says "let me try just once more" and then gives up.
And when that happens I review the code and if it is bad then I "git revert". And if it is 90% of the way there I fix it up and move on.
The question shouldn't be "are they infallible tools of perfection". It should be "do I get value equal to or greater than the time/money I spend". And if you use git appropriately you lose at most five minutes on an agent looping. And that happens a couple of times a week.
And be honest with yourself: is getting stuck in a loop fighting a compiler, type-checker or linter something you have ever experienced in your pre-LLM days?
Have you tried it? More than once?
I’m getting massive productivity gains with Cursor and Gemini 2.5 or Claude 3.7.
One-shotting whole features into my rust codebase.
I use it all the time, multiple times daily. But the discussion is not being very honest, particularly for all the things that are being bolted on (agent mode, MCP). Like just upstream, people dunk on others for pointing out that maybe giving the model an API call to read webpages isn't quite turning LLMs into search engines. Just like letting it run shell commands has not made it into a full-blown agent engineer.
I tried it again just now with Claude 3.7 in Cursor's Agent/Compose (they change this stuff weekly). The task: write a simple C++ TensorRT app that loads an engine and runs inference 100 times for a benchmark, using this file to source a toolchain. It generated code with the old API, a CMake file and (warning light turns on) a build script. The compile fails because of the old API, but this time it managed to fix it to use the new API.
But now the linking fails, because it overwrote the TRT/CUDA directories in the CMakeLists with some home cooked logic (there was nothing to do, the toolchain script sets up the environment fully and just find_package would work).
And this is where we go off the rails; it messes with the build script and CMakeLists more, but still it cannot link. It thinks, hey, it looks like we are cross-compiling, and creates a second build script "cross-compile.sh" that tries to use the compiler directly, but of course that misses things that find_package in CMake would set up and so fails with include errors.
It pretends it's a 1970s ./configure script and creates source files "test_nvinfer.cpp" and "test_cudart.cpp" that are supposed to test for the presence of those libraries, then tries to compile them directly; again it's missing directories and obviously fails.
Next we create a mashup build script "cross-compile-direct.sh". Not sure anymore what this one tried to achieve, didn't work.
Finally, and this is my favorite agent action yet, it decides fuck it, if the library won't link, why don't we just mock out all the actual TensorRT/CUDA functionality and print fake benchmark numbers to demonstrate LLMs can average a number in C++. So it writes, builds and runs a "benchmark_mock.cpp" that subs out all the useful functionality for random data from std::mt19937. This naturally works, so the agent declares success, happily updates the README.md with all the crap it added, and stops.
This is what running the lawnmower over the flower bed means; you have 5 more useless source files and a bunch more shell scripts and a bunch of crap in a README that were all generated to try and fail to fix a problem it could not figure out, and this loop can keep going and generate more nonsense ad infinitum.
(Why could it not figure out the linking error? We come back to the shitty bolted-on integrations; it doesn't actually query the environment, search for files or look at what link directories are being used, as one would when investigating a linking error. It could, of course, but the balance in these integrations is 99% LLM and 1% tool use, and even the context from the tool use often doesn't help.)
Someone gave me the tip to add "all source files should build without error", which you'd think would be implicit, but it seems not.
There's definitely a skill to using them well (I am not yet an expert); my only frustration is with people who (like me) haven't refined the skill but have also concluded that there's no benefit to the tool. No, really, in this case, you're mostly just not holding it right.
The tools will get better, but from what I see happening with people who are good at using them (and from my own code, even with my degraded LLM usage), we already have an existence proof of the value of the tools.
There’s an argument that library authors should consider implementing those hallucinated functions, not because it’ll be easier for LLMs but because the hallucination is a statement about what an average user might expect to be there.
I really dislike libraries that have their own bespoke ways of doing things for no especially good reason. Don’t try to be cute. I don’t want to remember your specific API, I want an intuitive API so I spend less time looking up syntax and more time solving the actual problem.
There's also an argument that developers of new software, including libraries, should consider making an earnest attempt to do The Right Thing instead of re-implementing old, flawed designs and APIs for familiarity's sake. We have enough regression to the mean already.
The more LLMs are entrenched and required, the less we're able to do The Right Thing in the future. Time will be frozen, and we'll be stuck with the current mean forever. LLMs are notoriously bad at understanding anything that isn't mappable in some way to pre-existing constructs.
> for no especially good reason
That's a major qualifier.
Polars has their own LLM customised for the docs:
https://docs.pola.rs/api/python/stable/reference/
I would say this is a pretty good approach to combat the previous problem.
That sort of "REPL" system is why I really liked it when they integrated a Python VM into ChatGPT - it wasn't perfect, but it could at least catch itself when the code didn't execute properly.
Sure. But it's 2025 and however you want to get this feature, be it as something integrated into VSCode (Cursor, Windsurf, Copilot), or a command line Python thing (aider), or a command line Node thing (OpenAI codex and Claude Code), with a specific frontier coding model or with an abstracted multi-model thingy, even as an Emacs library, it's available now.
I see people getting LLMs to generate code in isolation and like pasting it into a text editor and trying it, and then getting frustrated, and it's like, that's not how you're supposed to be doing it anymore. That's 2024 praxis.
The churn of staying on top of this means, to me, that we'll also chew through the experts of specific times much faster. Gone are the days of established, trusted top performers, as every other week somebody creates a newer, better way of doing things. Everybody is going to drop off the hot tech at some point. Very exhausting.
I like using Jupyter Console as a primary interpreter, and then dropping into SQLite/duckdb to save data.
It's easy to script/autogenerate code and build out pipelines this way.
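Roughly this pattern, as a sketch (the file and table names are made up; duckdb can query an in-scope pandas DataFrame by name):

    import duckdb
    import pandas as pd

    # Built interactively in the console...
    df = pd.DataFrame({"ts": [1, 2, 3], "value": [10.0, 12.5, 9.8]})

    # ...then dropped into an on-disk database for later pipeline stages.
    con = duckdb.connect("pipeline.duckdb")
    con.execute("CREATE OR REPLACE TABLE readings AS SELECT * FROM df")
    print(con.execute("SELECT count(*), avg(value) FROM readings").fetchone())
    con.close()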
It is a little crazy how fast this has changed in the past year. I got VSCode's agent mode to write, run, and read the output of unit tests the other day and boy it's a game changer.
This has been my experience with any LLM I use as a code assistant. Currently I mostly use Claude 3.5, although I sometimes use Deepseek or Gemini.
The more prominent and widely used a language/library/framework, and the more "common" what you are attempting, the more accurate LLMs tend to be. The more you deviate from mainstream paths, the more you will hit such problems.
Which is why I find them most useful for helping me build things when I am very familiar with the subject matter, because at that point I can quickly spot misconceptions, errors, bugs, etc.
That's when it hits the sweet spot of being a productivity tool, really improving the speed with which I write code (and sometimes improving the quality of what I write, by incorporating good practices I was unaware of).
> The more prominent and widely used a language/library/framework, and the more "common" what you are attempting, the more accurate LLMs tend to be. The more you deviate from mainstream paths, the more you will hit such problems.
One very interesting variant of this: I've been experimenting with LLMs in a react-router based project. There's an interesting development history here: there's another project called Remix, and later versions of react-router effectively ate it - that is, as of December of last year, react-router 7 is effectively also Remix v3. https://remix.run/blog/merging-remix-and-react-router
Sometimes, the LLM will be like "oh, I didn't realize you were using remix" and start importing from it, when I in fact want the same imports, but from react-router.
All of this happened so recently, it doesn't surprise me that it's a bit wonky at this, but it's also kind of amusing.
I ran into this as well, but now I have given standing instructions for the LLM to pull the latest RR docs anytime it needs to work with RR. That has solved the entire issue.
In addition to choosing languages, patterns and frameworks that the LLM is likely to be well trained in, I also just ask it how it wants to do things.
For example, I don't like ORMs. There are reasons which aren't super important but I tend to prefer SQL directly or a simple query builder pattern. But I did a chain of messages with LLMs asking which would be better for LLM based development. The LLM made a compelling case as to why an ORM with a schema that generated a typed client would be better if I expected LLM coding agents to write a significant amount of the business logic that accessed the DB.
My dislike of ORMs is something I hold lightly. If I was writing 100% of the code myself then I would have breezed past that decision. But with the agentic code assistants as my partners, I can make decisions that make their job easier from their point of view.
Cursor also can read and store documentation so it's always up to date [0]. Surprised that many people I talk to about Cursor don't know about this, it's one of its biggest strengths compared to other tools.
[0] https://docs.cursor.com/context/@-symbols/@-docs
>Although pandas is the standard for manipulating tabular data in Python and has been around since 2008, I’ve been using the relatively new polars library exclusively, and I’ve noticed that LLMs tend to hallucinate polars functions as if they were pandas functions which requires documentation deep dives to confirm which became annoying.
Funnily enough, I was trying to deal with some lesser-used parts of pandas via an LLM and it kept sending me back through a deprecated function for everything. It was quite frustrating.
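A classic instance of this (not necessarily the function I hit): models trained on older tutorials keep reaching for DataFrame.append, which was deprecated in pandas 1.4 and removed in 2.0 in favour of pd.concat.

    import pandas as pd

    a = pd.DataFrame({"x": [1, 2]})
    b = pd.DataFrame({"x": [3, 4]})

    # Old-tutorial style; deprecated in pandas 1.4 and removed in 2.0:
    #   a.append(b, ignore_index=True)

    # Current API:
    combined = pd.concat([a, b], ignore_index=True)
    print(combined)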
This is because the training data for pandas code is not great. It's a lot of non-programmers banging keys until it works, or a bunch of newbie-focused blog posts that endorse bad practices.
That tracks
> the program doesn't compile
How does this even make sense when the "agent" is generating Python? There are several ways it can generate code that runs, and even does the thing, and still has severe issues.
Are you implying that you can actually let agents run loose to autonomously fix things without just creating a mess? Because that's not a thing that you can really do in real life, at least not for anything but the most trivial tasks.
When there's is an AI that writes Polars code correctly, please let me know.
How much money do you spend a day working like this?
I haven't spent many days or full days, but when I've toyed with this, it ends up at about $10/hour or maybe a bit less.
> the program doesn't compile
The issue you are addressing refers specifically to Python, which is not compiled... Are you referring to this workflow in another language, or by "compile" do you mean something else, such as using static checkers or tests?
Also, what tooling do you use to implement this workflow? Cursor, aider, something else?
Python is, in fact, compiled (to bytecode, not native code); while this is mostly invisible, syntax errors will cause it to fail to compile, but the circumstances described (hallucinating a function) will not, because function calls are resolved by runtime lookup, not at compile time.
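To make the distinction concrete (illustrative snippet; math.cbrt2 is deliberately made up):

    # Syntax errors are caught when the source is compiled to bytecode:
    try:
        compile("def f(:\n    pass", "<llm_output>", "exec")
    except SyntaxError as err:
        print("compile-time failure:", err)

    # A hallucinated function compiles fine and only blows up when executed:
    bytecode = compile("import math\nmath.cbrt2(8)\n", "<llm_output>", "exec")
    try:
        exec(bytecode)
    except AttributeError as err:
        print("runtime failure:", err)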
I get that, and in that sense most languages are compiled, but generally speaking, I've always understood "compiled" as compiled-ahead-of-time - Python certainly doesn't do that and the official docs call it an interpreted language.
In the context we are talking about (hallucinating Polars methods), if I'm not mistaken the compilation step won't catch that; Python will actually throw the error at runtime, post-compilation.
So my question still stands on what OP means by "won't compile".
Yes, but it gets feedback from the IDE. Cursor is the best here.