Using coding agents to track down the root cause of bugs like this works really well:
> Three out of three one-shot debugging hits with no help is extremely impressive. Importantly, there is no need to trust the LLM or review its output when its job is just saving me an hour or two by telling me where the bug is, for me to reason about it and fix it.
The approach described here could also be a good way for LLM-skeptics to start exploring how these tools can help them without feeling like they're cheating, ripping off the work of everyone whose code was used to train the model, or taking away the most fun part of their job (writing code).
Have the coding agents do the work of digging around to hunt down those frustratingly difficult bugs - don't have them write code on your behalf.
I understand the pitch here ("it finds bugs! it's basically all upside because worst case there's no output anyways"), but I'm finding some of these agents to be ... uhhh... kind of aggressive at trying to find the solution, and they end up missing the forest for the trees. And there's some "oh you should fix this" stuff which, while sometimes not _wrong_, is completely beside the point.
The end result is these robots doing bikeshedding. When paired with junior engineers who look at this output and decide to act on it, it just generates busywork. It doesn't help that everyone and their dog wants to automatically run their agent against PRs now.
I'm trying to use these to some extent when I find myself in a canonical situation where they should work, and in many cases I'm not getting the value everyone else seems to get. Very much the "explaining a thing to a junior engineer takes more time than doing it myself" problem, except at least the junior is a person.
When models start to forage around in the weeds, it's a good idea to restart the session and add more information to the prompt about what it should ignore or assume. For example, in ML projects, Claude gets very worried that datasets aren't available or are perhaps responsible. Usually if you tell it where you suspect the bug to be (or state it outright as fact, even if you're unsure), it will focus on that. Or, make it give you a list of concerns and ask you which are valid.
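A concrete sketch of what that kind of kickoff prompt looks like (file names and details below are invented for illustration, not from any real project):

```bash
# Hypothetical example: name the suspected location, state what to assume,
# and explicitly rule out the distractions the model tends to chase.
claude -p "train.py produces NaN losses after a few hundred steps.
Assume the dataset is present and correct; do not investigate data loading.
I suspect the learning-rate schedule in schedule.py. Find the root cause only; do not propose fixes."
```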
I've found that having local clones of large library repos (or telling it to look in the environment for packages) is far more effective than relying on built-in knowledge or lousy web search. It can also use ast-grep on those. For some reason the agent frameworks are still terrible at looking up references in a sane way (where in an IDE you would simply go to the declaration).
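Nothing fancy is needed; something like this works (the repo, path, and searched call are just examples, and this assumes ast-grep's default `run` subcommand):

```bash
# Shallow-clone the dependency somewhere the agent is allowed to read, so it greps
# real sources instead of relying on built-in knowledge (repo and path are examples).
git clone --depth 1 https://github.com/golang/crypto /tmp/deps/golang-crypto

# Structural search: find call sites of a function rather than fragile text matches.
ast-grep --pattern 'hkdf.New($$$ARGS)' --lang go /tmp/deps/golang-crypto
```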
Sometimes you hit a wall where something is simply outside of the LLM's ability to handle, and it's best to give up and do it yourself. Knowing when to give up may be the hardest part of coding with LLMs.
Notably, these walls are never where I expect them to be—despite my best efforts, I can't find any sort of pattern. LLMs can find really tricky bugs and get completely stuck on relatively simple ones.
Communication with a person is more difficult and the feedback loop is much, much longer. I can almost instantly tell whether Claude has understood the mission or digested context correctly.
I would say a lot of people are only posting their positive experiences. Stating negative things about AI is mildly career-dangerous at the moment, whereas the opposite looks good. I found the results from using it on a complicated code base to be similar to yours, but it is very good at slapping things on until something works.
If you're not watching it like a hawk it will solve a problem in a way that is inconsistent and, importantly, not integrated into the system. Which makes sense, it's been trained to generate code, and it will.
> I understand the pitch here ("it finds bugs! it's basically all upside because worst case there's no output anyways"), but I'm finding some of these agents to be ... uhhh... kind of aggressive at trying to find the solution, and they end up missing the forest for the trees. And there's some "oh you should fix this" stuff which, while sometimes not _wrong_, is completely beside the point.
How long/big do your system/developer/user prompts end up being typically?
The times people seem to be getting "less than ideal" responses from LLMs tend to be when they're not spending enough time setting up a general prompt they can reuse, describing exactly what they want and do not want.
So in your case, you need to steer it to do less outside of what you've told it. Adding things like "Don't do anything outside of what I've just told you" or "Focus only on the things inside <step>", for example, would fix those particular problems, as long as you're not using models that are less good at following instructions (some of Google's models are borderline impossible to stop from adding comments all over the place, as one example).
So prompt it not to care about solutions and only care about finding the root cause, and you'll find you can mostly avoid the annoying parts by either prescribing what you'd want instead or straight up telling it not to do those things.
Then you iterate on this reusable prompt across projects, and it builds up so that eventually, 99% of the time, the models do exactly what you expect.
What did you have for AI three years ago? Jack fucking shit is what.
Why is “wow that’s cool, I wonder what it’ll turn into” a forbidden phrase, but “there are clearly no experts on this topic but let me take a crack at it!!” important for everyone to comment on?
So you feed the output into another LLM call to re-evaluate and assess, until the number of actual reports is small enough to be manageable. Will this result in false negatives? Almost certainly. But what does come out the end of it has a higher prior for being relevant, and you just review what you can.
Again, worst case all you wasted was your time, and now you've bounded that.
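Mechanically it can be as simple as piping one non-interactive call into another (prompts below are abbreviated sketches, and this assumes `claude -p` will read piped stdin as extra context):

```bash
# First pass generates candidate reports; second pass triages them down to the few
# worth a human look. Wording and package path are illustrative, not tuned prompts.
claude -p "Scan pkg/ for likely bugs. Output one short report per finding, with file and line." \
  | claude -p "The bug reports on stdin came from another model. Discard anything speculative or
style-only, and keep at most the three most likely real bugs, with a one-line justification each."
```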
They're quite good at algorithm bugs, a lot less good at concurrency bugs, IME. Which is very valuable still, just that's where I've seen the limits so far.
They're also better at making tests for algorithmic things than for concurrency situations, but they can get pretty close. They just usually don't have great out-of-the-box ideas for "how to ensure these two different things run in the desired order."
Everything that I dislike about generating non-greenfield code with LLMs isn't relevant to the "make tests" or "debug something" usage. (Weird/bad choices about when to duplicate code vs refactor things, lack of awareness around desired "shape" of codebase for long-term maintainability, limited depth of search for impact/related existing stuff sometimes, running off the rails and doing almost-but-not-quite stuff that ends up entirely the wrong thing.)
Well, if you know it's wrong, tell it, and why. I don't get the expectation of one-shotting everything 100% of the time. It's no different than bouncing ideas off a colleague.
I’ve been pretty impressed with LLMs at (to me) greenfield hobby projects, but not so much at work in a huge codebase.
After reading one of your blog posts recommending it, I decided to specifically give them a try as bug hunters/codebase explainers instead, and I’ve been blown away. Several hard-to-spot production bugs tracked down in two weeks or so, each of which would have taken me at least a few focused hours to find on my own.
One of my favorite ways to use LLM agents for coding is to have them write extensive documentation on whatever I'm about to dig into. Pretty low stakes if the LLM makes a few mistakes. It's perhaps even a better place to start for skeptics.
I am not so sure. Good documentation is hard; MDN and PostgreSQL are excellent examples of docs done well, and of how valuable really well-written content can be for a project.
LLMs can generate content but not really write. Out of the box they tend to be quite verbose and generate a lot of pro forma content. Perhaps with the right kind of prompts, a lot of editing, and reviews you can get them to be good, but at that point it is almost the same as writing it yourself.
It is a hard choice between lower-quality documentation (AI slop?) and leaving things lightly documented or undocumented. The uncanny valley of precision in documentation may be acceptable in some contexts, but it can be dangerous in others, and it is harder to tell the difference because the depth of a doc means nothing now.
Over time we find ourselves skipping LLM-generated documentation just like any other AI slop. The value placed on reading documentation erodes, finding good documentation becomes harder (as with other online content today), and documentation as a whole gets devalued.
Same. Initially surprised how good it was. Now routinely do this on every new codebase. And these aren't JavaScript todo apps: they're large, complex distributed applications written in Rust.
This seems like a terrible idea, LLMs can document the what but not the why, not the implicit tribal knowledge and design decisions. Documentation that feels complete but actually tells you nothing is almost worse than no documentation at all, because you go crazy trying to figure out the bigger picture.
I’m a bit of an LLM hater because they’re overhyped. But in these situations they can be pretty nice if you can quickly evaluate correctness. If evaluating correctness is harder than searching on your own, then they’re a net negative. I’ve found with my debugging it’s really hard to know which will be the case. And since it’s my responsibility to build a “Do I give the LLM a shot?” heuristic, that’s very frustrating.
> start exploring how these tools can help them without feeling like they're [...] ripping off the work of everyone whose code was used to train the model
But you literally still are. If you weren't, it should be trivially easy to create these models without using huge swathes of non-public-domain code. Right?
It feels less like you're ripping off work if the model is helping you understand your own code as opposed to writing new code from scratch - even though the models were built in exactly the same way.
If someone scraped every photo on the internet (along with their captions) and used the data to create a model that was used purely for accessibility purposes - to build tools which described images to people with visual impairments - many people would be OK with that, where they might be justifiably upset at the same scraped data being used to create an image-generation model that competes with the artists whose work it was trained on.
Similarly, many people were OK with Google scraping the entire internet for 20+ years to build a search engine that helps users find their content, but are unhappy about an identical scrape being used to train a generative AI model.
I'm only an "AI sceptic" in the sense that I think that today's LLM models cannot regularly and substantially reduce my workload, not because they aren't able to perform interesting programming tasks (they are!), but because they don't do so reliably, and for a regular and substantial reduction in effort, I think a tool needs to be reliable and therefore trustworthy.
Now, this story is a perfect use case, because Filippo Valsorda put very little effort into communicating with the agent. If it worked - great; if it didn't - no harm done. And it worked!
The thing is that I already know that these tools are capable of truly amazing feats, and this is, no doubt, one of them. But it's been a while since I had a bug in a single-file library implementing a well-known algorithm, so it still doesn't amount to a regular and substantial increase in productivity for me, but "only" to yet another amazing feat by LLMs (something I'm not sceptical of).
Next time I have such a situation, I'll definitely use an LLM to debug it, because I enjoy seeing such results first-hand (plus, it would be real help). But I'm not sure that it supports the claim that these tools can today offer a regular and substantial productivity boost.
I know this is not an argument against LLMs being useful for increasing productivity, but of all the tasks in my job as a software developer, hunting for and fixing obscure bugs is actually one of the most intellectually rewarding. I would miss that if it were to be taken over by a machine.
Also, hunting for bugs is often a very good way to get intimately familiar with the architecture of a system which you don't know well, and furthermore it improves your mental model of the cause of bugs, making you a better programmer in the future. I can spot a possible race condition or unsafe alien call at a glance. I can quickly identify a leaky abstraction, and spot mutable state that could be made immutable. All of this because I have spent time fixing bugs that were due to these mistakes. If you don't fix other people's bugs yourself, I fear you will also end up relying on an LLM to make judgements about your own code to make sure that it is bug-free.
> hunting for and fixing obscure bugs is actually one of the most intellectually rewarding. I would miss that if it were to be taken over by a machine.
That's fascinating to me. It's the thing I literally hate the most.
When I'm writing new code, I feel like I'm delivering value. When I'm fixing bugs, I feel like it's a frustrating waste of time caused by badly written code in the first place, making it a necessary evil. (Even when I was the one who wrote the original code.)
Bug hunting tends to be interpolation, which LLMs are really good at. Writing code is often some extrapolation (or interpolating at a much more abstract level).
Sometimes it's the end of the day and you've been crunching for hours already and you hit one gnarly bug and you just want to go and make a cup of tea and come back to some useful hints as to the resolution.
This is no different than when LLMs write code. In both scenarios they often turn into bullshit factories that are capable, willing, and happy to write pages and pages of intricate, convincing-sounding explanations for bugs that don't exist, wasting everyone's time and testing my patience.
That's not my experience at all. When I ask them to track down the root cause of a bug about 80% of the time they reply with a few sentences correctly identifying the source of the bug.
1 in 5 times they get it wrong, and I might waste a minute or two confirming that they missed it. I can live with those odds.
You're posting this comment on a thread attached to an article where Filippo Valsorda - a noted cryptography expert - used these tools to track down gnarly bugs in Go cryptography code.
Personally my biggest piece of advice is: AI First.
If you really want to understand what the limitations are of the current frontier models (and also really learn how to use them), ask the AI first.
By throwing things over the wall to the AI first, you learn what it can do at the same time as you learn how to structure your requests. The newer models are quite capable and in my experience can largely be treated like a co-worker for "most" problems. That being said.. you also need to understand how they fail and build an intuition for why they fail.
Every time a new model generation comes out, I also recommend throwing away your process (outside of things like lint, etc.) and seeing how the model does without it. I work with people who have elaborate context setups they crafted for less capable models; these are largely unnecessary with GPT-5-Codex and Sonnet 4.5.
> By throwing things over the wall to the AI first, you learn what it can do at the same time as you learn how to structure your requests.
Unfortunately, it doesn't quite work out that way.
Yes, you will get better at using these tools the more you use them, which is the case with any tool. But you will not learn what they can do as easily, or at all.
The main problem with them is the same one they've had since the beginning. If the user is a domain expert, then they will be able to quickly spot the inaccuracies and hallucinations in the seemingly accurate generated content, and, with some effort, coax the LLM into producing correct output.
Otherwise, the user can be easily misled by the confident and sycophantic tone, and waste potentially hours troubleshooting, without being able to tell if the error is on the LLM side or their own. In most of these situations, they would've probably been better off reading the human-written documentation and code, and doing the work manually. Perhaps with minor assistance from LLMs, but never relying on them entirely.
This is why these tools are most useful to people who are already experts in their field, such as Filippo. For everyone else who isn't, and actually cares about the quality of their work, the experience is very hit or miss.
> That being said.. you also need to understand how they fail and build an intuition for why they fail.
I've been using these tools for years now. The only intuition I have for how and why they fail is when I'm familiar with the domain. But I had that without LLMs as well, whenever someone is talking about a subject I know. It's impossible to build that intuition with domains you have little familiarity with. You can certainly do that by traditional learning, and LLMs can help with that, but most people use them for what you suggest: throwing things over the wall and running with it, which is a shame.
> I work with people who have elaborate context setups they crafted for less capable models; these are largely unnecessary with GPT-5-Codex and Sonnet 4.5.
I haven't used GPT-5-Codex, but have experience with Sonnet 4.5, and it's only marginally better than the previous versions IME. It still often wastes my time, no matter the quality or amount of context I feed it.
I guess there are several unsaid assumptions here. The article is by a domain expert working on their domain. Throw work you understand at it, see what it does. Do it before you even work on it. I kind of assumed based on the audience that most people here would be domain experts.
As for the building intuition, perhaps I am over-estimating what most people are capable of.
Working with and building systems using LLMs over the last few years has helped me build a pretty good intuition about what is breaking down when the model fails at a task. While having an ML background is useful in some very narrow cases (like: 'why does an LLM suck at ranking...'), I "think" a person can get a pretty good intuition purely based on observational outcomes.
I've been wrong before though. When we first started building LLM products, I thought, "Anyone can prompt, there is no barrier for this skill." That was not the case at all. Most people don't do well trying to quantify ambiguity, specificity, and logical contradiction when writing a process or set of instructions. I was REALLY surprised at how I became a "go-to" person to "fix" prompt systems, all based on linguistics and systematic process decomposition. Some of this was understanding how the auto-regressive attention system benefits from breaking the work down into steps, but really most of it was just "don't contradict yourself and be clear".
Working with them extensively has also helped me hone in on how the models get "better" with each release. Though most of my expertise is with the OpenAI and Anthropic model families.
I still think most engineers "should" be able to build intuition generally about what works well with LLMs and how to interact with them, but you are probably right. It will be just like most ML engineers, who see something work in a paper and then just paste it onto their model with no intuition about what it structurally changes in the model dynamics.
I did ask the AI first, about some things that I already knew how to do.
It gave me horribly inefficient or long-winded ways of doing it. In the time it took for "prompt tuning" I could have just written the damn code myself. It decreased my confidence in anything else it suggested about things I didn't already know.
Claude still sometimes insists that iOS 26 isn't out yet. sigh.. I suppose I just have to treat it as an occasional alternative to Google/StackOverflow/Reddit for now. No way would I trust it to write an entire class, let alone an app, and be able to sleep at night (not that I sleep at night, but that's beside the point).
I think I prefer Xcode's built-in local model approach, where it just offers sane autocompletions based on your existing code. e.g. if you already wrote a Dog class it can make a Cat class and change `bark()` to `meow()`
You can write the "prompt tuning" down in AGENTS.md, and then you only need to do it once. This is why you need to keep working with different ones to get a feeling for what they're good at and how you can steer them closer to your style and preferences without having to reiterate everything from scratch each time.
I personally have a git submodule built specifically for shared instructions like that; it contains the assumptions and defaults for my style of project in 3 different programming languages. When I update it in one project, all my projects benefit.
This way I don't need to tell whatever LLM I'm working with to use modernc.org/sqlite for database connections, for example.
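Roughly, the setup looks like this (the submodule URL and file names are made up; the idea is just shared defaults referenced from each project's AGENTS.md):

```bash
# Shared agent instructions live in one repo and get pulled into every project (URL is made up).
git submodule add https://example.com/me/agent-defaults .agent-defaults

# Each project's AGENTS.md then just points at the shared defaults plus project specifics.
cat > AGENTS.md <<'EOF'
Read .agent-defaults/go.md before making changes. Key defaults:
- Use modernc.org/sqlite for database connections.
- Ask before adding any new dependency.
EOF
```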
> Claude still sometimes insists that iOS 26 isn't out yet.
How would you imagine an AI system working that didn't make mistakes like that?
iOS 26 came out on September 15th.
LLMs aren't omniscient or constantly updated with new knowledge. Which means we have to figure out how to make use of them despite them not having up-to-the-second knowledge of the world.
> Full disclosure: Anthropic gave me a few months of Claude Max for free. They reached out one day and told me they were giving it away to some open source maintainers.
Related, lately I've been getting tons of Anthropic Instagram ads; they must be near a quarter of all the sponsored content I see for the last month or so. Various people vibe coding random apps and whatnot using different incarnations of Claude. Or just direct adverts to "Install Claude Code." I really have no idea why I've been targeted so hard, on Instagram of all places. Their marketing team must be working overtime.
I think it might be that they've hit product-market fit.
Developers find Claude Code extremely useful (once they figure out how to use it). Many developers subscribe to their $200/month plan. Assuming that's profitable (and I expect it is, since even for that much money it cuts off at a certain point to avoid over-use) Anthropic would be wise to spend a lot of money on marketing to try and grow their paying subscriber base for it.
I don’t feel like paying for a max level subscription, but am trying out MCP servers across OpenAI, Anthropic etc so I pay for the access to test them.
When my X-hour token allotment runs out on one model, I jump to the next: closing Codex and opening Claude Code or whatever, and altering my prompting a tiny bit to fit the current model.
Being so extremely fungible should by definition mean a race to zero margins and roughly zero profit in the long run.
I suppose they can hope to make bank over the next 6-12 months, but that doesn’t create a long-term sustainable company.
I guess they can try building up context to lock me in by increasing the cost of switching - but today I break that myself every 3-4 prompts by clearing the context, because I know the output will be worse if I keep going.
What makes it better than VS Code Copilot with Claude 4.5? I barely program these days since I switched to PM, but I recently started using that and it seems pretty effective… why should I use a fork instead?
Unfortunately that might also be due to how Instagram shows ads, and not necessarily Anthropic's marketing push. As soon as you click on or even linger on a particular ad, Instagram notices and doubles down on sending you more ads from the same provider as well as related ads. My experience is that Instagram has 2-3 groups of ad types I receive which slowly change over the months as the effectiveness wanes over time.
The fact that you are receiving multiple kinds of ads from Anthropic does signify more of a marketing push, though.
> Importantly, there is no need to trust the LLM or review its output when its job is just saving me an hour or two by telling me where the bug is, for me to reason about it and fix it.
Except they regularly come up with "explanations" that are completely bogus and may actually waste an hour or two. Don't get me wrong, LLMs can be incredibly helpful for identifying bugs, but you still have to keep a critical mindset.
OP said "for me to reason about it", not for the LLM to reason about it.
I agree though, LLMs can be incredible debugging tools, but they are also incredibly gullible and love to jump to conclusions. The moment you turn your own fleshy brain off is when they go to la-la land.
> OP said "for me to reason about it", not for the LLM to reason about it.
But that's what I meant! Just recently I asked an LLM about a weird backtrace and it pointed me to the supposed source of the issue. It sounded reasonable and I spent 1-2 hours researching the issue, only to find out it was a total red herring. Without the LLM I wouldn't have gone down that road in the first place.
(But again, there have been many situations where the LLM did point me to the actual bug.)
> As ever, I wish we had better tooling for using LLMs which didn’t look like chat or autocomplete
I think part of the reason I was initially more skeptical than I ought to have been is that chat is such a garbage modality. LLMs started to "click" for me with Claude Code/Codex.
A "continuously running" mode that would ping me would be interesting to try.
On the one hand, I agree with this. The chat UI is very slow and inefficient.
But on the other, given what I know about these tools and how error-prone they are, I simply refuse to give them access to my system, to run commands, or do any action for me. Partly due to security concerns, partly due to privacy, but mostly distrust that they will do the right thing. When they screw up in a chat, I can clean up the context and try again. Reverting a removed file or messed up Git repo is much more difficult. This is how you get a dropped database during code freeze...
The idea of giving any of these corporations such privileges is unthinkable for me. It seems that most people either don't care about this, or are willing to accept it as the price of admission.
I experimented with Aider and a self-hosted model a few months ago, and wasn't impressed. I imagine the experience with SOTA hosted models is much better, but I'll probably use a sandbox next time I look into this.
But you should try Claude Code or Codex just to understand them. You can always run them in a container or VM if you fear their idiocy (and it's not a bad idea to fear it).
Like I said in a sibling comment, it's not the right modality. Others agree. I'm a good typist and good at writing, so it doesn't bug me too much, but it does too much without asking or working through it with me. Sometimes this is brilliant. Other times it's like.. c'mon guy, what did you do over there? What Balrog have I disturbed?
It's good to be familiar with these things in any case because they're flooding the industry and you'll be reviewing their code for better or for worse.
I absolutely agree with this sentiment as well and keep coming back to it. What I want is more of an actual copilot: one that works in a more paired way, forces me to interact with each of its changes, involves me more directly in them, teaches me about what it's doing along the way, and asks for more input.
A more socratic method, and more augmentic than "agentic".
Hell, if anybody has investment money and energy and shares this vision, I'd love to work on creating this tool with you. I think these models are being misused right now in an attempt to automate us out of work, when their real amazing latent power is the intuition we're talking about in this thread.
Misused, they have the power to worsen codebases by making developers illiterate about the very thing they're working on, because it's all magic behind the scenes. Uncorked, they could enhance understanding and help better realize the potential of computing technology.
I'm working on such a thing, but I'm not interested in money, nor do I have money to offer - I'm interested in a system which I'm proud of.
What are your motivations?
Interested in your work: from your public GitHub repos, I'm perhaps most interested in `moor` -- as it shares many design inclinations that I've leaned towards in thinking about this problem.
CLI coding agents are incredibly powerful. They are also free if you use Gemini CLI or Qwen Code. Plus, you can point them at any OpenAI-compatible API (2k TPS via Cerebras at $2/M tokens, or local models). And you can use them in IDEs like Zed with ACP mode.
All the simple stuff (creating a repo, pushing, frontend edits, testing, Docker images, deployment, etc.) is automated. For the difficult parts, you can just use free Grok to one-shot small code files. It works great if you force yourself to keep the amount of code minimal and modular. Also, they are great UIs—you can create smart programs just with CLI + MCP servers + MD files. Truly amazing tech.
I started with Claude Code, realized it was too much money for every message, then switched to Gemini CLI, then Qwen. Probably Claude Code is better, but I don't need it since I can solve my problems without it.
Gemini and its tooling are absolute shit. The LLM itself is barely usable and needs so much supervision you might as well do the work yourself. Couple that with an awful CLI and VS Code interface and you'll find that it's just a complete waste of time.
Compared to that, the Anthropic offering is night and day. Claude gets on with the job and makes me way more productive.
>so I checked out the old version of the change with the bugs (yay Jujutsu!) and kicked off a fresh Claude Code session
There's a risk there that the AI could find the solution by looking through your version-control history, instead of discovering it directly in the checked-out code. AI has done that in the past.
> For example, how nice would it be if every time tests fail, an LLM agent was kicked off with the task of figuring out why, and only notified us if it did before we fixed it?
You can use Git hooks to do that. If you have tests and one fails, spawn an instance of claude with a prompt: claude -p 'tests/test4.sh failed, look in src/ and try and work out why'
$ claude -p 'hello, just tell me a joke about databases'
A SQL query walks into a bar, walks up to two tables and asks, "Can I JOIN you?"
$
Or, if you use Gogs locally, you can add a Gogs hook to do the same on pre-push.
> An example hook script to verify what is about to be pushed. Called by "git push" after it has checked the remote status, but before anything has been pushed. If this script exits with a non-zero status nothing will be pushed.
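A minimal sketch of that pre-push hook (the test command and output paths are placeholders for whatever your project uses):

```bash
#!/bin/sh
# Sketch of .git/hooks/pre-push: run the tests and, on failure, ask Claude for a
# root-cause hypothesis before rejecting the push. ./run-tests.sh is a placeholder.
if ! ./run-tests.sh > /tmp/test-output.log 2>&1; then
    claude -p "The test run below failed. Look in src/ and try and work out why.
$(cat /tmp/test-output.log)"
    exit 1  # non-zero exit: nothing gets pushed
fi
```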
I like this idea. I think I shall get Claude to work out the mechanism itself :)
It is even a suggestion on this Claude cheat sheet.
Yesterday I tried something similar with a website (Go backend) that was doing a complex query/filter and showing the wrong results. I just described the problem to Claude Code, told it how to run psql, and gave it an authenticated cookie to curl the backend with. In about three minutes of playing around, it fixed the problem. It only needed raw psql and curl access, no specialized tooling, to write a bunch of bash commands to poke around and compare the backend results with the test database.
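The kickoff prompt was roughly this shape (the endpoint, connection string, and cookie below are invented, not the real ones):

```bash
# Illustrative only: give the agent psql and curl access, then let it compare
# what the API returns with what is actually in the test database.
claude -p "GET /api/reports?status=active returns rows that should have been filtered out.
Query the test database with: psql postgres://localhost/app_test
Hit the backend as a logged-in user with: curl -b 'session=REDACTED' 'http://localhost:8080/api/reports?status=active'
Find where the filter logic diverges from the data."
```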
I'm surprised it didn't fix it by removing the code. In my experience, if you give Claude a failing test, it fixes it by hard-coding the code to return the value expected by the test or something similar.
Last week I asked it to look at why a certain device enumeration caused a sigsegv, and it quickly solved the issue by completely removing the enumeration. No functionality, no bugs!
I've got a paste-in prompt that reiterates multiple times not to remove features or debugging output without asking first, and not to blame the test file/data that the program failed on. It's repeated multiple times, the last time in all caps. It still does it. Maybe half as often, I hope, but I may be fooling myself.
A whole class of tedious problems has been eliminated by LLMs because they are able to look at code in a "fuzzy" way. But this can be a liability, too. I have a codebase that "looks kinda" like a nodejs project, so AI agents usually assume it is one; even if I rename the package.json, they will inspect the contents and immediately clock it as "node-like".
This is basically the ideal scenario for coding agents. Easily verifiable through running tests, pure logic, algorithmic problem. It's the case that has worked the best for me with LLMs.
Can you expand on what you find to be 'bad advice'?
The author uses an LLM to find bugs, then throws away the fix and instead writes the code he would have written anyway. This seems like a rather conservative application of LLMs. Using the 'shooting someone in the foot' analogy - this article is an illustration of professional and responsible firearm handling.
Laymen in cryptography (that's at least 99% of us) may be encouraged to deploy LLM-generated crypto implementations without understanding the crypto.
Honestly, it read more like attention seeking. He "live coded" his work, by which I believe he means he streamed everything he was doing while working. It just seems so much more like a performance and building a brand than anything else. I guess that's why I'm just a nobody.
The article even states that the solution Claude proposed wasn't the point. The point was finding the bug.
AI models are very capable heuristic tools. Being able to "sniff test" things blind is their specialty.
i.e. Treat them like an extremely capable gas detector that can tell you there is a leak and where in the plumbing it is, not a plumber who can fix the leak for you.
I think it works because Claude takes some standard coding issues and systematizes them. The list is long, but Claude doesn't run out of patience like a human being does. Or at least it has some credulity left after trying a few initial failed hypotheses. This being a cryptography problem helps a little bit, in that there are very specific keywords that might hint at a solution, but from my skim of the article, it seems like it was mostly a good old coding error, taking the high bits twice.
The standard issues are just a vague laundry list:
- Are you using the data you think you're using? (Bingo for this one)
- Could it be an overflow?
- Are the types right?
- Are you calling the function you think you're calling? Check internal, then external dependencies
- Is there some parameter you didn't consider?
And a bunch of others. When I ask Claude for a debug, it's always something that makes sense as a checklist item, but I'm often impressed by how it diligently followed the path set by the results of the investigation. It's a great donkey, really takes the drudgery out of my work, even if it sometimes takes just as long.
> The list is long, but Claude doesn't run out of patience like a human being does
I've flat-out had Claude tell me its task was getting tedious, and it will often grasp at straws to use as excuses for stopping a repetitive task and moving on to something else.
Keeping it on task when something keeps moving forward is easy, but when things get repetitive it takes a lot of effort to make it stick with it.
> Claude doesn't run out of patience like a human being does.
It very much does! I had a debugging session with Claude Code today, and it was about to give up with a message along the lines of “I am sorry I was not able to help you find the problem”.
It took some gentle cheering (pretty easy, just saying “you are doing an excellent job, don’t give up!”) and encouragement, and a couple of suggestions from me on how to approach the debug process for it to continue and finally “we” (I am using plural here because some information that Claude “volunteered” was essential to my understanding of the problem) were able to figure out the root cause and the fix.
Claude told me it stopped debugging since it would run out of tokens in its context window. I asked how many tokens it had left, and it said it actually had plenty, so it could continue. Then it stopped again and, without me asking about tokens, wrote:
Context Usage
• Used: 112K/200K tokens (56%)
• Remaining: 88K tokens
• Sufficient for continued debugging, but fresh session recommended for clarity
I just recently found a number of bugs in both the RELIC and MCL libraries. It took a while to track them down but it was remarkable that it was able to find them at all.
This is one of the things I've mentioned before; I think it's just hidden a bit and hard to see, but this is basically the LLM doing style transfer, which they're really good at. There was a specification for the code (which looks like it was already trained into the LLM, since it didn't have to go fetch it yet had intimate knowledge of it), there was an implementation, and it's really good at extracting the style difference between code and spec. Anything that looks like style transfer is a good use for LLMs.
As another example, I think things like "write unit tests for this code" are usually a similar sort of style transfer as well, based on how it writes the tests. It definitely has a good idea of how to ensure that all the functionality gets tested; I find it less likely to produce "creative" ways that bugs may come out, but hey, it's a good start.
This isn't a criticism, it's intended to be a further exploration and understanding of when these tools can be better than you might intuitively think.
Even the title of the post makes this clear: it's about debugging low-level cryptography. He didn't vibe code ML-DSA. You have to be building a low-level implementation in the first place for anything in this post to apply to it.
Just ask it to prioritize the top ones for your review. Yes, they can bikeshed, but because they don’t have egos, they don’t stick to it.
Alternatively, if it is in an area with good test coverage, let it go fix the minor stuff.
> except at least the junior is a person.
+1 Juniors can learn over time.
Well if it writes documentation that is wrong, then the subtle bugs start :)
> Have the coding agents do the work of digging around to hunt down those frustratingly difficult bugs - don't have them write code on your behalf.
Why? Bug hunting is more challenging and cognitively intensive than writing code.
Why as in “why should it work” or “why should we let them do it”?
For the latter, the good news is that you’re free to use LLMs for debugging or completely ignore them.
Because it's easy to automate.
"this should return X, it returns Y, find out why"
With enough tooling LLMs can pretty easily figure out the reason eventually.
I have tested the AI SAST tools that were hyped after the curl article on several C code bases, and they found nothing.
Which low level code base have you tried this latest tool on? Official Anthropic commercials do not count.
Personally my biggest piece of advice is: AI First.
If you really want to understand what the limitations are of the current frontier models (and also really learn how to use them), ask the AI first.
By throwing things over the wall to the AI first, you learn what it can do at the same time as you learn how to structure your requests. The newer models are quite capable and in my experience can largely be treated like a co-worker for "most" problems. That being said.. you also need to understand how they fail and build an intuition for why they fail.
Every time a new model generation comes up, I also recommend throwing away your process (outside of things like lint, etc.) and see how the model does without it. I work with people that have elaborate context setups they crafted for less capable models, they largely are un-neccessary with GPT-5-Codex and Sonnet 4.5.
> By throwing things over the wall to the AI first, you learn what it can do at the same time as you learn how to structure your requests.
Unfortunately, it doesn't quite work out that way.
Yes, you will get better at using these tools the more you use them, which is the case with any tool. But you will not learn what they can do as easily, or at all.
The main problem with them is the same one they've had since the beginning. If the user is a domain expert, then they will be able to quickly spot the inaccuracies and hallucinations in the seemingly accurate generated content, and, with some effort, coax the LLM into producing correct output.
Otherwise, the user can be easily misled by the confident and sycophantic tone, and waste potentially hours troubleshooting, without being able to tell if the error is on the LLM side or their own. In most of these situations, they would've probably been better off reading the human-written documentation and code, and doing the work manually. Perhaps with minor assistance from LLMs, but never relying on them entirely.
This is why these tools are most useful to people who are already experts in their field, such as Filippo. For everyone else who isn't, and actually cares about the quality of their work, the experience is very hit or miss.
> That being said.. you also need to understand how they fail and build an intuition for why they fail.
I've been using these tools for years now. The only intuition I have for how and why they fail is when I'm familiar with the domain. But I had that without LLMs as well, whenever someone is talking about a subject I know. It's impossible to build that intuition with domains you have little familiarity with. You can certainly do that by traditional learning, and LLMs can help with that, but most people use them for what you suggest: throwing things over the wall and running with it, which is a shame.
> I work with people that have elaborate context setups they crafted for less capable models, they largely are un-neccessary with GPT-5-Codex and Sonnet 4.5.
I haven't used GPT-5-Codex, but have experience with Sonnet 4.5, and it's only marginally better than the previous versions IME. It still often wastes my time, no matter the quality or amount of context I feed it.
I guess there are several unsaid assumptions here. The article is by a domain expert working on their domain. Throw work you understand at it, see what it does. Do it before you even work on it. I kind of assumed based on the audience that most people here would be domain experts.
As for the building intuition, perhaps I am over-estimating what most people are capable of.
Working with and building systems using LLMs over the last few years has helped me build a pretty good intuition about what is breaking down when the model fails at a task. While having an ML background is useful in some very narrow cases (like: 'why does an LLM suck at ranking...'), I "think" a person can get a pretty good intuition purely based on observational outcomes.
I've been wrong before though. When we first started building LLM products, I thought, "Anyone can prompt, there is no barrier for this skill." That was not the case at all. Most people don't do well trying to quantify ambiguity, specificity, and logical contradiction when writing a process or set of instructions. I was REALLY surprised how I became a "go-to" person to "fix" prompt systems, all based on linguistics and systematic process decomposition. Some of this was understanding how the auto-regressive attention system benefits from breaking the work down into steps, but really most of it was just "don't contradict yourself and be clear".
Working with them extensively has also helped me hone in on how the models get "better" with each release. Though most of my expertise is with the OpenAI and Anthropic model families.
I still think most engineers "should" be able to build intuition generally about what works well with LLMs and how to interact with them, but you are probably right. It will be just like most ML engineers, who see something work in a paper and then just paste it onto their model with no intuition about what it structurally changes in the model dynamics.
5 replies →
I did ask the AI first, about some things that I already knew how to do.
It gave me horribly inefficient or long-winded ways of doing it. In the time it took for "prompt tuning" I could have just written the damn code myself. It decreased the confidence for anything else it suggested about things I didn't already know about.
Claude still sometimes insists that iOS 26 isn't out yet. sigh.. I suppose I just have to treat it as an occasional alternative to Google/StackOverflow/Reddit for now. No way would I trust it to write an entire class let alone an app and be able to sleep at night (not that I sleep at night, but that's besides the point)
I think I prefer Xcode's built-in local model approach, where it just offers sane autocompletions based on your existing code. e.g. if you already wrote a Dog class it can make a Cat class and change `bark()` to `meow()`
You can write the "prompt tuning" down in AGENTS.md and then you only need to do it once. This is why you need to keep working with different ones, to get a feel for what they're good at and how you can steer them closer to your style and preferences without having to reiterate from scratch every time.
I personally have a git submodule built specifically for shared instructions like that, it contains the assumptions and defaults for my specific style of project for 3 different programming languages. When I update it on one project, all my projects benefit.
This way I don't need to tell whatever LLM I'm working with to use modernc.org/sqlite for database connections, for example.
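To give a rough idea (everything below besides the modernc.org/sqlite line is made up purely for illustration), the shared instructions are just short, declarative statements, along the lines of:

    # Shared defaults for my Go projects (illustrative sketch)
    - Use modernc.org/sqlite for database connections; don't add CGO-based drivers.
    - Prefer the standard library over new dependencies unless asked.
    - Run the existing tests before declaring a task done.
    - Make minimal, focused diffs; never rewrite files wholesale.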
2 replies →
> Claude still sometimes insists that iOS 26 isn't out yet.
How would you imagine an AI system working that didn't make mistakes like that?
iOS 26 came out on September 15th.
LLMs aren't omniscient or constantly updated with new knowledge. Which means we have to figure out how to make use of them despite them not having up-to-the-second knowledge of the world.
7 replies →
> Full disclosure: Anthropic gave me a few months of Claude Max for free. They reached out one day and told me they were giving it away to some open source maintainers.
Related, lately I've been getting tons of Anthropic Instagram ads; they must be near a quarter of all the sponsored content I see for the last month or so. Various people vibe coding random apps and whatnot using different incarnations of Claude. Or just direct adverts to "Install Claude Code." I really have no idea why I've been targeted so hard, on Instagram of all places. Their marketing team must be working overtime.
I think it might be that they've hit product-market fit.
Developers find Claude Code extremely useful (once they figure out how to use it). Many developers subscribe to their $200/month plan. Assuming that's profitable (and I expect it is, since even for that much money it cuts off at a certain point to avoid over-use) Anthropic would be wise to spend a lot of money on marketing to try and grow their paying subscriber base for it.
I just don’t see how they can build a moat.
I don’t feel like paying for a max level subscription, but am trying out MCP servers across OpenAI, Anthropic etc so I pay for the access to test them.
When my X-hour token allotment runs out on one model I jump to the next, closing Codex and opening Claude Code or whatever, and altering my prompting a tiny bit to fit the current model.
Being so extremely fungible should by definition be a race to zero margins, with about zero profit being made in the long run.
I suppose they can hope to make bank over the next 6-12 months, but that doesn't create a long-term sustainable company.
I guess they can try building context to lock me in by increasing the cost to switch - but today I break that myself every 3-4 prompts by clearing the context, because I know the output will be worse if I keep going.
1 reply →
> Many developers subscribe to their $200/month plan.
There's no way "many developers" are paying $2,400 annually for the benefit of their employers.
There's no way companies are paying when they won't even fork out $700 a year for IntelliJ, instead pushing us all onto VSCode.
2 replies →
What makes it better than VSCode Co-pilot with Claude 4.5? I barely program these days since I switched to PM but I recently started using that and it seems pretty effective… why should I use a fork instead?
15 replies →
> since even for that much money it cuts off at a certain point to avoid over-use
if anything this just confirms that the unit economics are still bad
Unfortunately that might also be due to how Instagram shows ads, and not necessarily Anthropic's marketing push. As soon as you click on or even linger on a particular ad, Instagram notices and doubles down on sending you more ads from the same provider as well as related ads. My experience is that Instagram has 2-3 groups of ad types I receive which slowly change over the months as the effectiveness wanes over time.
The fact that you are receiving multiple kinds of ads from Anthropic does signify more of a marketing push, though.
> Importantly, there is no need to trust the LLM or review its output when its job is just saving me an hour or two by telling me where the bug is, for me to reason about it and fix it.
Except they regularly come up with "explanations" that are completely bogus and may actually waste an hour or two. Don't get me wrong, LLMs can be incredibly helpful for identifying bugs, but you still have to keep a critical mindset.
OP said "for me to reason about it", not for the LLM to reason about it.
I agree though, LLMs can be incredible debugging tools, but they are also incredibly gullible and love to jump to conclusions. The moment you turn your own fleshy brain off is when they go to lala land.
> OP said "for me to reason about it", not for the LLM to reason about it.
But that's what I meant! Just recently I asked an LLM about a weird backtrace and it pointed me the supposed source of the issue. It sounded reasonable and I spent 1-2 hours researching the issue, only to find out it was a total red herring. Without the LLM I wouldn't have gone down that road in the first place.
(But again, there have been many situations where the LLM did point me to the actual bug.)
1 reply →
This resonates with me a lot:
> As ever, I wish we had better tooling for using LLMs which didn’t look like chat or autocomplete
I think part of the reason why I was initially more skeptical than I ought to have been is because chat is such a garbage modality. LLMs started to "click" for me with Claude Code/Codex.
A "continuously running" mode that would ping me would be interesting to try.
On the one hand, I agree with this. The chat UI is very slow and inefficient.
But on the other, given what I know about these tools and how error-prone they are, I simply refuse to give them access to my system, to run commands, or do any action for me. Partly due to security concerns, partly due to privacy, but mostly distrust that they will do the right thing. When they screw up in a chat, I can clean up the context and try again. Reverting a removed file or messed up Git repo is much more difficult. This is how you get a dropped database during code freeze...
The idea of giving any of these corporations such privileges is unthinkable for me. It seems that most people either don't care about this, or are willing to accept it as the price of admission.
I experimented with Aider and a self-hosted model a few months ago, and wasn't impressed. I imagine the experience with SOTA hosted models is much better, but I'll probably use a sandbox next time I look into this.
Aider hurt my head; it did not seem... good. Sorry to say.
If you want open source and want to target something over an API "crush" https://github.com/charmbracelet/crush is excellent
But you should try Claude Code or Codex just to understand them. Can always run them in a container or VM if you fear their idiocy (and it's not a bad idea to fear it)
Like I said in a sibling comment, it's not the right modality. Others agree. I'm a good typer and good at writing, so it doesn't bug me too much, but it does too much without asking or working through it. Sometimes this is brilliant. Other times it's like.. c'mon guy, what did you do over there? What Balrog have I disturbed?
It's good to be familiar with these things in any case because they're flooding the industry and you'll be reviewing their code for better or for worse.
I absolutely agree with this sentiment as well and keep coming back to it. What I want is more of an actual copilot which works in a more paired way and forces me to interact with each of its changes and also involves me more directly in them, and teaches me about what it's doing along the way, and asks for more input.
A more Socratic method, and more augmentic than "agentic".
Hell, if anybody has investment money and energy and shares this vision I'd love to work on creating this tool with you. I think these models are being misused right now in an attempt to automate us out of work, when their real amazing latent power is the intuition that we're talking about in this thread.
Misused, they have the power to worsen codebases by making developers illiterate about the very thing they're working on, because it's all magic behind the scenes. Uncorked, they could enhance understanding and help better realize the potential of computing technology.
Have you tried to ask the agents to work with you in the way you want?
I’ve found that using some high level direction / language and sharing my wants / preferences for workflow and interaction works very well.
I don't think you can find an off-the-shelf system to do what you want. I think you have to customize it to your own needs as you go.
Kind of like how you customize emacs as it’s running to your desires.
I’ve often wondered if you could put a mini LLM into emacs or vscode and have it implement customizations :)
2 replies →
I'm working on such a thing, but I'm not interested in money, nor do I have money to offer - I'm interested in a system which I'm proud of.
What are your motivations?
Interested in your work: from your public GitHub repos, I'm perhaps most interested in `moor` -- as it shares many design inclinations that I've leaned towards in thinking about this problem.
1 reply →
CLI terminals are incredibly powerful. They are also free if you use Gemini CLI or Qwen Code. Plus, you can access any OpenAI-compatible API (2k TPS via Cerebras at $2/M, or local models). And you can use them in IDEs like Zed with ACP mode.
All the simple stuff (creating a repo, pushing, frontend edits, testing, Docker images, deployment, etc.) is automated. For the difficult parts, you can just use free Grok to one-shot small code files. It works great if you force yourself to keep the amount of code minimal and modular. Also, they are great UIs—you can create smart programs just with CLI + MCP servers + MD files. Truly amazing tech.
Claude Code's UX is really good, though.
How good is Gemini CLI compared to Claude Code and OpenAI Codex?
I started with Claude Code, realized it was too much money for every message, then switched to Gemini CLI, then Qwen. Probably Claude Code is better, but I don't need it since I can solve my problems without it.
10 replies →
Not great.
It's ok for documentation or small tasks, but consistently fails at tasks that both Claude or Codex succeed at.
Gemini and its tooling are absolute shit. The LLM itself is barely usable and needs so much supervision you might as well do the work yourself. Then couple that with an awful CLI and VSCode interface and you'll find that it's just a complete waste of time.
Compared to the Anthropic offering it's night and day. Claude gets on with the job and makes me way more productive.
9 replies →
>so I checked out the old version of the change with the bugs (yay Jujutsu!) and kicked off a fresh Claude Code session
There's a risk there that the AI could find the solution by looking through your history, instead of discovering it directly in the checked-out code. AI has done that in the past:
https://news.ycombinator.com/item?id=45214670
> For example, how nice would it be if every time tests fail, an LLM agent was kicked off with the task of figuring out why, and only notified us if it did before we fixed it?
You can use Git hooks to do that. If you have tests and one fails, spawn an instance of Claude with a prompt, e.g. claude -p 'tests/test4.sh failed, look in src/ and try and work out why'
Or, if you use Gogs locally, you can add a Gogs hook to do the same on pre-push
> An example hook script to verify what is about to be pushed. Called by "git push" after it has checked the remote status, but before anything has been pushed. If this script exits with a non-zero status nothing will be pushed.
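Something like this untested sketch could wire it up as a pre-push hook; the -p print mode is the same flag mentioned above, while the test command (make test) and the output file name are placeholders to adapt to your own project:

    #!/bin/sh
    # .git/hooks/pre-push (sketch): run the tests, and if they fail, kick off a
    # non-interactive Claude run to investigate while you look at it yourself.
    if ! make test; then
        # claude -p runs non-interactively with the given prompt; background it
        # and log to a file so the hook itself isn't blocked on the LLM.
        claude -p "make test failed on this branch; look in src/ and tests/ and suggest the most likely root cause" \
            > claude-test-triage.txt 2>&1 &
        echo "tests failed; Claude triage running, see claude-test-triage.txt"
        exit 1   # non-zero exit stops the push, per the hook docs quoted above
    fi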
I like this idea. I think I shall get Claude to work out the mechanism itself :)
It is even a suggestion on this Claude cheat sheet
https://www.howtouselinux.com/post/the-complete-claude-code-...
This could probably be implemented as a simple Bash script, if the user wants to run everything manually. I might just do that to burn some time.
sure, there are multiple ways of spawning an instance
the only thing I imagine might be a problem is claude demanding a login token, as that happens quite regularly
Yesterday I tried something similar with a website (Go backend) that was doing a complex query/filter and showing the wrong results. I just described the problem to Claude Code, told it how to run psql, and gave it an authenticated cookie to curl the backend with. In about three minutes of playing around, it fixed the problem. It only needed raw psql and curl access, no specialized tooling, to write a bunch of bash commands to poke around and compare the backend results with the test database.
I found LLM debugging to work better if you give the LLM access to a debugger.
You can build this pretty easily: https://github.com/jasonjmcghee/claude-debugs-for-you
Coming soon, adversarial attacks on LLM training to ensure cryptographic mistakes.
I'm surprised it didn't fix it by removing the code. In my experience, if you give Claude a failing test, it fixes it by hard-coding the code to return the value expected by the test or something similar.
Last week I asked it to look at why a certain device enumeration caused a sigsegv, and it quickly solved the issue by completely removing the enumeration. No functionality, no bugs!
I've got a paste-in prompt that reiterates multiple times not to remove features or debugging output without asking first, and not to blame the test file/data that the program failed on. Repeated multiple times, the last time in all caps. It still does it. I hope maybe half as often, but I may be fooling myself.
A whole class of tedious problems have been eliminated by LLMs because they are able to look at code in a "fuzzy" way. But this can be a liability, too. I have a codebase that "looks kinda" like a nodejs project, so AI agents usually assume it is one, even if I rename the package.json, it will inspect the contents and immediately clock it as "node-like".
This is basically the ideal scenario for coding agents. Easily verifiable through running tests, pure logic, algorithmic problem. It's the case that has worked the best for me with LLMs.
So the "fix" includes a completely new function? In a cryptography implementation?
I feel like the article is giving out very bad advice which is going to end up shooting someone in the foot.
Can you expand on what you find to be 'bad advice'?
The author uses an LLM to find bugs and then throw away the fix and instead write the code he would have written anyway. This seems like a rather conservative application of LLMs. Using the 'shooting someone in the foot' analogy - this article is an illustration of professional and responsible firearm handling.
Laymen in cryptography (that's 99% of us, at least) may be encouraged to deploy LLM-generated crypto implementations without understanding the crypto
1 reply →
Honestly, it read more like attention seeking. He "live coded" his work, by which I believe he means he streamed everything he was doing while working. It just seems so much more like a performance and building a brand than anything else. I guess that's why I'm just a nobody.
The article even states that the solution Claude proposed wasn't the point. The point was finding the bug.
AI are very capable heuristics tools. Being able to "sniff test" things blind is their specialty.
i.e. Treat them like an extremely capable gas detector that can tell you there is a leak and where in the plumbing it is, not a plumber who can fix the leak for you.
I'm not surprised it worked.
Before I used Claude, I would be surprised.
I think it works because Claude takes some standard coding issues and systematizes them. The list is long, but Claude doesn't run out of patience like a human being does. Or at least it has some credulity left after trying a few initial failed hypotheses. This being a cryptography problem helps a little bit, in that there are very specific keywords that might hint at a solution, but from my skim of the article, it seems like it was mostly a good old coding error, taking the high bits twice.
The standard issues are just a vague laundry list:
- Are you using the data you think you're using? (Bingo for this one)
- Could it be an overflow?
- Are the types right?
- Are you calling the function you think you're calling? Check internal, then external dependencies
- Is there some parameter you didn't consider?
And a bunch of others. When I ask Claude for a debug, it's always something that makes sense as a checklist item, but I'm often impressed by how it diligently followed the path set by the results of the investigation. It's a great donkey, really takes the drudgery out of my work, even if it sometimes takes just as long.
> The list is long, but Claude doesn't run out of patience like a human being does
I've flat out had Claude tell me its task was getting tedious, and it will often grasp at straws to use as excuses for stopping a repetitive task and moving on to something else.
Keeping it on task when something keeps moving forward is easy, but when it gets repetitive it takes a lot of effort to make it stick with it.
I've been getting that experience with both claude-code and gemini, but not from cline and qcli. I wonder why
1 reply →
> Claude doesn't run out of patience like a human being does.
It very much does! I had a debugging session with Claude Code today, and it was about to give up with the message along the lines of “I am sorry I was not able to help you find the problem”.
It took some gentle cheering (pretty easy, just saying “you are doing an excellent job, don’t give up!”) and encouragement, and a couple of suggestions from me on how to approach the debug process for it to continue and finally “we” (I am using plural here because some information that Claude “volunteered” was essential to my understanding of the problem) were able to figure out the root cause and the fix.
Claude told me it stopped debugging since it would run out of tokens in its context window. I asked how many tokens it had left and it said actually it had plenty so could continue. Then again it stopped, and without me asking about tokens, wrote
Context Usage • Used: 112K/200K tokens (56%) • Remaining: 88K tokens • Sufficient for continued debugging, but fresh session recommended for clarity
lol. I said ok use a subagent for clarity.
That's interesting, that only happened to me on GPT models in Cursor. It would apologize profusely.
I just recently found a number of bugs in both the RELIC and MCL libraries. It took a while to track them down, but it was remarkable that the LLM was able to find them at all.
With AI, we will finally be able to do the impossible: roll our own crypto.
It is very much possible, it's just a bad idea. Doubly so with AI.
You're not going to do better than the NSA.
To be fair you're also not going to be backdoored by the NSA.
I don’t have to.
LLMs built by trillion dollar companies will do it for me.
That's exactly not what he's doing.
This is one of the things I've mentioned before; I think it's just hidden a bit and hard to see, but this is basically the LLM doing style transfer, which they're really good at. There was a specification for the code (which looks like it was already trained into the LLM, since it didn't have to go fetch it yet had intimate knowledge of it), there was an implementation, and it's really good at extracting the style difference between code and spec. Anything that looks like style transfer is a good use for LLMs.
As another example, I think things like "write unit tests for this code" are usually a similar sort of style transfer as well, based on how it writes the tests. It definitely has a good idea of how to ensure that all the functionality gets tested; I find it is less likely to produce "creative" ways that bugs may come out, but hey, it's a good start.
This isn't a criticism, it's intended to be a further exploration and understanding of when these tools can be better than you might intuitively think.
As declared by an expert in cryptography who knows how to guide the LLM into debugging low-level cryptography, which is good.
Quite different if you are not a cryptographer or a domain expert.
Even the title of the post makes this clear: it's about debugging low-level cryptography. He didn't vibe code ML-DSA. You have to be building a low-level implementation in the first place for anything in this post to apply to it.