Comment by Cthulhu_
9 hours ago
> I started the project in my brain and it has many flaws and nuances which I think LLMs are struggling to respect.
The project, or your brain? I think this is what a lot of LLM coders run into - they have a lot of tacit knowledge that is difficult and time-consuming to put into words. Vibes, if you will, like "I can't explain it, but this code looks wrong".
I updated my original comment to explain my reasoning a bit more clearly.
Essentially, when I ask an LLM to look at a project, it only sees the current state of the codebase; it doesn't see the iterations, hacks, refactors, and reverts.
It also doesn't see the first functionality I wrote for it at v1.
This could indeed be mitigated by giving the LLM the git log and telling it the story, but I'm not sure that would solve my issue.
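As a rough illustration of the "give it the git log as a story" idea, here's a minimal Python sketch; the file path, commit count, and output filename are placeholders I made up, not anything from a real tool:

```python
# Rough sketch: dump one file's git history into a doc the model can read.
# The target path, commit count, and output filename are arbitrary placeholders.
import subprocess

def file_history(path: str, max_commits: int = 20) -> str:
    """Return the last `max_commits` commits that touched `path`, patches included."""
    result = subprocess.run(
        ["git", "log", "--follow", "--patch", f"-n{max_commits}", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Hypothetical file; write its history somewhere the agent can pick it up.
    with open("HISTORY.md", "w") as f:
        f.write(file_history("src/core.py"))
```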
I'm now letting Claude Code write commits + PRs (for my solo dev stuff), and the benefits have been pretty immense: it's basically Claude keeping a history of its work that can be referenced at any time and that also lives outside the code context window.
FWIW, it works a lot better to have it interact via the CLI than via MCP.
I personally don't have any trouble with that. Using Sonnet 3.7 in Claude Code, I just ask it to spelunk the git history for a certain segment of the code if I think it will be meaningful for its task.
Out of curiosity, why 3.7 Sonnet? I see lots of people saying to always use the latest and greatest 4.5 Opus. Do you find that it’s good enough that the increased token cost of larger/more recent models isn’t worth it? Or is there more to it?
Yes, a lot of coders are terrible at documentation (both doc files and code docs) as well as at maintaining good test coverage. Software should not need to live in one's head after it's written; it should be well architected and self-documenting - and when it is, both humans and LLMs navigate it pretty well (when augmented with good context management, helper MCPs, etc.).
I've been a skeptic, but now that I'm getting into using LLMs, I'm finding that being very descriptive and writing down my thoughts, preferences, assumptions, etc. helps greatly.
I suppose a year ago we were talking about prompt engineers, so it's partly about being good at describing problems.
One trick to get out of this scenario where you're writing a ton is to ask the model to interview you until you're aligned on what is being built. Claude Code and opencode both have an AskUserQuestionTool which is really nice for this and cuts down on explanation a lot. It becomes an iterative interview and clarifies my thinking significantly.
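For anyone who wants to reproduce the interview pattern outside of those tools, here's a rough sketch using the Anthropic Python SDK; the model id, system prompt, and "SPEC:" sentinel are my own assumptions, not how AskUserQuestionTool actually works:

```python
# Rough sketch of the "interview me first" pattern via the Anthropic SDK.
# The model id, system prompt, and "SPEC:" sentinel are assumptions for
# illustration, not Claude Code's actual AskUserQuestionTool behavior.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
messages = [{"role": "user", "content": "I want to build a CLI todo app."}]

system = (
    "Interview the user one question at a time until you fully understand "
    "what they want to build. Once you are confident, reply with a message "
    "starting with 'SPEC:' followed by a short build plan."
)

while True:
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=1024,
        system=system,
        messages=messages,
    )
    text = reply.content[0].text
    print(text)
    if text.startswith("SPEC:"):
        break  # alignment reached; the plan is in `text`
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": input("> ")})
```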