In my limited experience, this is not the right use of introspection. Instead, the idea is to interrogate the model's chain of reasoning to understand the origins of a mistake (the 'theory of mind'), then adjust agents.md / documentation so that the mistake is avoided for future sessions, which start from an otherwise blank slate.
I do agree, however, that the 'theory of mind' is very close to the more blatantly incorrect kind of misapprehension about LLMs, that since they sound humanlike they have long-term memory like humans. This is why LLM apologies are a useless sycophancy trap.
> Asking it why it did something is completely useless because you aren't interrogating a person with a memory or a rationale, you’re querying a statistical model that is spitting out a justification for a past state it no longer occupies.
Asking it why it did something isn’t useless, it just isn’t foolproof. If you really think it’s useless, you are way too heavily into binary thinking to be using AI.
Quite a surprising result: “across multiple coding agents and LLMs, we find that context files tend to reduce task success rates compared to providing no repository context, while also increasing inference cost by over 20%.”
Hey, paper author here.
We did try to get a balanced sample - we include both SWE-bench repos (which are large, popular and mostly human-written) and a sample of smaller, more recent repositories with existing AGENTS.md (these tend to contain LLM-written code, of course). Our findings generalize across both samples. What is arguably missing is small repositories of completely human-written code, but those are quite difficult to find nowadays.
I think that is a rather fitting approach to the problem domain. A task being a real GitHub issue is a solid definition by any measure, and I see no problem picking language A over B or C.
If you feel strongly about the topic, you are free to write your own article.
Yesterday, while I was adding some nitpicks to a CLAUDE.md/AGENTS.md file, I thought "this file could be renamed CONTRIBUTING.md and be done with it".
Maybe I'm wrong, but it sure feels like we might soon drop all of this extra cruft for more rational practices.
You could have claude --init create this hook, and then it gets into the context at start and on resume.
Or create it in some other way
    {
      "hookSpecificOutput": {
        "hookEventName": "SessionStart",
        "additionalContext": "<contents of your file here>"
      }
    }
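If it helps, here is a minimal sketch of what such a hook command could look like: a small Python script that reads README.md and prints the JSON above on stdout. The filename, and the assumption that it is registered as a SessionStart hook command in your Claude Code settings, are mine, so check the hooks docs for the exact wiring.

    #!/usr/bin/env python3
    # Hypothetical SessionStart hook command: emits the repo README as
    # additional context. Assumes the agent invokes it on session start
    # and reads the JSON it prints to stdout.
    import json
    from pathlib import Path

    readme = Path("README.md")
    text = readme.read_text(encoding="utf-8") if readme.exists() else ""

    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "SessionStart",
            "additionalContext": text,
        }
    }))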
I thought it was such a good suggestion that I made this just now and made it global to inject README at startup / resume / post compact - I'll see how it works out
And that makes total sense. Honestly, having worked with Opus 4.6 for a few days, it really feels like a competent coworker, but it needs some explicit conventions to follow … exactly like onboarding a new IC! So I think there is a bright light to be seen: this will force having proper and explicit contribution rules and conventions, both for humans and robots.
Exactly, it's the same documentation any contributor would need, just actually up-to-date and pared down to the essentials because it's "tested" continuously. If I were starting out on a new codebase, AGENTS.md is the first place I'd look to get my bearings.
LLMs are generally bad at writing non-noisy prompts and instructions. It's better to have them write instructions post hoc. For instance, I paste this prompt into the end of most conversations:
If there’s a nugget of knowledge learned at any point in this conversation (not limited to the most recent exchange), please tersely update AGENTS.md so future agents can access it. If nothing durable was learned, no changes are needed. Do not add memories just to add memories.
Update AGENTS.md **only** if you learned a durable, generalizable lesson about how to work in this repo (e.g., a principle, process, debugging heuristic, or coding convention). Do **not** add bug- or component-specific notes (for example, “set .foo color in bar.css”) unless they reflect a broader rule.
If the lesson cannot be stated without referencing a specific selector or file, skip the memory and make no changes. Keep it to **one short bullet** under an appropriate existing section, or add a new short section only if absolutely necessary.
It rarely adds rules, but when it does, the rules it adds genuinely improve behavior. This works very well.
Another common mistake is to have very long AGENTS.md files. The file should not be long. If it's longer than 200 lines, you're certainly doing it wrong.
> If nothing durable was learned, no changes are needed.
Off topic, but oh my god if you don't do this, it will always do the thing you conditionally requested it to do. Not sure what to call this but it's my one big annoyance with LLMs.
It's like going to a sub shop and asking for just a tiny bit of extra mayo and they heap it on.
I'd be interested to see results with Opus 4.6 or 4.5
Also, I bet the quality of these docs varies widely across both human- and AI-generated ones. Good AGENTS.md files should have progressive disclosure so only the items required by the task are pulled in (e.g. for DB schema related topics, see such and such a file).
Then there's the choice of pulling things into AGENTS.md vs. skills, which the article doesn't explore.
I do feel for the authors, since the article already feels old. The models and tooling around them are changing very quickly.
Agree that progressive disclosure is fantastic, but
> (e.g. for DB schema related topics, see such and such a file).
Rather than doing this, put another AGENTS.md file in a DB-related subfolder. It will be automatically pulled into context when the agent reads any files in that folder. This is supported out of the box by any agent worth its salt, including OpenCode and CC.
IMO static instructions referring an LLM to other files are an anti-pattern, at least with current models. This is a flaw of the skills spec, which refers to creating a "references" folder and such. I think initial skills demos from Anthropic also showed this. This doesn't work.
> This is supported out of the box by any agent worth its salt, including OpenCode and CC.
I thought Claude Code didn't support AGENTS.md? At least according to this open issue[0], it's still unsupported and has to be symlinked to CLAUDE.md to be automatically picked up.
Progressive disclosure is good for reducing context usage but it also reduces the benefit of token caching. It might be a toss-up, given this research result.
Any well-maintained project should already have a CONTRIBUTING.md that has good information for both humans and agents.
Sometimes I actually start my sessions like this "please read the contributing.md file to understand how to build/test this project before making any code changes"
- Don't state the obvious: I wouldn't hand a senior human dev a copy of "Clean Code" before every ticket and expect them to work faster.
- File vs. Prompt is a false dichotomy: The paper treats "Context Files" as a separate entity, but technically, an AGENTS.md is just a system prompt injection. The mechanism is identical. The study isn't proving that "files are bad," it's proving that "context stuffing" is bad. Whether I paste the rules manually or load them via a file, the transformer sees the same tokens.
- Latent vs. Inferable Knowledge: This is the key missing variable. If I remove context files, my agents fail at tasks requiring specific process knowledge - like enforcing strict TDD or using internal wrapper APIs that aren't obvious from public docs. The agent can't "guess" our security protocols or architectural constraints. That's not a performance drag; it's a requirement. The paper seems to conflate "adding noise" with "adding constraints."
I only add things when the LLM gets something wrong and I need to correct it. Like "no, we create db migrations using this tool" kind of corrections. So far this has made them behave correctly in those situations.
Their definition of context excludes prescriptive specs/requirements files. They are only talking about a file that summarizes what exists in the codebase, which is information that's otherwise discoverable by the agent through CLI (ripgrep, etc), and it's been trained to do that as efficiently as possible.
Also important to note that human-written context did help according to them, if only a little bit.
Effectively what they're saying is that inputting an LLM generated summary of the codebase didn't help the agent. Which isn't that surprising.
I find it surprising. The piece of code I'm working on is about 10k LoC defining the basic structures and functionality, and I found Claude Code would systematically spend significant time and tokens exploring it to add even basic functionality. Part of the issue is that this deals with a problem domain LLMs don't seem to be very well trained on, so they have to take it all in; they don't seem to know what to look for in advance.
I went through a couple of iterations of the CLAUDE.md file, first describing the problem domain and library intent (that helped target search better as it had keywords to go by; note a domain-trained human would know these in advance from the three words that comprise the library folder name) and finally adding a concise per-function doc of all the most frequently used bits. I find I can launch CC on a simple task now, without it spending minutes reading the codebase before getting started.
The article is interesting but I think it deviates from a common developer experience as many don't work on Python libraries, which likely heavily follow patterns that the model itself already contains.
Hey, a paper author here :)
I agree: if you know LLMs well, it shouldn't be too surprising that autogenerated context files are not helping - yet this is the default recommendation from major AI companies, which we wanted to scrutinize.
> Their definition of context excludes prescriptive specs/requirements files.
Can you explain a bit what you mean here? If the context file specifies a desired behavior, we do check whether the LLM follows it, and this seems generally to work (Section 4.3).
each role owns specific files. no overlap means zero merge conflicts across 1800+ autonomous PRs. planning happens in `.sys/plans/{role}/` as written contracts before execution starts. time is the mutex.
AGENTS.md defines the vision. agents read the gap between vision and reality, then pull toward it. no manager, no orchestration.
agents ship features autonomously. 90% of PRs are zero human in the loop. the one pain point is refactors. cross-cutting changes don't map cleanly to single-role ownership
AGENTS.md works when it encodes constraints that eliminate coordination. if it's just a roadmap, it won't help much.
I'd take any paper like this with a grain of salt. I imagine what holds true for models in time period X could be drastically different given just a little more time.
Doesn't mean it's not worth studying this kind of stuff, but this conclusion is already so "old" that it's hard to say it's valid anymore with the latest batch of models.
I use AGENTS.md daily for my personal AI setup. The biggest win is giving the agent project-specific context — things like deployment targets, coding conventions, and what not to do. Without it, the agent makes generic assumptions that waste time.
In my experience AGENTS.md files only save a bit of time; they don't meaningfully improve success. Agents are smart enough to figure stuff out on their own, but you can save a few tool calls and a bit of context by telling them how to build your project or what directories do what, rather than letting them stumble their way there.
What is the purpose of an AGENTS.md file when there are so many different models? Which model or version of the model is the file written for? So much depends on assumptions here. It only makes sense when you know exactly which model you are writing for. No wonder the impact is 'all over the place'.
Many of the practices in this field are mostly based on feelings and wishful thinking, rather than any demonstrable benefit. Part of the problem is that the tools are practically nondeterministic, and their results can't be compared reliably.
The other part is fueled by brand recognition and promotion, since everyone wants to make their own contribution with the least amount of effort, and coming up with silly Markdown formats is an easy way to do that.
EDIT: It's amusing how sensitive the blue-pilled crowd is when confronted with reality. :)
If I understand the paper correctly, the researchers found that AGENTS.md context files caused the LLMs to burn through more tokens as they parsed and followed the instructions, but they did not find a large change in the success rate (defined by "the PR passes the existing unit tests in the repo").
What wasn't measured, probably because it's almost impossible to quantify, was the quality of the code produced. Did the context files help the LLMs produce code that matched the style of the rest of the project? Did the code produced end up reasonably maintainable in the long run, or was it slop that increased long-term tech debt? These are important questions, but as they are extremely difficult to assign numbers to and measure in an automated way, the paper didn't attempt to answer them.
The only thing I use CLAUDE.md for is explaining the purpose and general high level design principles of the project so I don't have to waste my time reiterating this every time I clear the context. Things like this is a file manager, the deliverable must always be a zipapp, Wayland will never be supported.
I added these to that file because otherwise I will have to tell claude these things myself, repeatedly. But the science says... Respectfully, blow it out your ass.
Research has shown that most earlier "techniques" to get better LLM responses no longer work and are actively harmful with modern models. I'm so grateful that there's actual studies and papers about this and that they keep coming out. Software developers are super cargo culty and will do whatever the next guy does (and that includes doing whatever is suggested in research papers)
Software developers don't have to be cargo-culty... if they're working on systems that are well-documented or are open-source (or at least source-available) so that you can actually dig in to find out how the system works.
But with LLMs, the internals are not well-documented, most are not open-source (and even if the model and weights are open-source, it's impossible for a human to read a grid of numbers and understand exactly how it will change its output for a given input), and there's also an element of randomness inherent to how the LLM behaves.
Given that fact, it's not surprising to find that developers trying to use LLMs end up adding certain inputs out of what amounts to superstition ("it seems to work better when I tell it to think before coding, so let's add that instruction and hopefully it'll help avoid bad code" but there's very little way to be sure that it did anything). It honestly reminds me of gambling fallacies, e.g. tabletop RPG players who have their "lucky" die that they bring out for important rolls. There's insufficient input to be sure that this line, which you add to all your prompts by putting it in AGENTS.md, is doing anything — but it makes you feel better to have it in there.
(None of which is intended as a criticism, BTW: that's just what you have to do when using an opaque, partly-random tool).
Most of these AI-guiding "techniques" seem more like reading into tea leaves to me than anything actually useful.
Even with the latest and greatest (because I know people will reflexively immediately jump down my throat if I don't specify that, yes, I've used Opus 4.6 and Gemini 3 Pro etc. etc. etc. etc., I have access to all of the models by way of work and use them regularly), my experience has been that it's basically a crapshoot that it'll listen to a single one of these files, especially in the long run with large chats. The amount of times I still have to tell these things to not generate React in my Vue codebase that has literally not a single line of JSX anywhere and instructions in every single possible file I can put it in to NOT GENERATE FUCKING REACT CODE makes me want to blow my brains out every time it happens. In fact it happened to me today with the supposed super intelligence known as Opus 4.6 that has 18 trillion TB of context or whatever in a fresh chat when I asked for a quick snippet I needed to experiment with.
I'm not even paying for this crap (work is) and I still feel scammed approximately half the time, and can't help but think all of these suggestions are just ways to inflate token usage and to move you into the usage limit territory faster.
Claude/Opus 4.6: Can you add a console.log in foo XYZ?
No problem: x agents and hundreds of thousands to close to one million tokens used to add a line of code.
Gemini 3: Can you review commit A (the console.log one)? "You have made the most significant change in your 200kloc code base; this key change will allow you to get great insight into your software."
Codex: I have reviewed your change; you are missing tests and integration tests.
But I fully agree - overall I feel there are a lot of tea-leaf readers online and on LinkedIn.
What are you putting in the file? When I’ve looked at them they just looked like a second readme file without the promotional material in a typical GitHub readme.
That's basically all it is. It's a readme file that is guaranteed to be read. So the agent doesn't spend 10 minutes trying to re-configure the toolchain because the first command it guessed didn't work.
I read the study. I think it does the opposite of what the authors suggest - it's actually vouching for good AGENTS.md files.
> Surprisingly, we observe that developer-provided files only marginally improve performance compared to omitting them entirely (an increase of 4% on average), while LLM-generated context files have a small negative effect on agent performance (a decrease of 3% on average).
This "surprisingly", and the framing seems misplaced.
For the developer-made ones: 4% improvement is massive! 4% improvement from a simple markdown file means it's a must-have.
> while LLM-generated context files have a small negative effect on agent performance (a decrease of 3% on average)
This should really be "while the prompts used to generate AGENTS files in our dataset..". It's a proxy for the prompts; who knows whether files generated with a better prompt would show improvement.
The biggest use case for AGENTS.md files is domain knowledge that the model is not aware of and cannot instantly infer from the project. That is gained slowly over time from seeing the agents struggle due to this deficiency. Exactly the kind of thing very common in closed-source, yet incredibly rare in public GitHub projects that have an AGENTS.md file - the huge majority of which are recent small vibe-coded projects centered around LLMs. If 4% gains are seen on the latter kind of project, which will have very mixed-quality AGENTS files in the first place, then for bigger projects with high-quality .md's they're invaluable when working with agents.
Hey thanks for your review, a paper author here.
Regarding the 4% improvement for human written AGENTS.md: this would be huge indeed if it were a _consistent_ improvement. However, for example on Sonnet 4.5, performance _drops_ by over 2%. Qwen3 benefits most and GPT-5.2 improves by 1-2%.
The LLM-generated prompts follow the coding agent recommendations. We also show an ablation over different prompt types, and none have consistently better performance.
But ultimately I agree with your post. In fact we do recommend writing good AGENTS.md files, manually and in a targeted way. This is emphasized for example at the end of our abstract and conclusion.
Without measuring quality of output, this seems irrelevant to me.
My use of CLAUDE.md is to get Claude to avoid making stupid mistakes that will require subsequent refactoring or cleanup passes.
Performance is not a consideration.
If anything, beyond CLAUDE.md I add agent harnesses that often increase the time and tokens used many times over, because my time is more expensive than the agents.
You're measuring binary outcomes, so you can use a beta distribution to understand the distribution of possible success rates given your observations, and thereby provide a confidence interval on the observed success rates. This would help us see whether that 4% difference in success rate is statistically significant, or if it is likely to be noise.
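For what it's worth, that calculation is only a few lines. A minimal sketch with scipy, assuming a uniform Beta(1, 1) prior so the posterior is Beta(successes + 1, failures + 1); the task counts are made up purely for illustration, not taken from the paper:

    # 95% credible intervals for observed success rates via a Beta posterior.
    # Counts below are illustrative only, not the paper's numbers.
    from scipy.stats import beta

    def credible_interval(successes, trials, level=0.95):
        tail = (1 - level) / 2
        # Posterior under a uniform Beta(1, 1) prior.
        return beta.ppf([tail, 1 - tail], successes + 1, trials - successes + 1)

    print(credible_interval(52, 100))  # hypothetical: without AGENTS.md
    print(credible_interval(56, 100))  # hypothetical: with AGENTS.md (+4 points)

With samples around that size the two intervals overlap heavily, so a 4-point gap on its own could easily be noise.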
> Regarding the 4% improvement for human written AGENTS.md: this would be huge indeed if it were a _consistent_ improvement. However, for example on Sonnet 4.5, performance _drops_ by over 2%. Qwen3 benefits most and GPT-5.2 improves by 1-2%.
Ok so that's interesting in itself. Apologies if you go into this in the paper, not had time to read it yet, but does this tell us something about the models themselves? Is there a benchmark lurking here? It feels like this is revealing something about the training, but I'm not sure exactly what.
Thank you for turning up here and replying!
> The LLM-generated prompts follow the coding agent recommendations. We also show an ablation over different prompt types, and none have consistently better performance.
I think the LLM-generated AGENTS.md files recommended by coding agents are almost without exception really bad. Because the AGENTS.md, to perform well, needs to point out the _non_-obvious. Every single LLM-generated AGENTS.md I've seen - including from certain vendors who at one point included automatic AGENTS.md generation out of the box - wrote about the obvious things! The literal opposite of what you want. Indeed a complete and utter waste of tokens that does nothing but induce context rot.
I believe this is because creating a good one consumes a massive amount of resources and some engineering for any non-trivial codebase. You'd need multiple full-context iterations, and a large number of thinking tokens.
On top of that, and I've said this elsewhere, most of the best stuff to put in AGENTS.md is things that can't be inferred from the repo. Things like "Is this intentional?", "Why is this the case?" and so on. Obviously, neither the LLM nor a new-to-the-project human could know this or add it to the file. And the gains from this are also hard to capture by your performance metric, because they're not really about the solving of issues; they're often about direction, or about the how rather than the what.
As for the extra tokens, the right AGENTS.md can save lots of tokens, but it requires thinking hard about them. Which system/business logic would take the agent 5 different file reads to properly understand, but can be summarized by us in 3 sentences?
In Theory There Is No Difference Between Theory and Practice, While In Practice There Is.
In large projects, having a specific AGENTS.md makes the difference between the agent spending half of its context window searching for the right commands, navigating the repo, understanding what is what, etc., and being extremely useful. The larger the repository, the more things it needs to be aware of and the more important the AGENTS.md is. At least that's what I have observed in practice.
> The biggest usecase for AGENTS.md files is domain knowledge that the model is not aware of and cannot instantly infer from the project. That is gained slowly over time from seeing the agents struggle due to this deficiency.
This. I have Claude write about the codebase because I get tired of it grepping files constantly. I'd rather it just know “these files are for x, these files have y methods” and I even have it break down larger files so it fits the entire context window several times over.
Funnily enough this makes it easier for humans to parse.
My pet peeve with AI is that it tends to work better in codebases where humans do well, and for the same reason.
Large orchestration package without any tests that relies on a bunch of microservices to work? Claude Code will be as confused as our SDEs.
This in turn leads to a broader effort to refactor our antiquated packages in the name of "making it compatible with AI", which actually means compatible with humans.
This reads a lot like the bargaining stage. If agentic AI makes me a 10 times more productive developer, surely a 4% improvement is barely worth the token cost.
> If agentic AI makes me a 10 times more productive
I'm not sure what you are suggesting exactly, but wanted to highlight this humongous "if".
It's not only about the token cost! It's also my TIME cost! Much-much more expensive than tokens, it turns out ;)
If something makes you 10x as effective and then you improve that thing by 4%...
Is that 10x quantity or quality?
Honestly, the more research papers I read, the more suspicious I am. This "surprisingly" and other hyperbole is just there to make reviewers think the authors actually did something interesting/exciting. But the more "surprises" there are in a paper, the more I am suspicious of it. At best, such hyperbole ought to be ignored; at worst, the exact opposite needs to be examined.
It seems like the best students/people eventually end up doing CS research in their spare time while working as engineers. This is not the case for many other disciplines, where you need e.g. a lab to do research. But in CS, you can just do it from your basement, all you need is a laptop.
Well, you still need time (and permission from your employer)! Research is usually a more than full time job on its own.
4% is yuuuge. In hard projects, 1% is the difference between getting it right with an elegant design or going completely off the rails.
This is why I only add information to AGENTS.md when the agent has failed at a task. Then, once I've added the information, I revert the desired changes, re-run the task, and see if the output has improved. That way, I can have more confidence that AGENTS.md has actually improved coding agent success, at least with the given model and agent harness.
I do not do this for all repos, but I do it for the repos where I know that other developers will attempt very similar tasks, and I want them to be successful.
You can also save time/tokens if you see that every request starts looking for the same information. You can front-load it.
Also, take the randomness out of it. Sometimes the agent executes tests one way, sometimes another way.
Don't forget to update it regularly then
That's a sensible approach, but it still won't give you 100% confidence. These tools produce different output even when given the same context and prompt. You can't really be certain that the output difference is due to isolating any single variable.
So true! I've also set up automated evaluations using the GitHub Copilot SDK so that I can re-run the same prompt and measure results. I only use that when I want even more confidence, and typically when I want to more precisely compare models. I do find that the results have been fairly similar across runs for the same model/prompt/settings, even though we cannot set a seed for most models/agents.
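The harness doesn't have to be tied to any particular SDK either. A rough sketch of the repeat-run idea, where the agent invocation is a hypothetical placeholder (the CLI name below is made up) and pass/fail comes from the test suite:

    # Repeat-run evaluation sketch: run the same prompt N times and report the
    # pass rate. "your-agent-cli" is a placeholder, not a real tool.
    import subprocess

    def run_trial(prompt):
        subprocess.run(["your-agent-cli", "--prompt", prompt], check=False)
        # Judge pass/fail with the test suite, then roll back tracked changes
        # so trials stay (mostly) independent.
        passed = subprocess.run(["pytest", "-q"], check=False).returncode == 0
        subprocess.run(["git", "checkout", "--", "."], check=False)
        return passed

    def success_rate(prompt, n=10):
        return sum(run_trial(prompt) for _ in range(n)) / n

    print(success_rate("Fix the failing date-parsing test"))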
Same with people: no matter what info you give a person, you can't be sure they will follow it the same way every time.
Agree. I have also found that a rule-discovery approach like this performs better. It is like teaching a student: they have probably already performed well on some task, and if we feed in another extra rule they are already well versed in, it can hinder their creativity.
My personal experience is that it’s worthwhile to put instructions, user-manual style, into the context. These are things like:
- How to build.
- How to run tests.
- How to work around the incredible crappiness of the codex-rs sandbox.
I also like to put in basic style-guide things like “the minimum Python version is 3.12.” Sadly I seem to also need “if you find yourself writing TypeVar, think again” because (unscientifically) it seems that putting the actual keyword that the agent should try not to use makes it more likely to remember the instructions.
I also try to avoid negative instructions. No scientific proof, just a feeling, same as you: "do not delete the tmp file" can too often lead to deleting the tmp file.
It’s like instructing a toddler.
For TypeVar I’d reach for a lint warning instead.
Then little toddler LLM will announce something like “I implemented what you requested and we’re all done. You can run the lint now.” And I’ll reply “do it yourself.”
I can only assume that everyone reporting amazing success with agent swarms and very long running tasks are using a different harness than I am :)
I also have felt like these kinds of efforts at instructions and agent files have been worthwhile, but I am increasingly of the opinion that such feelings represent self-delusion from seeing and expecting certain things aided by a tool that always agrees with my, or its, take on utility. The agent.md file looks like it’d work, it looks how you’d expect, but then it fails over and over. And the process of tweaking is pleasant chatting with supportive supposed insights and solutions, which means hours of fiddling with meta-documentation without clear rewards because of partial adherence.
The paper's conclusions align with my personal experiments at managing a small knowledge base with LLM rules. The application of rules was inconsistent, the execution of them fickle, and fundamental changes in processing would happen from week to week as the model usage was tweaked. But rule tweaking always felt good. The LLM said it would work better, and the LLM said it had read and understood the instructions, and the LLM said it would apply them… I felt like I understood how best to deliver data to the LLMs, only to see recurrent failures.
LLMs lie. They have no idea, no data, and no insights into specific areas, but they’ll make pleasant reality-adjacent fiction. Since chatting is seductive, and our time sense is impacted by talking, I think the normal time-versus-productivity sense is pulled further out of whack. Devs are notoriously bad at estimating where they’re using time, and long feedback loops filled with phone time and slow-ass conversation don’t help.
When an agent just plows ahead with a wrong interpretation or understanding of something, I like to ask them why they didn't stop to ask for clarification. Just a few days ago, while refactoring minor stuff, I had an agent replace all sqlite-related code in that codebase with MariaDB-based code. Asked why that happened, the answer was that there was a confusion about MariaDB vs. sqlite because the code in question is dealing with, among other things, MariaDB Docker containers. So the word MariaDB pops up a few times in code and comments.
I then asked if there is anything I could do to prevent misinterpretations from producing wild results like this. So I got the advice to put an instruction in AGENTS.md that would urge agents to ask for clarification before proceeding. But I didn't add it. Out of the 25 lines of my AGENTS.md, many are already variations of that. The first three:
- Do not try to fill gaps in your knowledge with overzealous assumptions.
- When in doubt: Slow down, double-check context, and only touch what was explicitly asked for.
- If a task seems to require extra changes, pause and ask before proceeding.
If these are not enough to prevent stuff like that, I don't know what could.
Are agents actually capable of answering why they did things? An LLM can review the previous context, add your question about why it did something, and then use next token prediction to generate an answer. But is that answer actually why the agent did what it did?
It depends. If you have an LLM that uses reasoning, the explanation for why decisions are made can often be found in the reasoning token output. So if the agent later has access to that context, it could see why a decision was made.
Of course not, but it can often give a plausible answer, and it's possible that answer will actually happen to be correct - not because it did any introspection, or is capable of any, but because its token outputs in response to the question might semi-coincidentally be a token input that changes the future outputs in the same way.
Well, the entire field of explainable AI has mostly thrown in the towel...
Isn't that question a category error? The "why" the agent did that is that it was the token that best matched the probability distribution of the context and the most recent output (modulo a bit of randomness). The response to that question will, again, be the tokens that best match the probability distribution of the context (now including the "why?" question and the previous failed attempt).
if the agent can review its reasoning traces, which i think is often true in this era of 1M token context, then it may be able to provide a meaningful answer to the question.
Just this morning I ran across an even narrower case of how AGENTS.md (in this case with GPT-5.3 Codex) can be completely ignored even when filled with explicit instructions.
I have a line there that says Codex should never use Node APIs where Bun APIs exist for the same thing. Routinely, Claude Code and now Codex would ignore this.
I just replaced that rule with a TypeScript-compiler-powered, AST-based deterministic rule. Now the agent can attempt to commit code with banned Node API usage and the pre-commit script will fail, so it is forced to get it right.
I've found myself migrating more and more of my AGENTS.md instructions to compiler-based checks like these - where possible. I feel as though this shouldn't be needed if the models were good, but it seems to be, and I guess the deterministic nature of these checks is better than relying on the LLM's questionable respect for the rules.
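For anyone who wants that kind of gate without touching the TypeScript compiler API, a much cruder stand-in works too: a pre-commit script that scans staged .ts files for node: import specifiers. This is a regex-level sketch rather than the AST rule described above, and the ban list is invented for illustration:

    # Pre-commit sketch: block staged .ts files that import banned Node
    # built-ins. A regex stand-in for a proper AST-based rule; the ban list
    # below is illustrative, not a real recommendation.
    import re
    import subprocess
    import sys

    BANNED = {"node:fs", "node:sqlite"}  # invented examples
    IMPORT_RE = re.compile(r'from\s+["\'](node:[\w/]+)["\']')

    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    bad = []
    for path in staged:
        if not path.endswith(".ts"):
            continue
        try:
            text = open(path, encoding="utf-8").read()
        except OSError:
            continue
        bad += [(path, m.group(1)) for m in IMPORT_RE.finditer(text) if m.group(1) in BANNED]

    for path, spec in bad:
        print(f"{path}: banned Node API import '{spec}' - use the Bun equivalent")
    sys.exit(1 if bad else 0)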
Not that much different from humans.
We have pre-commit hooks to prevent people doing the wrong thing. We have all sorts of guardrails to help people.
And the “modern” approach when someone does something wrong is not to blame the person, but to ask “how did the system allow this mistake? What guardrails are missing?”
I wonder if some of these could be embedded in the write tool calls?
> So I got the advice to put an instruction in AGENTS.md that would urge agents to ask for clarification before proceeding.
You may want to ask the next LLM versions the same question after they feed this paper through training.
It seems like LLMs in general still have a very hard time with the concepts of "doubt" and "uncertainty". In the early days this was very visible in the form of hallucinations, but it feels like they fixed that mostly by having better internal fact-checking. The underlying problem of treating assumptions as truth is still there, just hidden better.
LLMs are basically improv theater. If the agent starts out with a wildly wrong assumption it will try to stick to it and adapt it rather than starting over. It can only do "yes and", never "actually nevermind, let me try something else".
I once had an agent come up with what seemed like a pointlessly convoluted solution as it tried to fit its initial approach (likely sourced from framework documentation overemphasizing the importance of doing it "the <framework> way" when possible) to a problem for which, to me, it didn't really seem like a good fit. It kept reassuring me that this was the way to go and my concerns were invalid.
When I described the solution and the original problem to another agent running the same model, it would instantly dismiss it and point out the same concerns I had raised - and it would insist on those being deal breakers the same way the other agent had dismissed them as invalid.
In the past I've often found LLMs to be extremely opinionated while also flipping their positions on a dime once met with any doubt or resistance. It feels like I'm now seeing the opposite: the LLM just running with whatever it picked up first from the initial prompt and then being extremely stubborn and insisting on rationalizing its choice no matter how much time it wastes trying to make it work. It's sometimes better to start a conversation over than to try and steer it in the right direction at that point.
Doubt and uncertainty is left for us humans.
I really hate that the anthropomorphizing of these systems has successfully taken hold in people's brains. Asking it why it did something is completely useless because you aren't interrogating a person with a memory or a rationale, you’re querying a statistical model that is spitting out a justification for a past state it no longer occupies.
Even the "thinking" blocks in newer models are an illusion. There is no functional difference between the text in a thought block and the final answer. To the model, they are just more tokens in a linear sequence. It isn't "thinking" before it speaks, the "thought" is the speech.
Treating those thoughts as internal reflection of some kind is a category error. There is no "privileged" layer of reasoning happening in the silicon that then gets translated into the thought block. It’s a specialized output where the model is forced to show its work because that process of feeding its own generated strings back into its context window statistically increases the probability of a correct result. The chatbot providers just package this in a neat little window to make the model's "thinking" part of the gimmick.
I also wouldn't be surprised if asking it stuff like this was actually counterproductive, though here I'm going off vibes. The logic being that by asking, you're poisoning the context, similar to how if you try to generate an image by saying "It should not have a crocodile in the image", it will put a crocodile into the image. By asking it why it did something wrong, it'll treat that as ground truth, and all future generation will have that snippet in context, nudging the output so that the wrong thing itself keeps influencing it to do the wrong thing more and more.
You're entirely correct in that it's a different model with every message, every token. There's no past memory for it to reference.
That said, it can still be useful, because you have some weird behavior and 199k tokens of context, with no idea where the info is that's nudging it to do the weird thing.
In this case you can think of it less as "why did you do this?" and more as "what references to doing this exist in this pile of files and instructions?"
Agreed. I wish more people understood the difference between tokens, embeddings, and latent space encodings. The actual "thinking" if you can call it that, happens in latent space. But many (even here on HN) believe the thinking tokens are the thoughts themselves. Silly meatbags!
1 reply →
> I really hate that the anthropomorphizing of these systems has successfully taken hold in people's brains. Asking it why it did something is completely useless because you aren't interrogating a person with a memory or a rationale, you’re querying a statistical model that is spitting out a justification for a past state it no longer occupies.
"Thinking meat! You're asking me to believe in thinking meat!"
While next-token prediction based on matrix math is certainly a literal, mechanistic truth, it is not a useful framing in the same sense that "synapses fire causing people to do things" is not a useful framing for human behaviour.
The "theory of mind" for LLMs sounds a bit silly, but taken in moderation it's also a genuine scientific framework in the sense of the scientific method. It allows one to form hypothesis, run experiments that can potentially disprove the hypothesis, and ultimately make skillful counterfactual predictions.
> By asking it why it did something wrong, it'll treat that as the ground truth and all future generation will have that snippet in it, nudging the output in such a way that the wrong thing itself will influence it to keep doing the wrong thing more and more.
In my limited experience, this is not the right use of introspection. Instead, the idea is to interrogate the model's chain of reasoning to understand the origins of a mistake (the 'theory of mind'), then adjust agents.md / documentation so that the mistake is avoided for future sessions, which start from an otherwise blank slate.
I do agree, however, that the 'theory of mind' is very close to the more blatantly incorrect kind of misapprehension about LLMs, that since they sound humanlike they have long-term memory like humans. This is why LLM apologies are a useless sycophancy trap.
> Asking it why it did something is completely useless because you aren't interrogating a person with a memory or a rationale, you’re querying a statistical model that is spitting out a justification for a past state it no longer occupies.
Asking it why it did something isn't useless, it just isn't foolproof. If you really think it's useless, you are way too heavily into binary thinking to be using AI.
Perfect is the enemy of useful in this case.
2 replies →
This is like trying to fix hallucination by telling the LLM not to hallucinate.
So many times I have ended up here:
"You're absolutely correct. I should have checked my skills before doing that. I'll make sure I do it in the future."
Quite a surprising result: “across multiple coding agents and LLMs, we find that context files tend to reduce task success rates compared to providing no repository context, while also increasing inference cost by over 20%.”
Well, task == Resolving real GitHub Issues
Languages == Python only
Libraries == um, looks like other LLM-generated libraries -- I mean, definitely not purely human: Ragas, FastMCP, etc.
So this seems like a highly skewed sample, and who knows what can or can't be generalized. Does make for a compelling research paper though!
Hey, paper author here. We did try to get an even sample - we include both SWE-bench repos (which are large, popular and mostly human-written) and a sample of smaller, more recent repositories with existing AGENTS.md (these tend to contain LLM written code of course). Our findings generalize across both these samples. What is arguably missing are small repositories of completely human-written code, but this is quite difficult to obtain nowadays.
5 replies →
I think that is a rather fitting approach to the problem domain. A task being a real GitHub issue is a solid definition by any measure, and I see no problem picking language A over B or C.
If you feel strongly about the topic, you are free to write your own article.
> Libraries (um looks like other LLM generated libraries -- I mean definitely not pure human: like Ragas, FastMCP, etc)
How does this invalidate the result? Aren't AGENTS.md files put exactly into those repos that are partly generated using LLMs?
Yesterday, while I was adding some nitpicks to a CLAUDE.md/AGENTS.md file, I thought « this file could be renamed CONTRIBUTING.md and be done with it ».
Maybe I'm wrong, but it sure feels like we might soon drop all of this extra cruft for more rational practices.
Exactly my thoughts... the model should just auto ingest README and CONTRIBUTING when started.
You could have claude --init create this hook and then it gets into the context at start and resume
Or create it in some other way
I thought it was such a good suggestion that I made this just now and set it globally to inject the README at startup / resume / post-compact - I'll see how it works out
https://gist.github.com/lawless-m/fa5d261337dfd4b5daad4ac964...
with this hook
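The script part boils down to "print the README to stdout" - something like this sketch (not the actual gist; it assumes the harness is configured to add a session-start hook's stdout to the context on startup, resume, and post-compact):

```typescript
// Hypothetical session-start hook script: whatever it prints to stdout is
// assumed to be injected into the agent's context by the harness.
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

const readmePath = join(process.cwd(), "README.md");

if (existsSync(readmePath)) {
  console.log("## Project README (auto-injected at session start)\n");
  console.log(readFileSync(readmePath, "utf8"));
}
```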
1 reply →
And that makes total sense. Honestly, after working for a few days with Opus 4.6, it really feels like a competent coworker, but one that needs some explicit conventions to follow … exactly like when onboarding a new IC! So I think there is a bright side here: this will force having proper and explicit contribution rules and conventions, both for humans and robots.
Exactly, it's the same documentation any contributor would need, just actually up-to-date and pared down to the essentials because it's "tested" continuously. If I were starting out on a new codebase, AGENTS.md is the first place I'd look to get my bearings.
LLMs are generally bad at writing non-noisy prompts and instructions. It's better to have the model write instructions post hoc. For instance, I paste this prompt into the end of most conversations:
It rarely creates rules, but when it does, the changes positively affect behavior. This works very well.
Another common mistake is to have very long AGENTS.md files. The file should not be long. If it's longer than 200 lines, you're certainly doing it wrong.
> If nothing durable was learned, no changes are needed.
Off topic, but oh my god if you don't do this, it will always do the thing you conditionally requested it to do. Not sure what to call this but it's my one big annoyance with LLMs.
It's like going to a sub shop and asking for just a tiny bit of extra mayo and they heap it on.
LLMs generally seem to be trained with the assumption that if you mention it, you want it.
I don't think the instruction-following benches test for this much, and I don't know how you'd measure it well.
I'd be interested to see results with Opus 4.6 or 4.5
Also, I bet the quality of these docs varies widely across both human- and AI-generated ones. Good AGENTS.md files should have progressive disclosure so only the items required by the task are pulled in (e.g. for DB-schema-related topics, see such-and-such a file).
Then there's the choice of pulling things into AGENTS.md vs. skills, which the article doesn't explore.
I do feel for the authors, since the article already feels old. The models and tooling around them are changing very quickly.
Agree that progressive disclosure is fantastic, but
> (e.g. for DB schema related topics, see such and such a file).
Rather than doing this, put another AGENTS.md file in a DB-related subfolder. It will be automatically pulled into context when the agent reads any files in that folder. This is supported out of the box by any agent worth its salt, including OpenCode and CC.
IMO static instructions referring an LLM to other files are an anti-pattern, at least with current models. This is a flaw of the skills spec, which refers to creating a "references" folder and such. I think initial skills demos from Anthropic also showed this. This doesn't work.
> This is supported out of the box by any agent worth its salt, including OpenCode and CC.
I thought Claude Code didn't support AGENTS.md? At least according to this open issue[0], it's still unsupported and has to be symlinked to CLAUDE.md to be automatically picked up.
[0] https://github.com/anthropics/claude-code/issues/6235
3 replies →
This is probably the best comment in the thread. I'd totally forgotten about nested AGENTS.md files; gonna try implementing it today.
1 reply →
Progressive disclosure is good for reducing context usage but it also reduces the benefit of token caching. It might be a toss-up, given this research result.
Those are different axes - quality vs money.
Progressive disclosure is invaluable because it reduces context rot. Every single token in context influences future ones and degrades quality.
I'm also not sure how it reduces the benefit of token caching. They're still going to be cached, just later on.
It is still baffling to me why we need AGENTS.md
Any well-maintained project should already have a CONTRIBUTING.md that has good information for both humans and agents.
Sometimes I actually start my sessions like this: "please read the contributing.md file to understand how to build/test this project before making any code changes"
What if the harnesses had a simple system prompt like "read repository-level markdown and honor the house style"?
Think of the agent app store people's children man, it would be a sad Christmas.
Just symlink CONTRIBUTING as AGENTS
This only works on systems that support symlinks. It also pollutes the root folder with more files.
I understand the sentiment, but it is really strange that the people pushing for AGENTS.md haven't seen https://contributing.md/
It's even mentioned in the GitHub docs: https://docs.github.com/en/communities/setting-up-your-proje...
My opinions about the study:
- Don't state the obvious: I wouldn't hand a senior human dev a copy of "Clean Code" before every ticket and expect them to work faster.
- File vs. Prompt is a false dichotomy: The paper treats "Context Files" as a separate entity, but technically, an AGENTS.md is just a system prompt injection. The mechanism is identical. The study isn't proving that "files are bad," it's proving that "context stuffing" is bad. Whether I paste the rules manually or load them via a file, the transformer sees the same tokens (see the sketch after this list).
- Latent vs. Inferable Knowledge: This is the key missing variable. If I remove context files, my agents fail at tasks requiring specific process knowledge - like enforcing strict TDD or using internal wrapper APIs that aren't obvious from public docs. The agent can't "guess" our security protocols or architectural constraints. That's not a performance drag; it's a requirement. The paper seems to conflate "adding noise" with "adding constraints."
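On the second point, a minimal sketch of what I mean (a hypothetical harness, not any particular agent's real API; the names are made up):

```typescript
// AGENTS.md is just more system-prompt tokens: whether the rules come from a
// file or are pasted by hand, the model sees the same token sequence.
import { existsSync, readFileSync } from "node:fs";

type Message = { role: "system" | "user"; content: string };

function buildMessages(userTask: string): Message[] {
  const base = "You are a coding agent working in this repository.";
  const contextFile = existsSync("AGENTS.md") ? readFileSync("AGENTS.md", "utf8") : "";
  return [
    { role: "system", content: [base, contextFile].filter(Boolean).join("\n\n") },
    { role: "user", content: userTask },
  ];
}
```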
I only put things in when the LLM gets something wrong and I need to correct it. Like "no, we create db migrations using this tool" kind of corrections. So far that has made them behave correctly in those situations.
Their definition of context excludes prescriptive specs/requirements files. They are only talking about a file that summarizes what exists in the codebase, which is information that's otherwise discoverable by the agent through CLI (ripgrep, etc), and it's been trained to do that as efficiently as possible.
Also important to note that human-written context did help according to them, if only a little bit.
Effectively what they're saying is that inputting an LLM generated summary of the codebase didn't help the agent. Which isn't that surprising.
I find it surprising. The piece of code I'm working on is about 10k LoC to define the basic structures and functionality and I found Claude Code would systematically spend significant time and tokens exploring it to add even basic functionality. Part of the issue is this deals with a problem domain LLMs don't seem to be very well trained on, so they have to take it all in, they don't seem to know what to look for in advance.
I went through a couple of iterations of the CLAUDE.md file, first describing the problem domain and library intent (that helped target search better as it had keywords to go by; note a domain-trained human would know these in advance from the three words that comprise the library folder name) and finally adding a concise per-function doc of all the most frequently used bits. I find I can launch CC on a simple task now, without it spending minutes reading the codebase before getting started.
That's also my experience.
The article is interesting but I think it deviates from a common developer experience as many don't work on Python libraries, which likely heavily follow patterns that the model itself already contains.
Hey, a paper author here :) I agree - if you know LLMs well, it shouldn't be too surprising that autogenerated context files aren't helping; yet this is the default recommendation from major AI companies, which we wanted to scrutinize.
> Their definition of context excludes prescriptive specs/requirements files.
Can you explain a bit what you mean here? If the context file specifies a desired behavior, we do check whether the LLM follows it, and this seems generally to work (Section 4.3).
we've been running AGENTS.md in production on helios (https://github.com/BintzGavin/helios) for a while now.
each role owns specific files. no overlap means zero merge conflicts across 1800+ autonomous PRs. planning happens in `.sys/plans/{role}/` as written contracts before execution starts. time is the mutex.
AGENTS.md defines the vision. agents read the gap between vision and reality, then pull toward it. no manager, no orchestration.
we wrote about it here: https://agnt.one/blog/black-hole-architecture
agents ship features autonomously. 90% of PRs are zero human in the loop. the one pain point is refactors. cross-cutting changes don't map cleanly to single-role ownership
AGENTS.md works when it encodes constraints that eliminate coordination. if it's just a roadmap, it won't help much.
Might be some interesting nuggets in your article but my eyes rolled so hard at this part I had to stop reading:
"The system does not assign tasks.
It defines gravity."
Helios looks cool though!
I'd take any paper like this with a grain of salt. I imagine what holds true for models in time period X could be drastically different given just a little more time.
Doesn't mean it's not worth studying this kind of stuff, but this conclusion is already so "old" that it's hard to say it's valid anymore with the latest batch of models.
This is the life of an LLM researcher. We literally ran the last experiments only a month ago, on what were the latest models back then...
I use AGENTS.md daily for my personal AI setup. The biggest win is giving the agent project-specific context — things like deployment targets, coding conventions, and what not to do. Without it, the agent makes generic assumptions that waste time.
In my experience AGENTS.md files only save a bit of time, they don't meaningfully improve success. Agents are smart enough to figure stuff out on their own, but you can save a few tool calls and a bit of context by telling them how to build your project or what directories do what rather than letting it stumble its way there.
What is the purpose of an AGENTS.md file when there are so many different models? Which model or version of the model is the file written for? So much depends on assumptions here. It only makes sense when you know exactly which model you are writing for. No wonder the impact is 'all over the place'.
This paper shoulda just done a study on Elixir's usage_rules?
https://github.com/ash-project/usage_rules
I think they can be helpful for humies too: the act of writing the instructions and describing your stuff in a clear way, and also reading it later.
I've found that even documenting non-obvious dependencies between tasks can significantly improve agent performance and reduce debugging time
Many of the practices in this field are mostly based on feelings and wishful thinking, rather than any demonstrable benefit. Part of the problem is that the tools are practically nondeterministic, and their results can't be compared reliably.
The other part is fueled by brand recognition and promotion, since everyone wants to make their own contribution with the least amount of effort, and coming up with silly Markdown formats is an easy way to do that.
EDIT: It's amusing how sensitive the blue-pilled crowd is when confronted with reality. :)
If I understand the paper correctly, the researchers found that AGENTS.md context files caused the LLMs to burn through more tokens as they parsed and followed the instructions, but they did not find a large change in the success rate (defined by "the PR passes the existing unit tests in the repo").
What wasn't measured, probably because it's almost impossible to quantify, was the quality of the code produced. Did the context files help the LLMs produce code that matched the style of the rest of the project? Did the code produced end up reasonably maintainable in the long run, or was it slop that increased long-term tech debt? These are important questions, but as they are extremely difficult to assign numbers to and measure in an automated way, the paper didn't attempt to answer them.
I pretty much add a bunch of stuff to my prompt directly; with AGENTS.md or any other file, I can just add one line: "hey, read that file".
Check the logs, no one really requests AGENTS.md from the server.
The only thing I use CLAUDE.md for is explaining the purpose and general high level design principles of the project so I don't have to waste my time reiterating this every time I clear the context. Things like this is a file manager, the deliverable must always be a zipapp, Wayland will never be supported.
I added these to that file because otherwise I will have to tell claude these things myself, repeatedly. But the science says... Respectfully, blow it out your ass.
I chuckled at "Wayland will never be supported" :-D
Research has shown that most earlier "techniques" for getting better LLM responses no longer work, and are actively harmful with modern models. I'm so grateful that there are actual studies and papers about this and that they keep coming out. Software developers are super cargo-culty and will do whatever the next guy does (and that includes doing whatever is suggested in research papers).
Software developers don't have to be cargo-culty... if they're working on systems that are well-documented or are open-source (or at least source-available) so that you can actually dig in to find out how the system works.
But with LLMs, the internals are not well-documented, most are not open-source (and even if the model and weights are open-source, it's impossible for a human to read a grid of numbers and understand exactly how it will change its output for a given input), and there's also an element of randomness inherent to how the LLM behaves.
Given that fact, it's not surprising to find that developers trying to use LLMs end up adding certain inputs out of what amounts to superstition ("it seems to work better when I tell it to think before coding, so let's add that instruction and hopefully it'll help avoid bad code" but there's very little way to be sure that it did anything). It honestly reminds me of gambling fallacies, e.g. tabletop RPG players who have their "lucky" die that they bring out for important rolls. There's insufficient input to be sure that this line, which you add to all your prompts by putting it in AGENTS.md, is doing anything — but it makes you feel better to have it in there.
(None of which is intended as a criticism, BTW: that's just what you have to do when using an opaque, partly-random tool).
Most of these AI-guiding "techniques" seem more like reading into tea leaves to me than anything actually useful.
Even with the latest and greatest (because I know people will reflexively immediately jump down my throat if I don't specify that, yes, I've used Opus 4.6 and Gemini 3 Pro etc. etc. etc. etc., I have access to all of the models by way of work and use them regularly), my experience has been that it's basically a crapshoot that it'll listen to a single one of these files, especially in the long run with large chats. The amount of times I still have to tell these things to not generate React in my Vue codebase that has literally not a single line of JSX anywhere and instructions in every single possible file I can put it in to NOT GENERATE FUCKING REACT CODE makes me want to blow my brains out every time it happens. In fact it happened to me today with the supposed super intelligence known as Opus 4.6 that has 18 trillion TB of context or whatever in a fresh chat when I asked for a quick snippet I needed to experiment with.
I'm not even paying for this crap (work is) and I still feel scammed approximately half the time, and can't help but think all of these suggestions are just ways to inflate token usage and to move you into the usage limit territory faster.
Claude/Opus 4.6: can you add a console.log in file XYZ?
No problem: x agents and hundreds of thousands of tokens (close to a million) of usage to add a line of code.
Gemini 3: can you review commit A (the console.log one)? "You have made the most significant change in your 200kloc codebase; this key change will allow you to get great insight into your software."
Codex: "I have reviewed your change; you are missing tests and integration tests."
But I fully agree: overall I feel there are a lot of tea-leaf readers online and on LinkedIn.
What are you putting in the file? When I've looked at them, they just looked like a second readme file without the promotional material of a typical GitHub readme.
That's basically all it is. It's a readme file that is guaranteed to be read. So the agent doesn't spend 10 minutes trying to re-configure the toolchain because the first command it guessed didn't work.