Comment by matheus-rr

11 hours ago

Fair point on the contradiction. The "never use negative instructions" wisdom comes from general prompting, where mentioning the unwanted thing can increase its likelihood. AGENTS.md is a different context, though: the model is reading persistent rules for a session, not doing a single completion where priming effects matter as much.
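To make that concrete, here's a hypothetical sketch (not from this thread) of how a negative rule might be written in an AGENTS.md so it survives that persistent-rules context, pairing the prohibition with the positive alternative:

```markdown
## Rules

<!-- Bare negative: easy for the model to rationalize away -->
<!-- Don't use `any` in TypeScript. -->

<!-- Negative + positive alternative + rationale: holds up better -->
- Never use `any` in TypeScript. Use `unknown` and narrow with type
  guards instead. `any` silently disables checking downstream.
```

The names and rule content here are illustrative; the pattern is the point, the prohibition plus what to do instead plus why.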

But you're right that "better" isn't "reliable." In practice it went from "constantly ignored" to "followed maybe 80% of the time." The remaining 20% is the model hitting situations where it decides the instruction doesn't apply to this specific case.

The honest answer is probably somewhere between "they don't work" and "write them right and you're fine." They raise the floor but don't guarantee anything. I still use them because 80% beats 20%, but I wouldn't bet production correctness on them.