Comment by pacoWebConsult
5 days ago
Why would you bloat the (already crowded) context window with 27 tools instead of the 2 simplest ones: Save Memory & Search Memory? Or even just search, handling the save process through a listener on a directory of markdown memory files that Claude Code can natively edit?
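A minimal sketch of that two-tool idea, assuming a hypothetical `~/memories` directory of markdown files (the names and layout here are made up for illustration, not from any real MCP server):

```shell
# Hypothetical layout: one markdown file per memory under ~/memories.
# "Save" is just writing a file; "search" is just grep. No MCP needed,
# and Claude Code can edit the files natively.
save_memory() {
  mkdir -p ~/memories
  printf '%s\n' "$2" > ~/memories/"$1".md
}

search_memory() {
  # -r recurse, -i case-insensitive, -l print only matching file names
  grep -ril -- "$1" ~/memories/
}

save_memory build-notes "Tests need DB_URL set before running make check."
search_memory "DB_URL"   # prints the path(s) of matching memory files
```

The "listener" variant from the comment would just watch that directory for changes instead of exposing a save verb at all.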
MCPs are toys for point-and-click devs that no self-respecting dev has any business using.
Case in point: I'm mostly a Claude user, which has decent background-process / BashOutput support for getting a long-running process's stdout.
I was using Codex just now, and its background-process support is ass.
So I asked it to give me five options using CLI tools to implement process support. After three minutes of back and forth, I got this: https://github.com/offline-ant/shellagent-tools/blob/main/ba...
Add a single line to AGENTS.md:
> the `background` tool allows running programs in the background. Calling `background` outputs the help.
Now I can go "background ./server; try thing. investigate" and it has access to the stdout.
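For anyone who can't follow the truncated link, the pattern itself is tiny. A rough sketch of the idea (my own approximation, not the linked script):

```shell
# Rough approximation of a "background" helper: run a command detached,
# redirect its output to a log file the agent can cat/grep/tail later.
background() {
  log=$(mktemp /tmp/bg.XXXXXX)
  "$@" > "$log" 2>&1 &
  echo "started pid $! output: $log"
}
```

Usage then looks like `background ./server`, after which the agent reads the log path the helper printed.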
Stop pre-trashing your context with MCPs, people.
Yep. Most MCP servers are best repackaged as a CLI tool, with the LLM told to invoke `toolname --help` to learn how to use it at runtime. The primary downside is that the LLM will have a lower proclivity to invoke those tools unless explicitly reminded to.
If you do like rolling your own MCP servers, I've recently had great success refactoring a bunch of mine to consume fewer tokens: instead of creating many different tools, consolidate them into one tool and pass through different arguments.
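As a sketch of that consolidation in CLI form (names here are hypothetical, just to show the shape): one entry point, a first argument that picks the verb, and `--help` as the runtime documentation the model reads on demand.

```shell
# One tool, many verbs: instead of N separate tools in the context window,
# expose a single command whose first argument selects the action.
# The echo bodies are stubs; real verbs would do real work.
devtool() {
  case "$1" in
    lint) echo "running linter on ${2:-.}" ;;
    test) echo "running tests matching ${2:-all}" ;;
    *)    echo "usage: devtool {lint [PATH] | test [PATTERN]}" ;;
  esac
}
```

The same trick works inside an MCP server: one tool whose schema takes an `action` argument, instead of a schema per verb.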
People are just ricing out AI like they rice out Linux, nvim or any other thing. It's pretty simple to get results from the tech. Use the CLI and know what you're doing.
Fair points. Share how you're learning; there seems to be more than one way to the same result.
Maintain a good agents.md with notes on the code grammar/structure/architecture conventions your org uses; then, for each problem, prompt it step by step, as if narrating a junior engineer's monologue.
e.g. as I am dropped into a new codebase:
1. Ask Claude to find the section of code that controls X
2. Take a look manually
3. Ask it to explain the chain of events
4. Ask it to implement change Y, to modify X to do the behavior we want
5. Ask it about any implementation details you don't understand, or want clarification on -- it usually self-edits well.
6. You can ask it to add comments, tests, etc., at this point, and it should run tests to confirm everything works as expected.
7. Manually step through tests, then code, to sanity check (it can easily have errors in both).
8. Review its diff to satisfaction.
9. Ask it to review its own diff as if it were a senior engineer.
This is the method I've been using during week one of onboarding onto a new codebase. If the codebase is massive and the READMEs are weak, AI copilot tools can cut overall PR time by 2-3x.
I imagine the overall benefit dips as developer familiarity increases. From my observation, it's especially good at automating code-finding and logic tracing, which otherwise involve a lot of context switching and open windows -- human developers often struggle with that more than LLMs do. It's also great for creating scaffolding/project structure. It's weak at debugging complex issues and less-documented public API logic, and it often makes junior-level mistakes.
That's a great point. The reality, at least in my experience, is that context is brittle and loses precision over time. This is an always-there, persistent way for Claude to access "memories". I've been running with it for about a week now and haven't felt the context get bloated.
I do notice building up context makes a difference. Having the context modular helps too.
Yes, exactly this. But idiot VC funding (which YC is also somewhat engaged in, I imagine) cries for MCP. Hence multi-billion-dollar valuations and many-million-dollar salaries and bonuses being thrown around.
It's ridiculous and ties into the overall state of the world tbh. Pretty much given up hoping that we'll become an enlightened species.
So let's enjoy our stupid MCP and stupid disposable plastic because I don't see any way that we aren't gonna cook ourselves to extinction on this planet. :)
While I totally agree with you, I can also see a world where we just throw a ton of calls into the MCP and then wrap it in a subagent with a short description listing every verb it has access to.
Absolutely. Remember, these are just tools; how each of us uses them is a different story. A lot can also be leveraged by adding a couple of lines to CLAUDE.md on how it should use this memory solution, or not; that's totally up to you. You can also have a subagent responsible for project management that is in charge of managing memory, or have a coordinator. Again, a lot of testing needs to be done :)