Comment by giancarlostoro
14 hours ago
This sounds a little bit like rtk, which trims output from other CLI applications like git, find, and the other tools Claude most commonly uses. This looks like it goes a little further, which is interesting.
I can see some of these AI companies adopting these ideas sooner or later: trim the tokens locally to save on token usage.
Haven't looked at rtk closely, but from the description it sounds like it works at the CLI-output level, trimming stdout before it reaches the model. Context-mode goes a bit further: it also indexes the full output into a searchable FTS5 database, so the model can query specific parts later instead of just losing them. It's less about trimming and more about replacing a raw dump with a summary plus on-demand retrieval.
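To make the pattern concrete, here's a minimal sketch of the summary-plus-retrieval idea using SQLite's FTS5 (the function names and schema are illustrative assumptions, not context-mode's actual API):

```python
import sqlite3

# Sketch of the pattern: index full tool output into an FTS5 table,
# hand the model only a short summary, and let it search for specifics later.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE tool_output USING fts5(cmd, line)")

def index_output(cmd: str, stdout: str) -> str:
    """Store every output line; return only a summary for the model's context."""
    lines = stdout.splitlines()
    db.executemany("INSERT INTO tool_output VALUES (?, ?)",
                   [(cmd, ln) for ln in lines])
    return f"{cmd}: {len(lines)} lines indexed; search on demand"

def search_output(query: str) -> list[str]:
    """Full-text query the model can issue instead of re-running the tool."""
    cur = db.execute("SELECT line FROM tool_output WHERE tool_output MATCH ?",
                     (query,))
    return [row[0] for row in cur]

summary = index_output("git log --oneline",
                       "fix: parser bug\nfeat: add index\nchore: bump deps")
hits = search_output("parser")  # retrieves only the matching line
```

The key trade-off is that the model's context only ever holds the one-line summary; everything else stays local until explicitly queried.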
Yeah, I like this approach too. I made a tool similar to Beads, and after learning about rtk I updated mine to produce less token-hungry output. I'm still working on it.
https://github.com/Giancarlos/guardrails
Does context mode only work with MCPs? Or does it work with bash/git/npm commands as well?
I'm not sure it actually works with MCPs *at all*, trying to get that clarified. How can context-mode get "into the MCP loop"?
See my comment above: context-mode has no way to inject itself into the MCP tool-call/response loop.
It's still high-value outside MCPs, though.
I’m also trying to see which one makes more sense. Discussion about rtk started today: https://news.ycombinator.com/item?id=47189599