Comment by theshrike79

5 months ago

I don't use subagents to do things, they're best for analysing things.

Like "evaluate the test coverage" or "check if the project follows the style guide".

This way the "main" context only gets the report and doesn't waste space on massive test outputs or reading multiple files.
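The pattern above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual implementation; `call_llm` is a hypothetical stand-in for a chat-completion call (stubbed here so the sketch runs):

```python
def call_llm(messages):
    # Stub: a real implementation would call a chat-completion endpoint.
    return "REPORT: 3 modules below 80% line coverage."

def run_subagent(task, file_contents):
    # The subagent gets its own throwaway context; none of it leaks back.
    sub_messages = [
        {"role": "system", "content": "You are an analysis agent. Reply with a short report."},
        {"role": "user", "content": task + "\n\n" + "\n\n".join(file_contents)},
    ]
    return call_llm(sub_messages)  # only the report string escapes

def main_loop():
    main_messages = [{"role": "system", "content": "You are a coding agent."}]
    # The main context receives only the short report, not the raw inputs.
    report = run_subagent("Evaluate the test coverage.",
                          ["<thousands of lines of test output>"])
    main_messages.append({"role": "user", "content": "Coverage report:\n" + report})
    return main_messages
```

The main history ends up holding two small messages instead of the full test output the subagent had to read.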

This is only a problem if the agent is built in a lazy way (which is all of them).

Chat completion sends the full prompt history on every call.

I am working on my own coding agent and seeing massive improvements by rewriting history using either a smaller model or a freestanding call to the main one.

It really mitigates context poisoning.
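One minimal sketch of that history-rewriting idea, assuming a chat API that takes the full message list on every call: keep the system prompt and the most recent turns verbatim, and collapse everything in between into a single summary message. `summarize` stands in for the smaller model (or a freestanding call to the main one):

```python
def summarize(messages):
    # Stub: a real version would ask a model to compress these turns.
    return "Summary of %d earlier turns." % len(messages)

def rewrite_history(messages, keep_last=4):
    # Keep the system prompt and the last `keep_last` turns verbatim;
    # replace everything in between with one summary message.
    system, rest = messages[:1], messages[1:]
    if len(rest) <= keep_last:
        return messages
    old, recent = rest[:-keep_last], rest[-keep_last:]
    summary = {"role": "user", "content": summarize(old)}
    return system + [summary] + recent

history = [{"role": "system", "content": "sys"}] + [
    {"role": "user", "content": "turn %d" % i} for i in range(10)
]
compact = rewrite_history(history)  # 1 system + 1 summary + 4 recent turns
```

Since poisoned or stale turns get rewritten rather than replayed verbatim, a bad early turn stops compounding on every subsequent call.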

  • Everyone complains that when you compact the context, Claude tends to get stupid

    Which, as far as I understand it, is done by summarizing the context with a smaller model.

    Am I misunderstanding you? The practical experience of most people seems to contradict your results.

    • One key insight I have from having worked on this from the early stages of LLMs (before chatgpt came out) is that the current crop of LLM clients or "agentic clients" don't log/write/keep track of success over time. It's more of a "shoot and forget" environment right now, and that's why a lot of people are getting vastly different results. Hell, even week to week on the same tasks you get different results (see the recent claude getting dumber drama).

      Once we start to see that kind of self feedback going in next iterations (w/ possible training runs between sessions, "dreaming" stage from og RL, distilling a session, grabbing key insights, storing them, surfacing them at next inference, etc) then we'll see true progress in this space.

      The problem is that a lot of people work on these things in silos. The industry is much more geared towards quick returns now, having to show something now, rather than building strong foundations based on real data. Kind of an analogy to early linux dev. We need our own Linus, it would seem :)

      7 replies →

  • There's a large body of research on context pruning/rewriting (I know because I'm knee deep in benchmarks in release prep for my context compiler), definitely don't ad hoc this.

  • I do something similar, and I get the best results by not keeping a history at all, instead setting up the context fresh with every invocation.
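That last approach can be sketched as building the message list from scratch on every call, carrying forward only a few explicit notes instead of the conversation log. This is an illustrative sketch, not the commenter's actual code; `call_llm` is a hypothetical stub:

```python
def call_llm(messages):
    # Stub for a chat-completion call.
    return "ok"

def build_context(task, state_notes):
    # Fresh context every call: no prior turns are ever replayed,
    # only a short, curated list of notes about the current state.
    return [
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Notes:\n" + "\n".join(state_notes)},
        {"role": "user", "content": task},
    ]

def invoke(task, state_notes):
    return call_llm(build_context(task, state_notes))

# Each call sees exactly three messages, no matter how many came before.
invoke("Fix the failing test.", ["repo uses pytest"])
invoke("Now update the docs.", ["repo uses pytest", "test fixed"])
```

The trade-off is that anything not written into the notes is forgotten, which is exactly what makes context poisoning impossible here.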