Comment by kevingadd

2 days ago

I think it makes sense that GP is skeptical of this article considering it contains things like:

> this tool is improving itself, learning from every interaction

which seem to indicate a fundamental misunderstanding of how modern LLMs work: the 'improving' happens when humans train/refine existing models offline to create new models, and the 'learning' is just filling the context window with more stuff, not an enhancement of the actual model. It will forget everything if you drop the context, and as the context grows it can 'forget' things it previously 'learned'.
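
To make that concrete, here's a minimal Python sketch of why this is context-stuffing rather than learning; `call_model` is a hypothetical placeholder, not any real API:

```python
# Minimal sketch of chat "memory": just a list of messages re-sent on
# every call. `call_model` is a hypothetical stand-in for whatever
# chat-completion API you use; the model's weights never change here.

def call_model(messages: list[dict]) -> str:
    """Hypothetical placeholder for a real chat-completion API call."""
    raise NotImplementedError

history: list[dict] = []  # the entire "learned" state lives in this list

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # model only "knows" what fits in context
    history.append({"role": "assistant", "content": reply})
    return reply

# Drop the context and everything "learned" is gone:
history.clear()

# And if history outgrows the context window, a wrapper typically
# truncates the oldest messages, so early "learning" silently vanishes:
MAX_MESSAGES = 50  # illustrative limit
del history[:-MAX_MESSAGES]
```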

When you consider the "tool" as more than just the LLM model - including the stuff wrapped around calling that model - then I feel like you can make a good argument that it's improving when it keeps context in a file on disk and constantly updates and edits that file as you work through the project.

I do this routinely for large initiatives I'm kicking off through Claude Code - it writes a long, detailed plan into a file, and as we work through the project I have it constantly updating and rewriting that document to add the information we've jointly discovered from each bit of the work. That means every time I come back and fire it back up, it's got more information than when it started, which looks a lot more like improvement from my perspective.
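
Roughly, that pattern looks like this sketch (`call_model` and the `PLAN.md` filename are hypothetical placeholders, not how Claude Code actually works internally):

```python
from pathlib import Path

PLAN = Path("PLAN.md")  # illustrative filename; any notes file on disk works

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def work_on(task: str) -> str:
    # Each session starts by reloading the accumulated plan from disk,
    # so the wrapper "remembers" even after the model's context is gone.
    plan = PLAN.read_text() if PLAN.exists() else ""
    reply = call_model(f"Current plan:\n{plan}\n\nTask: {task}")
    # After each chunk of work, fold what was just learned back into the
    # plan and persist it for the next session.
    updated = call_model(
        "Rewrite this plan to incorporate what we just learned.\n\n"
        f"Plan:\n{plan}\n\nWork done:\n{reply}"
    )
    PLAN.write_text(updated)
    return reply
```

The model itself is still frozen; the improvement lives entirely in that file on disk.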