
Comment by geraneum

2 days ago

> In short: the implementation was performed in a very similar way to how a human programmer would do it, and not outputting a complete implementation from scratch “uncompressing” it from the weights.

> Instead, different classes of instructions were implemented incrementally, and there were bugs that were fixed…

Not sure the author fully grasps how and why LLM agents work this way. There's a leap of logic here: the agent runs in a loop where command outputs get fed back as context for further token generation, and that loop is what produces the incremental, human-like process he's observing. It's still that "decompression" from the weights, still the LLM's particular way of extracting and blending patterns from its training data, that does the actual work. The agentic scaffolding just lets it happen in many small steps against real feedback instead of all at once. So the novel output is real, but he's crediting the wrong thing for it.
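
To make the point concrete, the loop I'm describing can be sketched roughly like this (all names here are hypothetical placeholders, not any real agent framework or model API; the stub `llm_generate` stands in for the single model call where the actual "decompression" from the weights happens):

```python
# Hypothetical sketch of an agentic loop: the model proposes an action,
# the scaffolding executes it, and the real output is fed back as context.
def llm_generate(context):
    # Placeholder for the model call; in reality this is where all the
    # pattern extraction from training data ("decompression") occurs.
    return f"next action given {len(context)} context items"

def run_command(action):
    # Placeholder for executing the proposed command and capturing its
    # real output (compiler errors, test failures, etc.).
    return f"output of {action!r}"

def agent_loop(task, max_steps=5):
    context = [task]
    for _ in range(max_steps):
        action = llm_generate(context)    # one generation step from the weights
        feedback = run_command(action)    # real-world feedback
        context += [action, feedback]     # fed back for the next generation
    return context
```

Nothing in the loop itself contributes the implementation work; it only routes feedback back into the next generation step, which is why the incremental appearance doesn't change where the output comes from.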