Comment by seeknotfind
3 days ago
Yeah, from the title it sounds like perhaps the entire operation is differentiable and therefore trainable as a whole model, and that such training was actually done. On closer inspection, though, I can't find any evidence that they do anything more than call the model repeatedly.
No, there's no training going on here, as far as I can tell. For one thing, they use GPT-5 as their base model, and AFAICT from a quick skim/search there's no mention of loss functions or derivatives, FWIW.
The only derivative here is a grad(ient) student sampling scaffolds against evals plus qualitative observations, which describes most prompt-based LLM papers.
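To make the distinction concrete, here's a rough sketch of what "calling the model repeatedly" usually amounts to versus what actual training would require. The helper names (`call_model`, `score`) are hypothetical stand-ins, not anything from the paper; it's just to show where a loss and gradients would have to appear if training were really happening.

```python
import random


def call_model(prompt: str) -> str:
    # Stand-in for an API call to a frozen base model (e.g. GPT-5).
    # The weights are never touched; you only get text back.
    return prompt + " (revised)"


def score(output: str) -> float:
    # Stand-in for an eval / LLM-judge that rates an output.
    return random.random()


def scaffold_search(task: str, n_rounds: int = 5) -> str:
    """'Prompt-based scaffold': resample candidates in a loop and keep
    whichever scores best. No loss, no backward pass, no optimizer."""
    best_prompt, best_score = task, float("-inf")
    for _ in range(n_rounds):
        candidate = call_model(f"Improve this prompt:\n{best_prompt}")
        s = score(call_model(candidate))
        if s > best_score:
            best_prompt, best_score = candidate, s
    return best_prompt


# Actual gradient-based training would instead look something like this
# (PyTorch-style pseudocode), none of which appears in the paper:
#
#   loss = loss_fn(model(batch.inputs), batch.targets)
#   loss.backward()
#   optimizer.step()
```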