
Comment by addcn

8 hours ago

This reduces to the problem of summarization, a genuinely hard one. At commit time it's difficult to know what questions future readers will have. You can get close, but never all the way there.

Pre-AI, when engineers couldn't find the answer in commit messages or documentation, they would ask the author "why," and that human would "compute" the summary on demand.

That's what I expect to do with these agent sessions: I don't want more markdown, I want to ask questions on demand. Git AI (https://github.com/git-ai-project/git-ai) uses the saved prompts that way, and I think that model will win out. Save sessions; read them and ask questions relevant to the current agent's work.

On asking peers: this is regrettably on the way out today. I'll ask engineers about complex code they generated and they can't give good answers. I think it's because it all happened so fast: they didn't sit with the problem for 48 hours. So even if they steered the agent thoughtfully, it's hard to remember all the decisions they made a week later.