Comment by mark_undoio

1 day ago

I've recently been thinking about how AI agents could affect this.

If you're lucky enough to be able to code significant amounts with a modern agent (someone's paying, your task is amenable to it, etc) then you may experience development shifting (further) from "type in the code" to "express the concepts". Maybe you still write some code - but not as much.

What does this look like for debugging / understanding? There's a potential outcome of "AI just solves all the bugs" but I think it's reasonable to imagine that AI will be a (preferably helpful!) partner to a human developer who needs to debug.

My best guess is:

* The entities you manage are "investigations" (mapping onto agents)
* You interact primarily through some kind of rich chat (including sensibly formatted code, data, etc.)
* The primary artefact(s) of this workflow are not code but something more like "clues" / "evidence"

Managing all the theories and snippets of evidence is already core to debugging the old-fashioned way. I think having agents in the loop gives us an opportunity to make that an explicit part of the process (and then be able to assign agents to follow up on gaps in the evidence, or investigate them yourself, or get someone else to...).
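To make the idea concrete, here's one way the "investigation" entity described above might be modelled: hypotheses accumulate evidence, and open gaps are the things you'd hand off to an agent (or a human) to chase down. This is just an illustrative sketch; every name here is hypothetical, not an existing tool or API.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    summary: str   # e.g. "crash only occurs when launched via systemd"
    source: str    # log excerpt, recording, chat message, ...

@dataclass
class Hypothesis:
    statement: str
    evidence: list[Evidence] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)  # open questions to follow up

@dataclass
class Investigation:
    title: str
    hypotheses: list[Hypothesis] = field(default_factory=list)

    def open_gaps(self) -> list[str]:
        """Everything still unanswered -- candidates for assignment to an agent."""
        return [g for h in self.hypotheses for g in h.gaps]

# Usage: an investigation with one hypothesis, one piece of evidence, one gap.
inv = Investigation("Intermittent crash on startup")
h = Hypothesis("Config is parsed before env vars are set")
h.evidence.append(Evidence("crash only when launched via systemd", "journal log"))
h.gaps.append("Reproduce under a plain shell to rule out systemd")
inv.hypotheses.append(h)
print(inv.open_gaps())  # → ['Reproduce under a plain shell to rule out systemd']
```

The point of making this structure explicit is that each gap becomes an assignable unit of work, rather than a mental note the developer carries around.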

This resonates a lot. Framing debugging as an investigation — with hypotheses, evidence and gaps — feels much closer to how I experience real debugging, especially on complex systems. What I'm still unsure about is how much of that investigative process the tooling should make explicit, without overwhelming the developer or turning it into a full case-management workflow.