Hi HN,
I've been using AI code assistants like Copilot a lot, and I've noticed what I call the "AI Trust Paradox": AI-generated code often looks correct but has subtle runtime bugs, leading me to spend more time debugging its output than if I had written it myself. The core issue is that these AIs are static—they can read code but can't see it run.
So, I built Augur.
Augur is an attempt to solve this by making the AI "runtime-aware." It's an open-source VS Code extension that hooks into the debug session using the Debug Adapter Protocol (DAP). When your code hits a breakpoint, Augur:
1. Intercepts the stopped event.
2. Gathers the live runtime state (stack trace, variable values, etc.) via DAP requests.
3. Formats this into a clean "golden context" prompt and sends it to an LLM (currently Gemini, but it's extensible).
4. The LLM analyzes this real-time state and decides the next debugging action (e.g., stepOver, stepInto).
5. Augur translates this decision back into a DAP command and executes it, creating an autonomous debugging loop (rough sketch below).
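For the curious, here's a minimal sketch of what that loop can look like inside a VS Code extension. This is not Augur's actual code: the VS Code debug API calls (registerDebugAdapterTrackerFactory, customRequest) are real, but the DAP payload handling is simplified and `decideNextAction` is a hypothetical placeholder for the LLM call.

```typescript
// Sketch only: intercept DAP 'stopped' events, gather runtime state,
// ask an LLM for the next action, and execute it as a DAP request.
import * as vscode from 'vscode';

type Action = 'stepOver' | 'stepInto' | 'continue';

// Hypothetical placeholder: send the formatted runtime context to an LLM
// (e.g. Gemini) and get back one of the allowed actions.
declare function decideNextAction(context: string): Promise<Action>;

export function activate(ctx: vscode.ExtensionContext) {
  ctx.subscriptions.push(
    vscode.debug.registerDebugAdapterTrackerFactory('*', {
      createDebugAdapterTracker(session: vscode.DebugSession) {
        return {
          // 1. Intercept the 'stopped' event the debug adapter sends to the editor.
          async onDidSendMessage(msg: any) {
            if (msg.type === 'event' && msg.event === 'stopped') {
              await handleStopped(session, msg.body.threadId);
            }
          },
        };
      },
    })
  );
}

async function handleStopped(session: vscode.DebugSession, threadId: number) {
  // 2. Gather live runtime state with standard DAP requests.
  const { stackFrames } = await session.customRequest('stackTrace', { threadId, levels: 5 });
  const { scopes } = await session.customRequest('scopes', { frameId: stackFrames[0].id });
  const { variables } = await session.customRequest('variables', {
    variablesReference: scopes[0].variablesReference,
  });

  // 3. Format the state into a "golden context" prompt.
  const prompt = [
    `Stopped in ${stackFrames[0].name} at line ${stackFrames[0].line}`,
    'Stack:',
    ...stackFrames.map((f: any) => `  ${f.name}:${f.line}`),
    'Locals:',
    ...variables.map((v: any) => `  ${v.name} = ${v.value}`),
    'Reply with the next action: stepOver, stepInto, or continue.',
  ].join('\n');

  // 4. Let the LLM decide the next debugging action.
  const action = await decideNextAction(prompt);

  // 5. Translate the decision back into a DAP command and execute it.
  const dap: Record<Action, string> = { stepOver: 'next', stepInto: 'stepIn', continue: 'continue' };
  await session.customRequest(dap[action], { threadId });
}
```

The nice part of going through DAP is that the same loop works for any language whose debug adapter speaks the protocol, so the agent isn't tied to one runtime.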
To make the concept easy to grasp, I've created two components:
1. The VS Code Extension: This is the real tool you can install and use in your own projects.
2. A Web Visualizer (Live Demo): This is a React-based simulator that lets you see the "Context Engineering" and AI decision-making process in your browser without installing anything. It's the best way to understand the core logic.
It's still early days, but I'm really excited about the core idea of using DAP to give AI agents runtime perception. I would love to get your feedback, thoughts, and even harsh criticism on the approach.
GitHub: https://github.com/UPwith-me/Augur-Runtime-Debugging-Agent
Thanks for checking it out!