Comment by Havoc (15 hours ago): Do you write the test yourself or get the agent to do it? (4 comments)
hu3 (13 hours ago): Not OP, but I also guide LLMs with TDD. It's a mixture: the LLM writes tests for the happy paths and I write tests for the edge cases.
Also, when I use an LLM to fix a bug, I tell it to write a test at the end of the session, after the bug is fixed, to prevent that bug from regressing.
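A regression test like that can be tiny. Here's a self-contained pytest-style sketch; the normalize_path helper and its empty-string bug are made up purely to illustrate the pattern:

    # Sketch only: the helper and its bug are invented for illustration.

    def normalize_path(p: str) -> str:
        # Fixed version: the buggy one indexed p[0] and raised IndexError on "".
        if not p:
            return "."
        return p.rstrip("/") or "/"

    def test_normalize_path_empty_string_regression():
        # Pins the fix so the old IndexError bug can't silently come back.
        assert normalize_path("") == "."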
Havoc (9 hours ago):
> Also, when I use an LLM to fix a bug, I tell it to write a test at the end of the session, after the bug is fixed, to prevent that bug from regressing.
Oh, that's clever. Thanks.
wilg (12 hours ago): I try to get the agent to create a failing test first, so we can verify its fix is real.
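Roughly this red/green flow, as a sketch (pytest assumed, and every name here is hypothetical):

    def apply_discount(price: float, percent: float) -> float:
        # The fix: clamp at zero. The buggy version returned the raw product,
        # so apply_discount(10, 150) came back as -5.
        return max(0.0, price * (1 - percent / 100))

    def test_discount_never_negative():
        # Step 1: this test was written first and confirmed to FAIL (red)
        # against the buggy code, proving it actually exercises the bug.
        assert apply_discount(price=10, percent=150) == 0

    # Step 2: apply the fix above, re-run pytest, and confirm it passes (green).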
wilg (12 hours ago): I get the agent to do it, generally. (I realize this seems incestuous, but it's fairly easy to validate that the tests are sensible as you add features, since the biggest risk is regressions when the AI does something dumb later.)