Comment by Havoc · 2 months ago

Do you write the test yourself, or get the agent to do it?

4 comments

hu3 · 2 months ago
Not OP, but I also guide LLMs with TDD. It's a mixture: the LLM writes tests for the happy paths, and I write tests for the edge cases.

Also, when I use an LLM to fix a bug, I tell it to write a test at the end of the session, after the bug is fixed, to prevent a regression of that bug.

Havoc · 2 months ago
> Also, when I use an LLM to fix a bug, I tell it to write a test at the end of the session, after the bug is fixed, to prevent a regression of that bug.

Oh, that's clever. Thanks.

wilg · 2 months ago
I try to get the agent to create a failing test first, so we can verify its fix is real.

wilg · 2 months ago
I generally get the agent to do it. (I realize this seems incestuous, but it's fairly easy to validate that the tests are sensible as you add features; the biggest risk is a regression when the AI does something dumb later.)
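The pattern the commenters describe — reproduce the bug with a failing test first, fix the code, then keep the test as a regression guard — can be sketched roughly like this. The `slugify` function and its bug are hypothetical illustrations, not anything from the thread:

```python
# Regression-test pattern from the thread: write a test that reproduces
# the bug (it fails against the buggy code), fix the code, then keep the
# test so the bug cannot silently return later.

def slugify(title: str) -> str:
    """Turn a title into a URL slug.

    Hypothetical bug being fixed: the old version used
    title.replace(" ", "-"), so repeated spaces produced repeated
    hyphens ("a  b" -> "a--b"). split() collapses runs of whitespace.
    """
    return "-".join(title.lower().split())

# The regression test written at the end of the bug-fix session.
def test_slugify_collapses_repeated_spaces():
    assert slugify("Hello   World") == "hello-world"
    assert slugify("One Two") == "one-two"

test_slugify_collapses_repeated_spaces()
```

In practice this would live in a test file and run under a test runner such as pytest; the point is only the ordering — failing test first, then the fix — so the test demonstrably catches the bug.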