Comment by jatins

9 hours ago

If the Agent is writing the tests itself, does that offer better correctness guarantees than letting it write both the code and the tests?

It is definitely not foolproof, but IMHO it is, to some extent, easier to describe what you expect to see than to implement it, so I don't find it unreasonable to think it might provide some advantages in terms of correctness.

  • That definitely depends on the situation. More often than not, properly testing a component takes me more time than writing it.

    • In my experience, this tends to be more related to instrumentation / architecture than to a lack of ability to describe correct results. TDD is often suggested as a solution.

Given the issues AWS had with Kiro and GitHub, we already have a few high-profile examples of what happens when AI is used at scale, even when you let it generate tests, which is something you should absolutely not do.

Otherwise, in some cases, you get issues like this one [0].

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

  • Don't "let it" generate tests. Be intentional: define them in a way that's slightly oblique to how the production code approaches the problem, so the seams don't match. Heck, that's why it's good to write tests before even thinking about the prod side.

  • The linked article does not speak of tests; it describes a team that failed to properly review an LLM refactor and then proceeds to blame the tooling.

    LLMs are good at writing tests, in my experience.
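The "oblique tests" advice above can be illustrated with a small sketch. Rather than mirroring the production algorithm in the test, you check properties via a naive oracle, so the two can't share the same bug. This is a minimal Python sketch with a hypothetical interval-merging function; the names and the example are mine, not from the thread:

```python
# Production code: merges overlapping intervals by sorting and sweeping.
def merge_intervals(intervals):
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Oblique test: instead of re-implementing the sweep, use a brute-force
# membership oracle -- a point is covered by the output iff it is
# covered by some input interval.
def covered(point, intervals):
    return any(s <= point <= e for s, e in intervals)

def check_oblique(intervals):
    out = merge_intervals(intervals)
    # Coverage must be preserved at every interval endpoint.
    points = [p for s, e in intervals for p in (s, e)]
    assert all(covered(p, out) == covered(p, intervals) for p in points)
    # Output intervals must be disjoint and sorted.
    assert all(out[i][1] < out[i + 1][0] for i in range(len(out) - 1))

check_oblique([(1, 3), (2, 6), (8, 10), (15, 18)])
```

Because the oracle knows nothing about sorting or sweeping, an LLM (or a human) writing the test can't accidentally copy the implementation's assumptions into it.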