Comment by disgruntledphd2

9 hours ago

> (And the automated test suite will help them confirm that the refactoring worked properly, because naturally you had them construct an automated test suite when they built those original features, right?)

I dunno, maybe I have high standards, but I generally find that the test suites generated by LLMs are both over- and under-determined. Over-determined in the sense that some of the tests are focused on implementation details, and under-determined in the sense that they don't test the conceptual things that a human might.

That being said, I've come across loads of human written tests that are very similar, so I can see where the agents are coming from.

You often mention that this is why you are getting good results from LLMs, so it would be great if you could expand on how you do this at some point in the future.

I work in Python which helps a lot because there are a TON of good examples of pytest tests floating around in the training data, including things like usage of fixture libraries for mocking external HTTP APIs and snapshot testing and other neat patterns.

Or I can say "use pytest-httpx to mock the endpoints" and Claude knows what I mean.
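
Here's roughly the shape of test that prompt produces - a sketch with a made-up URL and payload, assuming pytest-httpx is installed (it provides the httpx_mock fixture, which intercepts outgoing httpx requests so nothing touches the network):

    import httpx

    def test_user_endpoint_is_mocked(httpx_mock):
        # Register a canned response for the endpoint the code will hit.
        httpx_mock.add_response(
            url="https://api.example.com/users/1",
            json={"id": 1, "name": "Ada"},
        )
        # In a real test this request would live inside the code under test.
        response = httpx.get("https://api.example.com/users/1")
        assert response.json()["name"] == "Ada"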

Keeping an eye on the tests is important. The most common anti-pattern I see is large amounts of duplicated test setup code - which isn't a huge deal, since I'm much more tolerant of duplicated logic in tests than I am in implementation, but it's still worth pushing back on.

"Refactor those tests to use pytest.mark.parametrize" and "extract the common setup into a pytest fixture" work really well there.

Generally though the best way to get good tests out of a coding agent is to make sure it's working in a project with an existing test suite that uses good patterns. Coding agents pick the existing patterns up without needing any extra prompting at all.

I find that once a project has clean basic tests, the new tests added by the agents tend to match them in quality. It's similar to how working on large projects with a team of other developers works - keeping the code clean means that when people look for examples of how to write a test, they'll be pointed in the right direction.

One last tip I use a lot is this:

  Clone datasette/datasette-enrichments
  from GitHub to /tmp and imitate the
  testing patterns it uses

I do this all the time with different existing projects I've written - the quickest way to show an agent how you like something to be done is to have it look at an example.

  • > Generally though the best way to get good tests out of a coding agent is to make sure it's working in a project with an existing test suite that uses good patterns. Coding agents pick the existing patterns up without needing any extra prompting at all.

    Yeah, this is where I too have seen better results. The worst results have been in greenfield projects where I didn't have a great idea of how to write the tests myself (I'm a data person working on a Django app).

    Thanks for the information, that's super helpful!

  • I work in Python as well and find Claude quite poor at writing proper tests, though I might be using it wrong. Just last week, I asked Opus to create a small integration test (with pre-existing examples) and it tried to create a 200-line file with 20 tests I didn't ask for.

    I am not sure why, but it kept trying to do that even after several attempts to steer it.

    Ended up writing it on my own, very odd. This was in Cursor, however.

In my experience asking the model to construct an automated test suite, with no additional context, is asking for a bad time. You'll see tests for a custom exception class that you (or the LLM) wrote that check that the message argument can be overwritten by the caller, or that a class responds to a certain method, or some other pointless and/or tautological test.

If you start with an example file of tests that follow a pattern you like, along with the code the tests are for, it's pretty good at following along. Even adding a sentence to the prompt about avoiding tautological tests and focusing on the seams of functions/objects/whatever (integration tests) can get you pretty far to a solid test suite.
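
To illustrate the difference, the two flavours look roughly like this. InvoiceError and parse_invoice are toy stand-ins I made up; the point is that the second test drives behaviour through a public seam instead of restating the constructor:

    import pytest

    class InvoiceError(Exception):
        """Toy stand-in for an exception class the LLM generated."""

    def parse_invoice(raw: bytes) -> dict:
        """Toy stand-in for the real code under test."""
        try:
            number, amount = raw.decode().split(",")
            return {"number": number, "amount": float(amount)}
        except ValueError as exc:
            raise InvoiceError("unparseable invoice") from exc

    # Tautological: re-asserts what Exception.__str__ already guarantees.
    def test_error_message_can_be_set():
        assert str(InvoiceError("too late")) == "too late"

    # Seam-focused: exercises behaviour through the public entry point.
    def test_unparseable_invoice_is_rejected():
        with pytest.raises(InvoiceError):
            parse_invoice(b"definitely not an invoice")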

  • One agent writes the tests and threads the needle.

    Another agent reviews the tests, finds duplicate code, finds poor testing patterns, looks for tests that are only following the "happy path", ensures logic is actually tested and that you're not wasting time testing things like getters and setters. That agent writes up a report.

    Give that report back to the agent that wrote the test or spin up a new agent and feed the report to it.

    Don't do all of this blindly: actually read the report to make sure the LLM is on the right path. Repeat that one or two times.

  • Yeah, I've seen this too. It bangs out a five-hundred-line unit test file, but half the tests are as you describe.

    Just writing one line in CLAUDE.md or similar saying "don't test library code; assume it is covered" works.

    Half the battle with this stuff is realizing that these agents are VERY literal. The other half is paring down your spec/token usage without sacrificing clarity.

Once the agent writes your tests, have another agent review them and ask that agent to look for pointless tests, to make sure the tests cover more than just the "happy path", and so on.

Just like anything else in software, you have to iterate. The first pass is just to thread the needle.

> I dunno, maybe I have high standards

I don't get it. I have insanely high standards, so I don't let the LLM get away with not meeting them. Simple.

I get the sense that many programmers resent writing tests and see them as a checkbox item or even boilerplate, not a core part of their codebase. Writing great tests takes a lot of thought about the myriad of bizarre and interesting ways your code will run. I can’t imagine that prompting an LLM to “write tests for this code” will result in anything but the most trivial of smoke test suites.

Incidentally, I wonder if anyone has used LLMs to generate complex test scenarios described in prose, e.g. “write a test where thread 1 calls foo, then before hitting block X, thread 2 calls bar, then foo returns, then bar returns” or "write a test where the first network call Framework.foo makes returns response X, but the second call returns error Y, and ensure the daemon runs the appropriate mitigation code and clears/updates database state." How would they perform in this scenario? Would they add the appropriate shims, semaphores, test injection points, etc.?
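
For the second scenario, the skeleton I'd hope to get back is something like this - a sketch assuming pytest-httpx, which hands out registered responses in registration order; in a real test the two requests would come from the daemon under test, and the interesting part is whether the model also wires up the database assertions and the mitigation check:

    import httpx

    def test_first_call_succeeds_then_second_fails(httpx_mock):
        url = "https://api.example.com/items"
        httpx_mock.add_response(url=url, json={"page": 1})  # first call: response X
        httpx_mock.add_response(url=url, status_code=503)   # second call: error Y

        assert httpx.get(url).json() == {"page": 1}
        assert httpx.get(url).status_code == 503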

Embrace TDD? Write those tests and tell the agent to write the subject under test?

  • Different strokes for different folks and all, but that sounds like automating all of the fun parts and doing all of the drudgery by hand. If the LLM is going to write anything, I'd much rather make it write the tests and do the implementation myself.

    • This is a serious problem with professional software development: programmers see testing as a chore and indulge themselves in the implementation.