Systematically generating tests that would have caught Anthropic's top‑K bug

3 days ago (theorem.dev)

Fuzzing as a concept is heavily underused in routine testing. People usually focus on positive flows and a few obvious or typical negative ones, and it's almost impossible to find the time to write exhaustive tests for every negative and boundary scenario. The good news is that you don't actually have to: there are plenty of tools now that can almost exhaustively generate tests for you, at every level. The bad news is that they are not widely used.

I recently asked Claude Code how to make my tests more thorough, described what I had in mind, and it set up Hypothesis and mutmut. The latter is quite cool: it introduces bugs into the code, like swapping values and flipping relational operators, and checks whether any test catches each bug. If not, your tests are probably not thorough enough. That's better than plain line-coverage checks.
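To make that concrete, here is a hypothetical illustration (the function and numbers are mine, not from the article or the thread) of the kind of edit mutmut makes and what it means for a mutant to survive:

```python
# Hypothetical example of a mutation-testing "mutant" (names invented).
# mutmut rewrites the source, reruns the test suite, and reports the mutant
# as "survived" if no test fails.

def apply_discount(total):
    if total > 100:          # a typical mutation flips this to `total >= 100`
        return total - 20    # ...or changes 20 to 21, or `-` to `+`
    return total


def test_apply_discount():
    # This example-based test never exercises the boundary at exactly 100,
    # so the `>=` mutant passes it too and is reported as surviving.
    assert apply_discount(200) == 180
    assert apply_discount(50) == 50
```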

  • Is my intuition correct that mutmut has a far better ROI than Hypothesis? And that it's as necessary as code coverage?

    • Mutation testing and PBT frameworks like Hypothesis are complementary: you can use the latter to find tests that kill mutants (see the sketch below).
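      A minimal sketch of that combination, using a hypothetical run-length encoder (none of these names come from the thread): a single round-trip property gives Hypothesis enough room to kill most of the mutants mutmut would create in `encode`.

      ```python
      # Hypothetical sketch (names invented): a Hypothesis property that kills
      # typical mutmut mutants. Requires `pip install hypothesis`; run the
      # tests with pytest, then `mutmut run` to see which mutants survive.
      from hypothesis import given, strategies as st


      def encode(xs):
          """Run-length encode a list into (value, count) pairs."""
          out = []
          for x in xs:
              if out and out[-1][0] == x:        # mutating `==` to `!=` breaks this
                  out[-1] = (x, out[-1][1] + 1)  # mutating `+ 1` to `- 1` breaks this
              else:
                  out.append((x, 1))
          return out


      def decode(pairs):
          """Inverse of encode."""
          return [x for x, count in pairs for _ in range(count)]


      @given(st.lists(st.integers(min_value=0, max_value=3)))
      def test_round_trip(xs):
          # The narrow value range makes runs of equal values likely, so mutants
          # that corrupt run detection or counts fail this property quickly.
          assert decode(encode(xs)) == xs
      ```

      The narrow integer range is a deliberate choice: it makes repeated values (and therefore runs) common, which is exactly what the count-handling mutants need in order to be exposed.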

I appreciate that I am not the only one seeing the connection between property-based testing and proofs.

I will quibble a little with their characterization of proofs as computationally impractical.

Proof verification is cheap; on a good day it is as cheap as type checking, type checking being a kind of proof verification. That said, writing proofs can be tricky.

I am still figuring out what writing proofs requires. Anything beyond what your type system can express currently requires a different set of tools (Rocq, Lean, etc.) than you use for writing asserts and ordinary programs. Plus, proofs tend to involve lots of mundane details that are tedious to write out.
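For a sense of what that looks like, here is a tiny Lean 4 sketch (my example, not the commenter's): the sort of property a PBT tool would sample is stated once for all inputs, and checking the finished proof is essentially type checking.

```lean
-- Illustrative Lean 4 sketch (not from the thread): a round-trip style
-- property, "reversing a list twice gives it back", proved for every list
-- rather than sampled. Verifying the finished proof is just type checking.
theorem reverse_twice_id (xs : List Nat) : xs.reverse.reverse = xs := by
  simp  -- expected to close the goal via the standard-library lemma List.reverse_reverse
```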

So while I agree proofs seem impractical, I won't agree the reason is computational cost.

  • There is a tradeoff between the compute required to generate a proof and the compute required to check it. Fully generic methods such as SMT solvers require compute exponential in the number of variables in scope and lines of code in a single function. Breaking the exponential requires (and is perhaps equivalent to) understanding the code in sufficient detail (cf https://arxiv.org/abs/2406.11779). In practice, the computational cost of semi-automated proof generation is a significant bottleneck, cf https://dspace.mit.edu/handle/1721.1/130763?show=full and https://jasongross.github.io/papers/2022-superlinear-slownes... .
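    To make the "fully generic methods" point concrete, here is a toy SMT sketch (using the z3-solver Python package, my choice; the comment doesn't name a solver). It is instant at this size, but the search blows up as variables and branches accumulate.

    ```python
    # Toy SMT sketch (illustrative only). Requires `pip install z3-solver`.
    # We ask Z3 to search for a counterexample to a one-line "spec";
    # `unsat` means the property holds for all integer inputs.
    from z3 import Ints, Implies, Not, Solver, sat, unsat

    x, y = Ints("x y")
    spec = Implies(x > y, x + 1 > y)   # the property we want verified

    solver = Solver()
    solver.add(Not(spec))              # look for an assignment violating the spec
    result = solver.check()
    if result == unsat:
        print("verified: no counterexample exists")
    elif result == sat:
        print("counterexample:", solver.model())
    else:
        print("solver gave up")
    ```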

  • I've been working on a setup where proofs (using the esbmc library) check the safety properties and unit tests check the correctness, so the state space doesn't explode and verification doesn't take a year to run. It's been working out pretty well so far (aside from spending more time tracking down esbmc bugs than working on my own code), and it has found some real issues, mostly integer overflow errors but others too.

    Kind of loosely based on the paper "A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification" (https://arxiv.org/abs/2305.14752) which, I believe, was posted to HN not too long ago.

Using the phrase "without the benefit of hindsight" is interesting. The hardest thing with any technology is knowing when to spend the effort/money on applying it. The real question is: do you want to spend your innovation tokens on things like this? If so, how many? And where?

Not knocking this, just saying that it is easy to claim improvements if you know there are improvements to be had.

That’s Anthropic’s fault for continuing to use top-K, a shitty stone-age sampler. Their own head of mechanistic interpretability invented a better one, tail-free sampling, back in 2019.

  • That seems to have nice properties, but 2019 was a while ago. Is the problem of top-k sampling still relevant with much better frontier models?
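For readers who want the mechanics under discussion, here is a minimal numpy sketch of textbook top-K filtering (my illustration; it is not Anthropic's implementation and implies nothing about where their bug actually was, nor is it the tail-free sampler from the 2019 paper):

```python
# Minimal sketch of textbook top-K sampling (illustrative only).
import numpy as np


def top_k_sample(logits, k, rng):
    """Keep the k highest-logit tokens, renormalize, and sample one token id."""
    logits = np.asarray(logits, dtype=np.float64)
    kth_value = np.sort(logits)[-k]            # smallest logit that survives
    # Naive filter: ties at the k-th value are all kept in this sketch.
    filtered = np.where(logits >= kth_value, logits, -np.inf)
    probs = np.exp(filtered - filtered.max())  # softmax over the survivors
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)


rng = np.random.default_rng(0)
print(top_k_sample([2.0, 1.0, 0.5, -1.0], k=2, rng=rng))  # prints 0 or 1
```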