Comment by chriswarbo
16 days ago
> But it's still a "blind" fuzzer and it would be nice to write one that gets feedback from code coverage somehow
There have been simplistic attempts at this, e.g. instead of running a fixed number of tests (say, 100), just keep going as long as coverage increases.
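A minimal sketch of that stopping rule, with a stand-in coverage model (the `coverageOf` function and its branch ids are made up for illustration, not any real fuzzer's API):

```typescript
// Hypothetical sketch: keep fuzzing until several consecutive tests
// stop producing new coverage. `coverageOf` is a stand-in that maps a
// test input to a set of "covered branch" ids.
function coverageOf(input: number): Set<number> {
  return new Set([input % 7, input % 13]);
}

function fuzzWhileCoverageGrows(minRuns: number, patience: number): number {
  const covered = new Set<number>();
  let runs = 0;
  let sinceNewCoverage = 0;
  // Run at least `minRuns` tests, then stop once `patience` consecutive
  // tests have added no new coverage.
  while (runs < minRuns || sinceNewCoverage < patience) {
    const input = runs; // stand-in for a randomly generated test case
    const before = covered.size;
    for (const branch of coverageOf(input)) covered.add(branch);
    sinceNewCoverage = covered.size > before ? 0 : sinceNewCoverage + 1;
    runs++;
  }
  return runs; // total number of tests executed
}
```

The `patience` parameter is doing the real work: with `patience = 1` you stop at the first test that finds nothing new, which is usually too eager.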
The Choice Gradient Sampling algorithm from https://arxiv.org/pdf/2203.00652 feels like a nice way to steer generators in a more nuanced way. That paper uses it to avoid discards during rejection sampling, but I have a feeling it could be repurposed to reward new coverage instead, or as well.
It's not done the way that paper does it, but oddly enough I did end up implementing conversions in both directions: from an array of choices to a JavaScript object and back again.
https://jsr.io/@skybrian/repeat-test/doc/core/~/Domain
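To illustrate the general idea (this is a toy sketch, not the repeat-test `Domain` API): decoding consumes choices in a fixed order to build a value, and encoding regenerates the choice array that would have produced it, so the two are inverses.

```typescript
// Hypothetical two-way mapping between an array of integer choices and
// a value. Names here are illustrative, not from repeat-test.
type Point = { x: number; y: number };

// Decode: consume choices in a fixed order to build the object.
function fromChoices(choices: number[]): Point {
  return { x: choices[0], y: choices[1] };
}

// Encode: regenerate the choice array that would produce this object.
function toChoices(p: Point): number[] {
  return [p.x, p.y];
}

// Variable-length values work too: let the first choice pick the length.
function listFromChoices(choices: number[]): number[] {
  const len = choices[0];
  return choices.slice(1, 1 + len);
}

function listToChoices(list: number[]): number[] {
  return [list.length, ...list];
}
```

The round trip (`toChoices(fromChoices(cs))` returning `cs`) is what makes choice-level techniques like shrinking or gradient-style steering applicable to structured values.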