Comment by Izkata

1 day ago

I vaguely remember the same advice; it's pretty old. How you use the randomness is test-specific. For example, in math_add() it'd be something like:

  jitter = random(5)
  assertEqual(3 + jitter, math_add(1, 2 + jitter))

If it were math_multiply(), then adding the jitter would fail - it would have to be multiplied in instead.
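A runnable sketch of both cases in Python (math_add and math_multiply are hypothetical stand-ins for the functions under test, and I'm assuming random(5) in the pseudocode means "a random integer from 1 to 5"):

```python
import random

# Hypothetical stand-ins for the functions under test
def math_add(a, b):
    return a + b

def math_multiply(a, b):
    return a * b

# Assuming random(5) means an integer in 1..5
jitter = random.randint(1, 5)

# Additive relation: jitter added to an input shifts the sum by the same amount
assert 3 + jitter == math_add(1, 2 + jitter)

# Multiplicative relation: the jitter has to be multiplied in instead
assert (1 * 2) * jitter == math_multiply(1, 2 * jitter)
```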

Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.

> it's pretty old.

Damn, must be why only white hair is growing on my head now.

>Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.

So the concept of randomness is still there, but expressed differently? (i.e., am I partially right?)

  • Yes, the randomness is still there, just less manually specified by the developer. That said, I haven't actually used it myself, only seen material on it before, so I had the wrong term: "property-based testing" is what you want to look for.

    Here's an example with a python library: https://hypothesis.readthedocs.io/en/latest/tutorial/introdu...

    The strategy "st.lists(st.integers())" generates a random list of integers that gets passed into the test function.

    And this page says that by default each test is run (up to) 100 times: https://hypothesis.readthedocs.io/en/latest/tutorial/setting...

    So I'm thinking... (not tested)

      from hypothesis import given, strategies as st

      @given(st.integers(), st.integers())
      def test_math_add(a, b):
          assert a + b == math_add(a, b)
    

    ...which is of course a little silly, but math_add() is a bit of a silly function anyway.
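    For what it's worth, the same "this relation must hold true" idea can be hand-rolled with just the standard library - a rough sketch of what Hypothesis automates (math_add is again a stand-in, and the input range is an arbitrary choice):

```python
import random

def math_add(a, b):
    # Stand-in for the function under test
    return a + b

def check_property(prop, n_cases=100):
    # Run the property against n_cases random input pairs,
    # failing loudly with the inputs that broke it
    for _ in range(n_cases):
        a = random.randint(-10**6, 10**6)  # arbitrary range, an assumption
        b = random.randint(-10**6, 10**6)
        assert prop(a, b), f"property failed for a={a}, b={b}"

# The relation that must hold: math_add agrees with the built-in operator
check_property(lambda a, b: a + b == math_add(a, b))
```

    Frameworks like Hypothesis add the parts this sketch lacks: shrinking failing inputs to minimal cases and smarter value generation.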