Comment by supernova87a

3 years ago

If I were to use a mental analogy with the numerical methods you get taught in grad school, isn't this basically just a problem of identifying where the poles in your function are?

If you're cycling through all possible numbers to see where the output blows up or goes pathological, wouldn't you instead (to be efficient about it) test the far limits of the input range, then sample the interior with adaptive spacing to see where things are changing and whether it's about to go crazy somewhere? Like division by zero, subtracting two nearly equal numbers, etc. Or analyze the function to see where subtraction (or whatever operation applies here) might misbehave, rather than spending all your compute sampling every possible number equally?

I'm no expert but it seems to be the same type of problem.
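
To make that concrete, here is a rough C++ sketch of that adaptive-sampling idea, not taken from the thread: the function f, the sampling range, and both thresholds below are made-up placeholders. It coarsely samples the range, then bisects any interval whose output goes non-finite or changes very sharply.

    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Stand-in for the code under study: has a pole at x = 1.
    static double f(double x) { return 1.0 / (x - 1.0); }

    // Keep bisecting [a, b] while it still looks pathological; report narrow spans.
    static void refine(double a, double b,
                       std::vector<std::pair<double, double>>& suspicious) {
        double fa = f(a), fb = f(b);
        bool blows_up = !std::isfinite(fa) || !std::isfinite(fb);
        bool steep = std::fabs(fb - fa) > 1e6 * (b - a);  // arbitrary slope limit
        if (!blows_up && !steep) return;       // looks tame, stop refining here
        if (b - a < 1e-4) {                    // narrow enough, report and stop
            suspicious.emplace_back(a, b);
            return;
        }
        double m = 0.5 * (a + b);
        refine(a, m, suspicious);
        refine(m, b, suspicious);
    }

    int main() {
        std::vector<std::pair<double, double>> suspicious;
        const double lo = -10.0, hi = 10.0;
        const int coarse = 1000;               // initial uniform pass
        for (int i = 0; i < coarse; ++i) {
            double a = lo + (hi - lo) * i / coarse;
            double b = lo + (hi - lo) * (i + 1) / coarse;
            refine(a, b, suspicious);
        }
        for (const auto& [x0, x1] : suspicious)
            std::printf("possible trouble in [%.12g, %.12g]\n", x0, x1);
    }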

The insight is that 32-bit space is now so cheap to explore exhaustively that you don't need to accept the possible misses of sampling, or spend the effort to build and verify a clever sampling scheme. Just brute-force the darn thing and be done.
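
As a concrete illustration (not from the post itself): a minimal sketch of that brute-force loop, assuming a hypothetical my_sqrtf under test and comparing it bit-for-bit against the standard library sqrt for every one of the 2^32 float bit patterns.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Hypothetical implementation under test (stand-in; wraps std::sqrt here).
    static float my_sqrtf(float x) { return std::sqrt(x); }

    int main() {
        uint64_t mismatches = 0;
        // Walk every 32-bit pattern and reinterpret each one as a float.
        for (uint64_t bits = 0; bits <= 0xFFFFFFFFull; ++bits) {
            uint32_t b = static_cast<uint32_t>(bits);
            float x;
            std::memcpy(&x, &b, sizeof x);   // bit-cast without aliasing issues

            float got  = my_sqrtf(x);
            float want = std::sqrt(x);

            // Compare bit patterns; treat any NaN vs any NaN as a match.
            uint32_t gb, wb;
            std::memcpy(&gb, &got, sizeof gb);
            std::memcpy(&wb, &want, sizeof wb);
            if (gb != wb && !(std::isnan(got) && std::isnan(want)))
                ++mismatches;
        }
        std::printf("mismatches: %llu\n",
                    static_cast<unsigned long long>(mismatches));
        return mismatches != 0;
    }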

  • Ah, I see, for a test that you'll only run once or rarely, might as well!

    • No no no! That's exactly the realisation they're pushing at: this is fine for routine tests too! In this almost-ten-year-old post the runtime was ~90 s, which isn't too painful. In some threads here, times on modern hardware are quoted as milliseconds or less.

      The takeaway might be more: spend your time optimising the test of everything, rather than optimising the selection of a subset to test.

      The massive caveat, of course, is that some search spaces really are big, but 32-bit space shouldn't be considered big any more.
