Comment by mehrdadn

6 years ago

Am I right to suspect the only reason "there can’t be any performance overhead" is that this is being done in a lazy language like Haskell? Meaning the statement won't hold in >99% of practical cases? Or did I misunderstand something?

No, this doesn't have anything to do with Haskell's lazy evaluation (at least not in the NonEmpty list example presented). The general idea is that if you are going to perform validation checks at some point in your dynamic language, you won't lose any performance by performing those checks up-front in your static language.
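For context, the NonEmpty example being referred to can be sketched roughly like this (a minimal reconstruction of the idea, not a quote from the article):

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- validate: checks the property, then throws the evidence away,
-- so downstream code must re-check (or just trust) non-emptiness.
validateNonEmpty :: [a] -> Maybe [a]
validateNonEmpty [] = Nothing
validateNonEmpty xs = Just xs

-- parse: performs the same runtime check once, but records the
-- result in the type, so no caller can ever see an empty list.
parseNonEmpty :: [a] -> Maybe (NonEmpty a)
parseNonEmpty []       = Nothing
parseNonEmpty (x : xs) = Just (x :| xs)
```

Both functions do exactly one pattern match at runtime; the stronger return type is where the "no overhead" claim comes from.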

  • But how can this possibly be true in general? As an example, just imagine I want to check that a string represents a list of space-delimited integers, where each one is also less than 10 digits. It's far more trivial to verify that than to actually parse the integers. And by performing the verification pass separately before the parsing pass, I can reject invalid inputs early, leading to much faster rejections than if I parse them as I validate them. The only way I can see there being zero cost difference is if everything is implicitly lazy, such that at runtime the verification won't even happen until the parsing needs to be performed too. Right?

    • It's true that in the failure case, you can get faster results by short-circuiting. If failure cases are a substantial portion of your runtime, then yes, doing a fast pre-pass can be more efficient.

      I'd speculate that in the real world, 99% of cases have time dominated by success cases. Exceptions would be things like DoS attacks.
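      The space-delimited-integers example above can be sketched as follows (hypothetical helper names; `quickCheckInts` is the cheap pre-pass, `parseInts` the real parse):

      ```haskell
      import Data.Char (isDigit)

      -- Fast pre-pass: rejects malformed input without building any Ints.
      quickCheckInts :: String -> Bool
      quickCheckInts = all ok . words
        where ok w = not (null w) && length w < 10 && all isDigit w

      -- Full parse: produces the integers, failing on the first bad token
      -- (traverse over Maybe already short-circuits on failure).
      parseInts :: String -> Maybe [Int]
      parseInts = traverse parseOne . words
        where parseOne w
                | not (null w), length w < 10, all isDigit w = Just (read w)
                | otherwise                                  = Nothing
      ```

      Note that in the success case both functions must walk the entire input, so the pre-pass is pure extra work there; it only wins when failures are common enough that skipping the Int construction pays off.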