Comment by spuz
6 years ago
No, this doesn't have anything to do with Haskell's lazy evaluation (at least not in the NonEmpty list example presented). The general idea is that if you are going to perform validation checks at some point in your dynamic language anyway, you lose no performance by performing those same checks up front in your static language.
But how can this possibly be true in general? As an example, just imagine I want to check that a string represents a list of space-delimited integers, each less than 10 digits long. It's far cheaper to verify that than to actually parse the integers. And by performing the verification pass separately, before the parsing pass, I can reject invalid inputs early, leading to much faster rejections than if I parse as I validate. The only way I can see there being zero cost difference is if everything is implicitly lazy, such that at runtime the verification won't even happen until the parsing needs to be performed too. Right?
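Concretely, something like this (a hypothetical Python sketch; the function names and the 10-digit cutoff are just for illustration):

```python
def validate_only(s: str) -> bool:
    """Cheap pre-pass: every token is 1-9 digits. No numbers are built."""
    return all(t.isdigit() and len(t) < 10 for t in s.split())

def parse_all(s: str):
    """Full parse: actually construct the integers, or None on bad input."""
    out = []
    for t in s.split():
        if not (t.isdigit() and len(t) < 10):
            return None  # reject, but only after doing real parsing work so far
        out.append(int(t))
    return out
```

`validate_only` scans characters and returns as soon as a token fails, while `parse_all` pays for `int()` conversion on every token it accepts.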
It's true that in the failure case, you can get faster results by short-circuiting. If failure cases are a substantial portion of your runtime, then yes, doing a fast pre-pass can be more efficient.
I'd speculate that in the real world, 99% of cases have runtime dominated by success cases. Exceptions would be things like DoS attacks.
No, that's just wrong. It's not just failures. Imagine I verified that they were all zero; then I wouldn't have to do a full parse at all. Or if I verify they have only a few digits, I avoid bignum parsing. I think this proves my point: it simply isn't true in general.
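For example, a sketch of the all-zero shortcut (the name is hypothetical):

```python
def all_zero_tokens(s: str) -> bool:
    """Cheap scan: answers the question without constructing a single integer."""
    tokens = s.split()
    return bool(tokens) and all(set(t) == {"0"} for t in tokens)
```

The scan touches each character once but never allocates a number, which is exactly the extra work a full parse would add.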