
Comment by derefr

2 years ago

Consider an at-the-time novel hashing algorithm, e.g. Keccak.

• It's decidedly non-trivial — you'd have to 1. be a mathematician/cryptographer, and then 2. read the paper describing the algorithm and really understand it, before you could implement it.

• But also, it's usually just one file with a few hundred lines of C that manipulates stack variables to turn one block of memory into another. Nothing that changes with new versions of the language. Nothing that rots. Uses so few language features it would have compiled the same 40 years ago.

Someone writes such code once; nobody ever modifies it again. No bugs, unless they're bugs in the algorithm described by the paper. Almost all libraries in HLLs are FFI wrappers around the same single core low-level reference implementation.
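As a rough illustration of the "one file of plain C that only touches stack variables" style being described, here is a minimal sketch. To be clear, this is an invented toy mixing function, not Keccak and not anything cryptographically secure; it only shows the shape of code with no dependencies beyond the language and two standard headers:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch only: a toy mixing function in the style the
     * comment describes (pure C, stack variables, no OS or library
     * dependencies). This is NOT Keccak and is NOT cryptographically secure. */
    void toy_hash(const uint8_t *msg, size_t len, uint8_t out[32])
    {
        uint64_t state[4] = {
            0x6a09e667f3bcc908ULL, 0xbb67ae8584caa73bULL,
            0x3c6ef372fe94f82bULL, 0xa54ff53a5f1d36f1ULL
        };

        /* Absorb the message one byte at a time into the state. */
        for (size_t i = 0; i < len; i++) {
            state[i & 3] ^= (uint64_t)msg[i] << ((i & 7) * 8);

            /* Simple mixing round: rotate, xor, add. */
            for (int r = 0; r < 4; r++) {
                uint64_t x = state[r];
                x = (x << 13) | (x >> 51);
                state[(r + 1) & 3] += x ^ 0x9e3779b97f4a7c15ULL;
            }
        }

        /* Squeeze the 256-bit digest out of the state. */
        for (int r = 0; r < 4; r++)
            for (int b = 0; b < 8; b++)
                out[r * 8 + b] = (uint8_t)(state[r] >> (b * 8));
    }

Nothing in it depends on an OS, a runtime, or a particular toolchain, which is the property the comment is pointing at.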

In practice, this code will use a variety of target-specific optimizations or compiler intrinsics gated behind #ifdefs, and those need to be periodically updated, or added, for new targets and toolchains. If it refers to any kind of OS-specific API (like the system RNG), it will also need to be updated from time to time as those APIs change.
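Real reference implementations often end up looking something like the following pattern (a hypothetical sketch, not taken from any particular library): a SIMD fast path behind a feature-test macro, plus a portable fallback, where each new target tends to grow another branch of the #if ladder:

    #include <stddef.h>
    #include <stdint.h>

    #if defined(__AVX2__)
    #include <immintrin.h>
    #endif

    /* Hypothetical helper showing the #ifdef-gated pattern: an AVX2 fast
     * path behind a feature macro, with a portable C fallback. */
    static void xor_block_256(uint8_t dst[32], const uint8_t src[32])
    {
    #if defined(__AVX2__)
        /* Fast path: one 256-bit XOR using compiler intrinsics. */
        __m256i a = _mm256_loadu_si256((const __m256i *)dst);
        __m256i b = _mm256_loadu_si256((const __m256i *)src);
        _mm256_storeu_si256((__m256i *)dst, _mm256_xor_si256(a, b));
    #else
        /* Portable fallback: plain C, works on any target. */
        for (size_t i = 0; i < 32; i++)
            dst[i] ^= src[i];
    #endif
    }

Every one of those branches is something a maintainer has to revisit when a new compiler, ISA extension, or platform shows up.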

That's not to say such code can't change slowly; it's just that code which genuinely never changes is extremely rare in practice.