Comment by codeflo
1 year ago
I'm not trying to be overly dramatic, but I think it was precisely when the industry accepted this as a law -- instead of treating it as something that needs to be trained out of junior programmers -- that software quality started to tank.
Consider any other form of engineering. Take some kind of screw. It has a documented specification in terms of torque, material strength and whatnot. Good engineers on the customer side will use the screws in a way that keeps within the spec. And good engineers on the supplier side will find ways to fulfill that specification as cheaply (which usually also means as narrowly) as possible.
There could be a kind of Hyrum's Law at play, if hardware engineers were idiots. Let's say that the screws accidentally overfulfil the specification today by 20%, and a customer measures the material to figure this out, and starts to depend on that. A year later, the supplier finds a cheaper way to produce the screws (or introduces a binning process) and as a consequence, the screws only exceed the spec by 5%, and the customer's product breaks. Who's responsible? The customer. And I should add, obviously.
This is fixed by training engineers. No one in their right mind would start introducing artificial faults into their screws purely to prevent users from depending on the excess strength or something. But that's exactly the kind of thing that's regularly suggested to guard against Hyrum's Law.
So whenever I see software developers on my team look at the source code of a library to figure out "whether it's thread-safe" or "whether the sort order is stable" or whatever, I die a little on the inside. The problem is, you almost have to do that, because we're two generations of software engineers into this and the practice is so accepted now that libraries no longer bother to document what they do (and don't) guarantee.
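To make the sort-order case concrete, here's a minimal Python sketch (the `rank` function and its data are hypothetical): the documented contract only promises "sorted by priority" and says nothing about ties, but a caller who read the source starts depending on the tie order that today's stable sort happens to produce.

```python
# Hypothetical library function. The docs promise only "returns the tasks
# sorted by priority" -- nothing about how ties are ordered.
def rank(tasks):
    # Today this happens to use sorted(), which is stable in Python,
    # so tasks with equal priority keep their original relative order.
    return sorted(tasks, key=lambda t: t[0])

# A caller who peeked at the implementation starts relying on tie order:
tasks = [(1, "a"), (2, "b"), (1, "c")]
print([name for _, name in rank(tasks)])  # ['a', 'c', 'b']
```

If a later release swaps in an unstable sort -- still perfectly within the documented contract -- the caller's code breaks, and per the comment above, the fault lies with the caller.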
And then people wonder why every existing piece of software needs a fully staffed team nowadays just to keep it working. And why as a consequence, even core products built by competent companies (like Google Maps) have regressions in 15-year-old features every other week.
Yes, the relationship between "Hyrum's Law" and having a terrible documentation culture is strong.
The important part of the "Law" is the bit where it says "it does not matter what you promise in the contract".
That's definitely going to be true in a place where neither writing nor reading documentation is taken seriously (and one of the main things this site teaches us is that Google is such a place).
Yeah, no, humans can't hold complicated documentation in their heads. The whole experience of being human is finding a way to get by with a simple model which fits in your head but isn't wrong enough to cause trouble, and that's exactly what Hyrum is reflecting.
What you're getting at is the C++ "Just don't make any mistakes" approach to software engineering, which is a disaster that has cost our civilisation a great deal.
That is emphatically not what I am "getting at". I cannot express my opposition to that approach strongly enough.
I do not believe that the place people should hold documentation is "in their heads".
I do not believe that Hyrum's "Law" is helpful in getting to a situation where people's beliefs about what software will do match reality.
This isn't the first time I've come across people (particularly around the Rust community) who have got the idea into their heads that saying "reading documentation is important" is somehow close to saying "Real programmers don't make mistakes". I think that conflation is doing great harm.
4 replies →
> it was precisely when the industry accepted this as a law -- instead of treating it as something that needs to be trained out of junior programmers -- that software quality started to tank.
And when exactly do you think that happened? When was this golden age in which people relied only on documented behavior?
Windows and Linux have both been bending over backwards to retain all sorts of undocumented behavior since the early-to-mid '90s. Glibc was notoriously hampered by Emacs relying on details of its internals. C programs relying on implicitly or explicitly undefined behavior have caused endless handwringing for as long as there has been a C standard. The list goes on and on. Relying on implementation details has been the modus operandi in computing since day 1.
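The pattern is still trivially easy to reproduce today. One well-known example: CPython caches small integers as an implementation detail (roughly -5 through 256), and code that compares them with `is` instead of `==` quietly depends on that cache.

```python
# CPython interns small integers as an implementation detail;
# identity comparisons on them "work" only by accident.
a, b = 256, 256
print(a is b)   # True under CPython, but not a language guarantee

# The documented, portable comparison is equality:
print(a == b)   # True on any conforming implementation
```

Nothing in the Python language reference promises the first result; it's pure Hyrum's Law, and code written this way has broken when moved between implementations.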
It is the difference between CS and Engineering, and why I favor hiring engineers.