Hyrum's Law

12 hours ago (hyrumslaw.com)

There's a corollary:

   Even if you explicitly deny a guarantee of a certain behavior in your contract,
   if you usually deliver that behavior,
   most of your customers will depend on it.

Some examples:

If you make a queueing system, it's impossible to guarantee anything other than delivery "at most once" (some loss occurs), or "at least once" (some duplication occurs), but if you usually provide "exactly once" in practice, most of your customers will depend on this.

If you provide a data bucket service, and guarantee availability, but not performance, and you usually provide 100MB/s throughput, your customers will have major problems if you only provide 10MB/s throughput in some cases.

If you make a self-driving car, and it requires human monitoring, but it's really good, say, one intervention per year of driving... your customers will die because they aren't paying attention.

  • “Die normative Kraft des Faktischen” or “the normative force of the factual” is a thing and usually not seen as necessarily bad.

    It recognizes that legitimacy often emerges organically from social acceptance rather than top-down imposition. In technology we often see that evolving reference implementations work better than elaborate specifications.

    • In its form as ‘the normalisation of deviance’ it’s generally recognised as bad.

  • If I'm not mistaken, CPython's dict preserved insertion order as an implementation detail at first, but because too many users came to rely on it, it was made part of the language specification starting in Python 3.7.

    • I don't think that's quite right. CPython dicts only started preserving insertion order in 3.6. There were concerns that other Python implementations might not be able to do this or it would be a performance bottleneck, but by Python 3.7 these were satisfied enough to make it part of the spec.

  • Recency bias is how nuclear reactors go bad and people slip past security. There just hasn't been an incident in so long that we've become complacent.

    I thought Netflix was nuts for Chaos Monkey, but having read several more treatises on human cognition and particularly cognitive biases, I now see they are crazy like a fox. Guaranteeing something breaks every week keeps things at the mental forefront.

  • > If you make a queueing system, it's impossible to guarantee anything other than delivery "at most once" (some loss occurs), or "at least once" (some duplication occurs), but if you usually provide "exactly once" in practice, most of your customers will depend on this.

    That's only a condition at termination. For ongoing communication, you can guarantee exactly once delivery. When communication ceases, the final state between the ends is indeterminate. If you can keep talking, or resume after breaks, it's a solvable problem.

    • That's true, but it doesn't help the customers.

      In a large system, terminations (both planned and unplanned) happen all the time. For the unplanned ones, it is very difficult to ensure exactly once, at least from the perspective of the queueing service, which can't check whether the message was or wasn't processed outside of the original connection and its acks.
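      A common consumer-side answer is idempotent processing: dedupe on a message ID so that redelivery is harmless. A minimal Go sketch (all types and names are hypothetical, not any real broker's API):

```go
package main

import "fmt"

// Message is a hypothetical queue message with a unique ID.
type Message struct {
	ID   string
	Body string
}

// Consumer tracks processed IDs so redelivered messages are no-ops,
// turning at-least-once delivery into effectively-once processing.
type Consumer struct {
	seen map[string]bool
}

func NewConsumer() *Consumer {
	return &Consumer{seen: make(map[string]bool)}
}

// Handle returns true only the first time a given ID is processed.
func (c *Consumer) Handle(m Message) bool {
	if c.seen[m.ID] {
		return false // duplicate after a reconnect: skip it
	}
	c.seen[m.ID] = true
	// ... actual processing would happen here ...
	return true
}

func main() {
	c := NewConsumer()
	// The broker redelivers msg-1 after an unplanned disconnect.
	fmt.Println(c.Handle(Message{ID: "msg-1", Body: "hello"})) // true
	fmt.Println(c.Handle(Message{ID: "msg-1", Body: "hello"})) // false
	fmt.Println(c.Handle(Message{ID: "msg-2", Body: "world"})) // true
}
```

      (A real consumer would persist the seen set, or use an upstream idempotency key, since an in-memory map also disappears on unplanned termination.)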

  • This is explicitly recognized in contract law: course of performance / dealing is a factor courts will consider in evaluating the nature of a deal. (Most contracts will try to carve it out.)

  • Can't disagree with anything you said, though I think there are steps to address at least some of them: for queueing systems, testing with a chaos monkey isn't a bad idea... you'd want a test environment representative of production workloads, which is hard to do, but anything should be better than nothing.

    In the self-driving car scenario, you'd probably go with cold statistics: is it killing fewer people than ones that need more interventions? Just like queueing though, experiments in production could be problematic.

    • You can look at airport security as an example. Bombs and guns are quite rare in carry-on luggage. It would be far too boring for most operators, which would mean that they tend to tune out of their screening job.

      So what the x-ray interface does is randomly insert guns and bombs into the scan at a relatively frequent rate. The operator must click on these problem areas. If it is a synthetic object, it then disappears and the operator can continue screening the bag. If it isn't synthetic, the bag gets shunted for manual inspection.

      So for a self-driving car, if it must be monitored (it's not L5 within the driving mission), then you would perhaps need the car to randomly ask the driver to take over, even though it's unnecessary. Or randomly appear to be making a mistake, to see if the user reacts.

      If the user doesn't react appropriately or in time, then self-driving is disabled for a time the next time the car starts.

      For the queuing system, it perhaps makes sense to inject a certain number of duplicates by default. Say 0.1%. Enough that it simply can't be ignored during development of the clients. Then, when duplicates arise as a consequence of system failures, the code is already expecting this and there's no harm to the workload.

    • > In the self-driving car scenario, you'd probably go with cold statistics

      No. There is a big difference in an accident caused by human error and an accident caused by machine failure.

      We tolerate much more of the former than the latter.

      This feels like a cognitive failure, but I do not think it is.

From an API designer's standpoint (especially if that API has paying customers), Hyrum's Law is something that has to be taken into account. But from a user's standpoint, it is engineering malpractice, plain and simple. At the very least, relying on quirks of someone else's implementation is a risk that should be understood and accounted for, and no one has any reasonable grounds for complaint if those quirks suddenly change in a new version.

  • > At the very least, relying on quirks of someone else's implementation is a risk that should be understood and accounted for, and no one has any reasonable grounds for complaint if those quirks suddenly change in a new version.

    It's almost always unintentional. Someone wrote some code, it works, they ship it, not realizing it only works if the list comes back in a specific order, or with a specific timing. Then a year or two later they do some updates, the list comes back in a different order, or something is faster or slower, and suddenly what worked before doesn't work.

    This is why in Golang, for instance, when you iterate over map keys, it purposely does it in a random order -- to make sure that your program doesn't accidentally begin to rely on the internal implementation of the hash function.

    ETA: But of course, that's not truly random, just pseudorandom. It's not impossible that someone's code only works because of the particular pseudorandom order being generated, and that if Golang ever changes the pseudorandom number generator it uses to evade Hyrum's Law, that code will break.

    • There's probably at least one game out there somewhere that uses Go's map iteration order to shuffle a deck of cards, and would thus be broken by Go removing the thing that's supposed to prevent you from depending on implementation details.

    • Intent enters into it when someone complains about something that is obviously out of the specification breaking.

      Prior to that, yeah, that's just a bug.

      > This is why in Golang, for instance, when you iterate over map keys, it purposely does it in a random order

      It could be that Go's intentions are different here, but IIRC languages mix randomization into hash tables because it's otherwise a security issue. (The hash function is typically known, so without randomization an attacker can force hash collisions and turn O(1) lookups into O(n).)
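      The collision concern can be illustrated with a toy, deliberately weak hash (Go's real map hash mixes in a per-process random seed to prevent exactly this):

```go
package main

import "fmt"

const buckets = 8

// toyHash is a deliberately weak, unseeded hash: anyone can predict
// which bucket a key lands in.
func toyHash(key string) int {
	sum := 0
	for _, c := range key {
		sum += int(c)
	}
	return sum % buckets
}

func main() {
	// Keys crafted to share a character sum all collide into one
	// bucket, degrading a chained table's lookups from O(1) to O(n).
	attack := []string{"ad", "bc", "cb", "da"}
	counts := make([]int, buckets)
	for _, k := range attack {
		counts[toyHash(k)]++
	}
	fmt.Println(counts) // [0 0 0 0 0 4 0 0]: every key in bucket 5
}
```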


  • I was a Java user when Java 5 came out. Java 5 had a new hashtable implementation, and that implementation was also used for the reflection API.

    That was an era where lots of testing code barely worked, and what we found over and over again is that we had tests that were dependent on each other and the tests only passed because they ran in a particular order.

    And now Java 5 changed the order in which the test functions were being enumerated in the test files. Oops.

  • The problem is that people commonly don't even realize they're depending on implementation quirks.

    For example, they write code that unintentionally depends on some distantly-invoked async tasks resolving in a certain order, and then the library implementation changes performance characteristics and the other order happens instead, and it creates a new bug in the application.

  • I don't think such usage is malicious, so much as ignorant - it's sometimes hard to know that a behavior _isn't_ part of the API, especially if the API is poorly documented to begin with.

    I maintain a number of such poorly-documented systems (you could, loosely, call them "APIs") for internal customers. We've had a number of scenarios where we've found a bug, flagged it as a breaking change (which it is), said "there's _no way_ anybody's depending on that behavior", only to have one or two teams reach out and say yes, they are in fact depending on that behavior.

    For that reason, many of those types of changes end up shipping with a "bug flag". The default is to use the correct behavior; the flag changes the behavior to remain buggy, to keep the internal teams happy. It's then up to us to drive the users to change their ways, which.. doesn't always happen efficiently, let's say.
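    A "bug flag" of that sort might look like this Go sketch (the flag and the behavior it guards are invented for illustration):

```go
package main

import "fmt"

// Options controls compatibility behavior for a hypothetical API.
type Options struct {
	// LegacyTrailingSlash preserves the old (buggy) behavior for
	// teams that discovered they depend on it.
	LegacyTrailingSlash bool
}

// NormalizePath strips a trailing slash (the fixed behavior) unless
// the caller opts back into the bug.
func NormalizePath(p string, o Options) string {
	if o.LegacyTrailingSlash {
		return p // old behavior: trailing slash passed through
	}
	if len(p) > 1 && p[len(p)-1] == '/' {
		return p[:len(p)-1]
	}
	return p
}

func main() {
	fmt.Println(NormalizePath("/foo/", Options{}))                          // /foo
	fmt.Println(NormalizePath("/foo/", Options{LegacyTrailingSlash: true})) // /foo/
}
```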

  • Exactly. Hyrum's Law should always be paired with Postel's Law: Be conservative in what you do, be liberal in what you accept from others.

    • Being liberal in what you accept also leads to users depending on you accepting marginal input that exploits implementation quirks, either because the quirks get the job done or for more nefarious reasons.


  • Hard disagree. If my users are exploiting some unintended, unannounced part of my API then me patching that out is something they’re just going to have to deal with. In well-described systems these sorts of behaviors lead to nasty bugs down the line, sometimes months in the future (e.g. “Huh, why aren’t my tax reports tying out?”).

  • > From an API designer's standpoint (especially if that API has paying customers), Hyrum's Law is something that has to be taken into account.

    How good-of-an-idea / best practice is API versioning?

        /api/v1/foo
        /api/v2/foo
    

    What are the pluses and minuses?

    • A couple of considerations are:

      - You have to decide whether to bump the entire API version or only the /foo endpoint. The former can be a big deal (and you don't want to do it often), the latter is messy. Especially if you end up with some endpoints on /v1 (you got it right first time) while others are on /v4 or /v5. Some clients like to hard-code the URL prefix of your API, including the version, as a constant.

      - You still have to decide what your deprecation and removal policy will be. Does there come a time when you remove /api/v1/foo completely, breaking even the clients who are using it correctly, or will you support it forever?

      It's not easy at all, especially if you have to comply with a backwards compatibility policy. I've had many debates about whether it's OK to introduce breaking changes if we consider them to be bug fixes. It depends on factors like whether either behaviour is documented and subjective calls on how "obviously unintended" the behaviour might be.

    • Plus, it's easy to see that you might have to do something different to move over to v2 as a client.

      Minus, you will support v1 forever. It's almost impossible to make it go away.

    • You end up with a lot of versions if you version everything that could change some non-guaranteed behavior in some corner case.

  • Depends on the product. Sometimes you are completely dependent on an API ecosystem (iOS, Android, Windows) where the only way to achieve something is a quirk.

  • > But from a user's standpoint

    Not true generally. One man's engineering malpractice is another man's clever hack.

    Users of Windows 95 complained that Windows 95 broke SimCity.

    What did Windows 95 break? It fixed an obscure allocator bug SimCity was relying on.

    Users loved Windows 95 for "fixing" this. How was it fixed? By introducing an obscure switch to the old allocator if it detected SimCity in the app name.

    https://arstechnica.com/gadgets/2022/10/windows-95-went-the-...

    • Different users. The users that GP was accusing of malpractice would be the Maxis devs in this case, not the end users who were trying to install SimCity on their Windows 95 machine.

      Microsoft has a commitment to backwards compatibility that I think is going too far, but I understand why. Raymond Chen has explained that if a user buys the new version of Windows and their programs stop working, they will blame MS regardless because they don't have any way to know it's the program's fault. So MS is incentivized to go out of their way to enable these other programs' bad behavior, because it keeps their (Microsoft's) customers happy.


> This effect serves to constrain changes to the implementation, which must now conform to both the explicitly documented interface, as well as the implicit interface captured by usage.

Let's be clear that this is one interpretation of the phenomenon described here, which we might call "The Doomerist Interpretation of Hyrum's Law". For everyone else, the whole reason we bother to categorize interface details into "public" and "private" buckets is precisely so we have the moral high ground to tell people to go kick rocks when they get uppity about their own failure to adhere to the publicly documented interface.

This reminds me of the "illegal opcodes" in the Commodore 64. I believe they were even present in their entirety in the Commodore 128's C64 emulation mode (the C128 uses an 8502, not a 6502, as its main CPU). If someone knows how or why they remained part of the instruction set over the C64's lifetime, I really would like to know. I suspect it was deliberate, since "customers relied on them".

This is super interesting to think about in the LLM world, where a lot of software is getting replaced with LLM calls.

In terms of an LLM's output, there is no clear promise in the contract, only observable behaviour. And the observable behaviour is subject to change with every LLM update, so all the downstream systems have to have evals to counter this.

One good example is Claude Code, where people have started complaining that model switches affect their downstream coding workflows.

  • Yes.

    This is the unfortunate thing about wrapping LLMs in API calls to provide services.

    Unless you control the model absolutely (even then?), you can prompt the model with a well-manicured prompt on Tuesday and get an answer - a block of text - and on Thursday, using the exact same prompt, get a different answer.

    This is very hard to build good APIs around. If you do, expect rare corner-case errors that cannot be fixed.

    Or reproduced.

Actually, I think that in The Mythical Man-Month Brooks mentioned users depending on nominally undefined, practically consistent behavior, e.g. what was left in some part of a register.

It seems to me that there are some advantages to undertaking "Freedom of Navigation Operations" by randomizing implementations from time to time to discourage any dependence on internal behaviors.

For instance, traversal order of maps in Go is always randomized, to prevent subtle bugs caused by depending on the order.

As AI generated code becomes cheaper, it may be worthwhile to change some subset of your internal behaviors from release to release, so that users don't become too complacent.

This is why an API should always have an

    "assertions": true

option. Why should normal function calls have assertion/invariant checks, and not API calls?

  • This idea looks good. Have you used it in practice? Can you share how?

    • Yes, you basically use the option whenever you have assertions turned on in your code.

      Then the service running the API will do extra checking when the assertions option is true, basically making it less forgiving and following the specification closely.
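      One way such an option could behave, sketched in Go (the request shape and invariants are invented for illustration): the lenient production path tolerates marginal input, while the strict path surfaces contract violations during development.

```go
package main

import "fmt"

// Request is a hypothetical API payload.
type Request struct {
	Count int
	Name  string
}

// Process runs the normal path; when assertions is true it also
// enforces invariants the spec states but the lenient path tolerates.
func Process(req Request, assertions bool) (string, error) {
	if assertions {
		if req.Count < 0 {
			return "", fmt.Errorf("assertion failed: Count must be non-negative, got %d", req.Count)
		}
		if req.Name == "" {
			return "", fmt.Errorf("assertion failed: Name must be set")
		}
	}
	// Lenient production path: quietly clamps bad input.
	if req.Count < 0 {
		req.Count = 0
	}
	return fmt.Sprintf("%s:%d", req.Name, req.Count), nil
}

func main() {
	out, _ := Process(Request{Count: -1, Name: "x"}, false)
	fmt.Println(out) // x:0 (lenient path clamps)
	_, err := Process(Request{Count: -1, Name: "x"}, true)
	fmt.Println(err != nil) // true (strict mode rejects)
}
```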

A good example of a defence against this: Go maps randomize iteration order just so that people don't rely on it being consistent.

> if an interface has enough consumers, they will collectively depend on every aspect of the implementation ...

Yep!

> [and that] constrains changes to the implementation, which must now conform to both the explicitly documented interface, as well as the implicit interface captured by usage

Nope!

Software authors define the rules for the software that they author. I understand it's a spectrum and the rules are different in different circumstances, but at the end of the day my API is what I say it is, and if you rely on something that I don't guarantee, that's on you and not me. Hyrum's Law describes a common pathology; it doesn't define an expected rule or requirement.