Comment by timschmidt

10 hours ago

The badness cannot be overstated. "Hostile codebase" would be an appropriate label. Much more information is available in Giovanni Bechis's presentation: https://www.slideshare.net/slideshow/libressl/42162879

If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.

> If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.

I wonder who could possibly be incentivized to make the cryptography package used by most of the world's computers and communications networks full of subtle, hard-to-find exploitable bugs. Surely everyone would want such a key piece of technology to be airtight and easy to debug.

But also: surely a technology developed in a highly adversarial environment would be easy to maintain and keep understandable. You definitely would have no reason to play whack-a-mole with random issues as they arise.

  • > Surely everyone would want such a key piece of technology to be airtight and easy to debug

    1. Tragedy of the Commons (https://en.wikipedia.org/wiki/Tragedy_of_the_commons) / Bystander Effect (https://en.wikipedia.org/wiki/Bystander_effect)

    2. In practice, the risk of introducing breakage probably makes upstream averse to refactoring for aesthetics alone; you’d need to prove that there’s a functional bug. But of course, you’re less likely to notice a functional bug if the code is so ugly you can’t follow it. And when people need a new feature, it gets shoehorned in while changing as little code as possible, because nobody fully understands why everything is there. Especially when execution speed is a potential attack vector (see the constant-time sketch below).

    So maybe shades of the trolley problem too - people would rather passively let multiple bugs exist than be actively responsible for introducing one.
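    To make the "execution speed is an attack vector" point concrete, here is a minimal, hedged C sketch of constant-time comparison; the function name ct_memcmp is invented for illustration, though the idea is the same one behind OpenSSL's real CRYPTO_memcmp. A naive early-exit memcmp leaks, through timing, how many leading bytes of a secret (say, a MAC) matched - exactly the property an innocent-looking refactor could break.

    ```c
    /* ct_memcmp is a hypothetical name, not an OpenSSL API.
     * Unlike memcmp, it never returns early, so its running time does
     * not depend on where the inputs first differ. */
    #include <stddef.h>

    int ct_memcmp(const void *a, const void *b, size_t n)
    {
        const unsigned char *pa = a, *pb = b;
        unsigned char diff = 0;

        for (size_t i = 0; i < n; i++)
            diff |= pa[i] ^ pb[i];  /* touch every byte, every time */

        return diff;                /* 0 iff the buffers are equal */
    }
    ```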

    • I wonder what adoption would actually look like.

      It reminds me of Google Dart, which was originally pitched as an alternate language that enabled web programming in the style Google likes (strong types etc.). There was a loud outcry in places like Hacker News about scope creep from implementors and undue market influence. It was so poorly received that Google rescinded the proposal to make it a peer language to JavaScript.

      Granted, the interests point in different directions for security software vs. a mainstream platform. Still, audiences are quick to question the motives of companies that have the scale to invest in something like a net-new security runtime.

      2 replies →

  • > Surely everyone would want such a key piece of technology to be airtight and easy to debug

    Different parties/actors have different incentives. 'Everyone' is necessarily an extremely broad category, and we should invoke it only with care.

    I could claim "Everyone" wants banks to be secure - and you would be correct to reject that claim. Note that if the actual sense of the term in that sentence is really "almost everyone, but definitely not everyone", then the threat landscape is entirely different.

    • I read that whole paragraph with a tinge of sarcasm. There are bad actors out there who want to exploit these security vulnerabilities for personal gain, and then there are nation-state actors who just want to spy on everyone.

  • > highly adversarial environment

    Except it's not. Literally nobody ever in history has had their credit card number stolen because of SSL implementation issues. It's security theater.

I expected much worse, to be honest. Vim’s inline #ifdef hell is on a whole other level. Look at this nightmare to convince yourself: https://geoff.greer.fm/vim/#realwaitforchar

  • That's a lot of ifdefs, sure. But at least Vim doesn't have its own malloc that never frees, can be dynamically replaced at runtime, and occasionally logs sensitive information.

    • As long as you don't statically link, you can easily replace malloc via LD_PRELOAD; many debugging libraries do exactly that (a minimal interposer is sketched below). Why is this so special in OpenSSL? (I don't know if there is some special reason, though OpenSSL is a weird one to begin with.)

      1 reply →
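      For concreteness, a minimal sketch of the LD_PRELOAD interposition mentioned above; this is the generic glibc technique, not OpenSSL-specific code. (The OpenSSL wrinkle is presumably that its allocator, CRYPTO_malloc, can also be swapped from inside the process via CRYPTO_set_mem_functions, interposer or not.)

      ```c
      /* Minimal malloc interposer (sketch). Build as a shared object and
       * preload it into a dynamically linked program:
       *   cc -shared -fPIC -o trace_malloc.so trace_malloc.c -ldl
       *   LD_PRELOAD=./trace_malloc.so ./target_program
       */
      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <unistd.h>

      static void *(*real_malloc)(size_t);

      void *malloc(size_t size)
      {
          if (real_malloc == NULL)   /* look up libc's malloc once */
              real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

          void *p = real_malloc(size);

          /* use write(2), not printf: printf may malloc and recurse */
          char line[64];
          int n = snprintf(line, sizeof line, "malloc(%zu) = %p\n", size, p);
          if (n > 0)
              write(STDERR_FILENO, line, (size_t)n);
          return p;
      }
      ```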

See also "The State of OpenSSL for pyca/cryptography":

https://news.ycombinator.com/item?id=46624352

> Finally, taking an OpenSSL public API and attempting to trace the implementation to see how it is implemented has become an exercise in self-flagellation. Being able to read the source to understand how something works is important both as part of self-improvement in software engineering, but also because as sophisticated consumers there are inevitably things about how an implementation works that aren’t documented, and reading the source gives you ground truth. The number of indirect calls, optional paths, #ifdef, and other obstacles to comprehension is astounding. We cannot overstate the extent to which just reading the OpenSSL source code has become miserable — in a way that both wasn’t true previously, and isn’t true in LibreSSL, BoringSSL, or AWS-LC.
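To illustrate the kind of obstacle that paragraph describes, here is a contrived, self-contained C sketch - every name in it is invented, and none of it is actual OpenSSL source. The public entry point does no work itself; the reader has to chase a method table plus preprocessor conditionals to find the code that actually runs.

```c
/* Contrived sketch of the "indirect calls + #ifdef" reading experience;
 * all names are invented. Compiles with or without -DNO_LEGACY. */
#include <stdio.h>

typedef struct {
    int (*update)(const char *buf, int len);
} METHOD;

#ifndef NO_LEGACY
static int legacy_update(const char *buf, int len)
{
    (void)buf;
    return len >= 0;            /* stand-in for the "old" code path */
}
#endif

static int impl_update(const char *buf, int len)
{
#ifdef NO_LEGACY                /* which body runs depends on build    */
    (void)buf;                  /* flags, not on anything in this file */
    return len > 0;
#else
    return legacy_update(buf, len);
#endif
}

static const METHOD default_method = { impl_update };

/* The "public API": one more hop through a table before any real work.
 * In real code the table is often chosen at runtime, so even grep
 * won't tell you which implementation a call will hit. */
int PUBLIC_Update(const char *buf, int len)
{
    const METHOD *m = &default_method;
    return m->update(buf, len);
}

int main(void)
{
    printf("%d\n", PUBLIC_Update("hello", 5));
    return 0;
}
```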

Also,

> OpenSSL’s CI is exceptionally flaky, and the OpenSSL project has grown to tolerate this flakiness, which masks serious bugs. OpenSSL 3.0.4 contained a critical buffer overflow in the RSA implementation on AVX-512-capable CPUs. This bug was actually caught by CI — but because the crash only occurred when the CI runner happened to have an AVX-512 CPU (not all did), the failures were apparently dismissed as flakiness. Three years later, the project still merges code with failing tests: the day we prepared our conference slides, five of ten recent commits had failing CI checks, and the day before we delivered the talk, every single commit had failing cross-compilation builds.

Even bugs caught by CI get ignored and end up in releases.

  • Wow, that is just crazy. Failing tests should be investigated in any software project, but for something like OpenSSL... Makes me think this must be a haven for state actors.

> If someone meant to engineer a codebase to hide subtle bugs which might be remotely exploitable, leak state, behave unexpectedly at runtime, or all of the above, the code would look like this.

I'd wager that if someone did that, the codebase would look better than OpenSSL's.

A codebase designed to hide bugs would look just good enough that rewriting it doesn't seem worth it.

OpenSSL is so bad that just looking at it provokes the urge to rip parts straight out and replace them. Frankly, only the fear-mongering around writing security code kept people from doing exactly that, and it took Heartbleed for the forks to finally try. A rewrite like that would also get rid of any hidden exploit.