
Comment by bastawhiz

2 months ago

> some medical device segfaulting or OOMing in an unmanaged way

Memory safety is arguably always a security issue. But a library segfaulting when NOT dealing with arbitrary external input wouldn't be a CVE in any case; it's just a bug. An external third party would need to be able to push a crafted config to induce a segfault. I'm not sure what kind of medical device, short of a pacemaker that accepts Bluetooth connections, might fall into such a category, but I'd argue that if a crash in your dependencies' code prevents someone's heart from beating properly, relying on CVEs to understand the safety of your system is on you.

Should excessive memory allocation in OpenCV for certain visual patterns be a CVE because someone might have built an autonomous vehicle with it that could OOM and (literally) crash? Just because you put the code in the critical path of a sensitive application doesn't mean the code has a vulnerability.
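
For what it's worth, "OOMing in a managed way" is something the application can arrange regardless of what the dependency does. A rough sketch of one approach (Unix-only; zlib here is purely a stand-in for "a dependency whose memory use depends on its input", not a claim about OpenCV):

    import multiprocessing
    import resource  # Unix-only
    import zlib

    MEM_LIMIT_BYTES = 512 * 1024 * 1024  # hard address-space cap for the worker

    def _worker(payload, out):
        # Cap this process's address space: a runaway allocation inside the
        # dependency now fails here, in the worker, instead of OOMing the host.
        resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_BYTES, MEM_LIMIT_BYTES))
        try:
            out.put(("ok", zlib.decompress(payload)))
        except (MemoryError, zlib.error) as exc:
            out.put(("failed", repr(exc)))

    def handle_untrusted(payload, timeout=5.0):
        out = multiprocessing.Queue()
        p = multiprocessing.Process(target=_worker, args=(payload, out))
        p.start()
        p.join(timeout)
        if p.is_alive():
            p.terminate()  # hung or thrashing worker: reclaim it and move on
            return ("timeout", None)
        return out.get() if not out.empty() else ("crashed", None)

    if __name__ == "__main__":
        print(handle_untrusted(zlib.compress(b"hello world" * 5)))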

> 'Availability' is a pretty common security concern for maybe 40 years now from an industry view.

Of course! It's a security problem for me in my usage of a library because I made the failure mode of the library have security implications. I don't want my service to go offline, but that doesn't mean I'm entitled to have my application's exposure to availability-affecting failure modes treated on equal footing with memory corruption, an RCE, or a permissions bypass.

I agree on the first part, but it's useful to be more formal about the second --

1. Agreed it's totally fine for a system to have some bugs or CVEs, and likewise fine for OSS maintainers to not feel compelled to address them. If someone cares, they can contribute.

2. Conversely, it's very useful to divorce an application's use case from the formal understanding of whether third-party components are 'secure', because that's how we stand on the shoulders of giants. First, it lets us build composable systems: if we use CIA parts, with some common definition of CIA, we get to carry that through to bigger parts and applications. Second, on a formal basis, 10-20 years after this stuff was understood to be useful, the program analysis community further realized we can define these properties mathematically in many useful ways, where different definitions yield different useful guarantees and let us provably verify them rather than just test for them.

So when I say CIA nowadays, I'm actually thinking both mathematically, irrespective of downstream application, and from the choose-your-own-compliance view. If some library is C+I but not A... that can be fine for both the library and the downstream apps, but it's useful to have objective definitions. Likewise, something can have gradations of all this -- like maybe it preserves confidentiality under typical threat models & definitions, but not under something like "quantitative information flow" models: also ok, but it's good for everyone to know what the heck these all mean if they're going to make security decisions based on them.
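
For concreteness, one classic way confidentiality gets a mathematical definition is non-interference -- this is a standard textbook formulation, not necessarily the exact property anyone here has in mind. Partition the system's inputs into high (secret) and low (public); then the low-observable output must not depend on the high input:

    % Non-interference sketch: S = the system, h = high/secret input,
    % l = low/public input, out_L = the low-observable part of the output.
    \forall h_1,\, h_2,\, l:\quad \mathrm{out}_L\bigl(S(h_1, l)\bigr) = \mathrm{out}_L\bigl(S(h_2, l)\bigr)

Definitions in this style are what make the compositional story work: for many formulations, if every component satisfies the property, so does their composition, and the "quantitative information flow" variants relax "must not depend" to "leaks at most so many bits".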

  • > So when I say CIA nowadays, I'm actually thinking both mathematically irrespective of downstream application, and from the choose-your-own-compliance view.

    That doesn't help anyone, because it is far too primitive.

    A medical device might have a deadly availability vulnerability. That in itself doesn't tell you anything about the actual severity of the vulnerability, because the exploit path might need "the same physical access as pulling the power plug". So not actually a problem.

    Or the fix might need a long downtime which harms a number of patients. So maybe a problem, but the cure would be worse than the disease.

    Or the vulnerability involves sending "I, Eve Il. Attacker, identified by badge number 666, do want to kill this patient" to the device. So maybe not a problem because an attacker will be caught and punished for murder, because the intent was clear.

    • We're talking about different things. I agree CVE ratings and risk/severity/etc levels in general for third party libraries are awkward. I don't have a solution there. That does not mean we should stop reporting and tracking C+I+A violations - they're neutral, specific, and useful.

      Risk, severity, etc. are careful terms that are typically defined contextually, relative to the application... yet CVEs do want some sort of prioritization level reported too for usability reasons, so it feels shoe-horned. Those words are useful in an operational context where a team can prioritize based on them, and agreed, a third-party rating must be reinterpreted for the application's rating. CVE ratings are an area where it seems "something is better than nothing", and I don't think about it enough to have an opinion on what would be better.

      Conversely, saying a library has a public method with an information flow leak is a statement that we can compositionally track (e.g., dataflow analysis; a toy sketch at the end of this comment). It's useful info that lets us stand on the shoulders of giants.

      FWIW, in an age of LLMs, both kinds of information will be getting even more accessible and practical for many more people. I can imagine flipping my view on risk/severity to being more useful as the LLM can do the compositional reasoning in places the automated symbolic analyzers cannot.
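
      To make "compositionally track" concrete, here's the toy sketch mentioned above. The function names and the summary format are invented for illustration; real taint/dataflow analyzers use much richer summaries, but the idea of composing neutral per-function facts into an application-level answer is the same:

          # Hypothetical per-function summaries: which parameters flow into the
          # return value, and which parameters reach a sensitive sink (e.g. a log).
          LIB_SUMMARIES = {
              # library.format_user: argument 0 flows into the return value
              "library.format_user": {"returns_from_args": {0}, "args_to_sink": set()},
              # library.log_debug: argument 0 is written to an external log (a sink)
              "library.log_debug": {"returns_from_args": set(), "args_to_sink": {0}},
          }

          def leaks_secret(call_chain, tainted_arg=0):
              """Compose summaries along a call chain: does a tainted argument
              eventually reach a sink?"""
              tainted = {tainted_arg}
              for fn in call_chain:
                  summary = LIB_SUMMARIES[fn]
                  if tainted & summary["args_to_sink"]:
                      return True   # secret data reaches a sink inside this callee
                  # In this toy model, taint continues only if it flows into the
                  # return value, which then becomes argument 0 of the next call.
                  tainted = {0} if tainted & summary["returns_from_args"] else set()
              return False

          print(leaks_secret(["library.format_user", "library.log_debug"]))   # True
          print(leaks_secret(["library.log_debug"], tainted_arg=1))           # False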

Yes, it should. Software will eventually be subject to liability, like in any other industry that has been around for centuries.

  • Who should be liable? The person who sells you the software? Or the person who put some code on GitHub that the first guy used?

    • In principle, the guy who sells the software; that's why SBOMs (Software Bills of Materials) are now a thing in security assessments.
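
      A rough sketch of what one records, assuming the CycloneDX JSON format (the top-level field names follow the public CycloneDX spec; the single component is just an example, not a complete SBOM):

          import json

          sbom = {
              "bomFormat": "CycloneDX",
              "specVersion": "1.5",
              "components": [
                  {
                      "type": "library",
                      "name": "opencv-python",  # example third-party dependency
                      "version": "4.9.0.80",
                      "purl": "pkg:pypi/opencv-python@4.9.0.80",
                  }
              ],
          }

          # Shipped alongside the product so an assessor can map published CVEs in
          # third-party components back to the vendor who actually sold the software.
          print(json.dumps(sbom, indent=2))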