Comment by JCattheATM

5 days ago

Their view that security bugs are just normal bugs remains very immature and damaging. It is somewhat mitigated by Linux having so many eyes on it and so many developers, but a lot of problems in the past could have been avoided if they adopted the stance the rest of the industry recognizes as correct.

From their perspective, on their project, with the constraints they operate under, bugs are just bugs. You're free to operationalize some other taxonomy of bugs in your organization; I certainly wouldn't run with "bugs are just bugs" in mine (security bugs are distinctive in that they're paired implicitly with adversaries).

To complicate matters further, it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.

  • For sure, but you don't need to file CVEs for every regular bug.

    • In the context of the kernel, it’s hard to say when that’s true. It’s very easy to fix some bug that resulted in a kernel crash without considering that it could possibly be part of some complex exploit chain. Basically any bug could be considered a security bug.

  • > From their perspective, on their project, with the constraints they operate under, bugs are just bugs.

    That's a pretty poor justification. Their perspective is wrong, and their constraints don't prevent them from treating security bugs differently, as they should.

    • > almost any bugfix at the level of an operating system kernel can be a “security issue” given the issues involved (memory leaks, denial of service, information leaks, etc.)

      On the level of the Linux kernel, this does seem convincing. There is no shared user space on Linux where you know how each component will react/recover in the face of unexpected kernel behaviour, and no SKUs targeting specific use cases in which e.g. a denial of service might be a worse issue than on desktop.

      I guess CVEs provide some of this classification, but they seem to cause drama amongst kernel people.

    • You have a pretty strongly worded stance, but you don't provide an argument for it. May I suggest you detail why exactly you think their perspective is wrong, apart from "a lot of problems in the past could have been avoided"?

Classifying bugs as security bugs is just theater - and any company or organization that tries to classify bugs that way is immature and hasn't put any thought into it.

First of all, "security" is undefined. Second, nearly every bug can be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?

Or should a bug be classified as a security bug only when someone can think, on the spot during triage, of a way to exploit it? In that case only a small subset of your "security" bugs are classified as such.

It is meaningless in all cases.

  • > nearly every bug can be exploited in a malicious way

    This is a bit contextually dependent. "This widget is the wrong color" is probably not a security issue in most cases, unless the widget happens to be a traffic signal, in which case it is a major safety concern.

    Even the line between "this is a bug" and "this is just a missing, incomplete, or poorly thought out feature" can get a bit blurry. At a certain point, many engineers get frustrated trying to pick apart the difference between all these ways of classifying the code they are writing and just want to get on with making the system work better.

  • > First of all "security" is undefined.

    No it isn't. Security boundaries exist and are explicit. It isn't undefined at all. Going from user X to user Y without permission to do so is an explicit vulnerability.

    The kernel has permissions boundaries. They are explicit. It is defined.

    > Second, nearly every bug can be exploited in a malicious way,

    No they can't.
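
    The "explicit boundary" point can be made concrete with a minimal sketch (mine, not the commenter's): a process that asks the kernel to become another user either has permission or is refused with EPERM - the boundary is defined, not vague.

    ```c
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Ask the kernel to cross a permission boundary: become uid 0.
         * For an unprivileged process this fails with EPERM; that
         * refusal is the explicit, kernel-enforced security boundary.
         * (If the process already runs as root, the call succeeds.) */
        if (setuid(0) == -1) {
            printf("setuid(0) failed: %s\n", strerror(errno));
        } else {
            printf("setuid(0) succeeded (process was already privileged)\n");
        }
        return 0;
    }
    ```

    A bug that lets this transition happen without permission is unambiguously a security bug; a widget color is not. That is the sense in which "security" is defined.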

Linus has been very clear on avoiding the opposite, which is the OpenBSD situation: they obsess about security so much that nothing else matters to them, which is how you end up with a mature 30 year old OS that still has a dogshit unreliable filesystem in 2026.

To paraphrase LT, security bugs are important, but so are all the other bugs.

  • OpenBSD doesn't really stress about security so much as they made that their identity and marketing campaign - their OS is lacking too many basic capabilities a security focused OS should have.

    > To paraphrase LT, security bugs are important, but so are all the other bugs.

    Right, this is wrong, and that's the problem. Security bugs as a class are always going to be more important than certain other classes of bugs.

    • I have to disagree; it's worse than you think. OpenBSD has so many mitigations in place that your computer will probably run 50% slower than on a traditional OS. In reality you do not want to pay for 100% safety everywhere, because it is simply expensive. You might prefer to create an isolated network on which you can set up unmitigated servers - those will be able to run at full capacity.

      You can see this when compiling the Linux kernel: the mitigation options are rather numerous, and you also have to pick a timer frequency. What I'm saying is that currently Linux only lets you tune a machine to a specific requirement - it's not a spaceship on which you can change the timer frequency or dynamically shut down mitigations at runtime. In the same spirit, if you are holding keys on anything other than OpenBSD, I hope you have properly vetted what you were installing.

    • And their ‘no remote holes’ is true for a base install with no packages, not necessarily a full system.

      I think the OpenBSD approach of secure coding is outdated. The goal should have always been to take human error out of the equation as much as possible. Rust and other modern memory safe languages move things in that direction; you don't need ultra strict coding standards and a bible of compiler flags.

This feels almost too obvious to be worth saying, but “the rest of the industry” does not in fact have a uniform shared stance on this.

The rest of the industry relies on following a CVE list and ticking off vulnerabilities as a way to ensure "owners" are correctly assigned risk and sign it off - because there is nothing else that "owners" could do. The whole security-through-CVE approach is broken; it mainly serves to create large "security organizations" whose single purpose is annoying everyone with reports without solving any issues.