Linux kernel security work

2 days ago (kroah.com)

Recently, things have been advancing which may finally allow a seamless virtualization experience on the Linux desktop (and match QubesOS in some security aspects).

GPU drivers supporting native contexts with Mesa support.

Wayland sharing between guest and host. It used to be somewhat sloppy (it involved protocol parsing; see sommelier & wayland-proxy-virtwl), but recently someone undertook a project to do it properly that may soon bear fruit: https://codeberg.org/drakulix/wl-cross-domain-proxy

A VMM to utilize these features: https://github.com/AsahiLinux/muvm

And a solution which ties these things together: https://git.clan.lol/clan/munix

  • This is really cool!

    I wonder if it would be possible to live-migrate apps from one machine running munix to another. You could pause, transfer, and resume the virtual machine.

this is why redhat is still relevant in 2025. there's always a need for this kind of work.

  • I think you could make a stronger case for the opposite. How does Red Hat know which commits to cherry-pick when upstream explicitly won't tell you which ones are relevant to security?

    • Because Red Hat pays the salaries of dozens (hundreds?) of kernel maintainers all over different subsystems. So they’re subject matter experts, and know exactly which ones are relevant to Red Hat.

      3 replies →

Sometimes I dream about a 100% secure OS. Maybe formal verification is the key, or Rust, I don’t know. But I would love to know that I can't be hacked.

  • > But I would love to know that I can't be hacked.

    Cool. So social engineering it is. You are your own worst enemy anyways.

    • A world in which the only way to get hacked is to be tricked would be an insane improvement over today. There are a lot of ways to address the social engineering problem with technical solutions too - FIDO2 is one example, as would be app isolation, etc.

  • The problem is that, for the overwhelming majority of use cases, the isolation features that are violated by security bugs are not being used for real isolation, but for manageability and convenience. Virtualization, physical host segregation, etc. are used to achieve greater isolation. People don't necessarily care about these flaws because they aren't actually exposed to the worst-case preconditions. So the amount of contributor attention you could get behind a "100% secure OS" might not be as large as you are hoping. Anyway, if you want to work on such things, there are various OS development efforts floating around.

    • Isolation is one thing, correctness is another. You may have architecturally perfect, hardware-assisted isolation, but triggering a bug can breach it. This is how a typical breakout from a VM or a container, or a privilege escalation, happens.

      There is a difference between a provably secure-by-design system and an implementation that is formally proven secure, like seL4.

Meanwhile it's 2026 and Greg's own website still doesn't support TLS.

  • Honestly, until encrypted client hello has widespread support, why bother? I mean, I did it for fun the first time, and now with Caddy it's not a lot of effort. But for a personal blog, a completely static site, what benefit do you get from the encryption? Anyone monitoring the traffic will see the domain in clear text anyway. And they'd see the destination IP, which I imagine in this case is one server with exactly one domain pointed at it.
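
    To make the "domain in clear text" point concrete: without ECH, the hostname the client asks for travels unencrypted in the SNI field of the TLS ClientHello, before any encryption is negotiated. Here's a minimal Python sketch (the hostname is just a placeholder) of where that cleartext name comes from:

      # Sketch only: the value passed as server_hostname is sent in cleartext
      # as the SNI extension of the ClientHello; ECH is what would hide it.
      import socket
      import ssl

      ctx = ssl.create_default_context()
      with socket.create_connection(("example.org", 443)) as raw:
          # This hostname is visible on the wire to any passive observer,
          # even though everything after the handshake is encrypted.
          with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
              print(tls.version())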

    • Men in the middle, including predatory ISPs, can not only spy but also tamper with the traffic. Injecting JavaScript and embedding ads is the best-case scenario. You don't want that.

      In addition, even without bad actors, TLS keeps random corruption from flaky infrastructure from breaking the page, and keeps those broken assets from being cached where a reload wouldn't fix them. TCP/IP alone doesn't sufficiently prevent this.

      7 replies →

    • Integrity. TLS does prevent man-in-the-middle attacks. For a personal blog that may not be important, but you _do_ get a benefit, even if the encryption is not necessary.

      1 reply →

if they really think that, they should have given up their CNA status, no?

  • Nah, "removing CNA" = "let any security researcher decide what kernel vulnerability is"

    And unfortunately, there are plenty of security researchers who are only interested in their personal CVE counts and will try to assign the highest priority to a mostly harmless bug.

> If you are forced to use encryption to report security problems, please reconsider this policy as it feels counterproductive (UK government, this means you…)

LOL

"A bug is a bug" lol.

There's a massive difference between "DoS requiring root" and "I can own you from an unprivileged user with one system call". You can say "but that DoS could have been a privesc! We don't know!", but no one is arguing otherwise. The point is that we do know the impact of some bugs is strictly a superset of the impact of others, and when those bugs give an attacker control or allow a violation of a defined security boundary, those are security bugs.

This has all been explained to Greg for decades; nothing will change, so it's best to just accept the state of things - I'm glad it's been documented clearly.

Know this - your kernel is not patched unless you run the absolute latest version. CVEs are discouraged, vuln fixes are obfuscated, and you should operate under that knowledge.

Attackers know how to watch the commit log for these hidden fixes btw, it's not that hard.

edit: Years later and I'm still rate limited so I can't reply. @dang can this be fixed? I was rate limited for posting about Go like... years ago.

To the person who replies to me:

> This is correct for a lot of different software, probably most of it. Why is this a point that needs to be made?

That's not true at all. You can know if you're patched for any software that discloses vulnerabilities by checking if your release is up to date. That is not true of Linux, by policy, hence this entire post by Greg and the talks he's given about suggesting you run rolling releases.

Sorry but it's too annoying to reply further with this rate limiting, so I'll be unable to defend my points.

  • > Know this - your kernel is not patched unless you run the absolute latest version.

    This is correct for a lot of different software, probably most of it. Why is this a point that needs to be made?

    • (Parent has already replied by editing their original comment, but I'll tack on a bit more info, from my perspective.)

      The reason this has to be emphasized is that all new code runs the risk of regressions, and in a production environment you hate regressions. Therefore, not only do you not want new features, you also don't want irrelevant bug fixes. Bug fixes, even security fixes, are not magically free of regressions of their own. So there is a valid incentive to minimize backports to production environments, and that balancing act depends on carefully investigating the impact of known bugs, one by one.

      From the fine blog post:

      > For those that are always worried “what if a bugfix causes problems”, they should remember that a fix for a known bug is better than the potential of a fix causing a future problem as future problems, when found, will be fixed then.

      A whole lot of users will disagree with this, for good, practical reasons. The expected damage of a known bug can be estimated, while an unknown regression brought in by the fix for that bug may cause far worse damage.

[flagged]

  • Why? What benefit would https provide over http when visiting a pure information (and I'm guessing statically generated) website?

    • It is a valid thing to point out, when enabling HTTPS on gkh's site would take all of 15 minutes to set up (Let's Encrypt or Cloudflare or whatever you wish).

      Things should be https by default these days. There's zero downside anymore.
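
      For reference, here's roughly what those 15 minutes look like with Caddy (domain and document root are assumptions for illustration). With a Caddyfile like this, Caddy obtains and renews a Let's Encrypt certificate automatically and serves the static site over HTTPS:

        # Caddyfile - minimal sketch for a static site with automatic HTTPS
        kroah.com {
            root * /srv/www/kroah.com
            file_server
        }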

And then all our customers will demand fixes for them in our docker images, because they’re that smart.

There must be a way to ship a docker image without a kernel, since it doesn’t get used for anything anyway.

I think the most practical reason not to flag which bugs are security bugs is to avoid helping blackhat hackers by painting a giant neon sign on them, and that alone should be more than enough.

I think all the other explanations are just doublethink. Why? If "bugs are just bugs" is really a sincere sentiment, why is there a separate disclosure process for security bugs? What does it even mean to classify a bug as a security bug during reporting if it's no different from any other bug report? Why are fixes sometimes developed in secret and embargoes invoked? I guess some bugs are more equal than others?

  • As mentioned in the article, every bug is potentially a security problem to someone.

    If you know that something is a security issue for your organization, you definitely don't want to paint a target on your back by reporting the bug publicly with an email address <your_name>@<your_org>.com. In the end, it is actually quite rare (given the size of the code base and the popularity of Linux) that a bug has a very wide security impact.

    The vast majority of security issues don't affect organizations that are serious about security (yes really, SELinux eliminates or seriously reduces the impact of the vast majority of security bugs).

    • The problem with that argument is that the reports don’t necessarily come from the organization for whom it’s an issue. Unaffiliated security researchers who aren't impacted by the issue still report it this way (e.g. Project Zero reporting issues that don’t impact Google at all).

      Also, Android uses SELinux and still has lots of kernel exploits. Believing SELinux solves the vast majority of security issues is fallacious, especially since it’s primarily about securing userspace, not the kernel itself.

      8 replies →

  • > I think the most practical reason not to flag which bugs are security bugs is to avoid helping blackhat hackers by painting a giant neon sign and that should be more than enough.

    It doesn't work. I've looked at the kernel commit log and found vulnerabilities that aren't announced/marked. Attackers know how to do this. Not announcing is a pure negative.
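
    To illustrate how low the bar is - the repository path and keyword list below are assumptions for the sketch, not anyone's real tooling - a few lines of Python over a linux-stable checkout already surface commits worth a closer look:

      # Rough sketch: list recent commits whose subjects hint at memory-safety
      # fixes, whether or not anyone labeled them as security-relevant.
      import subprocess

      REPO = "/path/to/linux-stable"  # assumed local clone of linux-stable
      KEYWORDS = ("use-after-free", "out-of-bounds", "double free", "overflow")

      log = subprocess.run(
          ["git", "-C", REPO, "log", "--oneline", "--since=3 months ago"],
          capture_output=True, text=True, check=True,
      ).stdout

      for line in log.splitlines():
          if any(k in line.lower() for k in KEYWORDS):
              print(line)  # candidate fixes an attacker would triage further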

    • Linus's argument against labeling some bugs, or even a lack of features, as security vulnerabilities is that all bugs can, with enough work and under the right circumstances, be a security vulnerability. Essentially every commit would need to be labeled as a CVE fix, and then it’s just extra work for nothing.

      1 reply →

Their view that security bugs are just normal bugs remains very immature and damaging. It is somewhat mitigated by Linux having so many eyes on it and so many developers, but a lot of problems in the past could have been avoided if they had adopted the stance the rest of the industry recognizes as correct.

  • From their perspective, on their project, with the constraints they operate under, bugs are just bugs. You're free to operationalize some other taxonomy of bugs in your organization; I certainly wouldn't run with "bugs are just bugs" in mine (security bugs are distinctive in that they're paired implicitly with adversaries).

    To complicate matters further, it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.

    • > From their perspective, on their project, with the constraints they operate under, bugs are just bugs.

      That's a pretty poor justification. Their perspective is wrong, and their constraints don't prevent them from treating security bugs differently, as they should.

      3 replies →

  • Classifying bugs as security bugs is just theater - and any company or organization that tries to classify bugs that way is immature and hasn't put any thought into it.

    First of all "security" is undefined. Second, nearly every bug can be be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?

    Or should a bug only be classified as a security bug when someone can think of a way to exploit it on the spot during triage? In that case only a small subset of your "security" bugs get classified as such.

    It is meaningless in all cases.

    • > nearly every bug can be exploited in a malicious way

      This is a bit contextually dependent. "This widget is the wrong color" is probably not a security issue in most cases, unless the widget happens to be a traffic signal, in which case it is a major safety concern.

      Even the line between "this is a bug" and "this is just a missing, incomplete, or poorly thought out feature" can get a bit blurry. At a certain point, many engineers get frustrated trying to pick apart the difference between all these ways of classifying the code they are writing and just want to get on with making the system work better.

    • > First of all "security" is undefined.

      No it isn't. Security boundaries exist and are explicit. It isn't undefined at all. Going from user X to user Y without permission to do so is an explicit vulnerability.

      The kernel has permissions boundaries. They are explicit. It is defined.

      > Second, nearly every bug can be be exploited in a malicious way,

      No they can't.

  • Linus has been very clear about avoiding the opposite, which is the OpenBSD situation: they obsess about security so much that nothing else matters to them, which is how you end up with a mature 30-year-old OS that still has a dogshit unreliable filesystem in 2026.

    To paraphrase LT, security bugs are important, but so are all the other bugs.

    • OpenBSD doesn't really stress about security so much as they made it their identity and marketing campaign - their OS lacks too many basic capabilities a security-focused OS should have.

      > To paraphrase LT, security bugs are important, but so are all the other bugs.

      Right, this is wrong, and that's the problem. Security bugs as a class are always going to be more important than certain other classes of bugs.

      3 replies →

  • This feels almost too obvious to be worth saying, but “the rest of the industry” does not in fact have a uniform shared stance on this.

  • The rest of the industry relies on following a CVE list and ticking off vulnerabilities as a way to ensure "owners" are correctly assigned risk and sign it off - because there is nothing else that "owners" could do. The whole security-through-CVE approach is broken and mostly serves to create large "security organizations" whose single purpose is annoying everyone with reports without solving any issues.