Greg K-H: "Writing new code in Rust is a win for all of us"

3 days ago (lore.kernel.org)

Linus response here seems relevant to this context: https://lore.kernel.org/rust-for-linux/CAHk-=wgLbz1Bm8QhmJ4d...

  • Linus's reply is perfect in tone and hopefully will settle this issue.

    He is forceful in making his points, but respectful in the way he addressed Christoph's concerns.

    This gives me great hope that the Linux maintainer community and contributors using Rust will be able to continue working together, find more common ground, and have more success.

    • The response addressed Christoph's concerns _in word_.

      According to the policy, the Rust folks should fix the Rust bindings when C changes break them. The C maintainers don't need to care about Rust at all.

      In practice, though, I would expect this needs lots of coordination. A PR with C-only changes that breaks the whole build (because the Rust bindings are broken) is unlikely to be merged to mainline.

      Linus can reiterate his policy, but the issue can't be resolved unless some Rust developers keep up their persistent work and build up their reputation.

      30 replies →

    • Linus is one of the few people who can forcefully argue the case for moderation, and I recognized some of the lines I've used myself to shift really contentious meetings back into place. There's the "shot-and-chaser" technique: (a) this is what needs to happen now for the conversation...

      "I respect you technically, and I like working with you[...] there needs to be people who just stand up to me and tell me I'm full of shit[...] But now I'm calling you out on YOURS."

      ...and (b) this is me recognizing that me taking charge of a conversation is a different thing than me taking control of your decisions:

      "And no, I don't actually think it needs to be all that black-and-white."

      (Of course Linus has changed over time for the better, he's recognized that, and I've learned a lot with him and have made amends with old colleagues.)

      1 reply →

  • The whole "I respect you technically, and I like working with you." in the middle of being firm and typing in caps is such a vibe shift from the Linus of a decade ago. We love to see it!

    • My impression is that this has always been part of the core of his character, but he had to learn to put it into writing.

      Contrast this to people who are good at producing the appearance of an upstanding character when it suits them, but are quite vindictive and poisonous behind closed doors when it doesn't.

      1 reply →

  • I have always thought Linus may not like Rust, or at least is not pro-Rust, and that the only reason Rust is marching into the kernel is that most of his close lieutenants are extremely pro-Rust. Hence this Rust experiment.

    But looking at all the recent responses, it seems Rusted Linux is inevitable. He is pro-Rust.

  • Boy that response would've been helpful like a week ago, before several key Rust maintainers resigned in protest due to Linus's radio silence on the matter.

  • Huh, thanks. Really good to know where Linus stands here. Seems to me like Linus is completely okay with the introduction of Rust to the kernel and will not allow maintainers to block its adoption.

    Really good sign. Makes me hopeful about the future of this increasingly large kernel

  • This is indeed an excellent response and will hopefully settle the issues. Aside from the ones already settled by Linus's previous email, such as whether social media brigading campaigns are a valid part of the kernel development process.

  • Honestly I was waiting for a reply from Linux like this to put Hellwig in his place.

    > The fact is, the pull request you objected to DID NOT TOUCH THE DMA LAYER AT ALL.

    > It was literally just another user of it, in a completely separate subdirectory, that didn't change the code you maintain in _any_ way, shape, or form.

    > I find it distressing that you are complaining about new users of your code, and then you keep bringing up these kinds of complete garbage arguments.

    Finally. If this had come sooner, maybe we wouldn't have lost talented contributors to the kernel.

    • Ah I can't believe I misspelled Linus as Linux, seems like it should happen often enough but honestly I think I rarely make that typo.

      1 reply →

      > Finally. If this had come sooner, maybe we wouldn't have lost talented contributors to the kernel.

      I feel that the departure of the lead R4L developer was a compromise, deliberately made so that Hellwig doesn't feel like a complete loser. This sounds bad, of course.

      5 replies →

The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain.

Possibly also that a significant portion of the suggested gain may be achievable via other means.

i.e. bounds checking and some simple (RAII-like) allocation/freeing simplifications may be possible without Rust, and those account for (according to the various papers arguing for Rust / memory safety elsewhere) the larger proportion of the safety bugs which Rust catches.

Possibly just making clang the required compiler, and adopting this extension, may give an easier bang-for-buck: https://clang.llvm.org/docs/BoundsSafety.html

Over and above that, there seem to be various complaints about the readability and aesthetics of Rust code, and a desire not to be subjected to such.

  • > Possibly also that a significant portion of the suggested gain may be achievable via other means.

    Things like that have been said many times, even before Rust came around. You can do static analysis, you can put in asserts, you can use this restricted C dialect, you can...

    But this never gets wider usage. Even if the tools are there, people are going to ignore them. https://en.wikipedia.org/wiki/Cyclone_(programming_language) started 23 years ago...

    It took us decades to get to non executable stack and W^X and there are still occasional issues with that.

  • I think it's because C devs often think that they never make a mistake, so they don't see the value Rust brings.

    I had an argument about rust with a freebsd developer who had the same "I never make a mistake" attitude. I made a PR to his project that fixed bugs that wouldn't have been possible in Rust to begin with. Not out of pettiness, but because his library was crashing my application. In fact, he tried to blame my rust wrapper for it when I raised an issue.

    • I have definitely done such things out of pettiness. Sometimes people just attract your attention as deserving of an attempt to humble them. I hope people will humble me as well when my vociferousness outstrips my talent. It's good to be sent directly back to home every now and then.

      2 replies →

  • > The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain.

    Read the above email. Greg KH is pretty certain it is worth the gain.

    > Possibly also that a significant portion of the suggested gain may be achievable via other means.

    I think this is a valid POV, if someone shows up and does the work. And I don't mean 3 years ago. I mean -- now is as good a time as any to fix C code, right? If you have some big fixes, it's not like the market won't reward you for them.

    It's very, very tempting to think there is some other putatively simpler solution on the horizon, but we haven't seen one.

    > Over and above that, there seem to be various complaints about the readability and aesthetics of Rust code, and a desire not to be subjected to such.

    No accounting for taste, but I don't think C is beautiful! Rust feels very understandable and explicit to my eye, whereas C feels very implicit and sometimes inscrutable.

    • > Read the above email. Greg KH is pretty certain it is worth the gain.

      I don't think GP or anyone is under the impression that Greg KH thinks otherwise. He's not the "some folks" referred to here.

      1 reply →

  • > The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain. [..] Possibly also that a significant portion of the suggested gain may be achievable via other means.

    Sure, but opinions are always going to differ on stuff like this. Decision-making for the Linux kernel does not require unanimous consent, and that's a good thing. Certainly this Rust push hasn't been handled perfectly, by any means, but I think they at least have a decent plan in place to make sure maintainers who don't want to touch Rust don't have to, and those who do can have a say in how the Rust side of their subsystems look.

    I agree with the people who don't believe you can get Rust-like guarantees using C or C++. C is just never going to give you that, ever, by design. C++ maybe will, someday, years or decades from now, but you'll always have the problem of defining your "safe subset" and ensuring that everyone sticks to it. Rust is of course not a silver bullet, but it has some properties that mean you just can't write certain kind of bugs in safe Rust and get the compiler to accept it. That's incredibly useful, and you can't get that from C or C++ today, and possibly not ever.

    Yes, there are tools that exist for C to do formal verification, but for whatever reason, no one wants to use them. A tool that people don't want to use might as well not exist.

    But ultimately my or your opinion on what C and C++ can or can't deliver is irrelevant. If people like Torvalds and Kroah-Hartman think Rust is a better bet than C/C++-based options, then that's what matters.

  • If you look at the CVE lists, about 70-80% of all C memory bugs are related to OOB reads and writes. Additionally, like Rust, -fbounds-safety can remove redundant checks if it can determine the bounds. My question is how likely it is to be adopted in the kernel (likely high).

    I will need to read their conversations more to see if it's the underlying fear, but formalization makes refactoring hard and code brittle (i.e. having to start from scratch on a formal proof after substantially changing a subsystem). One of the key benefits of C and the kernel has been their malleability to new hardware and requirements.

    • > My question is how likely can it be adopted in the kernel (likely high).

      My guess is, it cannot. The way -fbounds-safety works, as far as I understand, is that it aborts the program in case of an out-of-bounds read or write. This is similar to a Rust panic.

      Aborting or panicking the kernel is absolutely not a better alternative to simply allowing the read/write to happen, even if it results in a memory vulnerability.

      Turning people's computer off whenever a driver stumbles on a bug is not acceptable. Most people cannot debug a kernel panic, and won't even have a way to see it.

      Rust can side-step this with its `.get()` (which returns an Option, which can be converted to an error value), and with iterators, which often bypass the need for indexing in the first place.

      Unfortunately, Rust can still panic in case of a normal indexing operation that does OOB access; my guess is that the index operation will quickly be fixed to be completely disallowed in the kernel as soon as the first such bug hits production servers and desktop PCs.

      Alternatively, it might be changed to always do buf[i % buf.size()], so that it gives the wrong answer, but stays within bounds (making it similar to other logic errors, as opposed to a memory corruption error).
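
      To make that concrete, here is a minimal userspace Rust sketch (not actual kernel code; the error string merely stands in for a real error value such as EINVAL) contrasting the fallible `.get()` style with plain indexing:

        // Minimal userspace sketch (not kernel code) of the fallible lookup style.
        fn checked_lookup(buf: &[u8], i: usize) -> Result<u8, &'static str> {
            // `get` returns an Option instead of panicking on out-of-bounds access,
            // so the caller can turn it into an error value to propagate upward.
            buf.get(i).copied().ok_or("index out of bounds")
        }

        fn main() {
            let buf = [1u8, 2, 3];

            // Fallible path: no panic, just an Err the caller must handle.
            assert_eq!(checked_lookup(&buf, 1), Ok(2));
            assert!(checked_lookup(&buf, 10).is_err());

            // Iterators sidestep manual indexing entirely.
            let sum: u32 = buf.iter().map(|&b| u32::from(b)).sum();
            assert_eq!(sum, 6);

            // By contrast, plain indexing panics on out-of-bounds access:
            // let _ = buf[10]; // would panic at runtime if uncommented
        }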

      14 replies →

  • the problem is that Rust sucks the air out of the programming ecosystem because its proponents throw down the safety hammer, and research on other safe alternatives is slow. we do have an alternative low level memory safe language (Ada) but for whatever reason that's a nonstarter... there's no compelling reason that rust has to be the only way to achieve memory safety (much less in the OS domain where for example you don't have malloc/free so rust's default heap allocation can't be trivially used).

    it might do to wait until some other memory safe alternative appears.

    • Linus doesn't like Ada much, and the talent pool is FAR smaller and also FAR older on average. The compelling reason to use Rust over other languages is precisely that it hit escape velocity where others failed to do so, and it did that partially by being accessible to less senior programmers.

      And I don't understand how you can go from opining that Rust shouldn't be the only other option, to opining that they should have waited before supporting Rust. That doesn't make sense unless you just have a particular animus towards Rust.

      4 replies →

    • > there's no compelling reason that rust has to be the only way to achieve memory safety

      I don't think anyone is saying that Rust is the only way to achieve that. It is a way to achieve it, and it's a way that enough people are interested in working on in the context of the Linux kernel.

      Ada just doesn't have enough developer momentum and community around it to be suitable here. And even if it did, you still have to pick one of the available choices. Much of that decision certainly is based on technical merits, but there's still enough weight put toward personal preference and more "squishy" measures. And that's fine! We're humans, and we don't make decisions solely based on logic.

      > it might do to wait until some other memory safe alternative appears.

      Perhaps, but maybe people recognize that it's already late to start making something as critical as the Linux kernel more safe from memory safety bugs, and waiting longer will only exacerbate the problem. Sometimes you need to work with what you have today, not what you hope materializes in the future.

    • > research on other safe alternatives is slow

      It's slow because the potential benefits are slim and the costs of doing that research are high. The simple reality is that there just isn't enough funding going into that research to make it happen faster.

      > there's no compelling reason that rust has to be the only way to achieve memory safety

      The compelling reason is that it's the only way that has worked, that has reached a critical mass of talent and tooling availability that makes it suitable for use in Linux. There is no good Rust alternative waiting in the wings, not even in the kind of early-hype state where Rust was 15 years ago (Zig's safety properties are too weak), and we shouldn't let an imaginary better future stop us from making improvements in the present.

      > it might do to wait until some other memory safe alternative appears.

      That would mean waiting at least 10 years, and how many avoidable CVEs would you be subjecting every Linux user to in the meantime?

      3 replies →

    • > for whatever reason that's a nonstarter... there's no compelling reason

      Before rejecting a reason you at least have to know what it is!

      23 replies →

    • After all the Ada threads last week, I read their pdf @ Adacore's site (the Ada for Java/C++ Programmers version), and there were a lot of surprises.

      A few that I found: logical operators do not short-circuit (so both sides of an "or" will evaluate even if the left side is true); it has two kinds of subprograms (procedures and functions; the former return no value while the latter return a value); and you can't fall through on the Ada equivalent of a switch statement (case..when).

      There are a few other oddities in there; no multiple inheritance (but it offers interfaces, so this type of design could just use composition).

      I only perused the SPARK pdf (sorry, the first was 75 pages; I wasn't reading another 150), but it seemed to have several restrictions on working with bare memory.

      On the plus side, Ada has explicit invariants that must be true on function entry & exit (can be violated within), pre- and post- conditions for subprograms, which can catch problems during the editing phase, and it offers sum types and product types.

      Another downside is it's wordy. I won't go so far as to say verbose, but compared to a language like Rust, or even the C-like languages, there's not much shorthand.

      It has a lot of the features we consider modern, but it doesn't look modern.

      5 replies →

  • > the readability and aesthetics of Rust code

    I've been writing C/C++ code for the last 16 years and I think a lot of mental gymnastics is required in order to call C "more readable" than Rust. C syntax is only "logical" and "readable" because people have been writing it for the last 60 years; most of it is literally random hacks made due to constraints ({ instead of [ because they thought arrays would be more common than blocks, types in front of variables because C is just B with types, wonky pointer syntax, ...). It's like claiming that English spelling is "rational" and "obvious" only because it's the only language you know, IMHO.

    Rust sure has more features, but it is also way more regular and less quirky. And it has real macros, instead of insane text replacement; every C project over 10k lines I've worked on has ALWAYS had some insane macro magic. The Linux kernel itself is full of function-like macros that do all sorts of magic due to C not having any other way to run code at compile time.
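
    As a small illustration of the "real macros" point (a generic sketch, not kernel code): a Rust macro is hygienic and can bind each argument exactly once, so the classic C text-substitution traps (double evaluation, operator-precedence surprises) do not apply.

      // A MAX-style macro. Each argument expression is evaluated exactly once,
      // unlike `#define MAX(a,b) ((a) > (b) ? (a) : (b))` in C.
      macro_rules! max2 {
          ($a:expr, $b:expr) => {{
              let a = $a;
              let b = $b;
              if a > b { a } else { b }
          }};
      }

      fn main() {
          let mut calls = 0;
          let mut next = || { calls += 1; calls };

          let m = max2!(next(), 10);
          assert_eq!(m, 10);
          assert_eq!(calls, 1); // the side-effecting argument ran only once
      }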

  • > The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain.

    You're correct that there is an honest-to-god split of opinion among smart people who can't find consensus. So it's time for Linus to step up and mandate: "discussion done, we are doing X". No serious organization of humans can survive without a way to break a deadlock, and it seems long past the time this discussion should have wrapped up with Linus making a decree (or whatever alternative voting mechanism they want to use).

Regarding the Linux development process: How do Linux maintainers / contributors have time to read these long threads of long posts? Just this one discussion looks like it would take hours to read and these are busy developers.

How does it work? Are there only a few threads that they read? Which ones?

  • Not sure if you've had this experience yet, but the one thing I've learned about being an involved maintainer of a sizeable open source project is that it's mostly about communicating.

    You'll be talking to a lot of people and making sure that everyone is on the same page, and that's what's going on here, hopefully. If you just shut up and write code all day, you probably aren't gonna get there and there will be conflict, especially if other people are touching your systems and aren't expecting your changes.

    • In the 20 years that I've been working on sizeable closed source projects, it's also mostly about communicating. Even if the team is small, it's mostly about communicating. Occasionally some developers don't want to communicate, and prefer to shut up and write code all day, like you said. That usually creates more conflict due to different expectations, regardless of how brilliant you are.

      1 reply →

  • tooling and practice.

    First you use a tool designed around following mailing lists: text-based mail readers. They represent the threads in a compact form, let you collapse threads and have them resurface only if new content shows up. They also allow pattern-based tagging and highlighting of content "relevant to you", senders of interest, direct mentions of your name/email address, ... and minor UX niceties like hiding the duplicate subject in responses (Re: yadda <- we know that, it's at the top of the thread already)

    such tool ergonomics allow you to focus on what's relevant to you

    Hint: Outlook doesn't cut it.

    And then with the right tool you practice, you learn how to skim the thread view like you maybe learned to skim the newspaper for relevant content.

    and with the right tool and practice in place you can readily skim mailing lists during the day when you feel like it and can easily catch up after vacation.

  • Writing code in large teams is maybe 20% of time spent working, guesstimating on average. There are great engineers writing absolutely nothing mergeable for weeks.

  • This hacker news post has more comments than the mailing list thread that inspired it. A roughly comparable amount of text. It’s a lot, but certainly doable.

    That + having a couple decades to refine your email client setup goes a long way.

  • I imagine this works just like it works for anyone: they prioritize what's important to them, and if they don't get to the things lower on their priority list, that's just life.

    I don't think it would be necessary for most kernel developers to read that entire email thread. I feel like I could get through the entire thing in a half hour by ruthlessly skimming and skipping replies that don't tell me anything I care about, and only reading in full and in detail the handful or two of emails that really interest me.

    And as a sibling says, a huge part of software development, especially when you're working with a large community of distributed developers, is communication. I expect most maintainers spend the majority of their time on communication, and less on writing code. And a lot of the contributors who write a lot of kernel code probably don't care too much about a lot of the organizational/policy-type discussion that goes on.

  • One possibility is that they only use a small amount of time, mental effort, and context size to go over all of the messages at a relatively shallow level. If there is anything that lets them send the ball back into somebody else's court without fully digesting a message or thread, they will go for it. That other person will then be responsible for the effort of replying at all, thinking about the subject matter, accounting for other peoples' messages, and composing the reply message itself. They also probably further minimize reading intellectual subthreads, and instead keep practical, concrete items at the top of their stack.

    Overall, this means that they will sometimes err on the side of being deaf or dismissive.

  • First of all, this is what? A month or two of posts? Spreading the reading time out over that makes the cost almost go away. You can do it while drinking coffee or whatever, and when reading in better formats (say, in your inbox), you will see what a mail is about and then skip it if you are not interested in this particular tangent.

    But also, don't expect this kind of flame war to be a regular thing. Most discussions are a lot smaller and involve few people.

    • > First of all, this is what? A month or two of posts?

      It's 3 days of posts, according to the dates in the outline structure at the bottom.

In my honest opinion, it's not a good idea to mix two programming languages into the same monolithic codebase side by side. It would be less problematic if they were used for different purposes or layers, like frontend and backend. But we know it still creates unpleasant friction when you have to work on both sides on your own. Otherwise, it creates technical AND communication friction if the C devs and Rust devs work separately. As someone who works with embedded systems at times, I can imagine the pain of having to set up two toolchains (with vastly different build infra beasts like GNU Make and Cargo) and the prolonged build times of CI and edit-compile-run debugging cycles, given the notoriously slow compile times of the Rust/LLVM compiler.

  • >It would be less problematic if used for different purposes or layers, like frontend and backend.

    Good news! At the present moment, Rust is only being used for drivers. Who knows if that will change eventually, but it's already the case that the use case is contained.

  • Greg K-H's email acknowledges that mixed-language projects are difficult to deal with. But he makes a good mitigating point: they are all Linux kernel maintainers and developers, and they all already work on very hard things. They can handle this.

  • The Rust in the kernel doesn't use Cargo, does it? (Genuine question - someone please confirm.)

    That being said, it depends on how well the two languages integrate with each other - I think.

    Some of the best programming experience I had so far was when using Qt C++ with QML for the UI. The separation of concerns was so good, QML was really well suited for what it was designed for - representing the Ui state graph and scripting their interactions etc ... And it had a specific role to fill.

    Rust in the kernel - does it have any specific places where it would fit well?

    • Yes, cargo is involved. R4L currently works by invoking kbuild to determine the CFLAGS, then passes them to bindgen to generate the rust kernel bindings. It then invokes cargo under the hood, which uses the bindings and the crate to generate a static lib that the rest of the kernel build system can deal with.

      1 reply →

  • > the prolonged build time of CI and edit-compile-run debugging cycles

    Does Linux kernel development have hot reload on the C side as a comparison?

    • It used to, until Oracle bought it out. It is not usable for changes to the ABI though; only kernel functions. The use case was hot-patching a running kernel to fix a security vulnerability in e.g. a device driver, but it could be used to modify almost any function.

      https://en.wikipedia.org/wiki/Ksplice

  • > It would be less problematic if used for different purposes or layers, like frontend and backend.

    Wouldn't a microkernel architecture shine here? Drivers could, presumably, reside in their own projects and therefore be written in any language: Rust, Zig, Nim, D, whatever.

I do not understand how this is supposed to work in practice. If there are "Rust bindings" then the kernel cannot have a freely evolving internal ABI, and the project is doomed to effectively split into the "C" core side and the "Rust" side which is more client oriented. Maybe it will be a net win for Linux for finally stabilizing the internal APIs, and even open the door to other languages and out-of-tree modules. On the other hand, if there are no "Rust bindings" then Rust brings very little to the table.

  • > I do not understand how this is supposed to work in practice. If there are "Rust bindings" then the kernel cannot have a freely evolving internal ABI...

    Perhaps I misunderstand your argument, but it sounds like: "Why have interfaces at all?"

    The Rust bindings aren't guaranteed to be stable, just as the internal APIs aren't guaranteed to be stable.

  • ABI is irrelevant. Only external APIs/ABIs are frozen, kernel-internal APIs have always been allowed to change from release to release. And Rust is only used for kernel-internal code like drivers. There's no stable driver API for linux.

    • External kernel APIs/ABIs are not frozen unless by external you only mean user space (eg externally loaded kernel modules try to keep up with dkms but source level changes require updates to the module source, often having to maintain multiple versions in one codebase with ifdef’s to select different kernel versions)

      1 reply →

  • I don't understand why rust bindings imply a freezing (or chilling) of the ABI—surely rust is bound by roughly the same constraints C is, being fully ABI-compatible in terms of consuming and being consumed. Is this commentary on how Rust is essentially, inherently more committed to backwards compatibility, or is this commentary on the fact that two languages will necessarily bring constraints that retard the ability to make breaking changes?

  • From what I have read, the intent seems to be that a C maintainer can make changes that break the Rust build. It’s then up to the Rust binding maintainer to fix the Rust build, if the C maintainer does not want to deal with Rust.

    The C maintainer might also take patches to the C code from the Rust maintainer if they are suitable.

    This puts a lot of work on the Rust maintainers to keep the Rust build working and requires that they have sufficient testing and CI to keep on top of failures. Time will tell if that burden is sustainable.

    • > Time will tell if that burden is sustainable.

      Most likely this burden will also change over time. Early in the experiment it makes sense to put most of the burden on the experimenters and avoid it from "infecting" the whole project.

      But if the experiment is successful then it makes sense to spread the workload in the way that minimizes overall effort.

      1 reply →

But for new code / drivers, writing them in Rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? -- greg k-h

  • The question is to what extent this is true - given that Rust programmers also make stupid mistakes (e.g. https://rustsec.org/advisories/RUSTSEC-2023-0080.html) that look exactly like C bugs. Not that I think Rust does not have advantages in terms of safety, but probably not as much as some people seem to believe when making such arguments. The other question is at what cost it comes.

    • Granted, there are plenty of people who don't understand these issue very well who think "Rust = no bugs". Of course they're wrong. But that said, this CVE is an interesting example of just how high the bar is that Rust sets for correctness/security. The bug is that, if you pass 18446744073709551616 as the width argument to this array transpose function, you get undefined behavior. It's not clear whether any application has ever actually done this in practice; the CVE is only about how it's possible to do this. In most C libraries, on the other hand, UB for outrageous size/index parameters would be totally normal, not even a bug, much less a CVE. If an application screwed it up, maybe you'd open a CVE against the application.
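
      To illustrate the class of bug under discussion, here is a hypothetical sketch (not the code from the affected crate): an unchecked size computation feeding an `unsafe` block is enough to reproduce a very C-like out-of-bounds write.

        // Hypothetical sketch only; NOT the actual code from the advisory above.
        // The point: `unsafe` lets an unchecked size computation become an
        // out-of-bounds write, exactly like in C.
        fn copy_with_bad_len(input: &[u8], width: usize, height: usize) -> Vec<u8> {
            // BUG (deliberate, for illustration): `width * height` can wrap
            // around, so the allocation below may be far smaller than `input`.
            let len = width.wrapping_mul(height);
            let mut out = vec![0u8; len];
            for (i, b) in input.iter().enumerate() {
                // The implicit safety claim is wrong whenever `len` wrapped:
                // this writes past the end of `out`.
                unsafe { *out.as_mut_ptr().add(i) = *b };
            }
            out
        }

      In safe Rust the same mistake is much harder to write: `width.checked_mul(height)` forces the overflow case to be handled, and indexing with `out[i] = *b` would panic at the boundary instead of silently corrupting memory.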

      1 reply →

    • I'd argue that he addresses this with the two paragraphs immediately preceding the one quoted above:

      > As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years (well hopefully all of them end up in the stable trees, we do miss some at times when maintainers/developers forget to mark them as bugfixes), and who sees EVERY kernel CVE issued, I think I can speak on this topic.

      > The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)

      > I'm all for moving our C codebase toward making these types of problems impossible to hit, the work that Kees and Gustavo and others are doing here is wonderful and totally needed, we have 30 million lines of C code that isn't going anywhere any year soon. That's a worthy effort and is not going to stop and should not stop no matter what.

      > But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this?

    • This is a false tradeoff. The big win for Rust in the kernel is for new code. Bug density and impact is highest in newer code (it may, according to recent research, actually decay exponentially). There's no serious suggestion that existing code get forklifted out for new Rust code, only that the project create a streamlined affordance for getting new drivers into the kernel in Rust rather than C.

    • Rust doesn't claim to protect you from integer overflow bugs, so I'm not sure what you're trying to prove by linking to that security advisory.

      But it does protect against use-after-free and illegal memory access (and makes memory leaks far less likely, even if it can't rule them out entirely). C does not.

      > The other question is at what cost it comes.

      I think I trust the kernel developers to decide for themselves if that cost is worth it. They seem to have determined it is, or at least worth it enough to keep the experiment running for now.

      Greg K-H even brings this up directly in the linked email, pointing out that he has seen a lot of bugs and security issues in the kernel (all of them that have been found, when it comes to security issues), and knows how many of them are just not possible to write in (safe?) Rust, and believes that any pain due to adopting Rust is far outweighed by these benefits.

      1 reply →

    • Are these people in the room with us right now? Come on, man. This is a horrible argument to make. Rust has these problems happen exceptionally rarely, in clearly marked places, and when they get fixed they strengthen all the code that relies on it. In C you have these bugs happen every hundred lines of code. It’s not even worth comparing. This is the programming equivalent of bringing up shark attacks versus car crashes.

      1 reply →

    • If I understand correctly, this particular issue that you've linked to can only trigger a buffer overflow because the implementation of transpose() is written in unsafe Rust.

      1 reply →

It's really all opinions about what is better or worse, but I do respect the sentiment that there is some boundary: on one side of it, Rust makes a lot of sense, and on the other side, Rust does not work at all (managing global mutable resources). It weirds me out a bit that there are even such discussions going on in projects like this. It seems obvious and proven at this point, and if not that, then at least it should have been obvious for a long time, that if you program within some large codebase or ecosystem, you are not the only voice, and you need to learn to collaborate with people who have different views than you and make it work.

I really don't like Rust, hence instead of wanting to contribute to projects which will inadvertently lead to more and more Rust code being brought in, I start my own projects, where I can be the only voice of reason and have my joy of making things segfault :>... It's quite simple. If, like me, you are stubborn and inflexible, you are a lone wolf. Accept it and move on to be happy :) rather than trying to piss against the wind of change.

  • That's true. I often want to just make something cool and I don't want someone turning it into a research project basically because they like compilers.

> The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C .... Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes.

What's the reach here of linters/address sanitizers/valgrind?

Or a linter written specifically for the linux kernel? Require (error-path) tests? It feels excessive to plug another language if these are the main arguments? Are there any other arguments for using Rust?

And even without any extra tools to guard against common mistakes, how much effort is solving those bug fixes anyway? Is it an order of magnitude larger than the cognitive load of learning a (not so easy!) language and context-switching continuously between them?
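
For anyone unsure what "error path cleanups" means concretely, here is a minimal Rust sketch (the `Resource` type is a hypothetical stand-in for memory, a lock, or a device reference) of how RAII/Drop removes that whole class of bug:

  // `Resource` is a hypothetical stand-in for something a C driver would have
  // to release by hand on every early-return path.
  struct Resource(&'static str);

  impl Resource {
      fn acquire(name: &'static str) -> Result<Self, ()> {
          println!("acquired {name}");
          Ok(Resource(name))
      }
  }

  impl Drop for Resource {
      fn drop(&mut self) {
          // Runs on *every* exit path, including early error returns.
          println!("released {}", self.0);
      }
  }

  fn probe(fail_midway: bool) -> Result<(), ()> {
      let _a = Resource::acquire("a")?;
      let _b = Resource::acquire("b")?;
      if fail_midway {
          // In C this early return would need a `goto err_free_b`-style chain;
          // forgetting one label is exactly the bug class described above.
          return Err(());
      }
      Ok(())
  }

  fn main() {
      let _ = probe(true);  // "a" and "b" are both released automatically
      let _ = probe(false);
  }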

  • You can't valgrind kernel space

    Linters might be helpful, but I don't remember there being good free ones

    The problem here is simple: C is "too simple" for its own good and it puts undue cognitive burden on developers

    And those who reply with "skill issue" are the first to lose a finger on it

These days, the bugs I generate in my own code are rarely programming errors. They're misunderstandings of the problem I am trying to solve, or misunderstandings of how to fit it into the rest of the (very complex) code.

For example, I cannot even recall the last time I had a double-free bug, though I used to do it often enough.

The emphasis for me is on a language that makes it easy to express algorithms.

  • > For example, I cannot even recall the last time I had a double-free bug

    Honestly, it's not the double-frees I worry about, since even in a language like C where you have no aids to avoid it, the natural structure of programs tends to give good guidance on who is supposed to free an object (and if it's unclear, risking a memory leak is the safer alternative).

    It's the use-after-free I worry about, because this can come about when you have a data structure that hands out a pointer to an element that becomes invalid by some concurrent but unrelated modification to that data structure. That's where having the compiler bonk me on the head for my stupidity is really useful.
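
    A tiny sketch of that "compiler bonk": the snippet below intentionally does not compile, because Rust rejects holding a reference into a collection while also mutating the collection in a way that could invalidate that reference.

      fn main() {
          let mut v = vec![1, 2, 3];
          let first = &v[0];   // immutable borrow of an element
          v.push(4);           // error[E0502]: cannot borrow `v` as mutable
                               // because it is also borrowed as immutable
          println!("{first}"); // the immutable borrow is still live here
      }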

  • +1 I’ve really enjoyed using more declarative languages in recent years.

    At work I’ve been helping push “use SQL with the best practices we learned from C++ and Java development” and it’s been working well.

    It’s identical to your point. We no longer need to care about pointers. We need to care about defining the algorithms and parallel processing (multi-threaded and/or multi-node).

    Fun fact: even porting optimized C++ to SQL has resulted in performance improvements.

Community and people are the main issue.

If the people who work on the kernel now don't like that direction then that's a big problem.

The Linux leadership don't seem very focused on the people issues.

Where is the evidence that there is buy in from the actual people doing kernel development now?

Or is it just Linus and Greg as commanders saying "thou shalt".

  • Plenty of Linux maintainers are either fully or partially on board with using Rust in drivers. Don't overindex on the opinions of two or three of them that are vocally opposed / skeptical.

    Christoph is a special case because his subsystem (DMA) is essentially required for the vast majority of useful device drivers that one might want to write. Whereas other subsystems are allowed to go at their own pace, being completely blocked on DMA access by the veto of one salty maintainer would effectively doom the whole R4L project. So whereas normally Linus would be more willing to avoid stepping on any maintainer's toes, he kind of has to here.

    • I guess I simply don't understand why he's biased against rust folks using his API as long as they aren't mucking about on his lawn. Why does he care? If the API and calling conventions are adhered to it makes absolutely no difference to him or the hardware that it's running on. I don't understand his objections. If I write a c library or network service, I don't care if the person using it is using rust, c, ada, or cobol...

      2 replies →

  • > Where is the evidence that there is buy in from the actual people doing kernel development now?

    Are the people doing the work not good enough? See the maintainers list -- Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, etc., etc...

    Who else exactly do you want to buy in?

    > If the people who work on the kernel now don't like that direction then that's a big problem.

    I think if you really want to lead/fight a counter-revolution, it will come down to effort. If you don't like Rust for Linux (for what could be a completely legitimate reason), then you need to show how it is wrongheaded.

    Like -- reverse engineer an M1 GPU or some other driver, and show how it can be done better with existing tooling.

    What I don't think you get to do is wait and do nothing and complain.

  • Another "people perspective" point is the aging demographic of the kernel devs and the need to engage a new generation of devs. Betting on a modern language like Rust might just be what's needed on that note. And, according to Torvalds, they have the folks willing to do the work today.

    • Is that a job for kernel folks to address or companies who hire people to work on the Linux kernel?

  • > Where is the evidence that there is buy in from the actual people doing kernel development now?

    https://lwn.net/Articles/1007921/

    > To crudely summarize: the majority of responses thought that the inclusion of Rust in the Linux kernel was a good thing; the vast majority thought that it was inevitable at this point, whether or not they approved.

This statement was sorely needed for this discussion to move forward. Hopefully the last section fills the needed parties with resolve.

The actual project is "let's modernize the internal kernel api surface", and "how tolerable is it to write against this api in rust" is just the best metric at hand to measure the progress.

This is the correct frame for RFL proponents. You're welcome.

I wonder how Microsoft implements rust in their kernel.

As for this issue, it's just the nature of any project: people will come and go regardless, so why not let those C developers leave and keep the Rust folks instead? At some point you have to steer the ship and there will always be a group of people unhappy about the course

  • From what I can tell, Microsoft seems to have the advantage that a lot of in-kernel interfaces are documented and relatively stable. Linux guarantees that the userland APIs don't change, but when a kernel component changes you're out of luck. Windows seems much more focused on internal consistency and stability. Probably in part because a lot of proprietary software uses a lot of internal APIs not meant for public consumption and there's nothing Microsoft can do to stop that, really.

    In a way, these Rust bindings are somewhat stabilizing the Linux API as well, by putting more expectations and implications from documentation into compiler-validated code. However, this does imply certain changes are sure to break any Rust driver code one might encounter, and it may take Rust devs a while to redesign the interfaces to maintain compatibility. It's hardly a full replacement for a stable API.

    At the moment, there aren't enough Rust developers to take over kernel maintenance. Those Rust developers would also need to accept giant code trees from companies updating their drivers, so you need experts in both.

    With the increasing amount of criticism languages like C are receiving online because we now have plain better tooling, I think the amount of new C developers will diminish over the coming years, but it still may take decades for the balance to shift.

  • Or they can be adults and work it out. Sometimes you just have to put the kids in different sandboxes and keep them apart; that's why we have APIs and calling conventions.

  • Alternatively, there's nothing preventing the Rust folks building their own kernel from the ground up.

    • There are multiple kernels written in Rust already. Writing another one wouldn't be interesting.

      The point of R4L is that people want to write drivers for Linux in Rust. The corporate sponsors that are involved are also interested in writing drivers for Linux in Rust. Sure, Google could rebase Android on top of RedoxOS or Fuchsia and Red Hat could spend a decade writing a Linux Subsystem for RedoxOS, but neither wants to do those things. They want to write drivers, for Linux, in Rust.

      Telling them to write a new kernel is a bit like telling them they should go write a new package manager. It's a completely different thing from what they actually care about.

      4 replies →

    • The kernel is not a problem. Drivers are. If it wasn’t for drivers we’d all be rolling our own custom kernels.

It's really disappointing to me to see a lot of the negative reactions and comments here. I know it's popular and in vogue now to hate on Rust, but:

Influential people who have worked on the ins and outs of the Linux kernel for years and decades believe that adopting Rust (or at least keeping the Rust experiment going) is worth the pain it will cause.

That's really all that matters. I see people commenting here about how they think RAII isn't suitable for kernel code, or how keeping C and Rust interfaces in sync will slow down important refactoring and changes, or how they think it's unacceptable that some random tiny-usage architecture that Rust/LLVM doesn't support will be left behind, or... whatever.

So what! I'm not a Linux kernel developer or maintainer, and I suspect most (if not all) of the people griping here aren't either. What does it matter to you if Linux adopts Rust? Your life will not be impacted in any way. All that matters is what the maintainers think. They think this is a worthwhile way to spend their time. The people putting in the work get to decide.

> the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible

What is he referring to?

  • I can think of several issues with the C++ committee that people can reasonably point to (some of them mutually contradictory even!), but I have no idea which of them is being referred to. It's possible he's referring to profiles, which is one of those cases where there's mutually contradictory criticisms that can be leveled against it so I have no idea in that case if he thinks they're a good or a bad thing.

    Personally, the biggest issue that gives me fear for C++'s future is that the committee seems to have more or less stopped listening to implementer feedback and concerns.

  • Presumably the endless documents they keep coming out with explaining how profiles will solve memory safety, or whatever.

  • https://izzys.casa/2024/11/on-safe-cxx/ is a long and opinionated drama and swear-filled read on the topic. snips from it:

    > "many people reading this might be familiar with the addition of the very powerful #embed preprocessor directive that was added to C. This is literally years of work brought about by one person, and that is JeanHeyd Meneide. JeanHeyd is a good friend and also the current editor of the C standard. And #embed started off as the std::embed proposal. Man, if only everyone in the world knew what the C++ committee did to fucking shut that shit down..."

    > ... "Herb [Sutter] ... spun up a Study Group, SG15, at the recommendation of GDR to handling “tooling” in the C++ ecosystem. This of course, paved the way for modules to get absolutely fucking steamrolled into the standard while allowing SG15 to act as a buffer preventing any change to modules lest they be devoid of Bjarne [Stroustrup] and Gaby [Gabriel Dos Reis]’s vision. Every single paper that came out of SG15 during this time was completely ignored."

    > "Gaby [Gabriel Dos Reis] is effectively Bjarne’s protégé. ... when it came to modules Gaby had to “prove himself” by getting modules into the language. Usually, the standard requires some kind of proof of implementation. This is because of the absolute disaster that was export template, a feature that no compiler that could generate code ever implemented. Thus, proof of modules workability needed to be given. Here’s where I bring in my personal conspiracy theory. The only instance of modules being used prior to their inclusion in the standard was a single email to the C++ mailing lists (please recall the amount of work the committee demanded from JeanHeyd for std::embed) where Gaby claimed that the Microsoft Edge team was using the C++ Modules TS via a small script that ran NMake and was “solving their problem perfectly”." ... the face she made when I asked [a Microsoft Employee] about Gaby’s statement signaled to me that the team was not happy. Shortly after modules were confirmed for C++20, the Microsoft Edge team announced they were throwing their entire codebase into the goddamn garbage and just forking Chromium... Gaby Dos Reis fucking lied, but at least Bjarne got what he wanted. ... This isn’t the first time Gaby has lied regarding modules, obviously...."

    > ... "This [different] paper is just frankly insulting to anyone who has done the work to make safer C++ syntax, going on to call (or at least allude to) Sean Baxter’s proposal an “ad hoc collection of features”. Yet another case of Gaby’s vagueries where he can feign ignorance. As if profiles themselves are not ad hoc attributes, that have the exact same problem that Bjarne and others argue against, specifically that of the virality of features. The C++ committee has had 8 years (8 long fucking years) to worry about memory safety in C++, and they’ve ignored it. Sean Baxter’s implementation for both lifetime and concurrency safety tracking has been done entirely in his Circle compiler [which] is a clean room, from the ground up, implementation of a C++ compiler. If you can name anyone who has written a standards conforming C++ compiler frontend and parser and then added metaprogramming and Rust’s lifetime annotation features to it, I will not believe you until you show them to me. Baxter’s proposal, P3390 for Safe C++ has a very large run down on the various features available to us..."

    > "Bjarne has been going off the wall for a while now regarding memory safety. Personally I think NASA moving to Rust hurt him the most. He loves to show that image of the Mars rover in his talks. One of the earliest outbursts he’s had regarding memory safety is a very common thing I’ve seen which is getting very mad that the definition a group is using is not the definition he would use and therefore the whole thing is a goddamn waste of time."

    > "You can also look at how Bjarne and others talk about Rust despite clearly having never used it. And in specifically in Bjarne’s case he hasn’t even used anything outside of Visual Studio! It’s all he uses. He doesn’t even know what a good package manager would look like, because he doesn’t fucking care. He doesn’t care about how asinine of an experience that wrangling dependencies feels like, because he doesn’t have to. He has never written any actual production code. It is all research code at best, it is all C++, he does not know any other language."

    > "Orson Scott Card didn't write Ender's Game [link] -> Ender's Game is an apologia for Hitler"

    > "this isn’t a one off situation. It isn’t simply just Bjarne who does this. John Lakos of Bloomberg has also done this historically, getting caught recording conversations during the closing plenary meeting for the Kona 2019 meeting because he didn’t get his way with contracts. Ville is another, historically insulting members and contributors alike (at one point suggesting that the response to a rejected paper should be “fuck you, and your proposal”), and I’m sure there are others, but I’m not about to run down a list of names and start diagnosing people like I’m a prominent tumblr or deviantart user in 2017."

    > "the new proposed (but not yet approved) Boost website. This is located at boost.io and I’m not going to turn that into a clickable link, and that’s because this proposed website brings with it a new logo. This logo features a Nazi dog whistle. The Nazi SS lightning bolts. Here’s a side by side of the image with and without the bolts being drawn over (Please recall that Jon Kalb, who went out of his way to initially defend Arthur O’Dwyer, serves on the C++ Alliance Board)."

    > "Arthur O’Dwyer has learnt to keeps his hands to himself, he does not pay attention to or notice boundaries and really only focuses on his personal agenda. To quote a DM sent to me by a C++ community member about Arthur’s behavior “We are all NPCs to him”. He certainly doesn’t give a shit. He’s been creating sockpuppets, and using proxies to get his changes into the LLVM and Clang project. Very normal behavior by the way."

    > "This is the state C++ is in, though as I’ve said plenty of times in this post, don’t get it twisted. Bjarne ain’t no Lord of Cinder. We’re stuck in a cycle of people joining the committee to try to improve the language, burning out and leaving, or staying and becoming part of the cycle of people who burn out the ones who leave."

    • It is unfortunate that it is written in such an unhinged way, as there are probably some valid points mixed in with the insanity.

    • I feel like this one rant has done untold damage to the credibility of those who have some reason to criticise C++

    • I have no dogs in C++ internal politics, I haven't written C++ for years.

      But the author of that post clearly has some fairly serious mental problems.

      11 replies →

This might be a silly question, but why don't we have something like PR gate pipelines that ensure a change passes before it's picked up by a maintainer?

It’s hard. Most people agree it should have memory safety, but also I’m not looking to become a full scale maintainer either.

"Rust also gives us the ability to define our in-kernel apis in ways that make them almost impossible to get wrong when using them. We have way too many difficult/tricky apis that require way too much maintainer review just to "ensure that you got this right" that is a combination of both how our apis have evolved over the years"

Funny, that's not Theodore T'so's position. The Rust guys tried to ask about interface semantics and he yelled at them:

https://www.youtube.com/watch?v=WiPp9YEBV0Q&t=1529s
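
Setting the disagreement aside, here is a rough illustration of what the quoted "almost impossible to get wrong" claim can mean in practice (a userspace sketch using std's Mutex, not the kernel's locking API): the data lives inside the lock, so it cannot be touched without holding the guard, and the unlock happens automatically on every path.

  use std::sync::Mutex;

  struct Counter {
      inner: Mutex<u64>, // the counter can only be reached through the lock
  }

  impl Counter {
      fn new() -> Self {
          Counter { inner: Mutex::new(0) }
      }

      fn bump(&self) -> u64 {
          let mut n = self.inner.lock().unwrap(); // acquire; the guard borrows the data
          *n += 1;
          *n
          // guard dropped here: unlock happens on every return path, so a
          // "forgot to unlock on the error branch" bug is not even expressible
      }
  }

  fn main() {
      let c = Counter::new();
      assert_eq!(c.bump(), 1);
      assert_eq!(c.bump(), 2);
  }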

  • I watched like 2 minutes of this and I don't understand what this is supposed to be saying about the current debate. There's a guy lecturing the audience about how there are 30 filesystems in the kernel and not all of them are going to be instantaneously converted to Rust. But gregkh and kees aren't suggesting that any of them be converted to Rust!

    • It's only relevant to the current debate in the sense that that event was the trigger for Wedson (the first and OG R4L project contributor) to quit, which was only a few months ago, so it's a fresh wound marinating in the background while essentially the same drama unfolds all over again.

We should have seen this post before Hector Martin got so fed up that he decided to resign (to be fair, he probably had other issues as well that contributed).

I was very confused by the lack of an actual response from Linus, he only said that social media brigading is bad, but he didn't give clarity on what would be the way forward on that DMA issue.

I have worked in a similar situation and it was the worst experience of my work life. Being stonewalled is incredibly painful and having weak ambiguous leadership enhances that pain.

If I were a R4L developer, I would stop contributing until Linus codifies the rules around Rust that all maintainers would have to adhere to because it's incredibly frustrating to put a lot of effort into something and to be shut down with no technical justification.

  • Clarity was apparently provided privately. However, I have to say that a public statement would have been better. I can only imagine how demoralizing it is for the R4L contributors to watch their work being trashed in public and the leadership is only privately willing to give reassurances. Not to mention bad for recruitment.

  • You know, the complaint is that R4L would add undue load to existing maintainers (at least that's about the only coherent technical thing I've gathered from Christoph's emails). What also adds undue load to existing maintainers is causing their peers to quit. Hector Martin is a talented individual and the loss of him will surely be felt.

I've been using Linux since 2005, and I've loved it in almost every circumstance. But the drama over the last couple of years surrounding Rust in the kernel has really soured me on it, and I'm now very pessimistic about its future. But I think beyond the emotional outbursts of various personalities, I don't think that the problem is which side is "right". Both sides have extremely valid points. I don't think the problem is actually solvable, because managing a 40M+ SLoC codebase is barely tenable in general, and super duper untenable for something that we rely on for security while running in ring 0.

My best hope is for replacement. I think we've finally hit the ceiling of where monolithic kernels can take us. The Linux kernel will continue to make extremely slow progress while it deals with internal politics fighting against an architecture that can only get bigger and less secure over time.

But what could be the replacement? There's a handful of fairly mature microkernels out there, each with extremely immature userspaces. There doesn't seem to be any concerted efforts behind any of them. I have a lot of hope for SeL4, but progress there seems to be slow mostly because the security model has poor ergonomics. I'd love to see some sort of breakout here.

  • Like 75% of those lines of code are in drivers or architecture-specific code (code that only runs for x86 or ARM or SPARC or POWER etc.)

    The amount of kernel code actually executing on any given machine at any given point in time is more likely to be around 9-12 million lines than anywhere near 40 million.

    And a replacement kernel won't eliminate the need for hardware drivers for a very wide range of hardware. Again, that's where the line count ramps up.

    • Yes, of course. But apart from the (current) disadvantage that those drivers don't exist yet, those are all positives in favor of microkernel architectures. All of the massive SLOC codebases run in usermode and with full process isolation, require no specific language compatibility and can be written in any language, do not require upstreaming, and do not require extensive security evaluations from highly capable maintainers who have their focus scattered across 40m lines of code.

  • Not a kernel guy, but - what's stopping a microkernel from emulating the Linux userspace? I know Microsoft had some success implementing the Linux ABI with WSL v1.0.

    I suppose the main objection to that is accepting some degree of lock-in with the existing userspace (systemd, FHS...) over exploring new ideas for userspace at the same time.

    • FWIW Fuchsia has a not-quite-a-microkernel and has been building a Linux binary compatibility layer: https://fuchsia.dev/fuchsia-src/concepts/starnix?hl=en.

      (disclaimer: I work on Fuchsia, on Starnix specifically)

      EDIT: for extra HN karma and related to the topic of the posted email thread, Starnix (Fuchsia's Linux compat layer) is written in Rust. It does run on top of a kernel written in C++ but Zircon is much smaller than Linux.

      4 replies →

  • The rust drama is completely overblown considering rust is still years away from being a viable replacement. Sure it makes sense to start experimenting and maybe write a few drivers in rust but many features are still only available in nightly rust.

    I suspect many rust devs tend to be on the younger side, while the old C guard sees Linux development in terms of decades. Change takes time.

    Monolithic kernels are fine. The higher complexity and worse performance of a microkernel design are mostly not worth the theoretical architectural advantages.

    If you wanted to get out of the current local optimum you would have to think outside of the unix design.

    The main threat for Linux is the Linux Foundation, which is controlled by big tech monopolists like Microsoft and spends only a small fraction on actual kernel development. It is embrace, extend, extinguish all over again, but people think Microsoft are the good guys now.

    • > but many features are still only available in nightly rust.

      Nope. The features are all in stable releases (since last spring, in fact). However, some of them are still marked as unstable/experimental and have to be opted into (so could in theory still have breaking changes). They're entirely features that are specific to kernel development and are only needed in the rust bindings layer to provide safe abstractions in a kernel environment.

  • > I have a lot of hope for SeL4, but progress there seems to be slow mostly because the security model has poor ergonomics.

    seL4 has its place, but that place is not as a Linux replacement.

    Modern general purpose computers (both their hardware, and their userspace ecosystems) have too much unverifiable complexity for a formally verified microkernel to be really worthwhile.

    • Oh don't worry, seL4 isn't formally proven on any multicore computer anyway.

      And the seL4 core architecture is fundamentally "one single big lock" and won't scale at all to modern machines. The intended design is that each core runs its own kernel with no coordination (multikernel a la Barrelfish) -- none of which is implemented.

      So as far as any computer with >4 cores is concerned, seL4 is not relevant at this time, and if you wish for that to happen your choice is really either funding the seL4 people or getting someone else to make a different microkernel (with hopefully a lot less CAmkES "all the world is C" mess).

      4 replies →

    • I agree that SeL4 won't replace Linux anytime soon, but I beg to differ on the benefits of a microkernel, formally verified or not.

      Any ordinary well-designed microkernel gives you a huge benefit: process isolation of core services and drivers. That means that even in the case of an insecure and unverified driver, you still have reasonable expectations of security. There was an analysis of Linux CVEs a while back, and the vast majority of critical Linux CVEs to that date would either have been eliminated or mitigated below critical level just by using a basic microkernel architecture (not even a verified microkernel). Only 4% would have remained critical.

      https://microkerneldude.org/2018/08/23/microkernels-really-d...

      The benefit of a verified microkernel like SeL4 is merely an incremental one over a basic microkernel like L4, capable of capturing that last 4% and further mitigating others. You get more reliable guarantees regarding process isolation, but architecturally it's not much different from L4. There's a little bit of clunkiness in writing userspace drivers for SeL4 that you wouldn't have for L4. That's what the LionsOS project is aiming to fix.

      8 replies →

    • I mean, why does it have to be formally verified? It seems to me the performance tradeoff for microkernels can be worth it to have drivers and other traditionally kernel-level code that don't bring down the system and can simply be restarted in case of failure. Probably not something that will work for all hardware, but I would bet the majority would be fine with it.

      2 replies →

  • > But what could be the replacement? There's a handful of fairly mature microkernels out there

    Redox[0] has the advantage that no-one will want to rewrite it in Rust.

    [0]: https://redox-os.org/

"he C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time."

I'd love to know where he got this impression. The new C++ features go a long way to helping make the language easier, and safer, to use.

  • Of course modern C++ is safer, but you can still shoot yourself in the foot. Compared to Rust, you still need to think about memory safety when writing C++, while in Rust you don't need to think about it at all. The only time you need to think about memory safety in Rust is when using the unsafe keyword, which can be isolated into a dedicated function (a small sketch of that pattern follows below).

    Most C++ developers may not understand what I mean; you need to be proficient in Rust to understand it. When I was still using C++ as my primary language, I felt the same way about Rust as other C++ developers do. Once you become comfortable with Rust, you will see that it is superior to C++ and you won't want to use C++ anymore.
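
    Here is that minimal, self-contained sketch (the function and values are mine, purely illustrative): the unsafe reasoning lives in exactly one place, and every caller stays in safe Rust.

    fn first(slice: &[u8]) -> Option<&u8> {
        if slice.is_empty() {
            return None;
        }
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { slice.get_unchecked(0) })
    }

    fn main() {
        assert_eq!(first(&[7, 8, 9]), Some(&7));
        assert_eq!(first(&[]), None);
    }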

  • C++ _has_ been getting safer and safer to write. However:

    1. The dangerous footguns haven't gone away.

    2. There are certain safety problems that simply can't be solved in C++ unless you accept that the ABI will be broken and the language won't be backwards compatible.

    Circle (https://www.circle-lang.org/site/index.html) and Carbon (https://docs.carbon-lang.dev/) were both started to address this fundamental issue that C++ can't be fully fixed and made safe like Rust without at least some breaking changes.

    This article goes into more depth: https://herecomesthemoon.net/2024/11/two-factions-of-cpp/

    In the case of the Linux kernel, a lot of the newer features that C++ has delivered aren't _that_ useful for improving safety because kernel space has special requirements which means a lot of them can't be used. I think Greg is specifically alluding to the "Safety Profiles" feature that the C++ committee looks like it will be going with to address the big safety issues that C++ hasn't yet addressed - that's not going to land any time soon and still won't be as comprehensive as Rust.

Perhaps someone can point me to a link where I can get information on WHY it is so hard to call C from Rust, or to call into Rust code from C. I don't really follow the discussion because I don't understand the issue.

  • It's not hard to just call C. Rust supports C ABI and there's tooling for converting between C headers and Rust interfaces.

    The challenging part is making a higher-level "safe" Rust API around the C API. Safe in the sense that it fully uses Rust's type system, lifetimes, destructors, etc. to uphold the safety guarantees that Rust gives and to make the API hard to misuse (a rough sketch of both layers follows below).

    But the objections about Rust in the kernel weren't really about the difficulty of writing the Rust code, but more broadly about having Rust there at all.
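
    As a rough sketch of those two layers, assuming a hypothetical C API of the form struct foo_dev *foo_open(int id); / void foo_close(struct foo_dev *); rather than any real kernel interface:

    use core::ffi::c_int;

    // Layer 1: raw FFI declarations, roughly what bindgen would generate from the C header.
    #[allow(non_camel_case_types)]
    #[repr(C)]
    struct foo_dev {
        _opaque: [u8; 0], // opaque to Rust; only C ever looks inside
    }

    extern "C" {
        fn foo_open(id: c_int) -> *mut foo_dev;
        fn foo_close(dev: *mut foo_dev);
    }

    // Layer 2: a safe wrapper. Ownership of the handle is tied to this struct, and
    // Drop guarantees foo_close() runs exactly once, so callers can neither leak
    // the handle nor free it twice.
    pub struct FooDev {
        ptr: core::ptr::NonNull<foo_dev>,
    }

    impl FooDev {
        pub fn open(id: i32) -> Option<FooDev> {
            // SAFETY: foo_open returns either a valid device pointer or NULL.
            let raw = unsafe { foo_open(id) };
            core::ptr::NonNull::new(raw).map(|ptr| FooDev { ptr })
        }
    }

    impl Drop for FooDev {
        fn drop(&mut self) {
            // SAFETY: the pointer came from foo_open and has not been closed yet.
            unsafe { foo_close(self.ptr.as_ptr()) }
        }
    }

    The real kernel bindings go much further (kernel error types, locking, lifetimes), but the shape is the same: the unsafe code is confined to the wrapper, and users of FooDev cannot forget the cleanup or use a dangling handle.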

Prediction 2030: Linus retires and C++ is accepted as the primary language for writing the kernel.

Inadvertently, Rust makes working with C++ acceptable.

  • You might be onto something.

    Android already uses a hardware abstraction layer on top of Linux, written in C++, for writing drivers.

    It's a matter of politics to get something like this into the kernel.

Pasting the entire thing so people on mobile can read it (at least on iPhone, readability doesn't work here):

As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years (well hopefully all of them end up in the stable trees, we do miss some at times when maintainers/developers forget to mark them as bugfixes), and who sees EVERY kernel CVE issued, I think I can speak on this topic.

The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)

I'm all for moving our C codebase toward making these types of problems impossible to hit, the work that Kees and Gustavo and others are doing here is wonderful and totally needed, we have 30 million lines of C code that isn't going anywhere any year soon. That's a worthy effort and is not going to stop and should not stop no matter what.

But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.

Rust also gives us the ability to define our in-kernel apis in ways that make them almost impossible to get wrong when using them. We have way too many difficult/tricky apis that require way too much maintainer review just to "ensure that you got this right" that is a combination of both how our apis have evolved over the years (how many different ways can you use a 'struct cdev' in a safe way?) and how C doesn't allow us to express apis in a way that makes them easier/safer to use. Forcing us maintainers of these apis to rethink them is a GOOD thing, as it is causing us to clean them up for EVERYONE, C users included already, making Linux better overall.

And yes, the Rust bindings look like magic to me in places, someone with very little Rust experience, but I'm willing to learn and work with the developers who have stepped up to help out here. To not want to learn and change based on new evidence (see my point about reading every kernel bug we have.)

Rust isn't a "silver bullet" that will solve all of our problems, but it sure will help in a huge number of places, so for new stuff going forward, why wouldn't we want that?

Linux is a tool that everyone else uses to solve their problems, and here we have developers that are saying "hey, our problem is that we want to write code for our hardware that just can't have all of these types of bugs automatically".

Why would we ignore that?

Yes, I understand our overworked maintainer problem (being one of these people myself), but here we have people actually doing the work!

Yes, mixed language codebases are rough, and hard to maintain, but we are kernel developers dammit, we've been maintaining and strengthening Linux for longer than anyone ever thought was going to be possible. We've turned our development model into a well-oiled engineering marvel creating something that no one else has ever been able to accomplish. Adding another language really shouldn't be a problem, we've handled much worse things in the past and we shouldn't give up now on wanting to ensure that our project succeeds for the next 20+ years. We've got to keep pushing forward when confronted with new good ideas, and embrace the people offering to join us in actually doing the work to help make sure that we all succeed together.

thanks,

greg k-h

> The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)

C committee, are you listening? Hello? Hello? Bueller?

(Unfortunately, if they are listening it is to make more changes on how compilers should take "creative licenses" in making developers shoot themselves in the foot)

  • > error path cleanups, forgetting to check error values, and use-after-free mistakes

    C++ (ideally C++17 or 20, to have all the boilerplate-reducing tools) allows all of that to be handled, even in a freestanding environment.

    It's just that it's not enforced (flexibility is a good thing for evergreen/personal projects, less so for corporate codebases), and that the C++ committee seems to have weird priorities from what I've read (#embed drama, modules are a failure, concepts are being forced through despite concerns etc.) and treats freestanding/embedded as a second-class citizen.

Seems to me that everyone is focused on the technical merits without appropriately weighing the effort maintainers face in learning a new programming language/toolchain/ecosystem.

Mastering a new programming language to a degree that makes one a competent maintainer is nothing to sneeze at, and some maintainers might be unwilling to do so based on personal interests/motivation, which I'd consider a legitimate position.

I think it's important to acknowledge that not everyone may feel comfortable talking about their lack of competence or disinterest.

  • This is exactly the position Christoph Hellwig took in the original email chain that kicked off the current round of drama: https://lore.kernel.org/rust-for-linux/20250131075751.GA1672.... I think it's fair to say that this position is getting plenty of attention.

    • The opposing view is that drivers written in Rust using effectively foolproof APIs require far less maintainer effort to review. Yes, it might be annoying for Christoph to have to document & explain the precise semantics of his APIs and let a Rust contributor know when something changes, but there is a potential savings of maintainer time down the line across dozens of different drivers.
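
      A minimal sketch of what that can look like, using hypothetical types rather than the real DMA bindings: the unmap call cannot be forgotten, and the borrow checker rejects code that uses a mapping after the device that created it is gone, so a reviewer no longer has to hunt for those mistakes by hand.

      struct Device;

      struct MappedBuffer<'dev> {
          dev: &'dev Device,
          len: usize,
      }

      impl Device {
          fn map_buffer(&self, len: usize) -> MappedBuffer<'_> {
              // Imagine this calling into the C mapping routine.
              MappedBuffer { dev: self, len }
          }
      }

      impl Drop for MappedBuffer<'_> {
          fn drop(&mut self) {
              // Imagine the C unmap call here; dropping the guard is the only way
              // out, so the cleanup can neither be forgotten nor done twice.
              let _ = (self.dev, self.len);
          }
      }

      fn main() {
          let dev = Device;
          let buf = dev.map_buffer(4096);
          // drop(dev);  // would not compile: `buf` still borrows `dev`
          assert_eq!(buf.len, 4096);
      }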

      2 replies →

  • And sadly, those maintainers are going to die out eventually, so the faster we get there, the less potential there is for something to break in a way that nobody would be able to figure out.

  • Acknowledged, but said maintainers need to learn to cope with the relentless advance of technology. Any software engineer with a long career needs to be able to do this. New technology comes along and you have to adapt, or you become a fossil.

    It's totally fine on a personal level if you don't want to adapt, but you have to accept that it's going to limit your professional options. I'm personally pretty surly about learning modern web crap like k18s, but in my areas of expertise, I have a multi-decade career because I'm flexible with languages and tools. I expect that if AI can ever do what I do, my career will be over and my options will be limited.

    • To play devil's advocate: for every technology that comes along with an advancement, a handful come along with broken promises. People love to make fun of Javascript for that, but the only difference there is the cadence. Senior developers know this, and know that the time and energy needed to separate the wheat from the chaff is exhausting. The advancements are not relentless; the churn is.

      That being said, Rust comes with technical advances and also with a large enough community that the non-technical requirements are already met. There should be enough evidence for rational but stubborn people to accept it as a way forward.

    • Totally tangential, but since I just recently found this out: character-number-character, like [k8s, a16z, a11y] means that 8/16/11 characters in the middle are replaced by their count. I was wondering why kubernetes would be such a long word, when you wrote k18s. Maybe it was just a typo on your end, and this system is totally obvious.

      4 replies →

Christoph Hellwig seems fun to interact with. He drive-by posts the same repeated points and seemingly refuses to engage with any replies.

  • AFAICT his only response in that thread:

    > Right now the rules is Linus can force you whatever he wants (it's his project obviously) and I think he needs to spell that out including the expectations for contributors very clearly.

    >> For myself I can and do deal with Rust itself fine, I'd love bringing the kernel into a more memory safe world, but dealing with an uncontrolled multi-language codebase is a pretty sure way to get me to spend my spare time on something else. I've heard a few other folks mumble something similar, but not everyone is quite as outspoken.

    He gets villainized, and I don't think all his interactions were great, but this seems pretty reasonable and more or less in line with what other people were asking for (clearer direction from Linus).

    That said, I don't know, maybe Linus's position was already clear...

> > > > > David Howells did a patch set in 2018 (I believe) to clean up the C code in the kernel so it could be compiled with either C or C++; the patchset wasn't particularly big and mostly mechanical in nature, something that would be impossible with Rust. Even without moving away from the common subset of C and C++ we would immediately gain things like type safe linkage.

> > >

> > > That is great, but that does not give you memory safety and everyone

> > > would still need to learn C++.

> >

> > The point is that C++ is a superset of C, and we would use a subset of C++

> > that is more "C+"-style. That is, most changes would occur in header files,

> > especially early on. Since the kernel uses a lot of inlines and macros,

> > the improvements would still affect most of the existing kernel code,

> > something you simply can't do with Rust.

I have yet to see a compelling argument for allowing a completely new language with a completely different compiler and toolchain into the kernel while continuing to bar C++ entirely, when even just a restricted subset could bring safety- and maintainability-enhancing features today, such as RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain, which supports even recent vintages of C++ (e.g., C++20) on Linux's targeted platforms.

Greg's response:

> But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.

side-steps this. Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++, and it can be done without significantly rewriting existing code, sacrificing platform support, or incorporating a new toolchain.

For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects due to repeated evaluation of the arguments, e.g.:

#define MAX(x, y) (((x) > (y)) ? (x) : (y))

One need only be bitten by this kind of bug once to have it color your perception of C, permanently.
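
For contrast, here is the same point sketched in Rust (purely illustrative; a C++ template function demonstrates the identical thing): a genuine generic function type-checks its arguments and evaluates each of them exactly once, so the double-evaluation trap above cannot happen.

fn max<T: PartialOrd>(a: T, b: T) -> T {
    if a > b { a } else { b }
}

fn main() {
    let mut calls = 0;
    let mut next = || { calls += 1; calls };

    // Both argument expressions run exactly once, so `calls` ends up at 2.
    // With the C macro, MAX(next(), next()) would evaluate the winning
    // argument a second time and silently skew the result.
    let m = max(next(), next());
    assert_eq!(m, 2);
    assert_eq!(calls, 2);
}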

  • > Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++

    This simply forgets all the problems C++ has as a kernel language. It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason, and exceptions are, for better or worse, what C++ provides for error handling.

    • > It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason

      Plenty of C++ codebases don't use exceptions at all, especially in the video game industry. Build with GCC's -fno-exceptions option.

      > and exceptions are, for better or worse, what C++ provides for error handling.

      You can use error codes instead; many libraries, especially from Google, do just that. And there are more modern approaches, like std::optional and std::expected:

      https://en.cppreference.com/w/cpp/utility/optional

      https://en.cppreference.com/w/cpp/utility/expected

      8 replies →

    • Isn't that why you pick a particular subset and exclude the rest of the language? It should be pretty easy to avoid using try/catch, especially in the kernel. A subset of C probably doesn't make much sense, but for C++, which is absolutely gigantic, it shouldn't be hard. Getting programmers to adhere to it could be handled 99% of the time with a linter; the other 1% can be caught by code reviewers.

      2 replies →

  • > For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects

    I never thought I would say that C++ would be an improvement, but I really have to agree with that.

    Simply adopting the generic programming bits with type safety, even without objects, exceptions, smart pointers, etc., would be a huge step forward and a lot less disruptive than a full step towards Rust.

    • At this point, I think that would be a misstep.

      I'm not sure I have an informed enough opinion on the original C++ debate, but I don't think stepping to a C++ subset while also exploring Rust is a net gain. It has the same kinds of "muddying the waters" caveats that people upset at R4L complain about, while also being almost entirely new and untested if introduced now[1].

      [1] - I'm pretty sure some of the closed drivers that do the equivalent of shipping a .o and a compiled shim layer have C++ in them somewhere, but that's a rounding error in terms of complexity and testing compared to the entire tree.

  • Yeah, it's rather baffling; it would be a solid improvement, and they can easily ban the parts that don't work for them (exceptions/STL/self-indulgent template wankery).

    On a memory safety scale I'd put C++ about 80% of the way from C to Rust.

I think it's becoming apparent that any attempt to progressively rewrite a large codebase into a new language is always going to fail. It needs to be done from the ground up, as something new.

Aren't these people tired of shaving that yak already? I wish they'd instead focus on making one (1) decent distro for desktop use.

Rust changes every few months. It's simply not a mature language, or the people behind it have no idea what they are doing.

  • > Rust changes every few months.

    No it doesn't.

    Quite the contrary: great care is taken so that the language stays stable. "Stability without stagnation" is one of Rust's core principles.

  • It turns out that there are always things to improve. You can decide to ignore those improvements for 50 years too but then people generally don’t want to use your language anymore.

  • If you haven't been maintaining any Rust code, you might have the impression that breaking changes are far more common than they really are. Rust has about as many breaking changes as Go, probably fewer? (Because Go lacks an edition mechanism.)

  • It's not the rust of 8-10 years ago, it's quite stable as a language now, and backward compatibility is stellar.

Isn't this the bait and switch that all the C kernel devs were complaining about? That it wouldn't be just drivers but also all new kernel code? The lack of candor over the goal of R4L and the downplaying of other potential solutions should give any maintainer (including potential Rust ones) pause.

Anyway, why stop at Rust? If we really care about safety, let's drop the act and make everyone do formal methods. Frama-C is at least C, has a richer contract language, has heavy static analysis tools before you ever have to go to proofs, is much more proven, and the list goes on. Or, why not add SPARK to the codebase if we are okay with mixing languages? It's very safe.

  • Frama-C doesn't actually prove memory safety and has a huge proof hole due to the nature of UB. It gives weaker guarantees than Rust in many cases. It's also far more of a pain to write. The Frama-C folks have been using the kernel as a testbed for years and contributing small patches back. The analysis just doesn't scale well enough to involve other people.

    SPARK doesn't have an active community willing to support its integration into the kernel, and it has actually been taking inspiration from Rust for access types. If you want to rustle up a community, go ahead I guess?

    • No, it can track pointer bounds and validity across functions. It also targets identifying cases of UB via Eva. Both Rust and Frama-C rely on assertions about low-level memory functions. Rust has the same gaping UB hole in unsafe code, which can cross into safe code.

      If we are talking about more than memory, such as what Greg is talking about with encoding operational properties, then no, Rust is far behind Frama-C, SPARK, and tons of others. They can prove functional correctness. Or do you think Miri, Kani, Creusot, and the rest of the formal-methods tools for Rust are superfluous?

      My mocking was that the kernel devs have had options for years and have ignored them out of dislike (Ada and SPARK) or lack of effort (Frama-C); that other options provide better solutions to some of their interests; and that this is more a project exercise in getting new kernel blood than a matter of technical merit.

  • For it to be bait and switch someone should've said "Rust will forever be only for drivers". Has anyone from the Linux leadership or R4L people done that? To my knowledge it has always been "for now".

    • "But for new code / drivers..." encompasses more than just "drivers" and refers to all new code. I doubt it's a mistake either due to the way the rest of the email is written. And Greg said "no one sane ever thought that (force anyone to learn rust)" just 5 months ago (https://lkml.org/lkml/2024/8/29/312). But he is now telling his C devs they will need to learn and code rust to make new code in the kernel.

      2 replies →

  • I'm no kernel dev, but I assume that DMA bindings (what this round of drama was originally all about) fall squarely into "stuff that drivers obviously need".