
Comment by AshamedCaptain

2 days ago

I do not understand how this is supposed to work in practice. If there are "Rust bindings" then the kernel cannot have a freely evolving internal ABI, and the project is doomed to effectively split into the "C" core side and the more client-oriented "Rust" side. Maybe it will be a net win for Linux by finally stabilizing the internal APIs, and it may even open the door to other languages and out-of-tree modules. On the other hand, if there are no "Rust bindings", then Rust brings very little to the table.

> I do not understand how this is supposed to work in practice. If there are "Rust bindings" then the kernel cannot have a freely evolving internal ABI...

Perhaps I misunderstand your argument, but it sounds like: "Why have interfaces at all?"

The Rust bindings aren't guaranteed to be stable, just as the internal APIs aren't guaranteed to be stable.

ABI is irrelevant. Only external APIs/ABIs are frozen; kernel-internal APIs have always been allowed to change from release to release. And Rust is only used for kernel-internal code like drivers. There's no stable driver API for Linux.
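To make "bindings" concrete, here is a minimal sketch of the usual shape, assuming an invented kernel-internal C function (example_dev_read is not a real symbol): a safe Rust wrapper over an unsafe FFI call. When the C side changes, the extern declaration and the wrapper change with it; nothing is frozen.

    use core::ffi::{c_int, c_void};

    // Hypothetical kernel-internal C function; name and signature are
    // invented purely for illustration.
    extern "C" {
        fn example_dev_read(dev: *mut c_void, buf: *mut u8, len: usize) -> c_int;
    }

    pub struct ExampleDev {
        raw: *mut c_void,
    }

    impl ExampleDev {
        /// Safe wrapper: callers get slices and Results instead of raw
        /// pointers and bare error codes.
        pub fn read(&mut self, buf: &mut [u8]) -> Result<usize, c_int> {
            // SAFETY: `self.raw` stays valid for the lifetime of
            // `ExampleDev`, and `buf` is a live, writable buffer.
            let ret = unsafe { example_dev_read(self.raw, buf.as_mut_ptr(), buf.len()) };
            if ret < 0 { Err(ret) } else { Ok(ret as usize) }
        }
    }

When an internal C API changes, it is this thin layer that the binding maintainers update; driver code written against read() often keeps compiling unchanged.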

  • External kernel APIs/ABIs are not frozen either, unless by "external" you only mean user space. Externally built kernel modules try to keep up via DKMS, but source-level changes still require updates to the module source, often forcing maintainers to support multiple kernel versions in one codebase with #ifdefs selecting between them.

I don't understand why Rust bindings imply a freezing (or chilling) of the ABI; surely Rust is bound by roughly the same constraints C is, being fully ABI-compatible in terms of both consuming and being consumed. Is this commentary on Rust being inherently more committed to backwards compatibility, or on the fact that two languages will necessarily bring constraints that hamper the ability to make breaking changes?
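As a sketch of that ABI point (all names invented, not real kernel symbols): Rust can consume a C-ABI function and expose one of its own, without asking the C side to freeze anything.

    use core::ffi::c_int;

    // C-layout struct: field order and sizes match what a C compiler
    // produces, so both sides agree on the ABI.
    #[repr(C)]
    pub struct Stats {
        pub reads: u64,
        pub writes: u64,
    }

    // Consuming the C ABI: a hypothetical function exported by the C side.
    extern "C" {
        fn stats_snapshot(out: *mut Stats) -> c_int;
    }

    pub fn snapshot() -> Result<Stats, c_int> {
        let mut s = Stats { reads: 0, writes: 0 };
        // SAFETY: `s` is a valid, writable Stats for the duration of the call.
        match unsafe { stats_snapshot(&mut s) } {
            0 => Ok(s),
            e => Err(e),
        }
    }

    // Being consumed: C code can call this as a plain C function.
    #[no_mangle]
    pub extern "C" fn stats_total(s: *const Stats) -> u64 {
        // SAFETY: assumes the C caller passes a valid, initialized pointer.
        let s = unsafe { &*s };
        s.reads + s.writes
    }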

  • Obviously the latter, which is already the point of contention that started this entire discussion.

    • Can you explain why you think this? I don't understand the reasoning, and it's certainly not "obvious". There's no technical reason implying it, so is this just resistance to learning Rust? C'mon, kernel developers can surely learn new tricks. This just seems like a defeatist attitude.

      EDIT: The process overhead seems straightforwardly worth it: Rust can largely preserve semantics, offers the potential to increase confidence in code, and can encourage a new generation of contributors with a faster ramp-up to writing quality code. Notably, nothing here guarantees better code quality, but presumably the existing quality-guaranteeing processes translate fine to a roughly equally capable language that offers more compile-time mechanisms for quality guarantees (see the sketch just after this sub-thread).

      11 replies →
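      For a concrete taste of those compile-time mechanisms (a small self-contained example, nothing kernel-specific): the borrow checker turns a use-after-free into a build failure instead of a runtime bug.

          fn main() {
              let buf = vec![1u8, 2, 3];
              let view = &buf;          // `view` borrows `buf`
              println!("{:?}", view);   // fine: `buf` is still alive here

              drop(buf);                // `buf`'s lifetime ends here

              // Uncommenting the next line turns this into a compile error
              // ("cannot move out of `buf` because it is borrowed"), not a crash:
              // println!("{:?}", view);
          }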

From what I have read, the intent seems to be that a C maintainer can make changes that break the Rust build. It’s then up to the Rust binding maintainers to fix it, if the C maintainer does not want to deal with Rust.
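As a hedged sketch of that workflow (names invented): suppose a C refactor renames an internal function from widget_lock to widget_acquire. The Rust build breaks at the FFI declaration, and the fix is usually confined to the binding layer.

    use core::ffi::{c_int, c_void};

    // Before the C refactor the binding declared the old symbol:
    //
    //     extern "C" { fn widget_lock(w: *mut c_void) -> c_int; }
    //
    // After the rename, the binding maintainer updates the declaration
    // and the wrapper body; the safe API below stays the same.
    extern "C" {
        fn widget_acquire(w: *mut c_void) -> c_int;
    }

    pub struct Widget {
        raw: *mut c_void,
    }

    impl Widget {
        /// Unchanged public API, so Rust drivers built on it keep compiling.
        pub fn acquire(&mut self) -> Result<(), c_int> {
            // SAFETY: `self.raw` points to a live widget for `self`'s lifetime.
            let ret = unsafe { widget_acquire(self.raw) };
            if ret == 0 { Ok(()) } else { Err(ret) }
        }
    }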

The C maintainer might also take patches to the C code from the Rust maintainer if they are suitable.

This puts a lot of work on the Rust maintainers to keep the Rust build working and requires that they have sufficient testing and CI to keep on top of failures. Time will tell if that burden is sustainable.

  • > Time will tell if that burden is sustainable.

    Most likely this burden will also change over time. Early in the experiment, it makes sense to put most of the burden on the experimenters and keep it from "infecting" the whole project.

    But if the experiment is successful then it makes sense to spread the workload in the way that minimizes overall effort.

    • It took me a while to understand the conflict until this dawned on me: it doesn't matter how many assurances the R4L team gives that they are on the hook for keeping up with breaking changes during the experiment; some maintainers were dismissive of the project altogether, because if the project is successful, then they have to care. It wasn't until recently that it became clear we are all operating on different definitions of success. If your definition of success is "it proves that it's possible to get it working", the project succeeded ages ago, which means you're running out of time to stop it if you really never want to have to care about it. But that's not the definition of a successful experiment, because otherwise it would already have been declared one.

      One potential definition of success is "all of the necessary tooling is there, it's reliable, the code quality is higher than what was there before, and the number of defects in the new code is statistically lower". If that is the goal, then the point where maintainers no longer need to fix bindings as part of refactors is pushed further into the future. But that success goal also implies that everything is already in place to be minimally disruptive to maintainers.

      If it were me, I would start building the relationships with the R4L team now and "act as if" Rust is here to stay and part of the critical path: involve them when refactors happen, but without the pressure of having to wait for them before landing C changes. That way you actually exercise the workflow, get real experience of where the pain might be, and can improve that workflow before it becomes an issue. Arguably, that is part of the scope of the experiment!

      The fear that everyone from R4L might get up and leave from one day to the next, leaving maintainers with Rust code they don't understand, is the same problem as current subsystem maintainers getting up and leaving from one day to the next, with no one left to maintain that code. The way to protect against that is to grow the teams, keep a steady pipeline of new blood (by fostering an environment that welcomes newcomers and encourages them to stick around), and have copious amounts of documentation.