Comment by QuiEgo
2 months ago
Apple handled this problem by adding memory safety to C (Firebloom). It seems unlikely they would throw away that investment and move to Rust. I’m sure lots of other companies don’t want to throw away their existing code, and when they write new code there will always be a desire to draw on prior art.
That's a rather pessimistic take compared to what's actually happening. What you say should apply most to the big players like Amazon, Google, and Microsoft, because they arguably have massive C codebases. Yet they're also some of the most enthusiastic adopters and promoters of Rust. A lot of other adopters have legacy C codebases too.
I'm not trying to hype up Rust or disparage C. I learned C first and then Rust, even before Rust 1.0 was released. And I have an idea why Rust finds acceptance, one that some of these companies have also stated officially.
C is a nice little language that's easy to learn and understand. But you pay the price in large applications, where you have to manage resources like heap allocations by hand. C offers no help when you make mistakes there, though some linters might catch them. The reason, I think, is that C was developed in an era when there wasn't enough computing power to do such complicated analysis in the compiler.
People have been writing C for ages, but let me tell you - writing correct C is a whole different skill that's hard and takes ages to learn. If you think I'm saying this because I'm a bad programmer, you would be wrong. I'm not a programmer at all (by qualification), but rather a hardware engineer who is more comfortable with assembly, registers, buses, DRAM, DMA, etc. I still used to get widespread memory errors, because all it takes is a lapse in attention while coding. That strain is what Rust alleviates.
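To make that concrete, here is a contrived example of mine (not from any real codebase) of the kind of lapse a C compiler accepts silently but rustc rejects outright:

    fn main() {
        let s = String::from("sensor data");
        let r = &s;      // borrow the heap allocation
        drop(s);         // free it while the borrow is still live:
        println!("{r}"); // rustc refuses to compile the drop, because
    }                    // `r` is still in use afterwards

The point isn't that this particular bug is hard to spot; it's that the compiler catches it even when your attention lapses. The C equivalent compiles cleanly and reads freed memory at runtime.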
Not trying to make a value judgement on Rust either, just brainstorming why the Rust changeover might go slowly, per the question.
FWIW I work in firmware with the heap turned off. I've worked on projects in both C and Rust, and agree Rust still adds useful checks (at the cost of compile times and binary sizes). It seems worth the trade-off for most projects.
Okay. That sounds like a reasonable explanation. Thanks!
I'm curious about your perspective on Rust as a HW engineer. Hardware does a ton of things - DMA, interrupts, etc. - that are not really compatible with Rust's memory model. After all, Rust's immutable borrows are supposed to guarantee that the values you are reading are not aliased by writers and stay constant as long as the borrow exists.
This is obviously not true when the CPU can yank execution away to a different part of the program, or when some foreign entity can overwrite your memory.
Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.
I'd like to take a crack at this.
Rust has no_std to handle not having an allocator.
Tons of things end up being marked "unsafe" in systems/embedded Rust. The idea is that you sandbox the unsafeness. Libraries like zerocopy are a good example of "containing" unsafe memory accesses in a way that still gets you as much memory safety as possible given the realities of embedded (see the sketch below).
Tl;dr: you don't get as much safety as higher-level code, but you still get more than in C. Or, put a different way, you are forced to think about the points that are inherently unsafe and call them out explicitly (which is great when you think about how to test the thing).
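Here is a minimal sketch of that sandboxing (the peripheral and its address are made up): one unsafe block, one documented invariant, and a safe API for everything else:

    use core::ptr;

    /// Status register of a hypothetical peripheral.
    const STATUS_REG: *const u32 = 0x4000_0000 as *const u32;

    /// Safe wrapper: the unsafe block is the one place that carries the
    /// invariant (this address is a valid, readable MMIO register on
    /// this chip). Callers never write `unsafe` themselves.
    fn read_status() -> u32 {
        // SAFETY: STATUS_REG is a memory-mapped register that is always
        // readable on this (hypothetical) device; the volatile read stops
        // the compiler from caching or reordering the access.
        unsafe { ptr::read_volatile(STATUS_REG) }
    }

If the invariant in the SAFETY comment holds, every caller of read_status() is ordinary, compiler-checked Rust.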
Here is my take on your question.
> I'm curious about your perspective on Rust as a HW engineer.
C, C++ and Rust all require you to know at least the basics of the C memory model: register variables, heap, stack, stack frames, frame invalidation, allocators and allocation, heap pointer invalidation, etc. There is obviously more complicated stuff (which I think you already know, seeing that you're an embedded developer), but this much is necessary to avoid common memory errors like memory leaks (not a safety error), use-after-free, double-free, data races, etc. This is needed even for non-system programs and applications, due to the lack of runtime memory management (GC or RC). You can get by by following certain rules of thumb in C and C++, but to write flawless code you have to know those hardware concepts. This is where knowledge of process and memory architecture comes in handy: you start from the fundamental rules before programming, instead of the other way around, which is the route people normally take. Even in Rust, the complicated borrow checker rules start to make sense once you realize how they help you avoid the mistakes you can make with the hardware.
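To pick one item from that list, heap pointer invalidation is exactly what the borrow checker models in this contrived example of mine:

    fn main() {
        let mut buf = vec![1u8, 2, 3];
        let first = &buf[0]; // borrow points into buf's current heap block
        buf.push(4);         // push may reallocate, leaving `first` dangling
        println!("{first}"); // rustc rejects the push, since `first` is
    }                        // still used afterwards

In C, the equivalent (keeping a pointer into a buffer across a possible realloc) compiles cleanly and fails at runtime, which is why you need the hardware mental model to avoid it.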
> Hardware does a ton of things - DMA, interrupts, etc. - that are not really compatible with Rust's memory model. After all, Rust's immutable borrows are supposed to guarantee that the values you are reading are not aliased by writers and stay constant as long as the borrow exists.
> This is obviously not true when the CPU can yank execution away to a different part of the program, or when some foreign entity can overwrite your memory.
I do have an answer, but I don't think I can explain it better than @QuiEgo did: you can 'sandbox' those unsafe parts within Rust unsafe blocks. As I have explained elsewhere, these sandboxed parts are surprisingly small even in kernel or embedded code (please see the Rust standard library for examples). As long as you enforce the basic correctness conditions (the invariants) inside the unsafe blocks, the rest of the code is guaranteed to be safe. And even if you do make a mistake there (i.e., a memory safety bug), it is easier to find because there's very little code to check. Rust does bring something new to the table for hardware work. (A sketch for the DMA case follows.)
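For the DMA case specifically, here is a contrived sketch of mine (the buffer type and its setup are hypothetical): volatile reads tell the compiler the memory can change behind its back, while the bounds check lives in safe code:

    use core::ptr;

    /// Hypothetical descriptor for a buffer that a DMA engine writes into.
    struct DmaBuffer {
        base: *const u32,
        len: usize,
    }

    impl DmaBuffer {
        /// Safe API over DMA-shared memory: callers can't read out of
        /// bounds, and the volatile read stops the compiler from assuming
        /// the value is unchanged between reads.
        fn read_word(&self, idx: usize) -> Option<u32> {
            if idx >= self.len {
                return None; // invariant enforced in safe code
            }
            // SAFETY: base..base+len is a valid, DMA-coherent mapping
            // (established when the buffer was set up), and idx < len.
            Some(unsafe { ptr::read_volatile(self.base.add(idx)) })
        }
    }

Everything outside those few unsafe lines is checked by the compiler as usual.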
NOTE: I believe those parts of the kernel are still in C; Rust is just a thin wrapper over them for writing drivers. That's a reasonable way forward.
> Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.
That isn't true. @QuiEgo already mentioned no_std, which exists for exactly this purpose. Here is the reference: https://docs.rust-embedded.org/book/intro/no-std.html#bare-m... A minimal skeleton is below.
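Roughly, a bare-metal Rust binary looks like this (the entry point symbol depends on your target and linker script, so treat this as a sketch):

    #![no_std]  // no standard library, so no malloc-backed types at all
    #![no_main] // no OS-provided entry point either

    use core::panic::PanicInfo;

    // `core` is always available: slices, Option, atomics, etc.
    // Heap types like Box and Vec only exist if you opt into the `alloc`
    // crate and register your own #[global_allocator].

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {} // real firmware would initialize the hardware here
    }

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

Nothing here assumes a dynamic allocator, stateless or otherwise.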
So are you trying to say C is for good programmers only, and Rust lets even the idiots program? I think that's the wrong way to argue for Rust. Rust catches one kind of common problem, but it does not magically make logic errors go away.
No, they are not saying that at all??
No
> It seems unlikely [Apple] would throw away that investment and move to Rust.
Apple has invested in Swift, another high-level language with safety guarantees, which happens to have been created under Chris Lattner, otherwise known for creating LLVM. Swift's huge advantage over Rust, for application and system programming, is that it supports a stable ABI [1], which Rust, famously, does not (other than falling back to the C ABI, which degrades its promises; see the sketch after the footnote).
[1] For more on that topic, I recommend this excellent article: https://faultlore.com/blah/swift-abi/ Side note: the author of that article wrote Rust's std::collections API.
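To illustrate the fallback (my own minimal example): the only stable boundary Rust can offer today is the C ABI, and Rust-specific guarantees stop at that boundary:

    /// #[repr(C)] pins the struct layout to the C ABI, so other binaries
    /// can rely on it across compiler versions.
    #[repr(C)]
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }

    /// extern "C" plus #[no_mangle] gives a stable, C-callable symbol,
    /// but generics, trait objects, and borrow checking can't cross it.
    #[no_mangle]
    pub extern "C" fn point_norm(p: Point) -> f64 {
        (p.x * p.x + p.y * p.y).sqrt()
    }

Swift's stable ABI lets libraries evolve their types without this kind of lowering, which is what the linked article explains in depth.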
Swift does not seem as suitable for OS development as C or C++. [0] By default, Swift manages a lot of memory through reference counting, as I understand it, which is not always suitable for OS development.
[0]: Rust, while no longer officially experimental in the Linux kernel, does not yet have any major OS written purely in it.
What matters is what Apple thinks, and officially Swift is suitable, to the point that this is explicitly written in the documentation.
There's an allocation-free subset.
https://www.swift.org/get-started/embedded/
Rust's approach is overkill, I think. A lot of reference counting and stuff is just fine in a kernel.
Nothing wrong with using reference counting for OS development.
Apple is extending Swift specifically for kernel development.
Also, Embedded Swift came out of the effort to eventually use Swift for such use cases at Apple.