
Comment by throwawaymaths

2 days ago

My point about "strategy" is not theoretical, it's about implementation. Why does your lifetime typing have to be in the compiler? It could be part of a static checking tool, stay out of the way of routine development, and guarantee safety on release branches via CI, for example.

Also, you could have affine types without RAII, without macros, etc.

There's a very wide space of options that are theoretically equivalent to what Rust does and worth exploring for devex reasons.

First, let me say that you're bringing up some points that are orthogonal to "rust's strategy" for memory safety. Macros are not part of that strategy, and neither are many other ergonomic curiosities of Rust, and you are right to point out that those could be different without changing the core value proposition of Rust. There is plenty to say about those things, but I think it is better to focus on the points you raise about static analysis to start with.

Type systems are a form of static analysis tool, that is true, and in principle they could be substituted by other such tools. Python has MyPy, for example, which provides a static analysis layer, and Coverity has long been used on C and C++ projects. However, such tools cannot "get out of the way of routine development" -- if they are going to check the correctness of the program, they have to check the program, and routine development has to respond to those checks. Otherwise, how do you know, from commit to commit, that the code is sound?

The alternative, as other posters have noted, is that people don't run the static analysis tool, or run it rarely; both are antipatterns that create more problems than an incremental, granular approach to correctness does.

Regarding macros and many other ergonomic features of Rust, those are orthogonal to affine types, that is true; but to the best of my knowledge, Rust is the only language with tightly integrated affine types that is also moderately widely used, moderately productive, has a reasonable build system, package infrastructure and documentation story.

So when you say "there's a very wide space of options that are theoretically equivalent to what Rust does and worth exploring for devex reasons," what are those options? And how theoretical are they?

It's probably true, for example, that dependently typed languages could be even better from a static safety standpoint; but it's not clear that we can tell a credible story of improving memory safety in the kernel (or mail servers, database servers, or other large projects) with those languages this year or next year or even five years from now. It is also hard to say what the "devex" story will be, because there is comparatively little to say about the ecosystem for such nascent technologies.

  • There are highly successful projects out there that, for example, turn on Valgrind and ASan only in test or dev builds.

    > how do you know, from commit to commit, that the code is sound?

    These days it's easy to turn on full checks for every commit in origin; a pull request can in principle be rejected if any commit fails a check, and rewriting git history by squashing (annoying but not impossible) can get you past that if an intermediate commit failed.

    • But how is this "out of the way of routine development"?

      It seems like, at least part of the time, you're discussing distinct use cases -- for example, the quick scripts you mention (https://news.ycombinator.com/item?id=43132877) -- some of which don't require the same level of attention as systems programming.

      At other times, it seems like you're arguing it would be easier to develop a verified system if you only had to run the equivalent of Rust's borrow checker once in a while -- on push or on release -- but given that all the code will eventually have to pass that bar, what are you gaining by delaying the check?

Static analysis has the big disadvantage that it can and will be ignored.

  • That's fine. You don't need to run static analysis on a quick program you write yourself that, say, downloads a file off the internet and processes it, where you're the only consumer.

    Or an HPC workload for a physics simulation that gets run once on 400,000 cores; if it doesn't crash on your test run, it probably won't at scale.

    If you're writing an OS, you will turn it on. In fact, even the Rust ecosystem suggests this as a strategy, for example with Miri.
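    Miri is a good concrete illustration of that "checker outside the compiler" pattern: rustc accepts unsafe blocks as-is, and `cargo miri test` (under a nightly toolchain) interprets the program separately to catch undefined behavior in them. A minimal sketch of the kind of code Miri is pointed at (the function name is made up for illustration):

    ```rust
    // rustc compiles this unsafe block without complaint; it is Miri,
    // run as a separate tool, that checks the raw-pointer access for
    // undefined behavior.
    fn first_elem(v: &[i32]) -> i32 {
        assert!(!v.is_empty()); // guard keeps the dereference in bounds
        let p = v.as_ptr();
        unsafe { *p } // Miri verifies this read is valid
    }

    fn main() {
        let data = vec![10, 20, 30];
        println!("{}", first_elem(&data));
    }
    ```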

    • Are you going to write a "quick program" in C, though? That is what we are comparing against when we consider kernel development.

      I wouldn't argue that Rust is a good replacement for Makefiles, shell build scripts, Python scripts...

      An amazing thing about Rust, though, is that you actually can write many "quick programs" -- application level programs -- and it's a reasonably good experience.


  • How so? Because somebody forgot to run it before publishing a kernel release?

    • Because they can and will be ignored on a large scale unless the false-positive rate is pleasantly low. And, more importantly, there is a large amount of existing code that simply doesn't yet pass.

How do you know that those other options haven't been explored, and rejected?

And remember that your gripes with Rust aren't everyone's gripes. Some of the things you hate about Rust can be things that other people love about Rust.

Personally, I want all that stuff in the compiler. I don't want to have to run extra linters and validators and other crap to ensure I've done the right thing. I've found myself much more productive in languages where a successful compile means that everything that can (reasonably) be done to ensure correctness, according to that language's guarantees, has been checked and has passed.

Put another way, if lifetime checking were an external tool, and rustc would happily output binaries that violate lifetime rules, then you could not actually say that Rust is a memory-safe language. "Memory-safe if I do all this other stuff after the compiler tells me it's OK" is not memory-safe.

But sure, maybe you aren't persuaded by what I've said above. So what? Neither of us is a Linux kernel maintainer, and what we think about this doesn't matter.

  • You're arbitrarily drawing the line of where "memory safe" is. I could say Rust is memory unsafe because it allows you to write code in an unsafe block. Or you could lose memory safety if you use any sort of ECS system, or functionally lose memory "safe"ty if you turn a pointer lookup into an index into an array (a common strategy for performance, if not a way to trick the borrow checker).

    What you should really care about is whether your code is memory safe, not whether your language is memory safe.

    And this is what is so annoying about Rust evangelists. To Rust evangelists it's not about the code being memory safe (for example, you can bet your ass seL4 is memory safe, even though the code is in C).
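    The index-into-an-array move above is worth making concrete. A minimal sketch with a made-up toy `Arena` type: once references are replaced by indices, the borrow checker no longer tracks the link, so a stale index becomes a logic error (or a `None`) rather than undefined behavior -- memory-safe in Rust's sense, but not automatically correct.

    ```rust
    // Toy index-based arena, as in ECS-style designs. Indices escape
    // the borrow checker's tracking; bounds checks keep access safe.
    struct Arena {
        nodes: Vec<String>,
    }

    impl Arena {
        fn alloc(&mut self, v: &str) -> usize {
            self.nodes.push(v.to_string());
            self.nodes.len() - 1 // hand back an index, not a reference
        }
        fn get(&self, id: usize) -> Option<&String> {
            self.nodes.get(id) // bounds-checked: no UB, but can be stale
        }
    }

    fn main() {
        let mut arena = Arena { nodes: Vec::new() };
        let a = arena.alloc("first");
        arena.nodes.clear(); // invalidates every outstanding index
        // Memory-safe, yet semantically wrong to keep using `a`:
        assert!(arena.get(a).is_none());
        println!("stale index handled safely");
    }
    ```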

    • You want to verify all your C just for memory safety? I bet if you actually tried to verify C for memory safety, you would come screaming back to Rust.

      Also, seL4 is about 10k lines of code, designed around verification, sequential, and already a landmark achievement of verification. Linux is about three orders of magnitude more code, not designed around verification, and concurrent.