Comment by fuhsnn
3 days ago
I wonder if Rust or a future PL would evolve into allowing multiple borrow checker implementations with varying characteristics (compile speed, runtime speed, algorithm flexibility, etc.) that projects can choose from.
Rust already supports switching between borrow checker implementations.
It has migrated from a scope-based (lexical) borrow checker to the non-lexical (NLL) one, and has the experimental Polonius implementation as a next option. However, once a new implementation becomes production-ready, the old one gets discarded, because there's no reason to choose it: borrow checking is fast, and the newer ones accept strictly more (correct) programs.
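For example (a minimal sketch), this is the classic pattern the old scope-based checker rejected but NLL accepts, because the shared borrow is last used before the mutation:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // shared borrow of `v`
        println!("{first}"); // last use of the borrow
        v.push(4);           // OK under NLL; the old lexical checker rejected this
    }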
You also have the Rc and RefCell types, which give you greater flexibility at the cost of some runtime checks.
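For instance, a minimal sketch of that pattern: multiple owners via Rc, with exclusivity enforced at runtime by RefCell rather than at compile time:

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
        let alias = Rc::clone(&shared);    // a second owner, no compile-time aliasing error
        alias.borrow_mut().push(4);        // exclusivity is checked at runtime instead
        println!("{:?}", shared.borrow()); // [1, 2, 3, 4]
    }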
The new borrow checker is not yet all that fast. According to a recent report, it was 5000x slower than the existing one.
https://users.rust-lang.org/t/polonius-is-more-ergonomic-tha...
>I recommend watching the video @nerditation linked. I believe Amanda mentioned somewhere that Polonius is 5000x slower than the existing borrow-checker; IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.
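For context, the canonical kind of function NLL rejects but Polonius accepts looks roughly like this (a sketch; the identifiers are illustrative):

    use std::collections::HashMap;

    // NLL considers the borrow from `map.get` live across the `None` arm
    // and rejects this; Polonius accepts it.
    fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
        match map.get(&0) {
            Some(v) => v,
            None => {
                map.insert(0, String::new());
                map.get(&0).unwrap()
            }
        }
    }

Today you work around this by looking the key up twice or using the entry API.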
I think GP is talking about somehow being able to, for example, more seamlessly switch between manual borrowing and "checked" borrowing with Rc and RefCell.
We already have that, via multiple approaches: affine types (what Rust uses), linear types, effects, dependent types, formal proofs.
All have different costs and capabilities across implementation, performance and developer experience.
Then we have what everyone else besides Rust is actually going for: the productivity of automatic resource management (regardless of how it's achieved), coupled with one of the type systems above only for performance-critical code paths.
> affine types (what Rust uses)
I'd just like to interject for a moment. What you’re referring to as "affine types", is in fact, Uniqueness Types. The difference has to do with how they interact with unrestricted types. In Rust, these "unrestricted types" are references (which can be used multiple times due to implementing Copy).
Uniqueness types allow functions to place a constraint on the caller ("this argument cannot be aliased when you pass it to me"), but place no restriction on the callee. This is useful for Rust because (among other reasons) if a value is not aliased you can free it and be sure that you're not leaving behind references to freed data.
Affine types are the opposite - they allow the caller to place a restriction on the callee ("I'm passing you this value, but you may use it at most once"), which is not something possible to express in Rust's type system, because the callee is always free to create a reference from its argument and pass that reference to multiple functions.
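To make that last point concrete, here's a minimal sketch (`takes_ref` is an illustrative helper):

    fn takes_ref(_: &String) {}

    fn callee(s: String) {
        let r = &s;   // the callee freely creates a reference to its by-value argument
        takes_ref(r);
        takes_ref(r); // ...and uses it as many times as it likes
    }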
I would say it is perfectly accurate to call Rust's type system affine. At its core, "affine" means that the type system has exchange and weakening but not contraction, and that exactly characterizes Rust's type system. See <https://math.stackexchange.com/questions/3356302/substructur...> for an explanation of what those terms mean (that's in the context of a logic, but it's the same for type systems via the Curry-Howard correspondence).
This is often explained via the "do not use more than once rule", but that's not the actual definition, and as your example shows, following that simplified explanation to the letter can cause confusion.
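Concretely, a minimal sketch: discarding a value unused (weakening) compiles, while consuming it twice (contraction) does not:

    fn consume(_: String) {}

    fn main() {
        let _unused = String::from("weakening"); // never used: silently dropped, weakening is allowed
        let s = String::from("contraction");
        consume(s);
        // consume(s); // error[E0382]: use of moved value `s` -- contraction is rejected
    }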
> because the callee is always free to create a reference from its argument and pass that reference to multiple functions..
Passing a reference is not the same thing as passing the actual value, so this does not contradict affinity.
Yeah, that makes sense. The Rust type system isn't "affine" as in affine logic. Rust allows different forms of contraction, which affine logic strictly prohibits.
And some people like to claim that the Curry-Howard correspondence proves something about their type system, but this is only true for dependently typed languages.
And the proofs aren't about program behavior.
See https://liamoc.net/forest/loc-000S/index.xml
I would love some sort of affine types in languages like Kotlin; it just makes for cleaner code organization in my opinion.
It doesn't matter if it's purely "syntactical" because the language is garbage collected; just the fact of specifying what owns what and being explicit about multiple references is great imo.
Some sort of effects systems can already be simulated with Kotlin features too.
Programming language theory is so interesting!
What you actually want is the underlying separation logic, so you can precisely specify function preconditions and prove mid-function conditions, and then the optimizer can take all those "lemmas" and go hog-wild, right up to but not past what is allowed by the explicitly stated invariants.
"Rust", in this context, is "merely" "the usual invariants that people want" and "a suite of optimizations that assume those usual invariants, but not more or less".
Can you help me understand your comment with a simple example? Take slice::split_at and slice::split_at_mut:
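For reference, a sketch of their standard-library signatures as rustdoc presents them (eliding `const`):

    impl<T> [T] {
        pub fn split_at(&self, mid: usize) -> (&[T], &[T]);
        pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]);
    }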
What might their triples look like in separation logic?
The long answer to this question can be found in https://research.ralfj.de/thesis.html. :)
See Ralf's answer but let me give you a bit of extra flavor:
These functions do no memory accesses; from an operational perspective they are both essentially:

    p, n -> (p, p + n)
The separation logics I've seen all have what we might call a strong extensional-calculus / shallow-DSL-embedding flavor. What that means, roughly, is that there is a strong distinction between the "internal" program under scrutiny and fairly arbitrary external reasoning about it.
I bring this up in order to say that we're very far from "what is the principal type of this expression?" type questions. There are many, many ways one might type split_at/_mut depending on what the context requires. The interesting thing about these in Rust is really not the functions themselves, but & and &mut. Those types are stand-ins for some family of those myriad potential contexts, in the way that interfaces are always reifications of the possibilities of what the component on the other side of the interface might be.
In the "menagerie of separation logics" I was originally thinking of, there may not be singular & and &mut that work for all purposes. The most reusable form split_at indeed may be tantamount to the simple arithmetic pure function I wrote above, leaving to the caller the proof of showing whatever properties it needs are carried over from the original pointer to the new ones. Given the arithmetic relations, and the knowledge that nothing else is happening (related to the famous frame rule), the caller should be able to do this.
Rust's borrow checker has a fairly minimal compile time cost and does not impact codegen at all. Most of the compile time is spent on trait resolution, monomorphization, optimization passes in LLVM, and linking.
As I understand it the borrow checker only has false negatives but no false positives, correct?
Maybe a dumb question but couldn't you just run multiple implementations in parallel threads and whichever finishes first with a positive result wins?
> As I understand it the borrow checker only has false negatives but no false positives, correct?
The borrow checker is supposed to be a sound static analysis, yes. I think Ralf Jung's comment at https://news.ycombinator.com/item?id=44511416 says soundness hasn't been proved relative to tree borrows yet.
> Maybe a dumb question but couldn't you just run multiple implementations in parallel threads and whichever finishes first with a positive result wins?
IIUC when you're compiling reasonably-sized programs you're already using all the cores, so parallelizing here doesn't seem like it's going to net you much gain, especially if it means you're doing a lot of extra work.
Thanks, didn't see that!
> when you're compiling reasonably-sized programs you're already using all the cores
Only on full rebuilds. I would assume most build jobs with a human in the loop only compile a handful of crates at once.
In fact, as CPUs get more and more parallel, we'll cross the threshold where thread count surpasses work items more often, and then it will be time to get creative ;)
This presumes that checking composes, which it may not if you have orthogonal checker implementations. You might end up risking accepting an invalid program because part of it is valid under one checker and part under another, but the combination isn't actually valid. But maybe that's not actually possible in practice.
Borrow checking is function-local, so if the opsem model is the same and you run the different checkers per-function, there is no such risk.
I cannot imagine how that would work. You couldn't combine code that expects different borrowing rules to be applied. You'd effectively be creating as many sub-dialects as there are borrow checker implementations.
FWIW technically the rules are the same. How they go about proving that the rules are upheld for a program is what would be different.
I'm guessing you're referring to being able to change models without needing to change the code, but it's worth mentioning that there already is a mechanism to defer borrow-checking until runtime in Rust in the form of RefCell. This doesn't change the underlying exclusivity rules to allow aliasing mutable borrows, but it does allow an alternative to handling everything at compile time.
Deferring to runtime is not always great, since not only can it incur runtime overhead, the code can also panic if a violation is detected.
Using `try_borrow`/`try_borrow_mut` should avoid panics, but yes, the overhead is why it's the exception rather than the rule, and it has to be done manually with these types. I'm not making a value judgment on the utility of working with that model, only pointing out, in response to the parent comment, that one of the things they mention is somewhat possible today, at the cost of having to update code. Even if it were possible to do seamlessly, as I'm assuming they intend, I don't think it could be done without incurring _any_ runtime overhead; but I think that's kind of their point: it might be nice to be able to switch between models when doing different types of development.
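For illustration, a minimal sketch of the non-panicking variants:

    use std::cell::RefCell;

    fn main() {
        let cell = RefCell::new(0);
        let held = cell.borrow();                // outstanding shared borrow
        assert!(cell.try_borrow_mut().is_err()); // violation reported as Err instead of a panic
        drop(held);                              // release the shared borrow
        *cell.borrow_mut() += 1;                 // now the exclusive borrow succeeds
    }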
What’s wrong with the compile or runtime speed of the current one?
That would result in ecosystem splitting, which isn't great.