I think he meant "show me a true linked list / node graph in Rust that isn't unsafe". The reason is that it's not possible using C-style pointer following (or without just wrapping everything in smart pointers). What you've shown is exactly the tradeoff they were referring to. In Rust, the answer is: make sure the lifetime of all memory is explicitly managed, then use integers for the 'links' between nodes.
His point was that, for his programming, he wants to be able to use real pointers and real linked lists even if they're memory-unsafe, which Rust makes difficult or opaque. For example, with a linked list you could simulate one (to avoid `unsafe`) either by boxing everything (so all refs are actually smart pointers), or by using a container with a scoped memory lifetime and storing integers in an array as the "next" pointer. Beyond the extra ceremony, "integers as edges" doesn't actually remove the complexity; it just means you can't get a memory error (you can still have 'pointers' that point to the wrong index if you're rolling your own).
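To make that tradeoff concrete, here's a minimal sketch of the "integers in an array as the next pointer" pattern in entirely safe Rust (all names here are illustrative, not from any crate):

```rust
// Nodes live in a Vec, and `next` is an index instead of a pointer.
struct Node {
    value: i32,
    next: Option<usize>, // index into `nodes`, standing in for `Node*`
}

struct List {
    nodes: Vec<Node>,
    head: Option<usize>,
}

impl List {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None }
    }

    // Push to the front: allocate a slot in the Vec, point it at the old head.
    fn push_front(&mut self, value: i32) {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, next: self.head });
        self.head = Some(idx);
    }

    // Walk the "pointers" by index lookup instead of dereference.
    fn collect(&self) -> Vec<i32> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(self.nodes[i].value);
            cur = self.nodes[i].next;
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(3);
    list.push_front(2);
    list.push_front(1);
    println!("{:?}", list.collect()); // [1, 2, 3]
}
```

No `unsafe` anywhere, but note what happened: the borrow checker is satisfied only because indices aren't references, which is exactly why it also can't catch a wrong or stale index.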
Same with your graph code. Using a COO representation for a graph does in theory make it "memory safe" (albeit clumsier to use if you are doing pointer-following logic), but it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100, but those nodes were actually removed, so now you're pointing at the wrong node).
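A contrived sketch of that stale-index failure mode, in safe Rust:

```rust
// A COO-style graph: nodes in one Vec, edges as (src, dst) index pairs.
// Removing a node silently invalidates edges; the compiler can't help.
fn main() {
    let mut nodes = vec!["a", "b", "c"];
    let edges = vec![(0usize, 2usize)]; // edge a -> c, recorded by index

    // Remove "b": swap_remove moves "c" into slot 1, so "c" is now at
    // index 1 -- but the edge still records index 2.
    nodes.swap_remove(1);

    for &(src, dst) in &edges {
        // Memory safe, no UB -- but logically wrong: the edge dangles.
        println!("{} -> {:?}", nodes[src], nodes.get(dst)); // a -> None
    }
}
```

Depending on how the removal shuffles the Vec, `nodes.get(dst)` can just as easily return Some of the *wrong* node instead of None, which is the quieter and nastier version of the bug.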
I think the point (which I agree with for things like linked lists, graphs, and compilers) is that, depending on your use case, the "safety" guarantees of Rust just make it harder to write the simplest, most understandable code. Now instead of `Node* next` I have lifetimes, integer references, two collections (nodes and edges) to keep in sync, smart pointers, etc. Previously my complexity was making sure `next != null`; now it's a ton of boilerplate and abstractions, performance hits, or more subtle bugs (like 'next' indices getting out of sync with the array of 'nodes').
If there were a way to explicitly track the lifetime of an arbitrary graph/tree of pointers at compile time, we wouldn't need garbage collection. It's not solvable at compile time, and the complexity has to live somewhere.
> it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but actually those nodes were removed, so now you're pointing at the wrong node
This is not actually a different kind of bug; it's just use-after-free, which you can of course get when using pointers instead of indices.
Actually it's slightly safer than pointer use-after-free because it is type safe and there's no UB.
Also, some of the Rust arenas give you keys (equivalent to pointers) that can check for this. There's a good list here (see "ABA mitigation"):
https://donsz.nl/blog/arenas/
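For anyone who hasn't seen it, the generational-key idea behind that ABA mitigation can be sketched in a few dozen lines of safe Rust. This is a toy illustration under my own naming, not any crate's actual API:

```rust
// A toy generational arena: each key carries a generation, so a stale key
// is detected instead of silently aliasing whatever reused its slot.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Key {
    index: usize,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Key {
        // Reuse the first free slot if there is one.
        for (i, slot) in self.slots.iter_mut().enumerate() {
            if slot.value.is_none() {
                slot.value = Some(value);
                return Key { index: i, generation: slot.generation };
            }
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Key { index: self.slots.len() - 1, generation: 0 }
    }

    fn remove(&mut self, key: Key) -> Option<T> {
        let slot = self.slots.get_mut(key.index)?;
        if slot.generation != key.generation {
            return None; // stale key: refuse, don't alias
        }
        slot.generation += 1; // invalidate every outstanding key to this slot
        slot.value.take()
    }

    fn get(&self, key: Key) -> Option<&T> {
        let slot = self.slots.get(key.index)?;
        if slot.generation != key.generation {
            return None; // key outlived the value: caught, not UB
        }
        slot.value.as_ref()
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.insert("a");
    arena.remove(a);
    let b = arena.insert("b"); // reuses slot 0, now at generation 1
    assert_eq!(arena.get(a), None);       // stale key rejected
    assert_eq!(arena.get(b), Some(&"b")); // fresh key works
}
```

So it doesn't make the logic bug impossible, but it turns "pointing at the wrong node" into a detectable `None` at the lookup site, which is a meaningful step up from both raw pointers and plain indices.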
Err https://github.com/petgraph/petgraph
What are you asking for exactly?
Forgive me if I've misunderstood this thread, but there are unsafe declarations in that crate. Is there really any difference between using unsafe in your own code versus wrapping it inside some crate?
I guess you are making the point that the user does not have to concern themselves with the unsafe declarations?
> Is there really any difference between using unsafe in your own code, versus wrapping it inside some crate?
Yes, in the same way that there's a difference between using `std::Vec` (which uses `unsafe`), and writing an unsafe Vec class yourself.
Or even the difference between using Python (which wraps an unsafe CPython implementation), and doing everything in unsafe Python code.
The difference is that widely used code like CPython and `std::Vec` is much, much better tested and audited than anything I would write myself, because so many people use it. This is a continuum, so something like petgraph is going to be less well tested than `std::Vec`, but still far better tested than anything I've written.
I would say yes, there’s a difference, in general. I would much rather leave the unsafe code to crates used and tested by many other applications, than have them in the application code itself.
I don't think it's unreasonable (even though I am getting marked down for daring to ask) to expect people making assertions to show examples of what they are talking about, even when those assertions are well understood *within their own community* (that is, not necessarily universally known).
You're correcting someone, so it's clear that your understanding isn't universal, and example code is the absolute minimum.
It doesn't seem clear what code you're asking for.