The book also has a chapter on reference counting ;-)
Rust also uses reference counting, probably the worst sort of garbage collection.
Only when used in a naïve way, which Rust does not. For example, the increment happens only when "clone" is called and the decrement only at scope exit, and thanks to Rust's ownership/borrow checking that happens rarely, combining the best of both worlds (but yes, implementations that aggressively increment/decrement in loops and on every function call can be very slow). Rust also separates Arc (atomic counts) and Rc (non-atomic counts) and enforces the usage scenarios in the type checker, giving you cheap Rc in single-threaded code. Reference counting done in a smart way works pretty well, but you obviously have to be a little careful about cycles (which in my experience are pretty rare and fairly obvious when you have such a data type).
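To make that concrete, a minimal sketch: `Rc::strong_count` lets you watch the count, and it only moves on the explicit clone and at scope exit.

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(vec![1, 2, 3]);
        assert_eq!(Rc::strong_count(&a), 1);
        {
            // The count is touched only here, on the explicit clone...
            let b = Rc::clone(&a);
            assert_eq!(Rc::strong_count(&a), 2);
            println!("b sees {:?}", b);
        } // ...and here, when `b` goes out of scope.
        assert_eq!(Rc::strong_count(&a), 1);
    }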
The increment/decrement calls only occur on an explicit call to .clone(). No .clone(), no increment/decrement.
You won't see many clones in Rust code.
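Mostly because borrowing passes straight through an `Rc` without touching the count, so there's rarely a reason to clone. A small sketch of that:

    use std::rc::Rc;

    // Takes a plain borrow, so calling it never touches a reference count.
    fn sum(xs: &[i32]) -> i32 {
        xs.iter().sum()
    }

    fn main() {
        let data = Rc::new(vec![1, 2, 3]);
        // Deref coercion borrows straight through the Rc: no clone needed.
        assert_eq!(sum(&data), 6);
        assert_eq!(Rc::strong_count(&data), 1); // still 1
    }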
It's how often reference counts are adjusted on hot paths that matters (including in libraries), and back to the original point, reference counting doesn't let you free groups of objects in one go (unlike a tracing GC).
Also it'd be nice if the reference counts were stored separately from the objects. Storing them alongside the object being tracked is a classic mistake made by reference count implementations (it spreads the writes over a large number of cache lines). I was actually surprised that Rust doesn't get this right.
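For illustration only, a hypothetical side-table layout (not how Rust's `Rc` actually works: `Rc` stores the strong/weak counts in the same heap allocation as the value). Here all counts live in one dense array, so count updates land on a few hot cache lines instead of one line per object:

    // Hypothetical sketch of out-of-line reference counts.
    struct CountTable {
        counts: Vec<u32>, // slot i holds the refcount for object i
    }

    struct Handle {
        slot: usize, // index into the shared count table
    }

    impl CountTable {
        fn incref(&mut self, h: &Handle) {
            self.counts[h.slot] += 1;
        }
        // Returns true when the count hits zero and the object can be freed.
        fn decref(&mut self, h: &Handle) -> bool {
            self.counts[h.slot] -= 1;
            self.counts[h.slot] == 0
        }
    }

    fn main() {
        let mut table = CountTable { counts: vec![1] };
        let h = Handle { slot: 0 };
        table.incref(&h);           // count: 2
        assert!(!table.decref(&h)); // count: 1, still live
        assert!(table.decref(&h));  // count: 0, free object 0
    }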
Another issue with manual memory management is that you can't compact the heap.
Tracing is the worst in terms of performance
Anyone claiming something like this obviously hasn't dug into GCs. Do you honestly think that writing to memory on every reference update, especially atomically, is anywhere near the performance of a GC that can do most of its work in parallel and just flip a bit to effectively "delete" everything that's no longer reachable?
That depends. Deallocating a zillion little objects one at a time can be slower than doing them all in a batch.
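A hedged sketch of the batching point: give a zillion little objects one owner and they go away in one sweep when the owner drops, rather than via a per-object free.

    struct Node {
        value: u64,
    }

    fn main() {
        // Arena-style: one Vec owns a million nodes in one contiguous block.
        let arena: Vec<Node> = (0..1_000_000).map(|value| Node { value }).collect();
        let total: u64 = arena.iter().map(|n| n.value).sum();
        println!("sum = {total}");
    } // `arena` drops here: one deallocation, not a million frees.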
Not really, here it is winning hands down over Swift's ARC implementation.
https://github.com/ixy-languages/ixy-languages