
Comment by seanmcdirmid

13 years ago

The caching misbehavior of reference counting has been greatly exaggerated, especially in the context of UI, where responsiveness is much more important than raw CPU speed. Also, the ref counting tradeoff seems to work better on devices (e.g. all the cool kids [1] are doing it).

[1] https://developer.apple.com/library/mac/documentation/Genera...

.NET still does GC, it is only the WinRT APIs (something like COM) that manage resources through ref counting. There is some cool interop magic that makes this somewhat transparent to the programmer.

> The caching misbehavior of reference counting has been greatly exaggerated, especially in the context of UI, where responsiveness is much more important than raw CPU speed. Also, the ref counting tradeoff seems to work better on devices (e.g. all the cool kids [1] are doing it).

Apple's ARC is a compiler hack. It only works if you happen to use a set of Cocoa libraries that the compiler recognizes and that allow for some magic incantations.

You cannot make an arbitrary Objective-C library ARC-aware.

ARC is a good solution for Objective-C, because Apple never managed to have their GC working safely in all situations, given the underlying C type system.

> .NET still does GC, it is only the WinRT APIs (something like COM) that manage resources through ref counting. There is some cool interop magic that makes this somewhat transparent to the programmer.

True, but C++ does not and you still have the performance penalty of reference counting.

The only way to have reference counting with good performance is to do what Parasol or Objective-C do: remove unnecessary increment/decrement operations through dataflow analysis.

That is something you cannot get just by using the WinRT runtime; you need to rely on the cleverness of the compiler.

Actually, are you aware that reference counting tends to be presented in the first chapter of many CS books about garbage collection as the poor man's GC?

  • > Apple's ARC is a compiler hack. It only works if you happen to use set of Cocoa libraries that are recognized by the compiler

    I don't understand. How does ARC rely on more than what NSObject provides? I'm coming from the C++ shared_ptr world, and ARC does not feel much different so far (less explicit, but easier to optimise).
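    To illustrate the comparison being drawn (a sketch of mine, with made-up names): `shared_ptr` gives the same ownership semantics ARC automates. A copy is a "retain", a destructor is a "release", a `weak_ptr` behaves like a zeroing weak reference, and the object dies when the last owner goes away.

    ```cpp
    #include <cassert>
    #include <memory>

    // Hypothetical type for the sketch.
    struct Node { int id = 0; };

    int main() {
        std::weak_ptr<Node> observer;
        {
            auto a = std::make_shared<Node>();
            observer = a;                    // weak ref: no retain
            {
                std::shared_ptr<Node> b = a; // "retain": count -> 2
                assert(a.use_count() == 2);
            }                                // b destroyed: "release", count -> 1
            assert(a.use_count() == 1);
        }                                    // last owner gone: object freed
        assert(observer.expired());          // like a zeroing weak ref in ARC
        return 0;
    }
    ```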

  • > True, but C++ does not and you still have the performance penalty of reference counting.

    Not very relevant. For UI code, responsive resource allocation and de-allocation is much more important than saving a few percent of CPU cycles.

    > Actually, are you aware that reference counting tends to be presented on the first chapter of many CS books about garbage collection as poor man's GC?

    Yes! Sometimes the conventional wisdom doesn't hold in the end, or it doesn't hold for all use cases. But I should have been clearer: ref counting is GC, whereas what I refer to as GC involves periodically sweeping from reference roots to identify garbage.

    • > > True, but C++ does not and you still have the performance penalty of reference counting.

      > Not very relevant. For UI code, responsive resource allocation and de-allocation is much more important than saving a few % CPU cycles.

      I beg to differ. I consider the missile-control radar systems of military applications a bit more important than UIs.

      There are military applications using real time GC systems, which I can list here if you want.

      I also have a German magazine here (Making Games Magazin 1/13) that explains how The Witcher 2 for the Xbox 360 makes use of a GC.

      Anyway, one thing I do agree on: automatic memory management, regardless of the form, should be the default in any programming language.

      Pure manual memory management should be treated like Assembly. It will never go away and should only be used for the very few cases where there is no other way around the problem at hand.
