Compared to what, though? And is that still the case if all OS components use whatever it is, as opposed to a few applications? Memory efficiency is crucial for overall system performance, and ARC is highly memory-efficient compared to every production GC I’m aware of.
Peak memory is not the only reason ARC is memory-efficient, though. The main reason is its better memory-access behavior: it doesn't have to scan the heap for pointers. Server GCs assume all memory containing pointers is cheap to access, which is not true if some of it is swapped out.
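To make the pointer-scanning point concrete, here's a toy Swift sketch (not either runtime's actual implementation; Node and markPhase are just illustrative names): releasing the last ARC reference only touches the objects being freed, while a tracing collector's mark phase has to read every live object that might contain pointers, faulting in any pages that were swapped out.

    // Toy sketch of the access-pattern difference, not either runtime's real code.
    final class Node {
        var next: Node?
        init(next: Node? = nil) { self.next = next }
        deinit { /* runs the instant the refcount hits zero */ }
    }

    // A tracing collector has to walk every live object that may hold pointers;
    // reading those objects can fault swapped-out pages back in.
    func markPhase(roots: [Node]) -> Set<ObjectIdentifier> {
        var live = Set<ObjectIdentifier>()
        var stack = roots
        while let obj = stack.popLast() {
            guard live.insert(ObjectIdentifier(obj)).inserted else { continue }
            if let n = obj.next { stack.append(n) }   // pointer scan touches the heap
        }
        return live
    }

    var head: Node? = Node(next: Node(next: Node()))
    _ = markPhase(roots: [head!])   // GC-style reclamation: scan the whole live graph
    head = nil                      // ARC-style: three decrements, three deinits, no scan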
https://github.com/ixy-languages/ixy-languages
iOS ships phones as fast as Android's with half the RAM. So I'd say that's a real comparison of ARC vs. tracing GC.
And Android rules the mobile world with an 80% market share.
Also, iOS applications tend to crash due to memory leaks or running out of memory.
So yeah, a real comparison.
iPhones are three generations ahead in single-core CPU performance, so that's just a biased take.