Comment by Dwedit
3 years ago
What I really want out of a garbage collector is a "Collect" function with a deadline. Pick a max time it's allowed to run before stopping and returning to the program.
Real-time GCs do exist, such as IBM's Metronome GC, though I'll be honest and say I haven't heard of many real-time GCs besides Metronome. Many newer GCs have reduced pause times dramatically, but that's orthogonal to real-time GC: you can make pause times infinitesimally small while still never letting the mutator actually make progress.
Here’s a fairly extensive review that mentions some recent research on “real-time” GCs, including one with a lower bound for mutator utilization: https://eschew.wordpress.com/2016/09/02/summarizing-gc/.
See PTC and Aicas.
I've achieved GC timing that is good enough for real-time competitive game hosting using .NET 6+. The worst case over 4 hours of load testing was an 8ms spike, with an average of 1-2ms.
The magic trick is to intentionally collect as often as reasonably possible (i.e. at batch/frame/tick processing boundaries) and avoid using sophisticated GC schemes that involve multiple threads or asynchrony.
Oh, and obviously you need to minimize allocations throughout or it won't matter.
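Roughly, the per-tick shape looks like the sketch below. This is a minimal illustration of the pattern, not the exact server code: RunOneTick, the 16ms budget, and collecting only gen 0/1 are illustrative choices, and the project is assumed to run the plain workstation, non-concurrent GC.

```csharp
// Minimal sketch of "collect at tick boundaries" in C#/.NET.
// Assumes workstation, non-concurrent GC, e.g. via the .csproj properties
//   <ServerGarbageCollection>false</ServerGarbageCollection>
//   <ConcurrentGarbageCollection>false</ConcurrentGarbageCollection>
// RunOneTick(), the 16 ms budget, and the gen-1 limit are illustrative choices.
using System;
using System.Diagnostics;
using System.Threading;

class GameLoop
{
    const double TickMs = 16.0;            // ~60 ticks per second
    static readonly Stopwatch Clock = new();

    static void Main()
    {
        while (true)
        {
            Clock.Restart();

            RunOneTick();                   // per-tick game/simulation work (placeholder)

            // Collect at the tick boundary, while nothing latency-critical is in flight.
            // Limiting it to gen 0/1 and skipping compaction keeps the forced pause short.
            GC.Collect(1, GCCollectionMode.Forced, blocking: true, compacting: false);

            // Sleep off whatever is left of the tick budget.
            double remaining = TickMs - Clock.Elapsed.TotalMilliseconds;
            if (remaining > 0)
                Thread.Sleep(TimeSpan.FromMilliseconds(remaining));
        }
    }

    static void RunOneTick()
    {
        // Placeholder: real per-tick logic goes here, allocating as little as possible.
    }
}
```

Collecting the young generations every tick keeps each individual pause small; the trade-off is paying a little GC time every frame instead of a large pause occasionally.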
Nim has exactly that with the GC_step proc.
https://nim-lang.org/1.4.0/gc.html
However, recent versions (and the upcoming 2.0) are moving towards a different approach that is also suitable for deterministic real-time systems: ARC, which is basically alloc and free calls inserted automatically by the compiler using static analysis (no "runtime").
That is necessary but often not sufficient: it mostly just moves the time from wherever the garbage is created to the bulk 'collect' step. I've run collect every frame to keep stalls down, but the time always creeps up and up, and you end up doing forensic analysis of which code is making the garbage in the first place.
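For what it's worth, that forensic pass can start with something as blunt as a per-tick allocation counter. A sketch under assumed names (SimulateTick, the 1 KB budget, and the console logging are placeholders, not anyone's actual tooling):

```csharp
// Sketch of tracking per-tick allocations so garbage creep is caught early.
// GC.GetAllocatedBytesForCurrentThread() reports bytes allocated by this thread.
// SimulateTick() and the 1 KB budget are illustrative placeholders.
using System;

class AllocationWatch
{
    const long BudgetBytes = 1024;          // arbitrary per-tick allocation budget

    static void Main()
    {
        for (int tick = 0; tick < 1000; tick++)
        {
            long before = GC.GetAllocatedBytesForCurrentThread();

            SimulateTick();

            long allocated = GC.GetAllocatedBytesForCurrentThread() - before;
            if (allocated > BudgetBytes)
                Console.WriteLine($"tick {tick}: allocated {allocated} bytes (over budget)");
        }
    }

    static void SimulateTick()
    {
        // Placeholder per-tick work with an accidental allocation to flag.
        var scratch = new byte[2048];
        scratch[0] = 1;
    }
}
```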
Not really possible unless you have a real-time OS. Virtual memory and concurrency mean there is really no way to bound how long anything will take.
You can have absolutely zero pauses if you don't do heap compaction.
Isn’t that just incremental garbage collection?