Comment by cmrdporcupine

16 hours ago

Had similar epiphanies many years ago (ugh, I'm old) when I was playing around writing a garbage-collected persistent (in the 'stored on [spinny spinny] disk' sense, not the FP sense of the word) programming language / runtime. This was back when it was roughly infeasible to hold large "worlds" of objects purely in memory on machines of that era, so intelligently paging objects in and out was imperative. (Aside: I think with the recent doubling... tripling of RAM prices, this area of thinking is becoming imperative again)...

In any case, if one is doing GC in such a language, a full tracing collector (whether copying or mark & sweep) is madness: finding live references means walking nearly the entire heap, including the portions living in secondary storage, and now you're in a world of pain.

In this case, an intelligent cycle-collecting garbage collector in the Bacon style was the answer. You keep an in-memory table of reference counts, and you only trace when you hit cycles. [and hopefully design your language semantics to discourage cycles]
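To make the bookkeeping half concrete, here is a minimal sketch in Python (names and structure are mine, not from Bacon's papers): counts live in an in-memory table, a decrement to zero frees immediately, and a decrement that doesn't reach zero buffers the object as a possible cycle root for a later local trace.

```python
class RefCountTable:
    """Toy in-memory refcount table with cycle-candidate buffering."""

    def __init__(self):
        self.counts = {}         # object id -> reference count
        self.candidates = set()  # possible roots of garbage cycles

    def incref(self, oid):
        self.counts[oid] = self.counts.get(oid, 0) + 1
        self.candidates.discard(oid)   # a fresh reference means it's live

    def decref(self, oid):
        self.counts[oid] -= 1
        if self.counts[oid] == 0:
            # Definitely dead: reclaim immediately, no trace needed.
            del self.counts[oid]
            self.candidates.discard(oid)
            return "freed"
        # Count dropped but didn't reach zero: the object may now be kept
        # alive only by a cycle, so remember it for the lazy cycle collector.
        self.candidates.add(oid)
        return "candidate"
```

The point is that the expensive trace is deferred: most objects die at refcount zero and are reclaimed without ever touching the candidate set.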

> you only trace when you hit cycles

How do you tell when you've hit a cycle?

> hopefully design your language semantics to discourage cycles

Why? Cyclical structures can be very useful. For example, it can be very handy in many situations for a contained object to have a back-pointer to its container.

[UPDATE] Two responses have now pointed out that this particular case can be handled with weak pointers. But then I can just point to a general graph as an example where that won't work.

  • > For example, it can be very handy in many situations for a contained object to have a back-pointer to its container.

    That's not a true cycle, it's just a back link for which "weak" reference counts suffice. The containment relation implies that the container "owns" the object, so we don't need to worry about the case where the container might just go away without dropping its contents first.

    (Edit: I agree that when dealing with a truly general graph some form of tracing is the best approach. These problem domains are where tracing GC really helps.)
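In Python terms, the weak back link described in the reply above looks like this (a toy illustration, my own names, relying on CPython's immediate refcount-based reclamation):

```python
import weakref

class Container:
    def __init__(self):
        self.items = []                 # strong refs: the container owns its items

    def add(self, item):
        item.owner = weakref.ref(self)  # weak back link: no strong cycle is formed

class Item:
    pass

c = Container()
it = Item()
c.add(it)
assert it.owner() is c      # back-pointer resolves while the container lives
del c                       # nothing strong points at the container any more
assert it.owner() is None   # back link clears itself; no cycle to collect
```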

  • > For example, it can be very handy in many situations for a contained object to have a back-pointer to its container.

    Does it actually need an owning reference, though, or would a weak reference suffice? Usually the latter is enough.

    • A fair point, but then you're still putting the burden on the programmer to figure out where a weak reference is appropriate.

      But then I'll just choose a different example, like a general graph.

  • > How do you tell when you've hit a cycle?

    https://pages.cs.wisc.edu/~cymen/misc/interests/Bacon01Concu...

    TL;DR: there are heuristics that flag likely cycle candidates, and then you trigger a local trace to confirm.

    > Why?

    Because then you incur the cost of a trace -- and potentially paging in from slow-slow disk -- vs a simple atomic refcount.

    Even just a localized trace on live objects is a pointer-chasing cache & branch prediction killer.
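Mechanically, that local trace is the "trial deletion" from the linked Bacon & Rajan paper. A toy Python version (my names; recursion depth, concurrency, and the paper's coloring/buffering optimizations all ignored):

```python
def collect_cycle(root, rc, children):
    """Trial deletion over one candidate's subgraph.

    rc: object -> reference count; children: object -> referenced objects.
    Returns the set of objects found to be an unreachable cycle.
    """
    # Phase 1: trial-delete internal references reachable from the root.
    gray = set()
    def mark_gray(o):
        if o in gray:
            return
        gray.add(o)
        for c in children[o]:
            rc[c] -= 1           # pretend this internal edge is gone
            mark_gray(c)
    mark_gray(root)

    # Phase 2: anything whose count is still > 0 is referenced from outside
    # the subgraph; restore it and everything it reaches.
    live = set()
    def scan_live(o):
        if o in live:
            return
        live.add(o)
        for c in children[o]:
            rc[c] += 1           # undo the trial decrement
            scan_live(c)
    for o in list(gray):
        if o not in live and rc[o] > 0:
            scan_live(o)

    # Phase 3: whatever remains is garbage held alive only by the cycle.
    dead = gray - live
    for o in dead:
        del rc[o], children[o]
    return dead
```

Note how the trace is bounded by the candidate's subgraph rather than the whole heap, which is exactly what makes this tolerable when most of the heap lives on disk.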

I’m curious - did fragmentation end up being a significant issue, whether in memory or offloaded?

  • I never got far enough to push that into a production system but I suspect it would have, yes.

    I can see how a periodic compacting phase could be useful in a system like that.

    In the DB world there's good research around similar topics, e.g. LeanStore and Umbra -- Umbra in particular does some nice things with variable-sized buffers that I believe are expected to help with fragmentation: https://db.in.tum.de/~freitag/papers/p29-neumann-cidr20.pdf