Graphs are common. But you don't have to represent each edge as a pointer. For example, you can represent them using (sparse) adjacency matrices. Or you can represent edges using a pointer in each direction (even for a directed graph), or some other data structure (as is commonly the case in triangle mesh data structures). Lots of options. Most do not require GC.
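To make the point concrete, here's a minimal sketch (my own illustration, not anyone's actual code) of one of those options: edges stored as indices into a node array rather than as pointers. Cycles are harmless and nothing needs a garbage collector; dropping the graph frees everything at once.

```rust
// Sketch: a directed graph where "edges" are just indices, not pointers.
// The whole structure is owned by one Vec pair, so no GC or ref-counting
// is needed even when the graph contains cycles.
struct Graph {
    nodes: Vec<String>,   // node payloads
    adj: Vec<Vec<usize>>, // adj[i] = indices of i's out-neighbors
}

impl Graph {
    fn new() -> Self {
        Graph { nodes: Vec::new(), adj: Vec::new() }
    }

    fn add_node(&mut self, label: &str) -> usize {
        self.nodes.push(label.to_string());
        self.adj.push(Vec::new());
        self.nodes.len() - 1 // the new node's index is its handle
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.adj[from].push(to); // directed edge as a plain integer
    }
}

fn main() {
    let mut g = Graph::new();
    let a = g.add_node("a");
    let b = g.add_node("b");
    g.add_edge(a, b);
    g.add_edge(b, a); // a cycle; harmless, since indices aren't owning pointers
    println!("out-neighbors of a: {:?}", g.adj[a]);
} // g dropped here: both Vecs freed in one shot, no cycle collection needed
```

The same idea underlies arena- and index-based graph libraries; a sparse adjacency matrix (e.g. CSR) is just a more compact encoding of the same `adj` data.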
No, but they are under-specified. OP is specifically working with a document-hierarchy data structure with a natural ownership/weak-pointer distinction to exploit -- no need to abstract it to a general graph.
Yes, but then they also said:
> hopefully design your language semantics to discourage cycles
thus expanding the scope of their comment beyond that specific use case.
Yes, but they said that in the context of a tailored language for persistent/HDD-backed data, where performance implicitly becomes part of correctness rather than an orthogonal concern. ("to find live references means walking nearly the entire heap including the portions living in secondary storage, and now you're in a world of pain")
So the "increased cognitive overhead" is intrinsic to the problem domain, not an unforced defect of the language design. Overgeneralization in such a case would induce even worse overhead as there'd be no user-level way to fix perf.