Comment by Analemma_

2 days ago

Memory allocators are something I expect to degrade rapidly in the absence of continuous updates, because the world changes underneath you. Changing page sizes, new microcode latencies, new security features, etc. all introduce either outright breakage or at least shift the optimal allocation strategy, making your old profiling obsolete. Not to mention the article already pointed out one instance where a software stack (KDE, in that case) used allocation patterns that broke an earlier version completely. Even though that's fixed now, any language runtime update or new feature could introduce a new allocation style that grinds you down.

As much as it's nice to think software can be done, I think something so closely tied to the kernel and hardware and the application layer, which all change constantly, never can be.

“Software is just done sometimes” is a common refrain in communities where irreplaceable software projects are often abandoned. The community consensus has a tendency to become “it is reliable and good enough, so it must be done”.