Comment by dpe82
2 days ago
Why would they have to change? Sometimes software development is largely "done" and there isn't much more you need to do to a library.
While I certainly wish that more software would reach a "done" stage, I don't think jemalloc is necessarily there yet. Unfortunately I'm aware of bugs in the current version of jemalloc, including memory leaks, that show up under certain environment configurations. I know the folks who found them were looking to report them, but I guess that won't happen now.
Even from a quick look at the open issues, I can see https://github.com/jemalloc/jemalloc/issues/2838 and https://github.com/jemalloc/jemalloc/issues/2815 as two examples, and there's a fair number of other issues still open against the repository.
So that'll leave projects like redis & valkey with some decisions to make.
1) Keep jemalloc and accept things like memory leak bugs.
2) Fork and maintain their own version of jemalloc.
3) Spend time replacing it entirely.
4) Hope someone else picks it up?
jemalloc is used enough at Amazon that it would make sense for them to maintain it, but that's not really their style.
jemalloc is used as an easy performance boost by probably every major Ruby on Rails server.
Memory allocators are something I expect to degrade rapidly in the absence of continuous updates, as the world changes underneath you. Changing page sizes, new microcode latencies, new security features, etc. all introduce either outright breakage or at least shift the optimum allocation strategy, making your old profiling obsolete. Not to mention the article already pointed out one instance where a software stack (KDE, in that case) had an allocation profile that completely broke an earlier version. Even though that's fixed now, any language runtime update or new feature could introduce a new allocation style that grinds you down.
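To make the page-size point concrete, here's a minimal sketch (mine, not taken from any allocator's actual code) of how a baked-in assumption surfaces; kAssumedPageSize is a hypothetical stand-in for whatever constant an unmaintained allocator compiled in:

    #include <cstdio>
    #include <unistd.h>

    int main() {
        // Hypothetical baked-in assumption; arm64 kernels are often
        // configured for 16 KiB or 64 KiB pages instead.
        constexpr long kAssumedPageSize = 4096;
        long actual = sysconf(_SC_PAGESIZE);  // page size of the running kernel
        if (actual != kAssumedPageSize)
            std::printf("page size mismatch: assumed %ld, kernel uses %ld\n",
                        kAssumedPageSize, actual);
        return 0;
    }

Apple's arm64 machines already run 16 KiB pages, which is exactly the kind of shift that turns an old assumption into outright breakage.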
As much as it's nice to think software can be done, I think something so closely tied to the kernel and hardware and the application layer, which all change constantly, never can be.
“Software is just done sometimes” is a common refrain in communities where irreplaceable software projects are often abandoned. The community consensus has a tendency to become “it is reliable and good enough, so it must be done”.
For an example of why an allocator is a maintenance treadmill, consider that C++ (relatively) recently added sized delete, and Linux recently gained transparent huge pages.
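For the sized-delete half, a minimal sketch of what changed: since C++14 the compiler may call a size-taking operator delete, which lets a size-class allocator go straight to the right free list instead of looking the size up from the pointer. The printf-ing definitions below are purely illustrative, not how any real allocator does it (and note Clang long required -fsized-deallocation before it would emit the sized call):

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Pair the replacement deletes with a matching new so malloc/free stay consistent.
    void* operator new(std::size_t size) {
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc{};
    }

    // C++14 sized deallocation: the caller hands over the object's size.
    void operator delete(void* p, std::size_t size) noexcept {
        std::printf("sized delete: %zu bytes\n", size);
        std::free(p);
    }

    // The unsized form is still required for callers that don't know the size.
    void operator delete(void* p) noexcept {
        std::free(p);
    }

    int main() {
        delete new int{42};  // compilers emit the sized form when the type is known
    }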
It's been 14 years since THP was added to the kernel[1]; surely we're past calling that "recent" :)
[1] https://www.kernelconfig.io/config_transparent_hugepage
> In particular, the seeds for principled huge page allocation (HPA) were sown way back in 2016! HPA work continued apace for several years, slowed, then stagnated as tweaks piled on top of each other without the requisite refactoring that keeps a codebase healthy. This feature trajectory recently cratered.
But if they'd declared the allocators "done" 15 years ago, then you wouldn't have it.
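To ground the THP side: huge pages aren't free, so an allocator has to actively opt regions in or out, and a pre-THP allocator simply never learned the verb. A minimal Linux-only sketch of the opt-in hint:

    #include <cstdio>
    #include <sys/mman.h>

    int main() {
        const size_t len = 8u << 20;  // 8 MiB, spans several 2 MiB huge pages
        void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
        // A maintained allocator decides per region whether 2 MiB backing
        // helps (dense, hot heaps) or just wastes memory (sparse ones).
        if (madvise(p, len, MADV_HUGEPAGE) != 0)
            std::perror("madvise(MADV_HUGEPAGE)");  // e.g. kernel built without THP
        munmap(p, len);
        return 0;
    }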
Another example is rseq (which was originally implemented for tcmalloc).
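For a sense of why rseq matters to an allocator: the kernel publishes the current CPU id into a per-thread area on every reschedule, so a per-CPU cache can find its shard without a syscall or an atomic. A rough sketch of just the read side, assuming Linux with glibc 2.35+ (which registers the rseq area itself) and a compiler that has __builtin_thread_pointer():

    #include <cstdio>
    #include <sys/rseq.h>  // __rseq_offset, __rseq_size, struct rseq

    int main() {
        if (__rseq_size == 0) {  // kernel or libc without rseq support
            std::puts("rseq not available");
            return 0;
        }
        // glibc registered a struct rseq for this thread at a fixed offset
        // from the thread pointer; the kernel updates cpu_id on migration.
        auto* rs = reinterpret_cast<struct rseq*>(
            static_cast<char*>(__builtin_thread_pointer()) + __rseq_offset);
        std::printf("running on cpu %u\n", rs->cpu_id);
        return 0;
    }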
Some people believe everything must constantly be tweaked, redone, broken, fixed, and churned for no reason. The only things that need to be fixed in mature, working software are bugs and security issues. It doesn't magically stop working or get "stale" unless dependencies, the OS, or the build tools break.
Technology marches on, and in some number of years other allocators will exist that outperform/outfeature jemalloc.
Depending on your allocation profile, that number of years could easily be something like -10. New allocators crop up constantly.
Presumably then the performance impact of any switch will be positive.
> Sometimes software development is largely "done"
Lol absolutely not