Comment by jdsully

2 days ago

The "greedy" part is likely not releasing pages back to the OS in a timely manner.

That seems odd though, seeing as this is one of the main criticisms of glibc's allocator.

  • In the containerized environments where these allocators were mainly developed, returning memory to the kernel is all but pointless. You might as well keep everything your container is entitled to use, because the other containers can't use it anyway: someone, or some automated system, has already written down how much memory the container is going to use.

    • Returning no-longer-used anonymous memory is not without benefits.

      Returning pages allows them to be used for disk cache. They can be zeroed in the background by the kernel, which may save time when they're needed again, or zeroing can be avoided entirely if the kernel uses them as the destination of a full-page DMA write.

      Also, returning no-longer-used pages gets you closer to a useful memory-usage measurement. Measuring memory usage is difficult, of course, but making the numbers a little more accurate helps.
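
      To make "releasing pages" concrete, here is a minimal sketch of how a user-space allocator can hand a page-aligned free run back to the kernel with madvise(2). This assumes Linux, and release_run is a hypothetical helper name, not from any particular allocator: MADV_FREE marks the pages reclaimable so the kernel can take them under memory pressure, with MADV_DONTNEED as the eager fallback.

        /* Hypothetical helper: return a page-aligned run of free memory
         * to the kernel. The mapping stays valid; the next write after
         * reclaim faults in a fresh zero-filled page. */
        #include <sys/mman.h>

        static void release_run(void *addr, size_t len) {
        #ifdef MADV_FREE
            /* Lazy: pages are only reclaimed under memory pressure. */
            if (madvise(addr, len, MADV_FREE) == 0)
                return;
        #endif
            /* Eager: drop the pages immediately. */
            madvise(addr, len, MADV_DONTNEED);
        }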

    • I know Google has good engineering, but I find this a bit implausible?

      For most applications, especially request/response apps like web servers, truly "right-sizing" while accounting for spikes takes a lot of engineering effort: you have to work out how much allocation a single request needs, and then ensure the maximum number of concurrent requests never exceeds that budget so you never risk OOMs (a rough arithmetic sketch follows below).

      I can see this being fine-tuned for extremely high-scale core services like load balancers, SDNs, file systems, etc., where you probably want to allocate all your data structures at startup and never allocate anything after that, and where whole teams of engineers are devoted to single services. But not most apps?

      Surely it's better for containers to share system memory and to rely on limits and resource-driven autoscaling to make the system resilient?
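
      As a rough sketch of that arithmetic (all numbers invented): with a 200 MB baseline, up to 2 MB of live allocation per request, and a cap of 400 concurrent requests, the container limit needs to be at least 200 + 2 × 400 = 1000 MB, and any spike past that concurrency cap risks an OOM.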

    • glibc was not written for containerized environments, and I personally think it's telling that a core feature of the more recent tcmalloc Google open-sourced is that it returns memory efficiently; clearly this matters even in containerized environments. The reason has to do with how kernels compress pages: pages released to the kernel are explicitly zeroed (unlike pages retained by the user-space allocator), which aids compression even in a containerized workload, because unused pages can simply be skipped and the kernel can back lazy allocations with its shared zero page.

      Also, the kernel itself needs memory for lots of things, and having it run short, or having to hunt for contiguous pages, is not good. Additionally, in a VM or container environment there are other containers and VMs running on the same machine, so memory eventually has to percolate up to the hypervisor to be rebalanced. None of this happens if the user-space allocator greedily hangs on to memory it doesn't need, and such an application is also more exposed to the OOM killer.
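
      To illustrate the zero-page point, here is a small self-contained demo (Linux-specific, and my own sketch rather than anything from tcmalloc): after MADV_DONTNEED, the old contents are gone, and reads are backed by the kernel's shared zero page until the next write.

        /* Demo: released anonymous pages read back as zeros.
         * Build with: cc -o zdemo zdemo.c (Linux only). */
        #include <assert.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            size_t len = 16 * 4096;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            assert(p != MAP_FAILED);

            memset(p, 0xAB, len);           /* dirty the pages */
            madvise(p, len, MADV_DONTNEED); /* hand them back  */

            /* The mapping is still valid; reads now fault in the shared
             * zero page, so the old 0xAB bytes are gone. */
            assert(p[0] == 0 && p[len - 1] == 0);

            munmap(p, len);
            return 0;
        }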