Comment by CyberDildonics
5 hours ago
> In the JVM, heap allocations are done via bump allocation.
If that were true then they wouldn't be heap allocations.
https://www.digitalocean.com/community/tutorials/java-jvm-me...
https://docs.oracle.com/en/java/javase/21/core/heap-and-heap...
> not possible to do in the JVM, barring primitives
Then you make data structures out of arrays of primitives.
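To make the "arrays of primitives" point concrete, here's a hedged sketch (class and method names are invented for illustration): instead of an array of `Point` objects, store the fields in parallel primitive arrays, so there are no per-element object headers or pointer indirections.

```java
// Sketch: a growable "list of points" backed by parallel primitive arrays
// rather than an Object[] of Point instances. Names are hypothetical.
final class PointBuffer {
    private double[] xs;
    private double[] ys;
    private int size;

    PointBuffer(int capacity) {
        xs = new double[capacity];
        ys = new double[capacity];
    }

    void add(double x, double y) {
        if (size == xs.length) {                     // grow like ArrayList
            xs = java.util.Arrays.copyOf(xs, size * 2);
            ys = java.util.Arrays.copyOf(ys, size * 2);
        }
        xs[size] = x;
        ys[size] = y;
        size++;
    }

    double x(int i) { return xs[i]; }
    double y(int i) { return ys[i]; }
    int size()      { return size; }
}
```

The data lives in two flat arrays, so iteration is cache-friendly and the GC sees only two objects regardless of how many points are stored.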
> Easy to do? Sure. Easy to do fast? Well, no. That's entirely the reason why C++ has multiple allocators.
I don't know what this means. Vectors are trivial, and if you hand out ranges of memory from an arena allocator you allocate once and free once, which solves the heavy-allocation problem. The allocator parameter in templates doesn't factor into this.
> If that were true then they wouldn't be heap allocations.
"Heap" is a misnomer. It's not called that due to the classic CS "heap" datastructure. It's called that for the same reason it's called a heap allocation in C++. Modern C++ allocators don't use a heap structure either.
How the JVM does allocations for all its collectors is in fact bump allocation in the heap space. There are some in-the-weeds details (for example, threads in the JVM get their own thread-local allocation buffers so allocation doesn't contend with other threads), but it ultimately translates into a region check and then a pointer bump. This is why the JVM is so fast at allocation, much faster than C++ can be. [1] [2]
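The "region check then pointer bump" claim can be illustrated with a toy sketch (this is not the real HotSpot code, and the names are invented): a region is just a start/end pair, and allocating is a bounds check followed by an index bump.

```java
// Illustrative sketch of TLAB-style bump allocation. In a real JVM the
// slow path on a full region would request a new TLAB or trigger GC;
// here we just return -1. All names are hypothetical.
final class BumpRegion {
    private final byte[] memory;   // backing storage for the region
    private int top;               // next free offset (the "pointer")
    private final int end;         // region limit

    BumpRegion(int size) {
        memory = new byte[size];
        top = 0;
        end = size;
    }

    /** Returns the offset of the new block, or -1 if the region is full. */
    int allocate(int bytes) {
        if (top + bytes > end) return -1;  // region check
        int result = top;
        top += bytes;                      // pointer bump
        return result;
    }
}
```

The fast path is two instructions' worth of work: a compare and an add, which is the whole basis of the speed claim.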
> I don't know what this means.
JVM allocations are typically pointer bumps, adding a number to a register. There's really nothing faster than it. If you are implementing an arena then you've already lost in terms of performance.
[1] https://www.datadoghq.com/blog/understanding-java-gc/#memory...
[2] https://inside.java/2020/06/25/compact-forwarding/
> Modern C++ allocators don't use a heap structure either.
"Yes, malloc uses a heap data structure to allocate memory dynamically for programs. The heap allows for persistent memory allocation that can be managed manually by the programmer."
"How Malloc Works with the Heap
> How the JVM does allocations for all its collectors is in fact bump allocation in the heap space.
This doesn't make sense. It's one or the other. A heap isn't about getting more memory or mapping it into a process's address space; it's about managing the memory already in the process and being able to free it in a different order than it was allocated, then hand that memory back out without system calls.
https://www.geeksforgeeks.org/c/dynamic-memory-allocation-in...
https://en.wikipedia.org/wiki/C_dynamic_memory_allocation
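The distinction being drawn here can be sketched with a toy free list (this is not how glibc malloc actually works; it's a minimal fixed-size-block illustration with invented names): blocks can be freed in any order and reused without any system call, which a pure pointer bump cannot do.

```java
// Toy free list over a preallocated pool of fixed-size blocks.
// Illustrates "managing memory already in the process": free in any
// order, reuse without syscalls. Names are hypothetical.
final class FixedPool {
    private final int[] next;   // next[i] = index of the next free block
    private int freeHead;       // head of the free list, -1 if exhausted

    FixedPool(int blocks) {
        next = new int[blocks];
        for (int i = 0; i < blocks - 1; i++) next[i] = i + 1;
        next[blocks - 1] = -1;
        freeHead = 0;
    }

    int allocate() {                     // pop from the free list
        int block = freeHead;
        if (block != -1) freeHead = next[block];
        return block;                    // -1 means the pool is exhausted
    }

    void free(int block) {               // push back; any order is fine
        next[block] = freeHead;
        freeHead = block;
    }
}
```

Note the bookkeeping (the `next` links) that out-of-order freeing forces on the allocator; that bookkeeping is exactly what a bump pointer doesn't have.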
> JVM allocations are typically pointer bumps, adding a number to a register.
I think you are mixing up mapping memory into a process (which is a system call, not a register addition) and managing the memory once it is in process space.
The allocator frees memory and reuses it within a process. If freeing were as simple as subtracting from a register, there would be no difference in speed between the stack and the heap, and there would be no GC pauses and no GC complexity. None of these things are true, obviously, since Java has been dealing with these problems for 30 years.
> This is why the JVM is so fast at allocation, much faster than C++ can be
Java is slower than C++ and less predictable because you can't avoid the GC, which is the whole point here.
The original point was that you have to either avoid the GC or fight the GC, and a lot of what you have talked about is either not true or explains why someone has to avoid and fight the GC in the first place.
You're wrong for like 6 different reasons.
Java does do bump-pointer allocation. The key is that when the GC runs, surviving objects get moved, which is what keeps the allocation region contiguous. The slow part of GC isn't the allocation (GCs generally have much faster allocators than malloc); the slow parts are the barriers the GC requires and the pauses.