Comment by otterley
1 day ago
It took seven years to address this concern following the initial bug report (2018). That seems like a long time, considering that CPU-time instrumentation can sit in the hot path of profiled code.
400x slower than 70ns is still only 28us. How often is the JVM calling this function?
It depends. If you’re doing continuous profiling, it’d make a call to get the current time at every method entry and exit, each of which could then add a context switch. In an absolute sense it appears to be small, but it could really add up.
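To make the cost concrete, here's a minimal sketch of what entry/exit CPU-time instrumentation looks like, using the standard `ThreadMXBean` API (this is an illustration, not the JVM's actual instrumentation; the `work` method is a made-up stand-in for profiled code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class EntryExitProbe {
    static final ThreadMXBean TMX = ManagementFactory.getThreadMXBean();

    // Hypothetical profiled method.
    static long work() {
        long acc = 0;
        for (int i = 0; i < 1_000; i++) acc += i;
        return acc;
    }

    public static void main(String[] args) {
        // One timer call on entry and one on exit -- each of these is
        // where the clock-read overhead (and any context switch) lands.
        long enter = TMX.getCurrentThreadCpuTime();
        long result = work();
        long exit = TMX.getCurrentThreadCpuTime();
        System.out.println("cpu-ns=" + (exit - enter) + " result=" + result);
    }
}
```

With two clock reads per method call, a slow clock source dominates the cost of any small method.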
This is what flame graphs are super helpful for, to see whether it’s really a problem or not.
Also, remember that every extra moment running instructions is a lost opportunity to put the CPU to sleep, so this has energy efficiency impact as well.
If you are doing continuous profiling, you are probably using a low overhead stack sampling profiler rather than recording every method entry and exit.
If it's calling it twice per function, that's enormously expensive and this is a major win.
28us is still a solid amount of time
If it's called once an hour, who cares?
Even called every frame 60 times per second, it's only 0.2% of a 60 fps time budget.
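Working those numbers out (70ns slowed down ~400x, against a 60 fps frame of ~16.7ms):

```java
public class FrameBudget {
    public static void main(String[] args) {
        double callNs = 70.0 * 400;   // 70 ns made ~400x slower = 28,000 ns = 28 us
        double frameNs = 1e9 / 60;    // one 60 fps frame ~= 16.67 ms
        System.out.printf("%.0f us per call, %.2f%% of a frame%n",
                callNs / 1e3, 100 * callNs / frameNs);
        // -> 28 us per call, 0.17% of a frame
    }
}
```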
It's not a huge amount of time in absolute terms; it only matters if the call site is relatively "hot."