Comment by naasking
10 months ago
> 2% performance is actually a pretty big difference.
No it's not, particularly when it can help you identify hotspots via profiling that can net you improvements of 10% or more.
Sure, but how many of the people running distro compiled code do perf analysis? And how many of the people who need to do perf analysis are unable to use a with-frame-pointers version when they need to? And how many of those 10% perf improvements are in common distro code that get upstreamed to improve general user experience, as opposed to being in private application code?
If you're Netflix then "enable frame pointers" is a no-brainer. But if you're a distro that's building code for millions of users, many of whom will likely never need to fire up a profiler, I think the question is at least a little trickier. The overall best tradeoff might still end up being to enable frame pointers, but I can see the other side too.
It's not a technical tradeoff, it's a refusal to compromise. Lack of frame pointers prevents many groups from using software built by distros altogether. If a distro decides that they'd rather make things go 1% faster for grandma, at the cost of alienating thousands of engineers at places like Netflix and Google who simply want to volunteer millions of dollars of their employers' resources helping distros find 10x performance improvements, then the distros are doing a great disservice to both grandma and themselves.
I mean, if you need to do performance analysis on some software, just recompile it (as sketched below). Why is it such a big deal?
In the end, a 2% performance cost on every application is a big deal. On a single computer it may not be that significant, but think about all the computers, servers, and clusters that run a Linux distro. And yes, I would ask a Google engineer whether, scaled across however many servers and computers Google has, a 2% increase in CPU usage is really not a big deal: we are probably talking about hundreds of kW more of energy consumption!
We talk a lot about energy efficiency these days; to me, wasting 2% just to make performance analysis of some software easier (that is, being able to analyze the package shipped by the distro directly instead of having to recompile it) is stupid.
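To make the "just recompile it" suggestion concrete, here is a minimal sketch, assuming GCC or Clang and the Linux perf tool. The `-fno-omit-frame-pointer` flag and perf's `--call-graph fp` mode are real options; the file name, function, and workload are made up purely for illustration.

```c
/* hot.c -- hypothetical toy program, only here to illustrate rebuilding
 * with frame pointers kept so the profiler can walk the stack.
 *
 * Rebuild with frame pointers retained:
 *   gcc -O2 -g -fno-omit-frame-pointer hot.c -o hot
 *
 * Then profile with frame-pointer-based call graphs:
 *   perf record --call-graph fp ./hot
 *   perf report
 */
#include <stdio.h>

static double burn(long n) {
    double acc = 0.0;
    for (long i = 1; i <= n; i++)
        acc += 1.0 / (double)i;   /* something for the profiler to find */
    return acc;
}

int main(void) {
    printf("%f\n", burn(100000000L));
    return 0;
}
```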
Presenting people with false dichotomies is no way to build something worthwhile
I would say the question here is what should be the default, and that the answer is clearly "frame pointers", from my point of view.
Code that needs to eke out every possible cycle of performance can enable the no-frame-pointer optimization and see if it helps. But it's a bad default for libc, and for the kernel.
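As a sketch of what that opt-out could look like under a "frame pointers by default" policy: `-fomit-frame-pointer` is a real GCC/Clang flag that a hot package or translation unit could set for itself, and GCC's `optimize` function attribute (GCC-specific, best-effort) can in principle apply it to a single function. The function name and flags layout below are assumptions for illustration, not anyone's actual build configuration.

```c
/* Distro-wide default keeps frame pointers, e.g.
 *   CFLAGS += -fno-omit-frame-pointer
 *
 * A hot translation unit that really wants the extra register could opt
 * back out at build time:
 *   gcc -O2 -fomit-frame-pointer hot_inner_loop.c -c
 *
 * Or, GCC-only and best-effort, a single hot function can opt out while
 * the rest of the file keeps frame pointers (hypothetical function name):
 */
__attribute__((optimize("omit-frame-pointer")))
double hot_inner_loop(const double *v, long n) {
    double acc = 0.0;
    for (long i = 0; i < n; i++)
        acc += v[i] * v[i];   /* the inner loop being "eked out" */
    return acc;
}
```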