Comment by userbinator

10 months ago

[flagged]

I’m not sure what use case you’re coming from, but it sounds like you’re saying something like: most end users don’t use a profiler or debugger, so why should they pay the cost of debuggability? That’s fine, I guess, if you’re throwing software over a wall to users and taking no responsibility for their experience. But a lot of people who build software do take some responsibility for bugs and performance problems that their users experience. This stuff is invaluable for them. End users benefit (tremendously) from software being debuggable even if the users themselves never run a profiler or debugger that uses the frame pointers, because the developers are able to find and fix problems reported by other users or by the developers themselves.

  • [flagged]

    • While I somewhat support the idea of avoiding debug-mode applications shipped to end customers, it seems in this particular case Brendan is arguing about the need to debug binaries at runtime on the server side. If there is a binary component running in production, the choice of running it with debug enabled (and/or with the ability to activate debug at runtime) is purely the choice of the system owners (who nowadays are both the developers and the operators of components).

      “Observability” primarily refers to the ability to view system state beyond what black-box monitoring exposes. Again, the term primarily refers to server-side operations, not to software shipped to end users.

      As much as spying on users is ugly, it’s not related to server-side debugging.

> Let's make software more inefficient (even if it's tiny, it all adds up!)

I'm not sure if you know who the author of that blog is, but if there's anyone in the world who cares about (and is knowledgeable about) improving performance, it's him. You can be pretty darn sure he wouldn't do this if he believed it would make software more inefficient.

  • [flagged]

    • You're confusing things with the analogy. The invasiveness of telemetry is collateral damage, not failure to meet its primary objective (gathering useful data for debugging, spying on people, whatever you think it is). In this case his primary objective literally is to improve performance... which aligns with your own goal, and which he has successfully demonstrated in the past.

We have an embarrassment of riches in terms of compute power; slowing down everything by a negligible amount is worth it if it makes profiling and observability even 20% easier. In almost all cases you cannot just walk up to a production Linux system and do meaningful performance analysis without serious work, unlike, say, a Solaris production system from 15 years ago.

  • We have an embarrassment of riches in terms of compute power, yet all software is still incredibly slow, because we end up slowing down everything by a "negligible" amount here and another "negligible" amount there, and so on.

    > In almost all cases you cannot just walk up to a production Linux system and do meaningful performance analysis without serious work

    So? That's one very very very specific use case that others should not have to pay for. Not even with a relatively small 1% perf hit.

    • When computers are slow, the primary way out is finding out WHY they are slow.

      Finding this out requires... meaningful performance analysis. That's right: this 1% perf hit makes it far easier to find the reasons for 5%, 10%, 20%, or 50% slowdowns, and enables developers (and you!) to fix them.

      By making it easier to profile software, YOU can notice performance issues in the software you're running and deliver a nice patch that will improve performance by more than was lost, making this 1% slowdown a very effective, high-interest investment.


    • The issue is that we can't see what running software is doing: frame pointers are omitted, symbols are stripped, etc., all in the name of performance, so we can't observe the software without first changing it. And in many cases changing it will "fix" the symptom before we can determine what the issue actually was.

> Let's make software more inefficient

Isn't the whole point of enabling frame pointers everywhere to make it easier to profile software so that it can be made more efficient?

You seem to be stuck in the '90s. Computing is 64-bit nowadays, not 32-bit, and modern architectures/ABIs integrate frame pointers and frame records in a way that’s both natural and performant.

As someone who worked on V8 for some years, I can assure you that Web bloat is not due to frame pointers.

  • No, but it is exactly the same attitude: so-called "tiny" performance hits for developer convenience.