Comment by philippta

1 day ago

It always comes as a surprise to me how the same group of people who go out of their way to shave off the last milliseconds or microseconds in their tooling care so little about the performance of the code they ship to browsers.

Not to discredit OP's work of course.

People shaving off the last milliseconds or microseconds in their tooling aren't the same people shipping slow code to browsers. Say thanks to POs, PMs, stakeholders, etc.

  • Sometimes they are the same person.

    It just takes a lack of empathy for your users to ship slow software that you don't use yourself.

    • I've never met a single person obsessed with performance who goes only halfway. You either have a performance junkie or a slob who's fine with 20-minute compile times.


TBH I don't know how to do that work. If I'm in the backend it's very easy for me. I can think about allocations, I can think about threading, concurrency, etc, so easily. In browser land I'm probably picking up some confusing framework, I don't have any of the straightforward ways to reason about performance at the language level, etc.

Maybe one day we can use wasm or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.

Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me; you're not "browsing in a hot loop".

  • It's not too difficult in the browser either. Consider how often you're making copies of your data, and try to reduce them. For example, prefer:

    - for loops over map/filter

    - maps over objects

    - .sort() over .toSorted()

    - mutable over immutable data

    - inline over callbacks

    - function over const = () => {}

    Pretty much: write as if you were writing ES3 (instead of ES5/ES6).
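
    To make the first point concrete, here's a rough sketch (function names are made up): chained array methods allocate an intermediate array at every step, while a plain for loop gets the same result with no intermediate copies.

    ```javascript
    // Sum the squares of the even numbers in an array.

    // Chained version: each step allocates a new array.
    function sumEvenSquaresChained(nums) {
      return nums
        .filter((n) => n % 2 === 0) // new array
        .map((n) => n * n)          // another new array
        .reduce((acc, n) => acc + n, 0);
    }

    // Loop version: same result, zero intermediate allocations.
    function sumEvenSquaresLoop(nums) {
      let sum = 0;
      for (let i = 0; i < nums.length; i++) {
        const n = nums[i];
        if (n % 2 === 0) sum += n * n;
      }
      return sum;
    }
    ```

    For a one-off call over a small array the difference is noise; it's in hot paths (per keystroke, per frame, per row of a big table) that the extra copies start to show up in the allocation profiler.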

    • Yes, but it's not really fair to expect me to know how to do that. Just because I know how to do it for backend code, where it's often a lot easier to see those copies, doesn't mean I'm a negligent asshole for not doing it on the frontend. I don't know how; it's a different skillset.


  • The work is largely the same.

    You think about allocations: JS is a garbage-collected language where allocations are "cheap" and therefore extremely common. The GC is powerful and in most JS engines quite fast, but it isn't omniscient and sometimes needs a hand (just like in any GC'd language). The easiest intervention is to remove allocations entirely; just because it's cheap to over-allocate, and the GC will mostly smooth over such approaches, doesn't mean you can ignore the memory complexity of your chosen algorithms. Most browser dev tools today have allocation profilers equal to or better than their backend cousins.
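
    A tiny illustration of "remove allocations entirely" (hypothetical names, not anyone's real code): in a hot path like a per-frame update, reusing an object instead of allocating a fresh one per call takes pressure off the GC.

    ```javascript
    // Allocating version: a new object every call, which the GC must
    // eventually collect.
    function movedNaive(pos, dx, dy) {
      return { x: pos.x + dx, y: pos.y + dy };
    }

    // Reusing version: mutate the existing object, no allocation at all.
    function moveInPlace(pos, dx, dy) {
      pos.x += dx;
      pos.y += dy;
      return pos;
    }
    ```

    Called 60 times a second across hundreds of objects, the first version is steady garbage; the second is none. The tradeoff is the usual one: mutation is harder to reason about, so it's worth reserving for paths the profiler actually flags.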

    You think about threading, concurrency, etc.: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A whole class of concurrency bugs simply cannot exist in current JS designs unless you add explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers). On the flip side, JS is a little harder to reason about than many backend languages because it is extensively cooperatively scheduled: code has to yield to other code frequently and regularly. Shaving milliseconds off a routine yields more time to other things that need to happen (browser events, user input, etc.), and that starts to add up. JS encourages you to do work in short, tight "bursts" rather than long-running algorithms.

    Here again, most browser dev tools today have strong stack-trace/flame-chart profilers that equal or exceed their backend cousins. In JS, "tall" flames are often fine but "wide" flames are the things to avoid or improve. (That's somewhat reversed from some backend languages, where shallow call stacks mean less overhead and long-running tasks are sometimes better amortized than lots of short ones.)
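
    A sketch of the "short, tight bursts" pattern (all names here are made up, and the chunk size would need tuning for real work): split a long loop into chunks and yield back to the event loop between them, so input and rendering aren't starved.

    ```javascript
    // Process a large array without blocking the event loop.
    // `processItem` is a caller-supplied per-item function.
    async function processInBursts(items, processItem, chunkSize = 500) {
      for (let i = 0; i < items.length; i += chunkSize) {
        const end = Math.min(i + chunkSize, items.length);
        for (let j = i; j < end; j++) {
          processItem(items[j]);
        }
        // Yield to the event loop before the next burst, so pending
        // browser events and rendering can run in between.
        await new Promise((resolve) => setTimeout(resolve, 0));
      }
    }
    ```

    In a browser you might reach for `requestIdleCallback` or a Web Worker instead, depending on how interactive the page needs to stay; this is just the minimal version of the idea.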

    > But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".

    The heavily event-driven architecture of the browser often means that just sitting on a webpage is "browsing in a hot loop". Browsers have gotten better and better at sleeping inactive tabs and isolating tabs onto separate threads, but the web is still a bit of a "tragedy of the commons": the poor average performance of one website directly and indirectly drags everyone else down. It might not matter to you that your webpage is slow because you only expect a user to visit it once, but that's probably not the only website the user is browsing at that moment. Users do notice, directly and indirectly, when the bad performance of one webpage degrades their experience of other pages or crashes their browser. Depending on your business model and what that webpage is for, that bad impression can mean lost sales and customers.

    • I don't think it's the same, tbh. In Rust I can often just `rg '\.clone'` and immediately see wins; allocations are far easier to track statically. I don't have a good sense for "seeing" allocations when I look at JS, and it feels unfair to expect me to have one. As for profilers: yes, I could see that "this code is allocating a lot", but JS hardly feels like a language where it's smooth to then fix that, and again, frameworks are so common that I doubt I'd be in a position to do so. That's in contrast to systems languages, where I also have profilers but fixing the problem is often trivial.

      > You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers).

      My issue isn't with being able to write concurrent code that has no bugs, my issue is having access to primitives where I have tight control over concurrency and parallelism. The primitives in JS do not provide that control and are often very heavy in and of themselves.

      I think it's perhaps worth noting that I am not saying "it's impossible to write fast code for the browser", I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.


I've personally met a lot of folks who care quite a bit about both.

But to be fair, beyond the usual build-time patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for.
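
The basic timing idiom is at least simple, even if interpreting the numbers isn't; a rough sketch (the helper is made up, and any serious measurement needs a profiler, warm-up runs, and repeated samples, since JIT compilation makes the first iterations unrepresentative):

```javascript
// Time a function over many iterations using the standard
// high-resolution clock (available in browsers and modern Node).
function timeIt(label, fn, iterations = 1000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(2)}ms for ${iterations} runs`);
  return elapsed;
}
```

Even this much is more than a lot of shipped frontend code ever gets, which is part of the point the thread is circling.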