Comment by jakevoytko
1 day ago
When I was at Google, our team kept RUM metrics for a bunch of common user actions. We had a zero regression policy and a common part of coding a new feature was running benchmarks to show that performance didn't regress. We also had a small "forbidden list" of JavaScript coding constructs that we measured to be particularly slow in at least one of Chrome/Firefox/Internet Explorer.
Outside contributors to our team absolutely hated us for it (and honestly, some of the people on the team hated it too); everyone likes it when their product is fast, and nobody likes being held to the standard of keeping it that way. When you ask them to rewrite their functional code as a series of `for` loops because the function-call overhead makes it measurably 30% slower across browsers[0], they get so mad.
[0] This was in 2010, I have no idea what the performance difference is in the Year of Our Lord 2025.
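To make that concrete, here's a sketch of the kind of rewrite we'd ask for. This is illustrative only: the array and the doubling operation are made up, and the ~30% gap was a 2010-era measurement, not a claim about today's engines.

```js
var values = [1, 2, 3, 4]; // hypothetical data, just for illustration

// Functional style: on 2010-era engines, this paid a function call
// (and its overhead) for every single element.
var doubledFn = values.map(function (v) { return v * 2; });

// The requested rewrite: the same result as a plain for loop,
// with no per-element function invocation.
var doubledLoop = new Array(values.length);
for (var i = 0; i < values.length; i++) {
  doubledLoop[i] = values[i] * 2;
}
```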
Have you had the chance to interact with any of the web interfaces for their cloud products like GCP Console, Looker Studio, BigQuery, etc.? It's painful: when you click a button or link, you can feel a Cloud Run instance spinning up in the background before it processes your request.
Yes, I have the misfortune of needing to use Looker on a daily basis.
Boy, do I wish more teams worked this way. Too many product leaders are tasked with improving a single KPI (for example, reducing service calls) without also being required to hold other KPIs, such as user satisfaction, constant. The end result is a worse experience for the customer, but hey, at least the leader’s goal was met.