Comment by marcinzm

1 day ago

> extra 10 milliseconds you saved

Strawman arguments are no fun, so let's look at an actual example:

https://www.rippling.com/blog/the-garbage-collector-fights-b...

P99 of 3 SECONDS. App stalls for 2-4 SECONDS. All due to Python.

Their improved p99 is 1.5 seconds. Tons of effort, and they still could only get it down to 1.5 seconds.
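
For what it's worth, a common mitigation for multi-second stalls like that is to stop Python's cyclic collector from repeatedly walking a large, long-lived heap. A minimal sketch, assuming the stalls really come from full collections over long-lived objects (the thread doesn't say exactly what Rippling did), using only stdlib calls:

    import gc

    def tame_gc_after_startup():
        # Load long-lived state first, then do one full collection so the
        # heap is clean before we stop the collector from revisiting it.
        gc.collect()

        # Move everything allocated so far into the permanent generation;
        # future collections skip these objects entirely (Python 3.7+).
        gc.freeze()

        # Collect less often: raise the gen-0 allocation threshold at the
        # cost of holding on to garbage a bit longer.
        gc.set_threshold(50_000, 10, 10)

In a prefork server you'd typically call this in the master process before forking workers, which also keeps the frozen heap copy-on-write friendly.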

https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-...

> Amazon Found Every 100ms of Latency Cost them 1% in Sales

I've seen e-commerce companies with 1 second p50 latencies due to language choices. Not good for sales.

> Amazon Found Every 100ms of Latency Cost them 1% in Sales

I see this quoted, but Amazon has become 5x slower (guesstimate) and it doesn't seem like they are working on it as much. Sure, the home page loads "fast" (~800ms over fiber), but clicking on a product routinely takes 2-3 seconds to load.

  • Amazon nowadays has a near monopoly powered by ad money, since the margin on selling products is low compared to the margin on ads. So unless you happen to be in the same position, using them as an example nowadays isn't going to be very helpful. If they increased sales 20% at the cost of 1% less ad revenue, they'd probably end up at a net loss (see the rough numbers sketched below).
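
To make that margin point concrete, here's a back-of-the-envelope sketch in Python; every figure is made up purely to illustrate the shape of the trade-off, under the comment's own assumption that retail is roughly break-even while ads are high margin:

    # Hypothetical figures, NOT Amazon's actual numbers.
    retail_sales = 500.0    # revenue from selling products
    retail_margin = 0.002   # assume retail is nearly break-even (0.2%)
    ad_revenue = 50.0       # revenue from ads
    ad_margin = 0.7         # assume ads are mostly margin

    baseline = retail_sales * retail_margin + ad_revenue * ad_margin
    # 20% more retail sales, 1% less ad revenue:
    shifted = (retail_sales * 1.20) * retail_margin + (ad_revenue * 0.99) * ad_margin

    print(round(baseline, 2), round(shifted, 2))  # 36.0 vs 35.85: a small net loss

With margins like these the extra retail profit doesn't cover the lost ad profit; with a healthier retail margin the conclusion flips, which is why the actual numbers matter.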

So you're kinda falling into a fallacy here. You're taking a specific example and trying to make a general rule out of it. I also think the author of the article is doing the same thing, just in a different way.

Users don't care about the specifics of your tech stack (except when they do), but they do care about whether it solves their problem today and in the future. So they indirectly care about your tech stack. In the example you provided, the user cares about performance (I assume Rippling knows its customers). In other examples, if your tech stack is stopping you from easily shipping new features, your customer doesn't care about the tech debt. They do care, however, that you haven't given them any meaningful new value in six months while your competitor has.

I recall an internal project where a team discussed replacing a Python service with Go. They wanted to benchmark the two to see if there was a performance difference. I suggested, as an outsider, that they first check whether the Python service was hitting the required performance goals; if so, why waste time benchmarking another language? It wasn't my team, so I think they went ahead with it anyway.
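
For that anecdote, the "check against the goal first" step could be as small as the sketch below; the endpoint URL, sample count, and latency goal are made-up placeholders, and a real load test would add concurrency and warm-up:

    import statistics
    import time
    import urllib.request

    SERVICE_URL = "http://localhost:8000/health"  # placeholder endpoint
    P99_SLO_MS = 200.0                            # placeholder latency goal
    SAMPLES = 500

    def p99_latency_ms():
        latencies = []
        for _ in range(SAMPLES):
            start = time.perf_counter()
            urllib.request.urlopen(SERVICE_URL).read()
            latencies.append((time.perf_counter() - start) * 1000.0)
        # statistics.quantiles with n=100 gives 99 cut points; index 98 is p99
        return statistics.quantiles(latencies, n=100)[98]

    observed = p99_latency_ms()
    print(f"p99 = {observed:.1f} ms (goal: {P99_SLO_MS} ms)")
    print("meets goal" if observed <= P99_SLO_MS else "misses goal")

If the existing service already clears the bar, the language comparison is moot; if it doesn't, you at least know how far off you are before rewriting anything.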