Comment by joaohaas
2 days ago
Since the post is about the benefits of React, I'm sure they would have mentioned it if requests were involved.
Also, even if it were involved, 200ms for round-trip and DB queries is completely bonkers. Most round-trips don't take more than 100ms, and if you're taking 200ms for a DB query on an app with millions of users, you're screwed. Most queries should take max 20-30ms, with some outliers taking up to 80ms in places where optimization is hard.
> 200ms for round-trip and DB queries is completely bonkers
Never lived in Australia, I see.
If the Shopify app's P75 response time is that slow because its users are in Australia, then they should get a data center there.
In the real world, you can't just optimise for the sake of it. You need a business case for it, because it all boils down to revenue vs expenses.
Should they?
You could do the maths on the conversion-rate increase if that latency disappeared vs the cost of spinning up a DC and running it (including the mess that is localised DBs).
I'm not sure the economics work out for most businesses (I say this as an Australian).
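A back-of-the-envelope sketch of that maths in Python; every number below is a hypothetical assumption for illustration, not Shopify data:

    # Breakeven for a local Australian data center -- all figures made up.
    monthly_revenue_au = 2_000_000   # revenue from Australian shops, AUD
    latency_saved_ms = 150           # round-trip saved by a local DC
    uplift_per_100ms = 0.01          # ~1% conversion lift per 100ms saved (common rule of thumb)
    dc_monthly_cost = 80_000         # servers, replicated DBs, ops staff

    extra_revenue = monthly_revenue_au * uplift_per_100ms * (latency_saved_ms / 100)
    print(f"uplift: {extra_revenue:,.0f}/mo vs cost: {dc_monthly_cost:,.0f}/mo")
    print("worth it" if extra_revenue > dc_monthly_cost else "not worth it")

With these made-up numbers the uplift (30,000/mo) doesn't cover the DC cost (80,000/mo), which is the skepticism being expressed here; plug in your own figures.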
> Most queries should take max 20-30ms
Most queries are 20-30ms. But a worst case of 200ms for large payloads, edge cases, or just general degradation isn't crazy. Without knowing whether 500ms is a p50 or a p99 it's kind of a meaningless metric, but assuming it's a p99, I think it's not as bad as the original commenter stated.
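A toy sketch of why the percentile matters (the lognormal distribution and its parameters are made up for illustration):

    import random

    # Toy long-tailed latency distribution, purely illustrative.
    random.seed(42)
    samples = sorted(random.lognormvariate(4.5, 0.8) for _ in range(10_000))

    def pct(p):
        return samples[int(p / 100 * len(samples)) - 1]

    for p in (50, 75, 90, 99):
        print(f"p{p}: {pct(p):7.1f} ms")
    # On a skewed distribution like this the p99 is several times the p50,
    # so "500ms" means very different things depending on the percentile.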
They mention later in the article that the 500ms is p75.
Realistically, 50ms p75 should be achievable for the level of complexity in the Shopify app.
P75. I can only imagine the p90 and p99 are upwards of 1 second.
Ah. I see we are spoiled with <4ms queries on our database. See, it all depends on perspective and use case. :)
I have a 160ms ping to news.ycombinator.com. Loading your comment took 1.427s of wall clock time. <s>Clearly, HN is so bad, it's completely bonkers ;)</s>
time curl -o tmp.del "https://news.ycombinator.com/item?id=42730748"
real 0m1.427s
"if you're taking 200ms for a DB query on an app with millions of users, you're screwed"
My calculation was 200ms for the DB queries plus the time it takes your server-side framework's ORM to parse the results and transform them into JSON.

But even in general, I disagree. For high-throughput systems it typically makes sense to make the servers stateless (which adds additional DB queries) in exchange for the ability to just start 20 servers in parallel. And especially for PostgreSQL index scans where all the IO is cached in RAM anyway, single-core CPU performance quickly becomes a bottleneck. But a 100+ core EPYC machine can still reach 1000+ TPS for index scans that take 100ms each.

And, BTW, the basic Shopify plan only allows 1 visitor per 17 seconds to your shop. That means a single EPYC server could still host 17,000 customers on the basic plan even if each visit causes 100ms of DB queries.
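Redoing that capacity arithmetic explicitly, a sketch using only the numbers from the comment above:

    cores = 100                  # "100+ core EPYC machine"
    scan_cpu_ms = 100            # each index scan burns ~100ms of one core
    scans_per_sec = cores * 1000 / scan_cpu_ms      # -> 1000 TPS

    db_ms_per_visit = 100        # 100ms of DB queries per shop visit
    visits_per_sec = scans_per_sec * scan_cpu_ms / db_ms_per_visit  # -> 1000

    visit_interval_s = 17        # basic plan: 1 visitor per 17 seconds per shop
    shops = visits_per_sec * visit_interval_s
    print(f"{shops:,.0f} shops per server")          # -> 17,000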
That seems really slow for a GET request to HN without a session cookie (fetching only cacheable data).
And not being logged in, it's probably a poor comparison with the Shopify app.
Having indices doesn't guarantee anything is cached; it just means that fetching tuples is often faster. And unless you have a covering index, you're still going to have to hit the heap (which itself might also be partially or fully cached). Even then, you might still have to hit the heap to determine tuple visibility, if the pages are being frequently updated.
Also, Postgres has supported parallel scans for quite a long time, so single-core performance isn’t necessarily the dominating factor.
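One way to see both effects is to read the query plan. A minimal sketch, assuming a reachable Postgres instance; the DSN, the orders table, and its columns are hypothetical:

    import psycopg2

    conn = psycopg2.connect("dbname=shop")   # hypothetical DSN
    cur = conn.cursor()

    # EXPLAIN shows whether Postgres used an Index Only Scan (covering
    # index) and how many Heap Fetches were still needed for tuple
    # visibility checks -- exactly the caveats described above. A parallel
    # plan would also show "Workers Planned: N" here.
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT order_id, total FROM orders WHERE customer_id = 42
    """)
    for (line,) in cur.fetchall():
        print(line)
    # "Heap Fetches: N" with N > 0 means the index alone wasn't enough.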
I do not understand this thinking at all. Parsing a response into whatever rendering engine, even an extremely fast one, is going to be a large percentage of this 500ms page load. Dismissing it with magical thinking about pure database queries under load, with no understanding of the complexity of Shopify, is quite frankly ridiculous. Next up you'll be telling everyone to roll their own file sharing with rsync or something…
I know - old man yells at cloud and stuff - but some 8-bit home computers from the 80s completed their entire boot sequence in about half a second. What does a 'UI rendering engine' need to do that takes half a second on a device that's tens of thousands of times faster? Everything on modern computers should be 'instant' (some of that time may include internet latency of course, but I assume that the Shopify devs don't live on the moon).
Moore's Law v2 (/s) states that while computers get faster, we add more layers, so computers actually get slower.
Not sure why people keep bringing up the old "my machine x years ago was faster" argument. Machines nowadays do way more than machines from the 80s. Whether the tasks they do are useful or not is a separate discussion.
Sure, and the screen in text mode was 80 x 25 chars = 2000 bytes of memory. A new phone has perhaps three million pixels, each taking 4 bytes. There's a significant difference.
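Worked out (the three-million-pixel figure is the comment's own estimate):

    text_mode_bytes = 80 * 25 * 1    # 2,000 bytes for an 80x25 text screen
    phone_bytes = 3_000_000 * 4      # ~3M pixels at 4 bytes each: 12 MB
    print(f"{phone_bytes / text_mode_bytes:,.0f}x more display memory")  # ~6,000x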