
Comment by fxtentacle

2 days ago

I would read the <500ms screen loads as follows:

When the user clicks a button, we start a server round-trip, fetch the data, do client-side parsing, layout, formatting, and rendering, and then less than 500ms later the user can see the result on their screen.

With a worst-case ping of 200ms for a round-trip, that leaves about 200ms for DB queries and then 100ms for the GUI rendering, which is roughly what you'd expect.
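As a rough way to see where such a budget actually goes, curl's built-in timers can split a single request into DNS, connect, TLS, server wait, and total time (the endpoint here is a placeholder, not anything from the article):

    # hypothetical endpoint; -w prints curl's internal timing variables
    curl -o /dev/null -s -w 'dns %{time_namelookup}s  tcp %{time_connect}s  tls %{time_appconnect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n' https://shop.example.com/api/screen-data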

Since the post is about the benefits of React, I'm sure they would have mentioned it if requests were involved.

Also, even if it was involved, 200ms for round-trip and DB queries is complete bonkers. Most round-trips don't take more than 100ms, and if you're taking 200ms for a DB query on an app with millions of users, you're screwed. Most queries should take max 20-30ms, with some outliers in places where optimization is hard taking up to 80ms.
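For what it's worth, the 20-30ms claim is easy to check against your own database. A minimal sketch with psql (the database and table names are made up):

    # \timing prints the client-observed elapsed time for each statement
    psql mydb <<'SQL'
    \timing on
    SELECT * FROM products WHERE id = 12345;
    SQL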

  • > 200ms for round-trip and DB queries is complete bonkers

    Never lived in Australia I see

  • > Most queries should take max 20-30ms

    Most queries are 20-30ms. But a worst case of 200ms for large payloads, edge cases, or just general degradations isn't crazy. Without knowing whether 500ms is a p50 or a p99 it's kind of a meaningless metric, but assuming it's a p99, I think it's not as bad as the original commenter stated.

    • They mention later in the article that the 500ms is p75.

      Realistically, 50ms p75 should be achievable for the level of complexity in the Shopify app.

      2 replies →

    • Ah. I see we are spoiled with <4ms queries on our database. See, it all depends on perspective and use case. :)

  • I have a 160ms ping to news.ycombinator.com. Loading your comment took 1.427s of wall clock time. <s>Clearly, HN is so bad, it's complete bonkers ;)</s>

    time curl -o tmp.del "https://news.ycombinator.com/item?id=42730748"

    real 0m1.427s

    "if you're taking 200ms for a DB query on an app with millions of users, you're screwed"

    My calculation was 200ms for the DB queries plus the time it takes your server-side framework's ORM to parse the results and transform them into JSON. But even in general, I disagree. For high-throughput systems it typically makes sense to make the servers stateless (which adds additional DB queries) in exchange for the ability to just start 20 servers in parallel. And especially for PostgreSQL index scans, where all the IO is cached in RAM anyway, single-core CPU performance quickly becomes a bottleneck. But a 100+ core EPYC machine can still reach 1000+ TPS for index scans that take 100ms each. And, BTW, the basic Shopify plan only allows 1 visitor per 17 seconds to your shop. That means a single EPYC server could still host 17,000 customers on the basic plan even if each visit causes 100ms of DB queries.

    • That seems really slow for a GET request to HN without a session cookie (fetching only cacheable data).

      And since you're not logged in, it's probably a poor comparison with the Shopify app.

    • Having indices doesn’t guarantee anything is cached; it just means that fetching tuples is often faster. And unless you have a covering index, you’re still going to have to hit the heap (which itself might also be partially or fully cached). Even then, you still might have to hit the heap to determine tuple visibility, if the pages are being frequently updated. (There's a covering-index sketch after this thread.)

      Also, Postgres has supported parallel scans for quite a long time, so single-core performance isn’t necessarily the dominating factor.

  • I do not understand this thinking at all. A parsed response going into whatever rendering engine, even an extremely fast one, is going to be a large percentage of this 500ms page load. Dismissing it with magical thinking about pure database queries under load, with no understanding of the complexity of Shopify, is quite frankly ridiculous. Next up you’ll be telling everyone to roll their own file sharing with rsync or something…

    • I know - old man yells at cloud and stuff - but some 8-bit home computers from the 80s completed their entire boot sequence in about half a second. What does a 'UI rendering engine' need to do that takes half a second on a device that's tens of thousands of times faster? Everything on modern computers should be 'instant' (some of that time may include internet latency of course, but I assume that the Shopify devs don't live on the moon).

      9 replies →
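A minimal sketch of the covering-index point raised above, assuming a hypothetical orders table on PostgreSQL 11 or later; an INCLUDE index can turn the lookup into an index-only scan that skips the heap entirely when the visibility map is up to date:

    # hypothetical schema; INCLUDE makes the index cover the whole query
    psql mydb <<'SQL'
    CREATE INDEX idx_orders_shop_total ON orders (shop_id) INCLUDE (total);
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT shop_id, total FROM orders WHERE shop_id = 42;
    SQL
    # look for "Index Only Scan" and "Heap Fetches: 0" in the plan output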

If you are good, those numbers are an order of magnitude off. In truth it is probably mostly auth or something. If you simply avoid JSON you can cut these numbers down radically.

RTT to the nearest major metro DC should be at most 20ms (where I am it is less than half that), your DB calls should not be anything like 200ms (and in the event they are, you need to show something else first), and 10-20ms is what you should assume as the rendering budget for something very big. 60Hz means about 16ms per frame, after all.
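To check your own numbers against those budgets (the host is a placeholder), a plain ping to your nearest region gives the raw network RTT that sits under everything else:

    # raw RTT to your nearest DC region; compare against the ~20ms budget above
    ping -c 10 eu-west-1.example.com
    # repeat over mobile data vs wifi to see how much the access network adds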

  • What percentile? Topics like these aren't about the 5G-connected iPhone 16 Pro Max; they have to include low-end phones with old OS versions and bad connectivity (e.g. try the same network in the London metro, where often there is no reception whatsoever).

    As you reach for higher percentiles, RTT and such start growing very fast.

    Edit: another commenter mentioned p75 as the percentile.

    • Independent of connectivity, UI rendering should take well under the display's frame time. Consider the overhead of a modern video game that runs at 60fps without a hiccup. It’s ludicrous that a CRUD app, which usually only populates some text fields and maybe a small image or two, can’t do the same.

      2 replies →

    • > What percentile?

      There's no argument that starts this way which doesn't end either with "support working offline", or with defining when a user has stepped out of bounds with respect to acceptable parameters, which then raises the question of what you do in that event.

      If all you're trying to do is say 75% of users have a good experience, and in your territory p75 means 150ms and that's too long, then the network cannot be in your critical path, and you have to deal with it. If you're on a low-end phone any I/O at all is going to kill you, including loading too much code, and it needs to be out of the way.

      If you can tell the UX is going to be bad you will need to abort and tell them that; though they really will not like it, it's often better to prevent such users from ever getting your app in the first place.

      I come from mobile games, and supported titles with tens of millions of players around the world back in the early 4G era. All I can tell you is not once did mobile ping become a concern - in fact those networks are shockingly good compared to wifi.

  • > RTT to nearest major metro DC should be up to 20ms (where I am it is less than half that)

    Over a mobile network? My best RTT to Azure or AWS over T-Mobile or Verizon is 113ms, vs 13ms over my fiber connection.

> 200ms for DB queries

No. Just no. There’s an entire generation of devs at this point who are convinced that a DB is something you throw JSON into, use UUIDs for everything, add indices when things are slower than you expected, and then upsize the DB when that doesn’t fix it.

RAM access on modern hardware has a latency of something like 10 nanoseconds. NVMe reads vary based on queue depth and block size, but sub-msec is easily attainable. Even if your disks are actually a SAN, you should still see 1-2 msec. The rest is up to the DB.

All that to say, a small point query on a well-designed schema should easily execute in sub-msec times if the pages are in the DB’s buffer pool. Even one with a small number of joins shouldn’t take more than 1-2 msec. If this is not the case for you, your schema, query, or DB parameters are sub-optimal, or you’re doing some kind of large aggregation query.

I took a query from 70 to 40 msec today just by rewriting it. Zero additional indexing or tuning, just unrolling several unnecessary nested subqueries, and adding a more selective predicate. I have no doubt that it could get into the single digits if better indexing was applied.

I beg of devs, please take the time to learn SQL, to read EXPLAIN plans, and to measure performance. Don’t accept 200 msec queries as “good enough” because you’re meeting your SLOs. They can be so much faster.
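A minimal sketch of what that looks like in practice, assuming a hypothetical users table on PostgreSQL: EXPLAIN ANALYZE prints both the chosen plan and the server-side execution time, and a warm primary-key lookup should come in well under a millisecond:

    # point query on a primary key; with warm buffer-pool pages expect an
    # "Index Scan using users_pkey" and an Execution Time well under 1 ms
    psql mydb <<'SQL'
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM users WHERE id = 12345;
    SQL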

  • Beg all you want. They're still going to dump JSON strings (not even jsonb) and UUIDs in them anyway, because, "Move fast and break things."

    I lament along with you.

  • >RAM access on modern hardware has a latency of something like 10 nanoseconds

    What modern hardware are you using that this is true? That's faster than L3 cache on many processors.

    • Correction: DRAM latency is ~10-20 nsec on most DDR4 and DDR5 sticks. The access time as seen by a running program is much more than that.

      As an actual example of RAM latency: DDR4-3200 transfers at 3200 MT/s on a 1600 MHz I/O clock, so CL22 works out to 22 cycles / 1.6E9 cycles/sec ≈ 13.75 nsec.

  • "All that to say, a small point query on a well-designed schema should easily execute in sub-msec times if the pages are in the DB’s buffer pool"

    Shopify is hosting a large number of webshops with billions of product descriptions, but each store only has a low visitor count. So we are talking about a very large and, hence, uncacheable dataset with sparse access. That means almost every DB query to fetch a product description will hit the disk. I'd even assume a RAID of spinning HDDs for price reasons.

    • Shopify runs a heavily sharded MySQL backend. Their Shop app uses Vitess; last I knew the main Shopify backend wasn’t on Vitess (still sharded, just in-house), but I could be wrong.

      I would be very surprised if “almost every query” was hitting disk, and I’d be even more surprised to learn that they used spinners.

  • > just unrolling several unnecessary nested subqueries, and adding a more selective predicate

    And state-of-the-art query optimizers can even do all this automatically!

    • Sometimes, yes. Sometimes not. This was on MySQL 5.7, and I wound up needing to trace the optimizer path to figure out why it was slower than expected.

      While I do very much appreciate things like WHERE foo IN -> WHERE EXISTS being automatically done, I also would love it if devs would just write the latter form. Planners are fickle, and if statistics get borked, query plans can flip. It’s much harder to diagnose when all along, the planner has been silently rewriting your query, and only now is actually running it as written. (There's a sketch of this rewrite after this thread.)

      1 reply →

  • I think 500ms P75 is good for an app that hits network in a hot path (mobile networks are no joke), but I agree that 200ms is very very bad for hitting the DB on the backend. I've managed apps with tables in the many, many billions of rows in MySQL and would typically expect single digit millisecond responses. If you use EXPLAIN you can quickly learn to index appropriately and adjust queries when necessary.

    • 500ms p75 is not good for the (low) complexity of the Shopify app.

      Also reporting p75 latency instead of p99+ just screams to me that their p99 is embarrassing and they chose p75 to make it seem reasonable.
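A sketch of the IN-to-EXISTS rewrite discussed above, on a hypothetical orders/order_items schema; writing the EXISTS form yourself avoids depending on the planner to do the transformation, and the extra date predicate mirrors the "more selective predicate" step mentioned earlier (note that it also changes which rows are returned):

    psql mydb <<'SQL'
    -- planner-dependent form: may or may not be rewritten into a semi-join
    SELECT * FROM orders o
    WHERE o.id IN (SELECT order_id FROM order_items WHERE sku = 'ABC-123');

    -- explicit form: correlated EXISTS plus a selective predicate on the driving table
    SELECT * FROM orders o
    WHERE o.created_at > now() - interval '7 days'
      AND EXISTS (SELECT 1 FROM order_items i
                  WHERE i.order_id = o.id AND i.sku = 'ABC-123');
    SQL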

100ms to render an iOS screen means dropping 6 frames. That would put an applicant in the "no hire" category.

The 500ms number is p75 - not worst case at all.

200ms round trip is like 10x more than what's reasonably possible.

Same with your other numbers.
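The frame arithmetic behind that first line, as a quick check:

    # at 60Hz a frame is 1000/60 ≈ 16.7ms, so a 100ms render drops about 6 frames
    echo 'scale=1; 100 / (1000/60)' | bc    # -> 6.0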

People have gotten used to that, but UI research going back to the 1960s has shown clearly that for many of these operations you get tens of ms before people notice and their attention wanes. The web often doesn't allow for response times as fast as humans need, which is a good reason to write real apps, not web apps. That is also why I use tabs: load a bunch in the background so when I'm ready I can just switch tabs and it's there.