Comment by throw10920

12 days ago

> If your implementation is poor

This is factually incorrect. Latency is limited by the speed of light and the user's internet connection. If you read one of the links I'd posted, you'd also know that a lot of users have very bad internet connections.

> You doing weather forecasting? Crypto mining? What "computation" is happening on the client?

This is absolutely ridiculous. It's very easy to see that you might want a simple SPA that lets you browse a somewhat-interactive site (like your bank's site) without making a lot of round-trips to the server, and there are thousands of complex web applications in the real world that serve as trivial examples of computation that might happen on the client.

> The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.

I never mentioned anything about "ads" or "most web sites" - I was asking an engineering question. Ads are entirely irrelevant here. I doubt you even have data to back this claim up.

Please don't leave low-effort, low-quality responses like this - it wastes people's time and degrades the quality of HN.

> This is factually incorrect. Latency is limited by the speed of light and the user's internet connection.

This is a solved problem. Using plain old HTML, it is simple to download content shortly before it is very likely to be needed. Additionally, all major hypermedia frameworks have mechanisms to fetch a link on mousedown, or after you hover over it for a specified time.
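
For example, a sketch of both mechanisms (the URLs here are hypothetical, and the htmx preload extension's attributes are hedged from memory of its docs):

```html
<!-- plain HTML: hint the browser to fetch a likely-next page early -->
<link rel="prefetch" href="/checkout">

<!-- htmx preload extension: start the GET on mousedown (its default),
     or after a short hover, before the click even completes -->
<body hx-ext="preload">
  <a href="/account" preload>Account</a>
  <a href="/orders" preload="mouseover">Orders</a>
</body>
```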

> If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connection.

Your links mortally wound your argument for js frameworks, because poor latency is strongly linked with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20MB of data (i.e. the normal js front ends). Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.

  • > This is a solved problem. Using plain old HTML, it is simple to download content shortly before it is very likely to be needed.

    No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use Javascript. And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press enter, and any autocomplete is going to have to make a full round trip with each incremental keystroke.
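
    To make that round trip concrete, here is a minimal vanilla-JS sketch (the /autocomplete endpoint and the element ids are invented for illustration):

    ```js
    // HTML alone cannot fire these incremental requests; each
    // keystroke below costs a full network round trip
    const box = document.querySelector('#search');
    const results = document.querySelector('#results');
    box.addEventListener('input', async (e) => {
      const resp = await fetch('/autocomplete?q=' + encodeURIComponent(e.target.value));
      results.innerHTML = await resp.text(); // server returns an HTML fragment
    });
    ```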

    > Your links mortally wound your argument for js frameworks, because poor latency is strongly linked with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20MB of data (i.e. the normal js front ends).

    I never said anything about, or implied, "download 10-20MB of data (i.e. the normal js front ends)" in my question. Bad assumption. So, no, there's no "mortal wound", because you just strawmanned my premises.

    > Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.

    As someone building web applications - no, they really aren't. My webapps sip power and compute and are low-latency while still being very poorly optimized.

    • > No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use Javascript.

      Hypermedia applications use javascript (e.g., htmx - the original subject), so I'm not sure why you're hung up on that.

      > And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press enter, and any autocomplete is going to have to make a full round trip with each incremental keystroke.

      You just send the request on keydown. It's going to take roughly 50-75ms for your user's finger to travel back into the up position. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
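
      In htmx terms, a sketch might look like this (the /search endpoint and target id are hypothetical). The input event fires as soon as the value changes - effectively at keydown time - so the request is already in flight while the key travels back up:

      ```html
      <!-- each value change triggers an immediate GET; the returned
           HTML fragment is swapped into #results -->
      <input type="search" name="q"
             hx-get="/search"
             hx-trigger="input changed"
             hx-target="#results">
      <div id="results"></div>
      ```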

      > As someone building web applications - no, they really aren't.

      We were originally talking about "normal" (js) web applications (e.g. react, angular, etc.). Most of these apps have all the traits I mentioned earlier. We've all used these pigs that take forever on first load, cause high CPU utilization, and are often janky.

      > My webapps sip power and compute and are low-latency while still being very poorly optimized.

      And now you have subtly moved the goalposts to consider only the web apps you're building, in place of the "normal" js webapps you originally compared against htmx. I saw you do the same thing in another thread on this story. I have no further interest in engaging in that sort of discussion.

If you want to help people with latency issues, the best way is to send everything in as few requests as possible, especially if you don't have a lot of images. And updating with HTML (swapping part of the DOM) is efficient compared to updating with JSON.
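
As a sketch of that contrast (the URL and target id are invented for the example), the server returns a ready-made HTML fragment and the client simply swaps it in:

```html
<!-- one request, one swap: the response body is the new markup -->
<button hx-get="/cart/summary" hx-target="#cart" hx-swap="innerHTML">
  Refresh cart
</button>
<div id="cart"></div>
```

With a JSON API, the client would instead have to parse the payload and re-render a template before it could touch the DOM.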