Comment by throw10920
11 days ago
> This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML.
No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use JavaScript. And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press Enter, and any autocomplete will have to make a full round trip on each incremental keystroke.
> Your links mortally wound your argument for js frameworks, because poor latency is strongly linked with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20MBs of data (i.e. the normal js front ends).
I never said or implied anything about "download 10-20MBs of data (i.e. the normal js front ends)" in my question. Bad assumption. So, no, there's no "mortal wound" - you just strawmanned my premises.
> Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.
As someone building web applications - no, they really aren't. My webapps sip power and compute and are low-latency while still being very poorly optimized.
> No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use JavaScript.
Hypermedia applications use JavaScript (e.g., htmx - the original subject), so I'm not sure why you're hung up on that.
> And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press Enter, and any autocomplete will have to make a full round trip on each incremental keystroke.
You just send the request on keydown. It takes roughly 50-75ms for your user's finger to travel back up. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
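For reference, the per-keystroke pattern being described is essentially htmx's documented "active search" idiom. A minimal sketch (the `hx-*` attributes are htmx's real API; the `/search` endpoint and `#results` target are hypothetical):

```html
<!-- Issues a GET to /search as the user types; the server returns an
     HTML fragment that htmx swaps into #results. The `changed` modifier
     suppresses requests when the value hasn't actually changed
     (e.g. arrow keys or a repeated Escape). -->
<input type="search" name="q"
       hx-get="/search"
       hx-trigger="input changed"
       hx-target="#results">
<div id="results"></div>
```

The `input` event fires as soon as the field's value updates, which is the earliest point the new character is actually available to send.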
> As someone building web applications - no, they really aren't.
We were originally talking about "normal" (js) web applications (e.g., React, Angular, etc.). Most of these apps have all the traits I mentioned earlier. We've all used these pigs that take forever on first load, cause high CPU utilization, and are often janky.
> My webapps sip power and compute and are low-latency while still being very poorly optimized.
And now you have subtly moved the goalposts to only consider the web apps you're building, in place of the "normal" js webapps you originally compared against htmx. I saw you do the same thing in another thread on this story. I have no further interest in engaging in that sort of discussion.
> Hypermedia applications use javascript (e.g., htmx - the original subject), so I'm not sure why you're hung up on that.
Because you falsely claimed otherwise:
>> This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML.
So, another false statement on your part.
> You just send the request on keydown. It takes roughly 50-75ms for your user's finger to travel back up. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
No, it's not "plenty of time" because many users have latency in the 100's of ms (mine on my mobile connection is ~200ms), and some on satellite/in remote areas with poor infra have latency of up to a second - and that's completely ignoring server response latency, bandwidth limitations on data transport, and rehydration time on the frontend.
> Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
Scientifically wrong: "the lower threshold of perception was 85 ms, but that the perceived quality of the button declined significantly for latencies above 100 ms"[1].
> We were originally talking about "normal" (js) web applications (e.g., React, Angular, etc.).
Factually incorrect. We were talking about normal frontend technologies - including vanilla, which you intentionally left out - and even if you restrict that to those heavyweight frameworks:
> Most of these apps have all the traits I mentioned earlier. We've all used these pigs that take forever on first load, cause high CPU utilization, and are often janky.
...this is a lie, because we're not talking about normal apps, we're talking about technologies. All you have to do is create a new React or Angular or Vue application, bundle it, and observe that the bundle is under 300kB and that it responds instantly to user input.
> And now you have subtly moved the goalposts to only consider the web apps you're building, in place of the "normal" js webapps you originally compared against htmx.
Yet another lie, and gaslighting to boot. I never moved the goalposts - my comments have been about the technologies, not about what webapps people "normally" build. You were the one who moved the goalposts: you shifted the discourse from the trade-space decision-making I was talking about to maligning modern web frameworks based on how some developers use them (while intentionally ignoring the fact that I included vanilla webtech). My example was merely a counterexample to show how insane your statements were.
Given that you also made several factually incorrect statements in another thread[2], we can conclude that in addition to maliciously lying about things that I've said, you're also woefully ignorant about how web development works.
Between these two things, I think we can safely conclude that htmx doesn't really have any redeeming qualities, given that you were unable to describe coherent arguments for it, and resorted to lies and falsehoods instead.
[1] https://news.ycombinator.com/item?id=43621954