
Comment by throw10920

12 days ago

> SPAs are problematic because you need to manage state twice: in the BE and FE. You also may want a spec'ed API (with client library generation would be AWESOME: GraphQL and OpenAPIv3 have that and it helps a lot).

OK, this helps explain some of the reasoning.

Unfortunately, that means that the tradeoff is that you're optimizing for user experience instead of developer experience - htmx is much easier for the developer, but worse for the user because of higher latency for all actions. I don't see how you can get around this if your paradigm is you do all of your computation on the server - and if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.

> "Normal" means React/Vue/Angular these days

I didn't mean (just) that. I included vanilla webtech in my definition of "normal" - I guess I should have clarified in my initial comment (I just meant to exclude really exotic, if useful, things like Elm). Does that change how you would respond to it?

>higher latency for all actions

If your implementation is poor

> all of your computation on the server

You doing weather forecasting? Crypto mining? What "computation" is happening on the client? The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.

  • > If your implementation is poor

    This is factually incorrect. Latency is limited by the speed of light and the user's internet connection. If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connection.

    > You doing weather forecasting? Crypto mining? What "computation" is happening on the client?

    This is absolutely ridiculous. It's very easy to see that you might want a simple SPA that allows you to browse a somewhat-interactive site (like your bank's site) without making a lot of round-trips to the server, and there are also thousands of complex web applications in the real world that serve as trivial examples of computation that might happen on the client.

    > The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.

    I never mentioned anything about "ads" or "most web sites" - I was asking an engineering question. Ads are entirely irrelevant here. I doubt you even have data to back this claim up.

    Please don't leave low-effort and low-quality responses like this - it wastes peoples' time and degrades the quality of HN.

    • > This is factually incorrect. Latency is limited by the speed of light and the user's internet connection.

      This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML. Additionally, all major hypermedia frameworks have mechanisms to prefetch a link on mousedown or when you hover over it for a specified time.
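      As a sketch of what that looks like in htmx specifically (the CDN paths are illustrative; the `preload` extension and its attribute are per the htmx docs, with mousedown as the default trigger):

      ```html
      <!-- Load htmx plus its preload extension (CDN paths are illustrative) -->
      <script src="https://unpkg.com/htmx.org"></script>
      <script src="https://unpkg.com/htmx.org/dist/ext/preload.js"></script>

      <body hx-ext="preload">
        <!-- Default: fetch on mousedown, i.e. ~100-200ms before the click completes -->
        <a href="/account" preload>Account</a>

        <!-- Fetch once the user has hovered over the link -->
        <a href="/statements" preload="mouseover">Statements</a>
      </body>
      ```

      By the time the click event fires, the response is often already in the cache, which hides most of the round-trip latency.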

      > If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connection.

      Your links mortally wound your argument for JS frameworks, because poor latency correlates strongly with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20 MB of data (i.e. the typical JS front ends). Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.


    • If you want to help people with latency issues, the best way is to send everything in as few requests as possible, especially if you don’t have a lot of images. And updating with HTML (swapping part of the DOM) is efficient compared to updating with JSON.
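      A minimal sketch of that swap, using standard htmx attributes (the endpoint and element names here are hypothetical): instead of fetching JSON and re-rendering on the client, the server returns a ready-made HTML fragment and htmx splices it into the page:

      ```html
      <!-- Clicking issues GET /account/balance; the server responds with a
           small HTML fragment, which htmx swaps into #balance in place -->
      <div id="balance">Balance: loading…</div>
      <button hx-get="/account/balance" hx-target="#balance" hx-swap="innerHTML">
        Refresh balance
      </button>
      ```

      One request, one small response, no client-side templating or state reconciliation step.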

> htmx is much easier for the developer, but worse for the user because of higher latency for all actions

Latency is something to consider, yes. Besides that, we should not forget it is easy to make an HTMX mess: HTMX is not a good fit for all use cases, and some approaches are dead ends (the article even talks about this, and you can find more testimonies of this online). With HTMX you also create a lot of endpoints, usually without a spec: this can also become an issue (it might not work for some teams).

> if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.

Exactly! That's a good reason not to use HTMX if you need a lot of browser-side computation.

> I didn't mean (just) that. I included vanilla webtech in my definition of "normal"

If you mean "just scripting with JS (without any framework)", then I still do not think this is an acceptable alternative to compare HTMX to. IMHO you have to compare with something that provides a solid basis for developing a larger application. Otherwise you may conclude HTMX is great only because the status quo (vanilla JS/React/Vue/Angular) is such a mess.

> Unfortunately, that means that the tradeoff is that you're optimizing for user experience instead of developer experience

Not really: your backend has rich domain logic you can leverage to provide users with as much data as possible, while providing comparable levels of interactivity. Pushing as much logic (i.e., state) as possible to the front end results in a pale imitation of that domain logic, leading to a greatly diminished user experience.

  • More incorrect statements.

    > your backend has rich domain logic you can leverage to provide users with as much data as possible ... Pushing as much logic (i.e., state) while you're developing results in a pale imitation of that domain logic on the front end

    False. There's very little that you can do on the frontend that you can't do on the backend - you can implement almost all of your logic on the frontend and just use the backend for a very few things.

    > leading to a greatly diminished user experience.

    False. There's just no evidence for this whatsoever, and as counterevidence some of the best tools I've ever used have been extremely rich frontend-logic-heavy apps.

react and htmx both trade off dx and ux, and react shits on the user way more aggressively

  • What do you mean? You can achieve any level of UX (user experience) in React that you can achieve without it, right?

    I'd say all these FE frameworks help us build and structure browser applications that potentially increase UX over "just HTML pages". Hence they all try to improve the DX of building such apps compared to vanilla JS or jQuery.