Comment by throw10920

12 days ago

While I get the emotional appeal, I still don't understand the use-case for htmx. If you're making a completely static page, you just use HTML. If you're making a dynamic page, then you want to push as much logic to the client as possible because far more users are latency-limited than compute-limited (compare [1] vs [2]), so you use normal frontend technologies. Mixing htmx and traditional frontend tech seems like it'd result in extra unnecessary complexity. What's the target audience?

Edit: "Normal/traditional frontend" here means both vanilla (HTML+JS+CSS) and the most popular frameworks (React, Angular, Vue, Next).

[1] https://danluu.com/slow-device/

[2] https://danluu.com/web-bloat/

You should read the htmx “book” (https://hypermedia.systems/), where the use case is clearly explained. It advocates using htmx to enhance a page with more interactivity by extending HTML semantics and behaviours a bit (thus requiring minimal effort and a shallow learning curve), and moving to heavier client-side front-end stuff (React and friends) only if more interactivity or more complex behaviours are needed.

You can whip up a simple HTML form and spruce it up with htmx so it feels “modern” to current users, with little effort and few changes, and importantly without having to learn the insanity that is the modern front-end stack. Not only curmudgeons from the 90s like me benefit from this!
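
To make that concrete, a minimal sketch of what that sprucing up looks like (the endpoint, ids, and field names here are invented for illustration):

```html
<!-- A plain form that still works without JS; the hx-* attributes
     progressively enhance it to submit in the background and swap
     the server's HTML fragment into #result instead of reloading. -->
<form action="/signup" method="post"
      hx-post="/signup" hx-target="#result" hx-swap="innerHTML">
  <input type="email" name="email" required>
  <button type="submit">Sign up</button>
</form>
<div id="result"></div>
```

The server just returns a snippet of HTML for #result: no JSON endpoint, no client-side rendering step.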

I agree with the author of the article in that it's often best to do as much as possible in plain-old HTML.

When an interactive "widget" is needed, I try to embed just that widget in one HTML page, and avoid making the whole app a single-page application (SPA).

SPAs are problematic because you need to manage state twice: in the BE and the FE. You may also want a spec'ed API (client library generation on top of that would be AWESOME: GraphQL and OpenAPIv3 have it, and it helps a lot).
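
A hedged sketch of that duplication (names invented): the same business rule ends up implemented once next to the persisted state and once in the UI, and the two copies can silently drift apart.

```js
// Backend: the authoritative rule, living next to the persisted state.
function canCancel(order) {
  return order.status === "pending" && !order.shipped;
}

// Frontend (SPA): a second copy of the same rule, kept so the UI can
// enable/disable the Cancel button without a round trip. When the
// backend rule changes, this copy goes stale unless someone remembers it.
function canCancelOnClient(order) {
  return order.status === "pending" && !order.shipped;
}
```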

> so you use normal frontend technologies

This is the problem. "Normal" means React/Vue/Angular these days, and they are all shit IMHO. This is partly because JS is a mess, and TS fixes only what can be fixed by _adding_ to the language (since it's a superset). So TS is not a real fix.

I had great success with Elm on the frontend. It's not normal by any norm. Probably less popular than HTMX. But it helps to build really solid web apps and all devs that use it become FP-superstars in a month.

Tools like ReasonML/ReScript and PureScript may also have these benefits.

  • > SPAs are problematic because you need to manage state twice: in the BE and FE. You also may want a spec'ed API (with client library generation would be AWESOME: GraphQL and OpenAPIv3 have that and it helps a lot).

    OK, this helps explain some of the reasoning.

    Unfortunately, that means the tradeoff is that you're optimizing for developer experience instead of user experience - htmx is much easier for the developer, but worse for the user because of higher latency for all actions. I don't see how you can get around this if your paradigm is to do all of your computation on the server - and if you mix client- and server-side computation, then you're adding back the complexity that you explicitly wanted to get away from by using htmx.

    > "Normal" means React/Vue/Angular these days

    I didn't mean (just) that. I included vanilla webtech in my definition of "normal" - I guess I should have clarified in my initial comment (I just meant to exclude really exotic, if useful, things like Elm). Does that change how you would respond to it?

    • > higher latency for all actions

      If your implementation is poor

      > all of your computation on the server

      You doing weather forecasting? Crypto mining? What "computation" is happening on the client? The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.

      6 replies →

    • > htmx is much easier for the developer, but worse for the user because of higher latency for all actions

      Latency is something to consider, yes. Besides that, we should not forget that it is easy to make an HTMX mess: HTMX is not a good fit for all use-cases, and some approaches are dead ends (the article even talks about this, and you can find more testimonies of this online). With HTMX you also create a lot of endpoints, usually without a spec: this can become an issue too (it might not work for some teams).

      > if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.

      Exactly! A good reason not to use HTMX if you need a lot of browser-side computation.

      > I didn't mean (just) that. I included vanilla webtech in my definition of "normal"

      If you mean "just scripting with JS (w/o any framework)", then I still do not think this is an acceptable alternative to compare HTMX to. IMHO you have to compare it with something that provides a solid basis for developing a larger application. Otherwise you might just be saying HTMX is great because the status quo (vanilla JS/React/Vue/Angular) is such a mess.

    • > Unfortunately, that means the tradeoff is that you're optimizing for developer experience instead of user experience

      Not really: your backend has rich domain logic you can leverage to give users as much data as possible while providing comparable levels of interactivity. Pushing as much logic (i.e., state) as possible to the client results in a pale imitation of that domain logic on the front end, leading to a greatly diminished user experience.

      1 reply →

I use it to get some interactivity, without full page reloads, that would have to be Ajax anyway. If you have an inline form, you can't do much client-side when the server isn't working, so using htmx is fine.

> "If you're making a dynamic page, then you want to push as much logic to the client as possible because far more users are latency-limited than compute-limited"

That's an assertion I don't agree with.

Data still needs to come from or go to the server, whether you do it in a snippet of HTML or with an API call.

In either case, latency is there.

  • > That's an assertion I don't agree with.

    The part about users being more latency-limited than compute-limited, or wanting to push as much to the browser as possible?

    The former is somewhat hard to quantify, but most engineers building interactive applications (or distributed systems, of which interactive client-server webapps are a special case) have far more trouble with latency than compute.

    The latter is definitely true.

    > Data still needs to come from or go to the server, whether you do it in a snippet of HTML or with an API call. ... In either case, latency is there.

    This is definitely incorrect.

    Consider the very common case of fetching some data from the server and then filtering it. In many, many cases, the filtering can be done client-side. If you do that with an interactive frontend, then it's nearly instant, and there's no additional fetch to the server. If you shell out to the server, then you pay the latency penalty, and incur a fetch.
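
    To make the difference concrete, a sketch of both approaches (the endpoint and field names are invented):

    ```js
    // Client-side: one fetch up front, then filtering is local and
    // effectively instant - no further network round trips.
    const res = await fetch("/api/products"); // assumes an ES module (top-level await)
    const products = await res.json();
    const matches = (q) => products.filter((p) => p.name.includes(q));

    // Server-side (htmx-style): every change to the filter is a round
    // trip, so each one pays the full latency cost:
    // <input name="q" hx-get="/products/search" hx-target="#rows"
    //        hx-trigger="keyup changed delay:300ms">
    ```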

    "In either case, latency is there." is just factually wrong.

Yeah, I don't get the irrationality either, especially the dogmatic "hypertext" angle. I mean, you can see me pontificating about SGML as the original complement to the HTML vocabulary, bringing text macros and other authoring affordances, but that is strictly for documents and their authors. If you want to target web apps and require JS anyway, I don't see the necessity for markup and template languages; you already have a much more powerful programming language in the mix. Any ad-hoc and inessential combination of markup templating and JS is going to be redesigned by the next generation of web devs anyway, because of the domain's cyclic nature, i.e. the desire to carve out know-how niches, low retention rates among webdev staff, many from-scratch relaunches, ...

> If you're making a dynamic page, then you want to push as much logic to the client as possible because far more users are latency-limited than compute-limited

This implies you value that optimization over other concerns, will do SSR and rehydration, etc.

  • I work with users in remote locations, with bad internet signal and terrible low-end Androids.

    Htmx has been the most performant of everything I've tried so far.

    HTML is fast. (Also, I use SVG everywhere I can.)

    • Also, you can fit a whole book in half a MB of HTML. So loading 2+ MB of JS, then a good amount of JSON, especially over a high-latency connection, is not better than just loading the HTML with all the data baked in.

      1 reply →

I reach for HTMX when (a) it's a project where I have the power to make that decision and (b) I need to render state that lives on the server.

My main issue with SPAs, and client rendering in general, has always been the attempt to client render state that is persisted elsewhere.

There are certain pieces of state that really do live on the client, and for those, client rendering is great. The vast majority of cases involve state that is persisted somewhere on the server, though, and in those cases it's needlessly complex to ship both the state and an entire rendering engine to the browser.

Htmx is “frontend tech”.

  • I said "normal frontend tech" in my comment. It's also easy to tell from context what I mean. I'd appreciate not trying to be pedantic and instead responding to the substance of my comment :)

    • What defines normal? It’s a strange idea when the typical stack for web front-end keeps changing. There isn’t even a single answer to the client/server split.

      Is JQuery normal? What about the Google Closure compiler? ColdFusion? Silverlight? Ruby and CoffeeScript? Angular? SPA React with classes? Elm? SSR React with a server framework? Client-only vanilla DOM manipulation?

      Your idea of normal is presumably whatever you’ve been using for the past few years. For someone who starts using Htmx now, it becomes normal. And if there’s enough of those people, their idea of normal becomes commonplace.

      5 replies →