Comment by geocar

3 months ago

> This is actually not a bad idea. Why should the browser contain a specific template engine, like XSLT

XSLT is a specification for a "template engine" and not a specific engine. There are dozens of XSLT implementations.

Mozilla notably doesn't use libxslt but transformiix: https://web.mit.edu/ghudson/dev/nokrb/third/firefox/extensio...

> and not Jinja for example?

Jinja operates on text, so it's basically document.write(). XSLT works on the nodes themselves. That's better.
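To make that concrete, here's a minimal illustrative sketch (element and attribute names invented): an XSLT template matches nodes in the input tree and emits new nodes, rather than splicing strings the way a text templater does.

```xml
<!-- Illustrative only: transform <item> nodes into table rows. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/items">
    <table>
      <xsl:apply-templates select="item"/>
    </table>
  </xsl:template>
  <xsl:template match="item">
    <tr>
      <td><xsl:value-of select="@name"/></td>
      <td><xsl:value-of select="@price"/></td>
    </tr>
  </xsl:template>
</xsl:stylesheet>
```

The processor walks the input tree and builds an output tree; there's no string concatenation anywhere, so malformed output is impossible by construction.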

> Also it can be reimplemented using JS or WASM.

Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.

I think if you view XSLT as nothing more than ancient technology that nobody uses, then I can see how you could think this is ok, but I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.

I bet Google will try and solve this problem they're creating by pushing AMP again...

> The browsers today are too bloated

No, Google's browser today is too bloated: That's nobody's fault but Google.

> and it is difficult to create a new browser engine

I don't recommend confusing "difficult to create" with "difficult to sell" unless you're looking for a reason not to do something: there's usually very little overlap between the two in the solution.

I'm asking this genuinely, not as a leading question or a gotcha trap: why use this client side, instead of running it on the server and sending the rendered output?

  • For one, in many cases the XML + XSLT is more compact than the rendered output, so there are hosting and bandwidth benefits, especially if you're transforming a lot of XML files with the same XSLT.

  • I think the obvious answer is that client-side mapping would let the browser give different views of the data to the client. The obvious problem is that downloading all the data and then transforming it is inherently inefficient (and sure, despite this, download-then-process is a common solution for many problems - but it's problematic to specify the worst solution before you know the problem).

    Perhaps there's an alternative universe where javascript lost and an elegant, declarative XSLT could declaratively present data and incrementally download only what's needed, allowing compact and elegant websites.

    But in our universe today, this mapping language wound up as a half-thought-out idea that just kicked around in the specs for a long time without ever making sense.

    • My gut instinct is to agree with every bit of that. I admit that I might be missing something, but I've never wanted to send the data once and then have the client view it in multiple transformed ways (minus simple presentation stuff like sorting a table by column and things like that).

      And using it to generate RSS as mentioned elsewhere in the comments? That makes perfect sense to me on the server. I don't know that I've ever even seen client-side generated RSS.

      But again, this may all be my own lack of imagination.

> I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.

Serving a server-generated HTML page could be even faster.

  • Maybe, but the PR author, who created the issue there as well, gave an example: 'JSON+React'. React is one of the slowest frameworks out there. Performance is rarely considered in contemporary front-end development.

  • > Serving a server-generated HTML page could be even faster.

    Except it isn't.

    Lots of things could be faster than they are.

    • Loading one page is probably faster than loading a template and only then loading the data with a second request, given that network latency can be pretty high. That's why Google serves (served?) its main page as a single file and not as multiple HTML/CSS/JS files.


  • That assumes the server has a lot of additional CPU power to serve the content as HTML (and thus do the templating server side), whereas with XSLT I can serve XML and the XSLT and the client side can render the page according to the XSLT.

    The XSLT can also be served once, and then cached for a very long time period, and the XML can be very small.

    • With server-side rendering you control the amount of compute you are providing; with client-side rendering you control nothing, and if the app turns out to be dog slow on some devices there's nothing you can do about it.
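For reference, the XML + cached-XSLT arrangement described above is wired up with a processing instruction in the XML document (filenames here are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- posts.xml: a small per-page payload. posts.xsl is fetched once
     and cached by the browser (given suitable Cache-Control headers). -->
<?xml-stylesheet type="text/xsl" href="posts.xsl"?>
<posts>
  <post date="2024-01-05">Hello, world</post>
</posts>
```

Every page on the site can reference the same stylesheet, so after the first visit only the small XML travels over the wire.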

> Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.

Nobody is going to process a million DOM nodes with XSLT, because the browser won't be able to display them anyway. And one can write a WASM implementation.

  • I think you're confusing throughput with latency.

    You're right nobody processes a million DOM nodes with XSLT in a browser, but you're wrong about everything else: WASM has a huge startup cost.

    Consider applying stylesheet properties: XSLT knows exactly how to lay things out so it can put all of the stylesheet properties directly on the element. Pre-rendered HTML would be huge. CSS is slow. XSLT gets you direct-attach, small-payload, and low-latency display.

    • That's an even rarer case, embedding CSS rules into an XSLT template (if I understood you correctly) - I've never heard of it. I know that CSS is sometimes embedded into HTML, though.
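If I've understood the "direct-attach" point at all, it would mean something like this (purely illustrative, names invented): the transform computes presentation per element and writes it straight into a style attribute, instead of emitting classes for a separate CSS pass to resolve.

```xml
<!-- Illustrative: attach computed style directly to each output element. -->
<xsl:template match="row">
  <div>
    <xsl:attribute name="style">
      <!-- e.g. alternate row colours decided at transform time -->
      <xsl:choose>
        <xsl:when test="position() mod 2 = 0">background:#eee</xsl:when>
        <xsl:otherwise>background:#fff</xsl:otherwise>
      </xsl:choose>
    </xsl:attribute>
    <xsl:value-of select="."/>
  </div>
</xsl:template>
```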