"Remove mentions of XSLT from the html spec"

3 months ago (github.com)

Related: Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599

A few things to note:

- This isn't Chrome doing this unilaterally. https://github.com/whatwg/html/issues/11523 shows that representatives from every browser are supportive and there have been discussions about this in standards meetings: https://github.com/whatwg/html/issues/11146#issuecomment-275...

- You can see from the WHATNOT meeting agenda that it was a Mozilla engineer who brought it up last time.

- Opening a PR doesn't necessarily mean that it'll be merged. Notice the unchecked tasks - there's a lot still to do on this one. Even so, given the cross-vendor support for this, it seems likely to proceed at some point.

  • Also, https://github.com/whatwg/html/issues/11523 (Should we remove XSLT from the web platform?) is not a request for community feedback.

    It's an issue open on the HTML spec for the HTML spec maintainers to consider. It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.

    This is happening after some serious exploits were found: https://www.offensivecon.org/speakers/2025/ivan-fratric.html

    And the maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913

    • I think this discussion is quite reasonable, but it also highlights the power imbalance: If this stuff is decided in closed meetings and the bug trackers are not supposed to be places for community feedback, where can the community influence such decisions?

      80 replies →

    • I think this post, where the thread author proposed some solutions to the people affected, is useful: https://github.com/whatwg/html/issues/11523#issuecomment-318...

      The main thing that seems unaddressed is the UX if a user opens a direct link to an XML file and now just sees tag soup instead of the intended rendering.

      I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.

      Sort of like an inverse of the <link rel="alternate" ...> solution that the post mentioned.

      The only thing this doesn't fix is sites that are abandoned and won't update, or are part of embedded devices and can't update.
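      A sketch of what the proposed processing instruction could look like at the top of a feed. This is entirely hypothetical: the `human-readable` name, attribute, and URL are the commenter's invention plus my guesses; no browser implements anything like it:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical: a browser opening this file directly would treat the
     instruction below like a meta refresh and navigate human visitors
     to the HTML rendering instead of showing raw XML. -->
<?human-readable href="https://example.com/feed.html"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
  </channel>
</rss>
```

      Feed readers and other XML consumers would simply ignore the unknown processing instruction, which is what makes the idea backward-compatible.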

      20 replies →

  • Former Mozilla and Google (Chrome team specifically) dev here. The way I see what you're saying is: Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT from the web platform, regardless of whether it's still being used. It's okay because someone from Mozilla brought it up.

    Out of those three projects, two are notoriously under-resourced, and one is notorious for constantly ramming through new features at a pace the other two projects can't or won't keep up with.

    Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?

    This appeal to authority doesn't hold water for me because the important question is not 'do people with specific priorities think this is a good idea' but instead 'will this idea negatively impact the web platform and its billions of users'. Out of those billions of users it's quite possible a sizable number of them rely on XSLT, and in my reading around this issue I haven't seen concrete data supporting that nobody uses XSLT. If nobody really used it there wouldn't be a need for that polyfill.

    Fundamentally the question that should be asked here is: Billions of people use the web every day, which means they're relying on technologies like HTML, CSS, XML, XSLT, etc. Are we okay with breaking something that 0.1% of users rely on? If we are, okay, but who's going to tell that 0.1% of a billion people that they don't matter?

    The argument I've seen made is that Google doesn't have the resources (somehow) to maintain XSLT support. One of the googlers argued that new emerging web APIs are more popular, and thus more deserving of resources. So what we've created is a zero-sum game where any new feature added to the platform requires the removal of an existing feature. Where does that game end? Will we eventually remove ARIA and/or screen reader support because it's not used by enough people?

    I think all three browser vendors have a duty to their users to support them to the best of their ability, and Google has the financial and human resources to support users of XSLT and is choosing not to.

    • Another way to look at this is:

      Billions of people use the web every day. Should 99.99% of them be vulnerable to XSLT security bugs for the sake of the other 0.01%?

      17 replies →

      So the Safari developers are overworked/under-resourced, but Google somehow should have infinite resources to maintain things forever? Apple is a much bigger company than Google these days, so why shouldn't they also have these infinite resources? Oh, right, it's because fundamentally they don't value their web browser as much as they should. But you give them a pass.

      3 replies →

    • Bring back VRML!

      Seriously though, if I were forced to maintain every tiny legacy feature in a 20-year-old app... I'd also become a "former" dev :)

      Even in its heyday, XSLT seemed like an afterthought. Probably there are a handful of legacy corporate users hanging on to it for dear life. But if infinitely more popular techs (like Flash or FTP or non-HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...

      10 replies →

    • When I see "reps from every browser agree" my bullshit alarm immediately goes off. Does it include unanimous support from browser projects that are either:

      1. not trillion-dollar tech companies

      or

      2. not 99% funded by a trillion-dollar tech company?

      I have long suspected that Google gives so much money to Mozilla partly for the default search option, but also for massive indirect control: to deliberately cripple Mozilla in insidious ways and massively reduce Firefox's marketshare. And I have long predicted that Google will make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.

      8 replies →

    • Many such cases. Remember when the Chrome team seriously thought they could just disable JavaScript alert() overnight [1][2] and not break decades of internet compatibility? It still makes me smile how quietly this was swept under the rug once it crashed and burned, just like how the countless "off-topic" and "too emotional" comments on Github said it would.

      Glad to see the disdain for the actual users of their software remains.

      [1] https://github.com/whatwg/html/issues/2894 [2] https://www.theregister.com/2021/08/05/google_chrome_iframe/

      (FWIW I agree alert and XSLT are terrible, but that ship sailed a long time ago.)

    • > Representatives from Chrome/Blink, Safari/Webkit, and Firefox/Gecko are all supportive of removing XSLT

      Did anybody bother checking with Microsoft? XML/XSLT is very enterprisey and this will likely break a lot of intranet (or $$$ commercial) applications.

      Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy? It's the equivalent of the crazy cat hoarder who wormed her way onto the HOA board speaking for everyone else. No.

      11 replies →

    • >who's going to tell that 0.1% of a billion people that they don't matter?

      This is also not a fair framing. There are lots of good reasons to deprecate a technology, and it doesn't mean the users don't matter. As always, technology requires tradeoffs (as does the "common good", usually.)

    • > Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?

      Because otherwise everybody has to repeat the same work again and again, programming the how instead of focusing on the what, the declarative way.

      Then data is not free, but caged by its processing, so it can't exist without it.

      I just want data or information - not processing, not strings attached.

      I don't see any need to run extra code over information, except to keep control and to attach other code, trackers, etc. I'm not Google; I have no need to push anything. A faster JS engine, instead of empowering users, somehow made a browser better? No matter how fast, it can't give me what I actually needed.

    • By your argument, once anything makes it in, then it can't be removed. Billions of people are going to use the web every day and it won't stop. Even the most obscure feature will end up being used by 0.1% of users. Can you name a feature that's supported by all browsers that's not being used by anyone?

      4 replies →

  • > Even so, given the cross-vendor support for this, it seems likely to proceed at some point.

    Yup. Just like the removal of confirm/prompt, which had vendor support and was immediately rushed. Thankfully it ended up indefinitely postponed.

    Here's Google's own doc on how a feature should be removed: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

    Notice how the "unilateral support by browser vendors" didn't even involve looking at actual usage of XSLT: where it's used, and whether significant parts of the web would be affected.

    Good times.

  • Also, according to Chrome's telemetry, very, very few websites are using it in practice. It's not like the proposal is threatening to make some significant portion of the web inaccessible. At least we can see the data underlying the proposal here.

    • Sadly, I just built a web site with HTMX and am using the client-side-templates extension for client-side XSLT.

      >very, very few websites

      That doesn't include all the corporate web sites that they are probably blocked from collecting such telemetry on. These are the users that are pushing back.

      1 reply →

    • 1. Chrome telemetry underreports a lot of use cases

      2. They have a semi-internal document https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS... that explicitly states: small usage percentage doesn't mean you can safely remove a feature

      --- start quote ---

      As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial.

      There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

      --- end quote ---

      3. Any feature removal on the web has to be given thorough thought and investigation, which we haven't seen. The Library of Congress apparently uses XSLT, and Chrome devs couldn't care less
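      The arithmetic in the quoted passage can be checked directly (note the quote uses 0.0001%, a rate the same doc calls "small but non-trivial"):

```python
# Sanity-check the quoted Google doc's "frustrated every 3 seconds" claim.
page_views_per_month = 771e9          # Chrome page views per month (from the quote)
breakage_rate = 0.0001 / 100          # 0.0001% expressed as a fraction
seconds_per_month = 30 * 24 * 3600    # ~2.59 million seconds in a 30-day month

broken_views = page_views_per_month * breakage_rate
seconds_between_failures = seconds_per_month / broken_views

print(round(broken_views))                 # 771000 broken page views per month
print(round(seconds_between_failures, 1))  # ~3.4 seconds between frustrated users
```

      So the quoted figure holds up: even a "trivial-sounding" breakage rate translates to a steady stream of affected users at Chrome's scale.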

      7 replies →

    • Looking at the problem differently: say some change would make Hacker News unusable. The data would still support it, showing that it practically affects no one.

      3 replies →

    • The people writing, and visiting websites that rely on XSLT are the same users that disable or patch out telemetry.

  • Ok thanks, we've dechromed the title above. (Submitted title was "Chrome intends to remove XSLT from the HTML spec".)

  • The implementations are owned by the implementers. Who owns the actual standard, the implementers or the users?

    • I think trying to own a web standard is like trying to own a prayer. You can believe all you want, but it's up to the gods to listen or not...

    • As for any standard, the implementers ultimately own it. Users don't spend resources on implementing standards, so they only get a marginal say. Do you expect to contribute to the 6G standards, or USB-C, too?

    • Own is not really the right word for an open source project. In practice it is controlled by Apple, Google, Microsoft and Mozilla.

  • The responses of some folks on this thread reminds me of this:

    https://xkcd.com/1172/

    • That's more a joke about people coming to rely on any observable behavior of something, no matter how buggy or unintentional.

      Here we're talking about killing off XSLT used in the intended, documented, standard way.

So if I'm reading the two threads correctly: essentially Google asked for feedback, essentially all the feedback said "no, please don't", and they said "thanks for the feedback, we're gonna do it anyway!"?

The other suggestions seemed to be "if this is about security, then fund the OSS project, or swap to a newer, safer library, or pull it into the JS sandbox and ensure support is maintained." These were all mostly ignored.

And "if this is about adoption, then listen to the constant community requests to update to the newer XSLT 3.0, which has been out for years and would have much higher adoption thanks to tons of QoL improvements, including JSON handling."

And the argument presented, which I can't verify (but seems plausible to me), is that XSLT supports the open web. Google tried to kill it a decade ago; the community pushed back and stopped it. So Google's plan was to refuse to do anything to support it, ignore community requests for simple improvements, let it wither, and then use that as justification for killing it at a later point.

Forcing this through when almost all feedback is against it seems to support that to me. Especially with XSLT suddenly/recently gaining a lot of popularity, it seems like they are trying to kill it before it becomes an open competitor on the web.

https://github.com/whatwg/html/issues/11523

  • >essentially all the feedback said "no, please don't". And they said "thanks for the feedback, we're gonna do it any way!"?

    this is a perfectly reasonable course of action if the feedback is "please don't" but the people saying "please don't" aren't people who are actually using it or who can explain why it's necessary. it's a request for feedback, not just a poll.

    • > people who are actually using it

      I'd presume that most of those people are using it in some capacity, it's just that their numbers are seen as too minor to influence the decision.

      > explain why it's necessary

      No feature is strictly necessary, so that's a pretty high standard.

      6 replies →

    • Part of the reason I stopped was the lack of anything higher than 1.0 in browsers.

      The other reason is that SVG took a very long time to get good, and when it did I wanted to use XSL and SVG together.

      Now that SVG has gotten good, they are removing XSLT :(

  • It would be incredible if we could pull it into the javascript/wasm sandbox and get xslt 3.0 support. The best of both worlds, at the cost of a performance hit on those pages, but not a terrible cost.

  • It comes with the XML territory that things have versioned schemas and things like namespaces, and can be programmed in XSLT. This typically means that integrations are trivial due to public, reliable contracts.

    Unlike your average Angular project. Building on top of minified TypeScript is rather unreasonable, and integrating with JSON means you have a less-than-reliable data transfer protocol without a schema, so validation is a crude trial-and-error process.

    There's no elegance in raw XML et consortes, but the maturity of this family means there are also very mature tools, so in practice you don't have to look at XML or XSD as text: you can just unmarshal it into your programming language of choice (that is, if you choose a suitable one) and look at it as you would any other data structure.

Breaking the fundamental promise of the HTML spec is a big deal.

The discussions don't address that. That surprises me, because these seem to be the people in charge of the spec.

The promise is, "This is HTML. Count on it."

Now it would be just, "This is HTML for now. Don't count on it staying that way, though."

Not saying it should never be done, but it's a big deal.

They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.

So what they're really proposing is to cut off the web's long tail.

(Just want to note: The list of long-tail web technologies will continue to grow over time... we can expect it to grow roughly in proportion to the rate at which web technologies were added around 20 years in the past. Meaning we can expect an explosion of long-tail web technologies soon enough. We might want to think carefully about whether the people currently running the web value the web's long tail the way we would like.)

  • WHATWG broke this quasi-officially when they declared HTML a "Living Standard". The HTML spec is not a standard to be implemented anymore, it's just a method of coordinating/announcing what the browser vendors are currently working on.

    (For the same reason, they dropped the name HTML5 and are only talking about "HTML". Who needs version numbers if there is no future and no past anyway?)

    https://whatwg.org/faq#living-standard https://github.com/whatwg/html/blob/main/FAQ.md#html-standar...

  • Nothing lasts forever, and eventually you have to port, emulate, archive or otherwise deal with very old applications / media. You see this all over the place: physical media, file formats, protocols, retro gaming, etc.

    There's a sweet spot between giving people enough time and tools to make a transition while also avoiding having your platform implode into a black hole of accumulated complexity. Neither end of the spectrum is healthy.

    • I can still run windows applications that are decades old. If you don't want to support legacy stuff, don't insinuate yourself into global standards.

      If this was just Android that would be an issue between Google and their developers/users, but this is everybody.

  • To be completely fair, looking over the lines removed by the PR, there don't appear to be any normative statements requiring HTML to handle XSLT, unless I missed one.

    I get that people are more reacting to the prospect of browsers removing existing support, but I was pretty surprised by how short the PR was. I assumed it was more intertwined.

    • Their explicit intent is to generally remove XSLT from browsers.

      If this was just about, e.g., organizing web standards docs for better separation of concerns, I think a lot of people would be reacting to it quite differently.

  • There's a perverse irony that Google is as responsible as anybody for cramming a crazy amount of new stuff into the HTML/CSS/browser spec that everybody else has to support forever.

    If they were one of the voices for "the browser should be lightweight and let JS libs handle the weird stuff" I would respect this action, but Google is very very not that.

  • > They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.

    That's a concise way to put it. IMHO this is also the main problem of the standard.

    However, I think XSLT isn't only long tail but also a curiosity with merely academic value. I did some experimentation and prototyping with XSLT while it was still considered alive. So even if you see some value in it, the problems are endless:

    * XSLT is cumbersome to write and read

    * XML is clunky, XSLT even more so

    * yes, there's SLAX, which is okay-ish, but it becomes clear very fast that it's indeed just syntax sugar

    * there's XSLT 2.0 but there's no software support

    * nobody uses it, there's no network effect in usage

    A few years ago I stumbled upon a CMS that uses it, and once I accidentally stumbled upon a website that uses an XSLT transformation for styling. That's all the XSLT I ever saw actually being used in the wild.

    All in all, XSLT is a useless part of the way-too-large long tail that prevents virtually anyone from writing a spec-compliant web browser engine.

    > The promise is, "This is HTML. Count on it."

    I think after HTML4 and XHTML, people saw that a fully rigid standard isn't viable, so they made HTML5 a living standard with a plethora of working groups. The times when this promise was ever supposed to hold are therefore long over anyway.

    So indeed the correct way forward would be to remove more parts of a long tail that's hardly in use and stops innovation, and instead maybe keep a short list of features that allow writing modern websites.

    (Also nobody is stopping anyone from using XSLT as primary language that compiles to HTML5/ES5/CSS)

This is actually not a bad idea. Why should the browser contain a specific template engine, like XSLT, and not Jinja for example? Also it can be reimplemented using JS or WASM.

The browsers today are too bloated and it is difficult to create a new browser engine. I wish there were simpler standards for "minimal browser", for example, supporting only basic HTML tags, basic layout rules, WASM and Java bytecode.

Many things, like WebAudio or Canvas, could be implemented using WASM modules, which, as a side effect, would prevent their use for fingerprinting.

  • > This is actually not a bad idea. Why should the browser contain a specific template engine, like XSLT

    XSLT is a specification for a "template engine" and not a specific engine. There are dozens of XSLT implementations.

    Mozilla notably doesn't use libxslt but transformiix: https://web.mit.edu/ghudson/dev/nokrb/third/firefox/extensio...

    > and not Jinja for example?

    Jinja operates on text, so it's basically document.write(). XSLT works on the nodes itself. That's better.
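    A toy illustration of the text-vs-nodes distinction the parent describes, in Python rather than XSLT (`xml.etree` stands in here for any node-based transform; the sample data is made up):

```python
from xml.etree import ElementTree as ET

data = 'Alice <script>alert(1)</script>'

# Text templating (the Jinja / document.write style): strings are pasted
# together, so markup hiding inside the data leaks into the output document.
text_result = "<p>Hello {}</p>".format(data)

# Node-based transformation (the XSLT style): output is built as a tree,
# so the same data is escaped on serialization and stays plain text.
p = ET.Element("p")
p.text = "Hello " + data
node_result = ET.tostring(p, encoding="unicode")

print(text_result)  # <p>Hello Alice <script>alert(1)</script></p>
print(node_result)  # <p>Hello Alice &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

    The node-based output can't accidentally produce malformed or injected markup, which is the structural advantage being claimed for XSLT over string templating.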

    > Also it can be reimplemented using JS or WASM.

    Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.

    I think if you view XSLT as nothing more than ancient technology that nobody uses, then I can see how you could think this is ok, but I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.

    I bet Google will try and solve this problem they're creating by pushing AMP again...

    > The browsers today are too bloated

    No, Google's browser today is too bloated: That's nobody's fault but Google.

    > and it is difficult to create a new browser engine

    I don't recommend confusing difficult to create with difficult to sell unless you're looking for a reason to not do something: There's usually very little overlap between the two in the solution.

    • I'm asking this genuinely, not as a leading question or a gotcha trap: why use this client side, instead of running it on the server and sending the rendered output?

      6 replies →

    • > I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.

      Serving a server-generated HTML page could be even faster.

      6 replies →

    • > Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.

      Nobody is going to process millions of DOM nodes with XSLT because the browser won't be able to display them anyway. And one can write a WASM implementation.

      2 replies →

  • > Why should the browser contain a specific template engine, like XSLT,

    XSLT is a templating language (like HTML is a content language), not a template engine in the way that Blink or WebKit is a browser engine.

    > Also it can be reimplemented using JS or WASM.

    Changing the implementation wouldn't involve taking the language out of the web platform. There wouldn't need to be any standardization talk about changing the implementation used in one or more browsers.

  • The old, bug-ridden native XSLT code could also be shipped as WASM along with the browser rather than being deprecated. The sandbox would nullify the exploits, and avoid breaking old sites.

    They actually thought about it, and decided not to do it :-/

  • > Many things, like WebAudio or Canvas, could be implemented using WASM modules, which as a side effect, would prevent their use for fingerprinting.

    Audio and canvas are fundamental I/O things. You can’t shift them to WASM.

    You could theoretically shift a fair bit of Audio into a WASM blob, just expose something more like Mozilla’s original Audio Data API which the Web Audio API defeated for some reason, and implement the rest atop that single primitive.

    2D canvas context includes some rendering stuff that needs to match DOM rendering. So you can’t even just expose pixel data and implement the rest of the 2D context in a WASM blob atop that.

    And shifting as much of 2D context to WASM as you could would destroy its performance. As for WebGL and WebGPU contexts, their whole thing is GPU integration, you can’t do that via WASM.

    So overall, these things you’re saying could be done in WASM are the primitives, so they definitely can’t.

  • Why should the browser contain a specific scripting language, like JavaScript, and not ActiveScript for example?

    • I suspect you might know this, but Internet Explorer 3 supported JavaScript (JScript) and VBScript in 1996.

    • The browser could use Java or .NET bytecode interpreter - in this case it doesn't need to have a compiler and you can use any language - but in this case you won't be able to see a script's source code.

      1 reply →

    • It's a consequence of javascript being "good enough." Originally, the goal was for the web to support multiple languages (I think one prototype of the <script> tag had a "type=text/tcl") and IE supported VBScript for a while.

      But at the end of the day, you only really need one, and the type attribute was phased out of the script tag entirely, and Javascript won.

      4 replies →

  • > Why should the browser contain a specific template engine, like XSLT, and not Jinja for example?

    Historic reasons, and it sounds like they want it to contain zero template engines. You could transpile a subset of Jinja or Mustache to XSLT, but no one seems to do it or care.

  • >Why should the browser contain a specific template engine, like XSLT

    Because XSLT is part of the web standards.

  • I kind of agree that it's fair for little-used,[0] non-web-like features to be considered for removal. However, I wish they didn't hide behind security vulnerabilities as the reason, as that clearly wasn't it. The author didn't even bother to look into whether a memory-safe package existed. "We're removing this for your own good" is the worst way to go about it, but he still doubles down on this idea later in the thread.

    [0] ~0.001% usage according to one post there

  • Compare WebKit to UDK (the Unreal Development Kit for game dev) to consider why there is so much bloat in the browser. People have wanted to render more and more advanced things, and the WebKit engine should cater to all of them as best it can.

    For better or worse, http is no longer just for serving textual documents.

  • While this sounds crazy at first, I could warm to several incremental layers of features, where browsers could choose to implement support for only a subset of layers. The lowest layer would be something like HTTP with plain text, the next one HTML, then CSS with basic selectors, then CSS with the full selector set, then ECMA and WASM, then device APIs, and so forth.

    Would make it possible to create spec-compliant browsers with a subset of the web platform, fulfilling different use cases without ripping out essentials or hacking them in.

    • There is no point in several layers because to maximize compatibility developers would need to target the simplest layer. And if they don't, simple browsers won't be able to compete with full-fledged ones.

    • You can set the doctype in the document to the spec you want to use, which is basically what you're asking for. Try setting <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">

  • > Why should the browser contain a specific template engine, like XSLT, and not Jinja for example? Also it can be reimplemented using JS or WASM.

    I think a dedicated unsupported media type -> supported media type WASM transformation interface would be good. You could use it for new image formats and the like as well. There are things like JXL.js that do this:

    https://github.com/niutech/jxl.js

  • I get the point of a minimal browser and WASM, but Java bytecode?! Why not Python bytecode? It seems unreasonable to me to add support for any specific bytecode. By layout rules, do you mean getting rid of CSS? That also sounds unreasonable, IMHO.

    And no, WebAudio and Canvas couldn't be implemented in client WASM without big security implications. If by module you mean inside the browser, then what is the point of WASM here?

    • What WebAudio needs to provide is only a means to get or push buffers from/to audio devices and to run code in a high-priority thread. There is no need for the browser to provide implementations of low-pass filters, audio processing graphs, and similar primitives.

    • Honestly, even WASM makes it not very minimal in my book. A minimal browser should be HTML and perhaps a subset of CSS, that's it.

  • Wasm is ANYTHING but basic.

    Fuck javascript, fuck wasm, fuck html, fuck css.

    Rebase it all on XML/XPath/XQuery that way you only need ONE parser, one simple engine.

    This whole kitchen sink/full blown OS nonsense needs to end.

    Edit: You’re clearly a wasm shill, wasm is an abomination that needs to die.

Oh hey, that thing happened that one could easily see was going to happen [0]. The writing was on the wall for XSL as soon as the browsers tore out FTP support: their desire to minimize attack surface trumps any tendency to leave well enough alone.

I wonder what the next step of removing less-popular features will be. Probably the SMIL attributes in favor of CSS for SVG animations, they've been grumbling about those for a while. Or maybe they'll ultimately decide that they don't like native MathML support after all. Really, any functionality that doesn't fit in the mold of "a CSS attribute" or "a JS method" is at risk, including most things XML-related.

[0] https://news.ycombinator.com/item?id=43880391

  • CSS animations still lack a semantic way to sequence animations based on the beginning/end of some other animation, which SMIL offers. With SMIL you can say 'when this animation ID begins/ends only then trigger this other animation', including time offsets from that point.

    Which is miles better than having to use calcs for CSS animation timing, which requires a kludge of CSS variables etc. to keep track of when something begins/ends time-wise, if you want to avoid requiring JavaScript. And some years ago, Firefox IIRC didn't even support time-based calcs.

    When Chromium announced the intent to deprecate SMIL a decade back (before relenting), it was far too early to consider, given that CSS at the time lacked much of what SMIL allowed for (including motion along a path and SVG attribute value animations, which saw CSS support later). It also set off a chain of articles and never-again-updated notes warning about SMIL, which just added to the confusion. I remember even an LLM mistakenly believing SMIL was still deprecated in Chromium.
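    For readers unfamiliar with SMIL's syncbase timing, a minimal sketch of what the parent describes. The elements and the `begin="id.end+offset"` syntax are standard SMIL-in-SVG; the ids and values are made up for illustration:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect width="40" height="40" fill="teal">
    <!-- First animation, given an id so others can key off it. -->
    <animate id="slide" attributeName="x" from="0" to="160"
             dur="1s" begin="0s" fill="freeze"/>
    <!-- Syncbase timing: start 0.5s after "slide" ends. No JavaScript
         and no CSS-variable bookkeeping of durations required. -->
    <animate attributeName="fill" from="teal" to="crimson"
             dur="1s" begin="slide.end+0.5s" fill="freeze"/>
  </rect>
</svg>
```

    Retiming the first animation automatically reflows the second, which is the semantic sequencing CSS animations still lack.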

    • > if wanting to avoid requiring Javascript.

      And there's one of the issues: browser devs are perfectly happy if user JS can be used to replicate some piece of functionality, since then it's not their problem.

  • > their desire to minimize attack surface trumps any tendency to leave well enough alone.

    Is that a good thing or a bad thing?

    Technical people like us have our desires. But the billions of people doing banking on their browsers probably have different priorities.

    • There are ways to reduce attack surface short of tearing out support: for instance, taking one of those alleged JS polyfills and plugging it into the browser, in place of all the C++. But if attack surface is your sole concern, then one of those options sounds much easier than the other, and also ever-so-slightly superior.

      In any case, there's no limit on how far one can disregard compatibility in the name of security. Just look at the situation on Apple OSes, where developers are kept on a constant treadmill to update their programs to the latest APIs. I'd rather not have everything trend in that direction, even if it means keeping shims and polyfills that aren't totally necessary for modern users.

      7 replies →

  • > their desire to minimize attack surface trumps any tendency to leave well enough alone.

    Is that why Chrome unilaterally releases 1000+ web APIs a year, many of them quite complex, and spanning a huge range of things that can go wrong (including access to USB, serial devices, etc.)? To reduce the attack surface?

    • Well, their desire to stay trendy trumps their desire to minimize attack surface, I'd have to imagine. Alas, XML is roughly the polar opposite of trendy, mostly seen as belonging in the trash heap of the 90s alongside SOAP, CORBA, DCOM, Java applets, etc.

  • > The writing was on the wall for XSL as soon as the browsers tore out FTP support

    When did they do that? Can I not still ftp://example.com in the url bar?

    • FTP support was completely removed with the release of Chrome 88 in January 2021.

This would be sad, but I think it's sadder that we didn't spend more effort integrating more modern XSLT. It was painful to use _but_ if it had a few revisions in the browser I think it would have been a massive contender to things like React.

XML was unfairly demonized for the baggage that IBM and other enterprise orgs tied to it, but the standard itself was frigging amazing and powerful.

  • I have to agree. I liked XSLT and would have done much more with just a few additions to it.

    Converting a simple manually edited XML database of things to HTML was awesome. What I mostly wanted was the ability to pass in a selected item to display differently. That would allow all sorts of interactivity with static documents.

How do we feel about this concern in general? Not just specific to XSLTs

> my main concern is for the “long tail” of the web—there's lots of vital information only available on random university/personal websites last updated before 2005

It's a strong argument for me because I run a lot of old webpages that continue to 'just work', as well as regularly getting value out of other people's old pages. HTML and JS have always been backwards compatible so far, or at least close enough that you get away with slapping a TLS certificate onto the webserver

But I also see that we can't keep support for every old thing indefinitely. See Flash. People make emulators like Ruffle that work impressively well to play a nostalgic game or use a website on the Internet Archive whose main menu (guilty as charged) was a Flash widget. Is that the way we should go with this, emulators? Or a dedicated browser that still gets security updates, but is intended to only view old documents, the way that we see slide film material today? Or some other way?

> @whatwg whatwg locked as too heated and limited conversation to collaborators

Too heated? Looked pretty civil and reasonable to me. Would it be ridiculous to suggest that the tolerance for heat might depend on how commenters are aligned with respect to a particular vendor?

  • "too heated" is a codeword for "we don't want to deal with dissenting opinions". Same on other forums, e.g. Reddit.

  • It's a little jarring that the 1 comment visible underneath that is a "Nice, thanks for working on this!", and if you click on the user that wrote it, it's someone working for Google on Chrome... sheesh, kiss-ass much?

  • There was a discussion they opened to "gather community feedback" just three weeks ago. That one did get heated: https://github.com/whatwg/html/issues/11523

    Google ignored everything, pushed on with the removal, and now pre-emptively closed this discussion, too

    • > Google ignored everything, pushed on with the removal, and now pre-emptively closed this discussion, too

      To be fair to Google, they've consistently steam-rolled the standards processes like that for as long as I can remember, so it really isn't new.

      1 reply →

  • FYI, I heard that it was Apple employees who administer that repo that marked those comments as off topic and locked the thread, but people are attributing that to the Google employee that opened the issue.

  • I disagree - I saw a number of comments I would consider rude and unprofessional and once a PR gets posted on HN, frankly it typically gets much worse.

    I find people on HN are often very motivated reasoners when it comes to judging civility, but there’s basically no excuse for calling people “fuckers” or whatever.

  • > Why do people create such joke PRs?

    > We didn't forgot your decade of fuckeries, Google.

    > You wanted some heated comment? You are served.

    > the JavaScript brainworm that has destroyed the minds of the new generation

    > the covert war being waged by the WHATWG

    > This is nothing short of technical sabotage, and it’s a disgrace.

    > breaking yet another piece of the open web you don't find convenient for serving people ads and LLM slop.

    > Are Google, Apple, Mozilla going to pay for the additional hosting costs incurred by those affected by the removal of client-side XSLT support?

    > Hint: if you don't want to be called out on your lies, don't lie.

    > Evil big data companies who built their business around obsoleting privacy. Companies who have built their business around destroying freedom and democracy.

    > Will you side with privacy and freedom or will you side with dictatorship?

    Bullshit like this has no place in an issue tracker. If people didn’t act like such children in a place designed for productive conversation, then maybe the repo owners wouldn’t be so trigger happy.

I love XSLT. I released a client-side XSLT-based PWA last year (https://github.com/ssg/eksi-yedek - in Turkish). The reason I had picked XSLT was that the input was in XML, and browser-based XSLT was the most suitable candidate for a PWA.

Two years ago, I created a book in memory of a late friend to create a compilation of her posts on social media. Again, thanks to XSLT, it was a breeze.

XSLT has been orphaned on the browser-side for the last quarter century, but the story on the server-side isn't better either. I think that the only modern and comprehensive implementation comes with Saxon-JS which is bloated and has an unwieldy API for JavaScript.

Were XSLT dropped next year, what would be the course of action for us who rely on browser-based XSLT APIs?

XSLT, especially 3.0, is immensely powerful, and not having good solutions in the JS ecosystem would make the aftermath of this decision look bleaker.

  • I’d just use the browser's XML parser and JavaScript for the transformation. Which is what I assume a putative XSLT JavaScript library would do.

    And if you’re leaning towards a declarative framework, use React.

Fwiw the XSLT implementation in Blink and WebKit is extremely inefficient. For example, it converts the entire document into a string, parses that into a format compatible with libxslt, then produces a string and parses it back into a node structure again. I suspect a user-space library could be similarly effective.

Ex. https://source.chromium.org/chromium/chromium/src/+/main:thi...

https://source.chromium.org/chromium/chromium/src/+/main:thi...

https://github.com/WebKit/WebKit/blob/65b2fb1c3c4d0e85ca3902...

Mozilla has an in-house implementation at least:

https://github.com/mozilla-firefox/firefox/tree/5f99d536df02...

It seems like the answer to the compat issue might be the MathML approach. An outside vendor would need to contribute an implementation to every browser. Possibly taking the very inefficient route since that's easy to port.

I have no opinion on this, just sharing my one-and-only XSLT story.

My first job in software was as a software test development intern at a ~500 employee non-profit, in about 2008 when I was about 19 or 20 years old. Writing software to test software. One of my tasks during the 2 years I worked there was to write documentation for their XML test data format. The test data was written in XML documents, then run through a test runner for validation. I somehow found out about XSLT and it seemed like the perfect solution. So I wrote up XML schemas for the XML test data, in XSD of course. The documentation lived in the schema, alongside the type definitions. Then I wrote an XSLT document, to take in those XML schemas and output HTML pages, which is also basically XML.

So in effect what I wrote was an XML program, which took XML as input, and outputted XML, all entirely in the browser at document-view time.

And it actually worked and I felt super proud of it. I definitely remember it worked in our official browser (Internet Explorer 7, natch). I recall testing it in my preferred browser, Firefox (version 3, check out that new AwesomeBar, baby), and I think I got it working there, too, with some effort.

I always wonder what happened with that XML nightmare I created. I wonder if anyone ever actually used it or maybe even maintained it for some time. I guess it most likely just got thrown away wholesale during an inevitable rewrite. But I still think fondly back on that XSLT "program" even today.

  • My XSLT story:

    I wrote my personal website in XML with XSLT transforming into something viewable in the browser circa 2008. I was definitely inspired by CSS Zen Garden where the same HTML gave drastically different presentation with different CSS, but I thought that was too restrictive with too much overly tricky CSS. I thought the code would be more maintainable by writing XSLT transforms for different themes of my personal website. That personal webpage was my version of the static site generator craze: I spent 80% of the time on the XSLT and 20% on the content of the website. Fond memories, even though I found XSLT to be incredibly difficult to write.

    • Ha! Shout out to CSS Zen Garden. I didn't go as far down the rabbit hole as you did (noped out before XSLT made its way into my mix), but around that time I made sure all of my html was valid XML (er, XHTML), complete with the little validation badge at the bottom of the page. 80:20 form to content ratio sounds about right.

    • Another fellow soul!

      My first rewrite of my site, as I moved it away from Yahoo, into my own domain was also in XSLT/XML.

      Eventually I got tired of keeping it that way, and rewrote the parsing and HTML generation in PHP, but kept the site content in XML, to this day.

      Every now and then I think about rewriting it, but I rather do native development outside work, and don't suffer from either PHP nor XML allergies.

      Doing declarative programming in XSLT was cool though.

      1 reply →

  • I implemented the full XPath and XSLT language with debugging capabilities for a company I used to work for some 25ish years ago. It was fun (until XPath and XSLT 2. Well, that was fun too, but because of a nice work colleague, not the language), but I always did wonder how this took off and Lisp didn't.

  • After the XML madness, whenever I see some tech being hyped and used all over the place, I remember the days of XML and ignore it.

    • I was quite fond of DokuWiki’s xml-rpc. Probably long replaced now but it was a godsend to have a simple rpc to the server from within javascript. (2007)

  • I once attempted to use XSLT to transform SOAP requests generated by our system so the providers' implementations would accept them. This included having to sufficiently grok XSD, WSDL, et al. to figure out what part of the chain was broken.

    At the end of the (very long) process, I just hard-coded the reference request XML given by the particularly problematic endpoints, put some regex replacements behind it, and called it a day.

  • We can laugh at NFTs but honestly there are a lot of technical solutions that fit the "kinda works/kinda seems like a good idea" but in the end it's a house of cards with a vested interest

    Imagine people put energy into writing that thick of a book about XML. To be filed into the Theology section of a library

It's not like the browsers can just switch to some better maintained XSLT library. There aren't any. There are about 1.5 closed-source XSLT 3 implementations, Altova and Saxonica. I don't want to sound ageist, but the latter is developed by the XSLT spec's main author, who is nearing retirement age. This library is developed behind closed doors, and from time to time zip files with code get uploaded to GitHub. Make of that what you will in terms of the XSLT community. For all of its elegance, XSLT doesn't seem very relevant if nobody is implementing it. I'm all for the open web, but XSLT should just be left in peace to slide into the good night.

  • Saxonica is an Employee Ownership Trust and the team as a whole is relatively young (far off from retirement).

    "Saxonica today counts some of the world's largest companies among its customer base. Several of the world's biggest banks have enterprise licenses; publishers around the world use Saxon as a core part of their XML workflow; and many of the biggest names in the software industry package Saxon-EE as a component of the applications they distribute or the services they deploy on the cloud."

    https://www.saxonica.com/about/about.xml

Best comment from another related thread (not from me):

So the libxml/libxslt unpaid volunteer maintainer wants to stop doing 'disclosure embargo' of reported security issues: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913 Shortly after that, Google Chrome want to remove XSLT support.

Coincidence?

Source (yawaramin): https://xkcd.com/2347/ A shame that libxml and libxslt could not get more support while used everywhere. Thanks for all the hard work to the unpaid volunteers!

  • This seems totally fine though? The libxslt maintainer says the support burden is costing him heavily, then Chrome says removing support is fine, which seems to align for both of them.

    It'd be much better if Google supported the maintainer, but given the apparent lack of use of XSLT 1.0 and the maintainer already having burned out, stopping supporting XSLT seems like the best available outcome:

    > "I just stepped down as libxslt maintainer and it's unlikely that this project will ever be maintained again"

I used XSLT once to publish recipes on the web. The cookbook software my mom used (maybe MasterCook?) could "export as xml" and I wrote an xslt to transform it into readable html. It was fine. It's, of course, also possible to run the XSLT from the command line to generate static html.

The suggestion of using a polyfill is a bit nonsensical, as I suspect there is little new web being written in XSLT, so someone would have to go through all the old pages out there and add the polyfill. Anyone know if XSLT could be accomplished with a Chrome extension? That would make more sense.

  • It would sure be possible to combine a polyfill with a webextension; not sure if XSLT contains any footguns that would make this approach hard, but if it's solely a single client-side transformation of the initial XML response, this should work fine.

    Cool example with the recipes page :)

    • I guess it's time for me to write that webextension; if it gets popular enough I can sell it to someone wearing a black hat for maybe tens of dollars!

      1 reply →

The idea of building something like PDF.js makes a lot of sense. I think the crux of it, though, is that the polyfill should be in the browser, not something that a site maintainer has to manually implement.

  • We absolutely shouldn't be just ripping out support.

    If there is to be a polyfill, I'm not sure making it in JavaScript makes sense, but WebAssembly could work.

I love how one company can do whatever they want. This is perfect.

I had no idea what XSLT even was until today. Reading the submission, the thread linked by u/troupo below, and Wikipedia, I find that it's apparently used in RSS parsing by browsers, because RSS is XML and then XSLT is "originally designed for transforming XML documents into other XML documents" so it can turn the XML feed into an HTML page

I agree RSS parsing is nice to have built into browsers. (Just like FTP support, that I genuinely miss in Firefox nowadays, but allegedly usage was too low to warrant the maintenance.) I also don't really understand the complaint from the Chrome people that are proposing it: "it's too complex, high-profile bugs, here's a polyfill you can use". Okay, why not stuff that polyfill into the browser then? Then it's already inside the javascript sandbox that you need to stay secure anyway, and everything just stays working as it was. Replacing some C++ code sounds like a win for safety any day of the week

On the other hand, I don't normally view RSS feeds manually. They're something a feed parser (in my case: Blogtrottr and AntennaPod) would work with. I can also read the raw XML if there is ever a reason for me to look at it, or the server can transform the RSS XML into XHTML with the same XSLT code, right? If it's somehow a big deal to maintain, and RSS is the only thing that uses it, I'm also not sure how big a deal it is to have people install an extension if they view RSS feeds regularly on sites where the server can do no HTML render of that information. It's essentially the same solution as if Chrome put the polyfill inside the browser: the browser transforms the XML document inside of the JS sandbox

  • It's much more general purpose than that. RSS is just XML after all. XSLT basically lets you transform XML into some other kind of markup, usually HTML.

    I think the principle behind it is wonderful. https://www.example.com/latest-posts is just an XML file with the pure data. It references an XSLT file which transforms that XML into a web page. But I've tried using it in the past and it was such a pain to work with. Representing things like for loops in markup is a fundamentally inefficient thing to do, JavaScript based templating is always going to win out from the developer experience viewpoint, especially when you're more than likely going to need to use JS for other stuff anyway.

    It's one of those purist things I yearn for but can never justify. Shipping XML with data and a separate template feels so much more efficient than pre-prepared HTML that's endlessly repetitive. But... gzip also exists and makes the bandwidth savings a non-issue.
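
    To make the "for loops in markup" point concrete, here's a minimal XSLT 1.0 sketch (element and file names are hypothetical) of that /latest-posts idea, where the XML references the stylesheet and the browser does the rest:

    ```xml
    <!-- latest-posts.xml would reference this with:
         <?xml-stylesheet type="text/xsl" href="latest-posts.xsl"?> -->
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/posts">
        <html>
          <body>
            <ul>
              <!-- xsl:for-each is the "for loop in markup" -->
              <xsl:for-each select="post">
                <li><xsl:value-of select="title"/></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>
    ```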

  • RSS likely isn't the only thing that uses it. XSLT is basically the client side declarative template language for XML/HTML that people always complain doesn't exist (e.g. letting you create your own tags or do includes with no server or build steps).

    • I understand that there are more possible uses for the tool, but RSS is the only one I saw someone mention. Are there more examples?

      It may be that I don't notice when I use it, if the page just translates itself into XHTML and I would never know until opening the developer tools (which I do often, fwiw: so many web forms are broken that I have a habit of opening F12, so I always still have my form entries in the network request log). Maybe it's much more widespread than I knew of. I have never come across it and my job is testing third-party websites for security issues, so we see a different product nearly every week (maybe those sites need less testing because they're not as commonly interactive? I may have a biased view of course)

      11 replies →

  • > I also don't really understand the complaint from the Chrome people that are proposing it: "it's too complex, high-profile bugs, here's a polyfill you can use".

    Especially considering the number of complex standards they have no qualms about, from WebUSB to 20+ web components standards

    > On the other hand, I don't normally view RSS feeds manually.

    Chrome metrics famously underrepresent corporate installations. There could be quite a few corporate applications using XSLT, as it was all the rage 15-20 years ago.

    • My guess is that they're fine with WebBluetooth/USB/FileSystem/etc. because the code for the new standard is recent and sticks with modern security sensibilities.

      XSLT (and basically anything else that existed when HTML5 turned ten years old) is old code using old quality standards and old APIs that still need to be maintained. Browsers can rewrite them to be all new and modern, but it's a job very few people are interested in (and Google's internal structure heavily prioritizes developing new things over maintaining old stuff).

      Nobody is getting a promotion by modernizing the XSLT parser. Very few people even use XSLT in their day to day, and the biggest product of the spec is a competitor to at least three of the four major browser manufacturers.

      XSLT is an example of exciting tech that failed. WebSerial is exciting tech that can still prove itself somehow.

      The corporate installations still doing XSLT will get stuck running an LTS browser, like they did with IE11 and the many now-failed strains of technology it still supported (anyone remember ActiveX?).

    • We pentest lots of corporate applications, so if this was widely deployed in the last ~8 years that I've been doing the job full time, I don't know how I would have missed it (like, never even saw a talk about it, never saw a friend using it, never heard a colleague having to deal with it... there's lots of opportunities besides getting such an assignment myself). Surely there are talks on it if you look for it, just that I haven't the impression that this is a common corporate thing, at least among the kinds of customers we have (mainly larger organizations). A sibling comment mentions they use it on their hobby site though

  • XSLT was the blockchain, nft, metaverse of the mid?-2000s. Was totally going to solve all of our problems.

    • I thought XML was the big hype, not XSLT. That I somehow never saw it mentioned that you could do actual webpages and other useful stuff with it is probably why I never understood why people thought XML was so useful ^^' I thought it was just another data format like JSON or CSV, and we might as well have written HTML as {"body":{"p":"Hello, World!"}}, and that it's just serendipity that XML was earlier

      1 reply →

    • At the time I ran across lots of real websites using it. I successfully used it myself at least once too. Off the top of my head, Blizzard was using it to format WoW player profiles for display in the browser.

This is tragic. I believe we should have gone the other way and included xslt 3.0 in the baseline browser requirements.

Actually, I think removing XSLT is bad because it means we are more tied to javascript or other languages for XML transformation instead of a language designed for this specific purpose, a DSL.

Which means more unreadable code.

But if they decide to remove XSLT from the spec, I would be more than happy if they removed JS too. The same logic applies.

Having browsers transform XML data into HTML via XSLT is a cool feature, and it works completely statically, without any server-side or client-side code. It would be a shame if that were removed. I have a couple dozen XML databases that I made accessible in a browser using XSLT...

So annoying, XSLT is very powerful but browsers let it languish at 1.0

XSLT 1.0 is still useful though, and absolutely shouldn't be removed.

Them: "community feedback" Also them: <marks everything as off topic>

This came about after the maintainer of libxml2 found giving free support to all these downstream projects (from billionaire and trillionaire companies) too much.

Instead of just funding him, they have the gall to say they don't have the money.

While this may be true in the microcosm of that project, the devs should look at the broader context and who they are actually working for.

The XSLT juice is worth the squeeze, but only to a tiny minority of users, and there's costly rewrites to do to keep XSLT in there (for Chrome, at least.)

Here's what I wish could happen: allow implementers to stub out the XSLT engine and tell users who love it that they can produce a memory-safe implementation themselves if they want the functionality put back in. The passionate users and preservationists would get it done eventually.

I know that's not a good solution because a) new xslt engine code needs to be maintained and there's an ongoing cost for that for very few users, b) security reviews are costly for the new code, c) the stubs themselves would probably be nasty to implement, have security implications, etc. And, there's probably reasons d-z that I can't even fathom.

It sucks to have functionality removed/changed in the web platform. Software must be maintained though; cost of doing business. If a platform doesn't burden you with too much maintenance and chooches along day after day, then it's usually a keeper.

This proposal seems to be aimed at removing native support in favor of a WASM-based polyfill (like PDF.js, I guess) which seems reasonable?

Google definitely throws its weight around too much w.r.t. web standards, but this doesn't seem too bad. Web specifications are huge and complex, so trying to slim down a little bit while maintaining support for existing sites is okay IMO.

  • No, that would indeed be reasonable, but the proposal is to remove XSLT from the standard and remove Chrome support for XSLT entirely, forcing websites to adopt the polyfill themselves.

    • Which is, to me, silly. If you ship the polyfill then there's no discussion to be had. It works just the same as it always has for users and it's as secure as V8, no aging native codebase with memory corruption bugs to worry about.

      4 replies →

  • Last I checked, it’s a polyfill that Chrome won’t include by default - they’re just saying that they’d have a polyfill in JS and it’s on site authors to use it.

    That breaks old unmaintained but still valuable sites.

    • As a user you can only use the polyfill to replace the XSLTProcessor() JavaScript API. You can't use the polyfill if you're using XSLT for XML Stylesheets (<?xml-stylesheet … ?> tags).

      (But of course, XML Stylesheets are most widely used with RSS feeds, and Google probably considers further harm to the RSS ecosystem as a bonus. sigh)
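
      For illustration, this is the declarative path in question (feed contents hypothetical): the processing instruction sits in the XML prolog, and the browser applies the stylesheet before any page script could run, which is why a JS polyfill can't hook it.

      ```xml
      <?xml version="1.0" encoding="UTF-8"?>
      <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
      <rss version="2.0">
        <channel>
          <title>Example feed</title>
          <item><title>Hello, world</title></item>
        </channel>
      </rss>
      ```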

      1 reply →

    • Ah, okay. I guess that's another one I'll add to the list of hostile actions towards the web then.

      I completely understand the security and maintenance burdens that they're bringing up but breaking sites would be unacceptable.

  • The polyfills are something devs have to include and use. It means all the pages that cannot be updated will be broken.

Setting aside the discussion of the linked issue itself (tone, comments, etc), I feel like I need to throw this out there:

I don't understand the point in having a JS polyfill and then expecting websites to include it if they want to use XSLT stuff. The beauty of the web is that shit mostly just works going back decades, and it's led to all kinds of cool and useful bits of information transfer. I would bet money that so much of the weird useful XSLT stuff isn't maintained as much today - and that doesn't mean it's not content worth keeping/preserving.

This entire issue feels like it would be a nothing-burger if browser vendors would just shove the polyfill into the browser and auto-run it on pages that previously triggered the fear-inducing C++ code paths.

What exactly is the opposition to this? Even reading the linked issue, I don't see an argument against this that makes much sense. It solves every problem the browser vendors are complaining about and nothing functionally changes for end users.

Chrome is a browser – it can’t remove something from the spec. Perhaps this should say Google proposes to remove it from the spec.

  • Chrome is the dominant browser. Sad as this may be, removing it from Blink means de facto removing it from the spec.

    That being said, I'm not against removing features, but neither this nor the original post provides any substantial rationale for why it should be removed. Uses for XSLT do exist, and the alternative is "just polyfill it", which is awkward, especially for legacy content.

  • Not sure if you missed it, but a few days before this PR, Google did propose removing it from the spec.

    • But that's my point - Google is proposing removing it from the spec. It's kinda weird to reformulate it for the headline as 'Chrome' is doing it.

I don't get the people complaining that they need it on their low-power microcontrollers, yet instead of using an XSLT library they'd rather pull in Chromium.

With how bloated browsers are right now, good riddance IMO

  • They are not talking about pulling in Chromium on a microcontroller. Their web server is on a microcontroller, so they want to minimize server side CPU usage and force the browser to do their XSLT transformation.

    Since it's a microcontroller, modifying that server and pushing the firmware update to users is probably also a pain.

    Unusual use case, but a reasonable one.

    • Yeah, I don't think XML + XSLT is any better than, or allows anything that, sending say JSON and transforming it with JS wouldn't. However, that would require changing the firmware, which as you mention may be difficult or impossible.

  • I think they're talking about outputting XML+XSLT on those microcontrollers, i.e. just putting out text. Chromium would come in for the viewer who's loading whatever tiny status-webpage those microcontrollers are hosting on a separate device.

I saw XSLT used to transform RSS feeds into something nicely human readable. That is, the RSS feed was referencing the XSLT. Other than that I haven't noticed the use of XSLT on the web.

I remember having built a static site that was 100% xml data and xslt transformers in the early 2000s

Quite fun at the time

IBM owns a very high-performance XSLT engine they could probably open source or license to the browser makers. If anyone from IBM is here, you may want to consider it...

If security and memory-safety are the concerns and there is already a polyfill, why remove the API from the standard instead of just using the WASM-based polyfill internally?

  • They want to punt a half-baked polyfill over the wall and remove support from the browser so they don't have to do any maintenance work, making it someone else's problem.

There are better candidates to remove from the spec than XSLT, like HTML. The parsing rules for HTML are terrible, and they hinder further advancement of the spec more than anything. The biggest mistake of HTML was backpedaling on the switch to XHTML.

Removal of anything is problematic though, better off freezing parts of the spec to specific compatibility versions and getting browsers to ship optional compatibility modes that let you load and view old sites.

If this is in response to Nick Wellnhofer's announcement from three months ago to stop embargoing/prioritizing libxslt/libxml2 CVEs due to lack of manpower (which I suspect is a consequence of flooding projects with bogus LLM-generated findings from students wanting to pad their profiles), wouldn't it be possible to ship an emscripten-compiled libxslt instead of libxslt proper?

So Google is bringing the deprecation treadmill to the web, yay!

Yegge called it:

https://steve-yegge.medium.com/dear-google-cloud-your-deprec...

"""

> Because I sometimes get similar letters from the Google Cloud Platform. They look like this:

>> Dear Google Cloud Platform User,

>> We are writing to remind you that we are sunsetting [Important Service you are using] as of August 2020, after which you will not be able to perform any updates or upgrades on your instances. We encourage you to upgrade to the latest version, which is in Beta, has no documentation, no migration path, and which we have kindly deprecated in advance for you.

>> We are committed to ensuring that all developers of Google Cloud Platform are minimally disrupted by this change.

>> Besties Forever,

>> Google Cloud Platform

> But I barely skim them, because what they are really saying is:

>> Dear RECIPIENT,

>> Fuck yooooouuuuuuuu. Fuck you, fuck you, Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore. So drop your fucking plans and go start digging through our shitty documentation, begging for scraps on forums, and oh by the way, our new shit is COMPLETELY different from the old shit, because well, we fucked that design up pretty bad, heh, but hey, that’s YOUR problem, not our problem.

>> We remain committed as always to ensuring everything you write will be unusable within 1 year.

>> Please go fuck yourself,

>> Google Cloud Platform

"""

  • But if you live in a capitalist country with a free market, several competitors should pop up and offer to migrate your system into their cloud for free, shouldn't they? There's no way a capitalist overlooks an unoccupied market niche.

Intent to remove: emergency services dialling (911, 112, 000, &c.)

Almost no one ever uses it: metrics show only around 0.02% of phone calls use this feature. So we’re planning on deprecating and then removing it.

—⁂—

Just an idea that occurred to me earlier today. XSLT doesn’t get a lot of use, but there are still various systems, important systems, that depend upon it. Links to feeds definitely want it, but it’s not just those sorts of things.

Percentages only tell part of the story. Some are tiny features that are used everywhere, others are huge features that are used in fewer places. Some features can be removed or changed with little harm—frankly, quite a few CSS things that they have declined to address on the grounds of usage fall into this category, where a few things would be slightly damaged, but nothing would be broken by it. Other features completely destroy workflows if you change or remove them—and XSLT is definitely one of these.

Do we know where WebKit, KHTML, and Gecko stand on this?

I know this is for security reasons, but why not update the XSLT implementation instead? And if features that aren't used get dropped, they might as well do it all in one go. I am sure lots of the HTML spec isn't even used.

  • If it was just for security reasons, they could sponsor FOSS development on the implementation.

    I am of the opinion that it is to remove one of the last ways to build web applications that don't have advertising and tracking injected into them.

    • I get the impression they are ripping it out because they don't want to sponsor the FOSS volunteer working on it or deal w/ maintaining it themselves. The tracking/advertising take doesn't hold much water for me as adding those things to the page is something developers and companies choose to do. You could just as easily inject a tracking script tag or pixel or whatever via XSLT during transformation if you wanted.

    • > I am of the opinion that it is to remove one of the last ways to build web applications that don't have advertising and tracking injected into them.

      Er, how so? What stops you from doing so in HTML/JS/CSS ?

  • KHTML has been discontinued and was barely maintained for several years before. It has not been a relevant party for about a decade if not more.

Despite the rather heated discussion that started just two weeks prior: https://github.com/whatwg/html/issues/11523

  • It’s another “we listened to the community and nobody told us no” moment. Like Go’s telemetry issue.

    Google is boneheaded and hostile to open web at this point, explicitly.

    • > It’s another “we listened the community and nobody told us no” moment. Like Go’s telemetry issue.

      Go changed their telemetry to opt-in based on community feedback, so I'm not sure what point you're trying to make with that example.

      7 replies →

  • Looks like they're going to ram it through anyway, no matter the existing users. There's got to be a better way to deal with spam than just locking the thread to anyone with relevant information.

    • WHATWG literally forced W3C to sign a deal and obey their standards. WHATWG is basically Google + Apple + Microsoft directly writing the browser standards. Fixing Microsoft's original mistake of Internet Explorer of not creating a faux committee lol.

      1 reply →

  • "Heated discussion" sounds like any comment voicing legitimate concern being hidden as "off-topic", and the entire discussion eventually being locked. Gives me Reddit vibes, I hope this is not how open web standards are managed.

  • If it's a security issue, shouldn't the browsers just replace C++ code with the JS or WASM polyfill themselves?

    • I also wondered about that. They probably don't want to do that because of maintaining, fixing and allocating resources to it then.

      Probably a browser extension on the user side can do the same job if an XSLT relying page cannot be updated.

      2 replies →

This is disappointing. I was using XSLT for transforming SVGs, having discovered it early last year via a chat. Even despite browsers only shipping with v1.0 it still allowed a quite compact way to manipulate them without adding some extra parser dependency.

As much as I think XSLT is cool, if it's used by practically nobody and contains real security vulnerabilities... oh well. You can't deny that combination is a good objective reason to remove it.

And browsers are too big with too many features; reducing the scope of what a browser does is good (but not enough by itself to remove a feature).

Maybe one day it will come back as a black-box module running in an appropriate sandbox - like I think Firefox uses for PDF rendering.

Ouch. Two of my old web sites use XSLT, as a way to display info from a database on administrative pages. I guess it's time to kill off those sites.

The web is so far gone at this point, they should probably remove everything but wasm.

At least that's how my cynical side feels these days.

Wait, all the web browsers had XSLT support all along?

I remember using these things in a CSCI class, and, IIRC, we were using something akin to Tomcat to do transformations on the server, before serving HTML to the browser, circa 2005/2006.

I had to look up what XSLT was (began working professionally as a programmer in 2013). Honestly, if it simplifies the spec, at this point it seems like a good idea to remove it.

XSLT came across as a little esoteric.

I support the html and browser spec being greatly simplified in general. Makes it easier to develop competing browsers.

  • But at the same time, people don't want web pages and web apps to become fully opaque, like Flutter web or complex, minified JS-heavy sites. Even the latter retain many a11y benefits of markup.

    I think that's a tradeoff.

    Simplest approach would be to just distribute programs, but the Web is more than that!

    Another simple approach would be to have only HTML and CSS, or even only HTML, or something like Markdown, or HTML + a different simple styling language...

    and yet nothing of that would offer the features that make web development so widespread as a universal document and application platform.

    • I think most people just don't care, although the a11y benefits are truly important. HTML isn't going anywhere, and often you need JS to make things more accessible.

      But like, most people just want a site to work and provide value, save them time etc and the way the site is built is entirely unimportant. I find myself moving towards that side despite being somewhat of a web purist for years.

I thought the HTML spec was immutable.

I'm sorry, but I don't understand this. If a polyfill can add XSLT support, then why don't browser vendors ship the polyfill and apply it automatically when necessary?
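For context, the detection step such an automatic shim would need is straightforward. A hedged sketch (regex-based, not a conforming processing-instruction parser) of finding the stylesheet reference in an XML document; a real shim would then fetch the href and run the transform, e.g. via a WASM build of libxslt:

```javascript
// Sketch: locate an <?xml-stylesheet type="text/xsl" href="..."?> PI.
// Illustrative only: a regex over the document text, not a real XML parser.
function findXsltHref(xmlText) {
  const pi = xmlText.match(/<\?xml-stylesheet\s+([^?]*)\?>/);
  if (!pi) return null;
  const attrs = pi[1];
  if (!/type\s*=\s*"(text\/xsl|application\/xslt\+xml)"/.test(attrs)) return null;
  const href = attrs.match(/href\s*=\s*"([^"]*)"/);
  return href ? href[1] : null;
}
```

The hard part is not detection but everything after it: fetching, sandboxing, and maintaining the transform engine, which is presumably exactly the maintenance burden the vendors are trying to shed.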

The vision of XML was a semantic web. Nowadays everybody knows that semantic is 'em' and non-semantic is 'b' or 'i'. This is simple, but wrong. In fact a notation is semantic when you can make all the distinctions you care about and (this is important) do not have to make distinctions you do not care about. In this case every distinction means something and thus is semantic.

How do you apply this to documents? They are so different. XML gives the answer: you INVENT a notation that suits just your case and use it. This way you perfectly solve the enigma of semantic.

OK, fine, but what to do with my invented notation? Nobody understands it. Well, that is OK. You want to render it as HTML; HTML has no idea about your notation, but is (was) also a kind of XML, so you write a transformation from your notation to HTML. Now you want to render it for printed media: here is XSL-FO, go ahead. Or maybe you want to let blind people read your document too; here is (a non-existent) AUDIO-ML, just add a transformation into this format. In fact there could be lots of different notations for different purposes (search, for instance), and they are all within a single transformation step.

And for that transformation we give you a tool: XSLT.

(I remember a piece discussed here; it was about different languages, and one of the examples of very simple languages was XSLT. That is my impression as well; XSLT is unconventional, but otherwise very simple.)

Of course you do not have to invent a new notation each time. It's equally fine to invent small specific notations and mix them with yours.

For example, imagine a specific chess notation. It allows you to describe positions and a sequence of moves, giving you either a party or a composition. You write about chess and add snippets in this notation. First, it can be very expressive; referring to a position should take no more than:

    <position party="#myparty" move="22w" />

Given the party is described this can render the whole board. Or you can refer to a sequence of moves:

    <moves party="#myparty" from="22w" to="25b" />

and this can be rendered in any chess move notation.

And then imagine a specific search engine that crawls the web, indexes parties and compositions and then can search, for example, for other pages that discuss this party, or for similar positions, or for matching sequences of moves.
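A sketch of what the transformation step might look like for such an invented notation. The element and attribute names follow the chess example above; the template body is illustrative:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Render each <position> reference as an HTML placeholder;
       a fuller stylesheet would look up the party and draw the board. -->
  <xsl:template match="position">
    <div class="chess-board">
      Position after move <xsl:value-of select="@move"/>
    </div>
  </xsl:template>
</xsl:stylesheet>
```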

XML even had a foundation to incorporate other notations. XML itself is, indeed, verbose (although this can be lessened with a good design, which is rare), but starting from v1.0 it has a way to formally indicate that contents of an element are written in a specific notation. If that direction was followed it could lead to things like:

    <math notation="latex">...</math>
    <math notation="asciimath">...</math>

all in the same document.

The vision of XML was a federated web. Lots of notations, big and small, evolving and mixing. It was dismissed on the premise that it was too strict. I myself think it was too free.

As a reminder for people who love xslt.

Nothing is stopping you from using content negotiation to do it server-side.
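A minimal sketch of that idea, assuming a Node-style server: inspect the Accept header and serve either the raw XML or a pre-transformed HTML rendering (the transform itself would run server-side, e.g. with xsltproc or a library):

```javascript
// Sketch: pick a representation from the Accept header. Browsers put
// text/html in Accept; feed readers and scripts typically ask for
// application/xml or */*. This simplifies full RFC 9110 negotiation
// (no q-value parsing).
function pickRepresentation(acceptHeader) {
  const accept = (acceptHeader || "").toLowerCase();
  if (accept.includes("text/html")) return "html"; // serve transformed page
  return "xml"; // serve the raw document unchanged
}

// A request handler would then do, roughly:
//   res.end(pickRepresentation(req.headers.accept) === "html"
//     ? transformedHtml : rawXml);
```

This keeps feeds and other XML consumers untouched while browsers get HTML, at the cost of needing a server that can run the transform.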