> Cancelling XSLT is going in the wrong direction (IMHO).
XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away. The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
It seems like something an extension ought to be capable of, and if not, fix the extension API so it can. In Firefox I think it would be a full-blown plugin, which is a lower-level thing than an extension, but I don't know whether Chromium even has a concept of such a thing.
> XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away.
Not having it available from the browser really reduces the ability to use it in many cases, and lots of the nonbrowser XSLT ecosystem relies on the same insecure, unmaintained implementation. There is at least one major alternative (Saxon), and if browser support was switching backing implementation rather than just ending support, “XSLT isn’t going anywhere” would be a more natural conclusion, but that’s not, for whatever reason, the case.
XSLT as a feature is being removed from web browsers, which is pretty significant. Sure it can still be used in standalone tools and libraries, but having it in web browsers enabled a lot of functionality people have been relying on since the dawn of the web.
> hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away
So why not switch to a better maintained and more secure implementation? Firefox uses TransforMiix, which I haven't seen mentioned in any of Google's posts on the topic. I can't comment on whether it's an improvement, but it's certainly an option.
> The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
Really? How about a trillion dollar corporation steps up to sponsor the lone maintainer who has been doing a thankless job for decades? Or directly takes over maintenance?
They certainly have enough resources to maintain a core web library and fix all the security issues if they wanted to. The fact they're deciding to remove the feature instead is a sign that they simply don't.
And I don't buy the excuse that XSLT is a niche feature. Their HTML bastardization AMP probably has even fewer users, and they're happily maintaining that abomination.
> It seems like something an extension ought to be capable of
I seriously doubt an extension implemented with the restricted MV3 API could do everything XSLT was used for.
> and if not, fix the extension API so it can.
Who? Try proposing a new extension API to a platform controlled by mega-corporations, and see how that goes.
It's encouraging to see browsers actually deprecate APIs, when I think a lot of the problems with the Web, and Web security in particular, come from people starting to use new technologies too fast but not stopping to use old ones fast enough.
That said, it's also pretty sad. I remember back in the 2000s writing purely XML websites with stylesheets for display, and XML+XSLT is more powerful, more rigorous, and arguably more performant now in the average case than JSON + React + vast amounts of random collated libraries which has become the Web "standard".
But I guess LLMs aren't great at generating XSLT, so it's unlikely to gain back that market in the near future. It was a good standard (though not without flaws), I hope the people who designed it are still proud of the influence it did have.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Yup, "been there, done that" - at the time I think we were creating reports in SQL Server 2000, hooked up behind IIS.
It feels this is being deprecated and removed because it's gone out of fashion, rather than because it's actually measurably worse than whatever's-in-fashion-today... (eg React/Node/<whatever>)
100%. I’ve been neck deep over the past few months in developing a bunch of Windows applications, and it’s convinced me that never deprecating or removing anything in the name of backwards compatibility is the wrong way. There’s a balance to be struck like anything, but leaving these things around means we continue to pay for them in perpetuity as new vulnerabilities are found or maintenance is required.
What about XML + CSS? CSS works the exact same on XML as it does on HTML. Actually, CSS works better on XML than HTML because namespace prefixes provide more specific selectors.
The reason CSS works on XML the same as HTML is because CSS is not styling tags. It is providing visual data properties to nodes in the DOM.
Agreed on API deprecation, the surface is so broad at this point that it's nearly impossible to build a browser from scratch. I've been doing webdev since 2009 and I'm still finding new APIs that I've never heard of before.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Awesome! I made a blog using XML+XSLT back in high school. It was worth it just to see the flabbergasted look on my friends' faces when I told them to view the source code of the page, and it was just XML with no visible HTML or CSS[0].
Some people seem to think XSLT is used for the step from DOM -> graphics. This is not the first time I have seen a comment implying that, but it is wrong. XSLT is for the step from 'normalized data' -> DOM. And I like that this can be done in a declarative way.
The "severe security issue" in libxml2 they mention is actually a non-issue and the code in question isn't even used by Chrome. I'm all for switching to memory-safe languages but badmouthing OSS projects is poor style.
It is also kinda a self-burn. Chromium is an aging code base [1]. It is written in a memory-unsafe language (C++), calls hundreds of outdated & vulnerable libraries [2] and has hundreds of high-severity vulnerabilities [3].
Given Google's resources, I'm a little surprised they haven't created an LLM that would rewrite Chromium into Go/Rust and replace all the stale libraries.
Google has been too cheap to fund or maintain the library they built their browser with for more than a decade, so after its hobbyist maintainers got burnt out, they're ripping out the feature.
Their whole browser is made up of unsafe languages and their attempt to sort of make c++ safer has yet to produce a usable proof of concept compiler. This is a fat middle finger in the face of all the people's free work they grabbed to collect billions for their investors.
Nobody is badmouthing open source. It's the core truth, open source libraries can become unmaintained for a variety of reasons, including the code base becoming a burden to maintain by anyone new.
And you know what? That's completely fine. Open source doesn't mean something lives forever
The issue in question is just one of the several long-unfixed vulnerabilities we know about, from a library that doesn't have that many hands or eyes on it to begin with.
Sounded like the maintainers of libxml2 have stepped back, so there needs to be a supported replacement, because it is widely used. (Or if you are worried about the reputation of "OSS", you can volunteer!)
Where's the best collection or entry point to what you've written about Chrome's use of Gnome's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?
To anyone who says to use JS instead of XSLT: I block JS because it is also used for ads, tracking and bloat in general. I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all).
I think being able to do client-side templating without JS is an important feature and I hope that since browser vendors are removing XSLT they will add some kind of client-side templating to replace it.
The percentage of visitors who block JS is extremely small. Many of those visits are actually bots and scrapers that don’t interpret JS. Of the real users who block JS, most of them will enable JS for any website they actually want to visit if it’s necessary.
What I’m trying to say is that making any product decision for the extremely small (but vocal) minority of users who block JS is not a good product choice. I’m sorry it doesn’t work for your use case, but having the entire browser ecosystem cater to JS-blocking legitimate users wouldn’t make any sense.
I block JS, too. And so does about 1-2% of all Web users. JavaScript should NOT be REQUIRED to view a website. It makes web browsing more insecure and less private, makes page load times slower, and wastes energy.
Of note here is that the segment we're talking about is actually an intersection of two very small cohorts; the first, as you note, are people who don't own a television errr disable Javascript, and the second is sites that actually rely on XSLT, of which there are vanishingly few.
XSLT is being exploited right now for security vulnerabilities, and there is no solution on the horizon.
The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
As they say, security is not a product, it’s a process. The process we have for existing browser technologies is better. That process is better because more people use it.
But even if we were to try to consider the technologies in isolation, and imagine a timeline where things were different? I doubt whether XML+XSLT is the superior platform for security. If it had won, we’d just have a different nightmare of intermingled content and processing. Maybe more stuff being done client-side. I expect that browser and OS manufacturers would be warping content to insert their own ads.
>You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
> The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
Yes, they also have many more vulnerabilities, because browsers are JIT compiling JS to w+x memory pages. And JS continues to get more complex with time. This is just fundamentally not the case with XSLT.
We're comparing a few XSLT vulnerabilities to hundreds of JIT compiler exploits.
> I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all)
Recent XSLT parser exploits were literally the reason this whole push to remove it was started, so this change will specifically be helping people in your shoes.
I feel like there's a bias here due to XSLT being neglected and hence not receiving the same powers as JS. If it did get more development in the browser I'm pretty sure it would get the same APIs that we hate JS for, and since it's already Turing complete, chances are people would find ways to misuse it and bloat websites.
Makes me kind of sad. I started my career back in the days when XHTML and co were lauded as the next thing. I worked with SOAP and WSDLs. I loved that one can express nearly everything in XML. And namespaces… Then came JSON, and apart from it being easier for humans to read, I wondered why we switched from this one great exchange format to this half-baked one. But maybe I’m just nostalgic. But every time I deal with JSON parsers for type serialization and the question of how to express HashMaps and sets, how to provide type information etc etc, I think back to XML and the way that everything was available on board. Looked ugly as hell though :)
json is sort of a gresham's law "bad money drives out the good" but for tech: lazy and forgiving technologies drive out the better but stricter ones.
bad technology seems to make life easier at the beginning, but that's why we now have sloppy websites that are an unorganized mess of different libraries, several MB in size without reason, and an absolute usability and accessibility nightmare.
xhtml and xml were better, also the idea of separating syntax from presentation, but they were too intelligent for our own good.
> lazy and forgiving technologies drive out the better but stricter ones.
JSON is not "lazy and forgiving" (seriously, go try adding a comment to it).
It was just laser-focused on what the actual problem was that needed to be solved by many devs in day-to-day practice.
Meanwhile XML wanted to be an entire ecosystem, its own XML Cinematic Universe, where you had to adopt it all to really use it.
It's not surprising to me that JSON won out, but it's not because it's worse, it's actually much better than XML for the job it ended up being used for (a generic format to transfer state between running programs supporting common data structures with no extraneous add-ons or requirements).
XML is better for a few other things, but those things are far less commonly needed.
I like XSLT, and I’ve been using the browser-based APIs in my projects, but I must say that XSLT ecosystem has been in a sad state:
- Browsers have only supported XSLT 1.0, for decades, which is the stone age of templating. XSLT 3.0 is much nicer, but there’s no browser support for it.
- There are only two cross-platform libraries built for it: libxslt and Saxon. Saxon seriously lacks ergonomics to say the least.
One option for Google as a trillion dollar company would be to drive an initiative for “better XSLT” and write a Rust-based replacement for libxslt with maybe XSLT 3.0 support, but killing it is more on-brand I guess.
I also dislike the message “just use this [huge framework everyone uses]”. Browser-based template rendering without loading a framework into the page has been an invaluable boon. It will be missed.
If you are using XSLT to make your RSS or atom feeds readable in a browser should somebody click the link you may find this post by Jake Archibald useful: https://jakearchibald.com/2025/making-xml-human-readable-wit... - it provides a JavaScript-based alternative that I believe should work even after Chrome remove this feature.
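(For readers who want the rough shape of that approach without clicking through: the sketch below is not Jake Archibald's actual code, just an illustration of the general idea, assuming the script is referenced from inside the feed itself, as with the polyfill <script> line quoted later in this thread. It uses only DOM APIs that survive the XSLT removal.)

```ts
// Sketch only: the feed document is already `document`, so read the entries
// out of it and replace the XML root with a plain XHTML rendering.
// Handles both RSS (<item>) and Atom (<entry>).
const XHTML = "http://www.w3.org/1999/xhtml";

const entries = Array.from(document.querySelectorAll("item, entry")).map((e) => ({
  title: e.querySelector("title")?.textContent ?? "(untitled)",
  // RSS puts the URL in <link>'s text content, Atom in <link href="...">.
  url:
    e.querySelector("link")?.getAttribute("href") ??
    e.querySelector("link")?.textContent ??
    "#",
}));

const html = document.createElementNS(XHTML, "html");
const body = html.appendChild(document.createElementNS(XHTML, "body"));
const list = body.appendChild(document.createElementNS(XHTML, "ul"));

for (const { title, url } of entries) {
  const item = list.appendChild(document.createElementNS(XHTML, "li"));
  const anchor = item.appendChild(document.createElementNS(XHTML, "a"));
  anchor.setAttribute("href", url);
  anchor.textContent = title;
}

document.replaceChild(html, document.documentElement);
```

A real script would escape and format things far more carefully; the point is only that the browser keeps enough XML machinery to do this without native XSLT.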
I think that's sad. XSLT is in my point of view a very misunderstood technology. It gets hated on a lot. I wonder if this hate is by people who actually used and understood it, though. In any case, more often than not this is by people who in the same sentence endorse JavaScript (which, by any objective way of measuring, is just a far more poorly designed language).
IMO XSLT was just too difficult for most webdevs. And IMO this created a political problem where the 'frontend' folks needed to be smarter than the 'backend' generating the XML in the first place.
XSLT might make sense as part of a processing pipeline. But putting it in front of your website was just an unnecessary and inflexible layer, so that's why everyone stopped doing it (except RSS feeds and the like).
I'm not much of a programmer, but XSLT being declarative means that I can knock out a decent-looking template without having to do a whole lot of programming work.
Au contraire: the more you understand and use XSLT, the more you hate it. People who don't understand it and haven't used it don't have enough information and perspective to truly hate it properly. I and many other people don't hate XSLT out of misunderstanding at all: just the opposite.
XSLT is like programming with both hands tied behind your back, or pedaling a bicycle with only one leg. For any non-trivial task, you quickly hit a wall of complexity or impossibility, then the only way XSLT is useful is if you use Microsoft's non-standard XSLT extensions that let you call out to JavaScript, then you realize it's so easy and more powerful to simply do what you want directly in JavaScript there's absolutely no need for XSLT.
I understand XSLT just fine, but it is not the only templating language I understand, so I have something to compare it with. I hate XSLT and vastly prefer JavaScript because I've known and used both of them and other worse and better alternatives (like Zope Page Templates / TAL / METAL / TALES, TurboGears Kid and Genshi, OpenLaszlo, etc).
>My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
> Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
Well said. I wrote an XSLT based application back in the early 2000s, and I always imagined the creators of XSLT as a bunch of slavering demented sadists. I hate XSLT with a passion and would take brainfuck over it any day.
Hearing the words Xalan, Xerces, FOP makes me break out in a cold sweat, 20 years later.
That's upsetting. Being able to do templating without using JavaScript was a really cool party trick.
I've used it in an unfinished website where all data was stored in a single XML file and all markup was stored in a single XSLT file. A CGI one-liner then made path info available to XSLT, and routing (multiple pages) was achieved by doing string tests inside of the XSLT template.
In my opinion this is not “we agree, let's remove it”. This is “we agree to explore the idea”.
Google and Freed are using this as a go-ahead because the Mozilla guy pasted a polyfill. However, it is very clearly NOT an endorsement to remove it, even though bad actors are stating so.
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support1. If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
Freed et al also explicitly chose to ignore user feedback for their own decision and not even try to improve XSLT security issues at the cost of performance.
Ah, so this is removing libxslt. For a minute I thought XSLT processing was provided by libxml2, and I remembered seeing that the Ladybird browser project just added a dependency on libxml2 in their latest progress update https://ladybird.org/newsletter/2025-10-31/.
I'm curious to see what happens going forward with these aging and under-resourced—yet critical—libraries.
There is no way to make JavaScript so limited in scope as XSLT is.
But all I want is XSLT on live DOM nodes, when editing. A simple, good templating engine that is here to stay. Not fancy stuff (re-reduxes ad infinitum).
Those are capabilities that processing-oriented people will never get close to, which document-oriented people (users) transparently had, and which are about to be lost.
The World Wide Web, invented at CERN in 1989 by Tim Berners-Lee, is a system of interlinked hypertext _documents_ - not interlinked programs (opaque, and positioned to take control over any data).
I know it makes me an old and I am biased because one of the systems I am most proud of in my career was designed around XSLT transformations, but this is some real bullshit and a clear case of why a private company should not be the de facto arbiter of web standards. Have a legacy system that depends on XSLT in the browser? Sucks to be you; one of our PMs decided the cost-benefit just wasn't there, so we scrapped it. Take comfort in the fact our team's velocity bumped up for a few weeks.
And yes I am sour about the fact as an American I have to hope the EU does something about this because I know full-well it's not happening here in The Land of the Free.
I don't use XSLT and don't object to this, but seeing "security" cited made me realize how reflexively distrustful I've become of them using that justification for a given decision. Is this one actually about security? Who knows!
Didn't this come pretty directly after someone found some security vulns? I think the logic was, this is a huge chunk of code that is really complex which almost nobody uses outside of toy examples (and rss feeds). Sure, we fixed the issue just reported, but who knows what else is lurking here, it doesn't seem worth it.
As a general rule, simplifying and removing code is one of the best things you can do for security. Sure you have to balance that with doing useful things. The most secure computer is an unplugged computer but it wouldn't be a very useful one; security is about tradeoffs. There is a reason though that security is almost always cited - to some degree or another, deleting code is always good for security.
The vulnerabilities themselves often didn't really affect Chrome, but by the maintainers' own admission the code was never intended to be security critical. Getting burned out by a series of vulnerability reports with publication deadlines made them decide to just treat security bugs like normal bugs so the community could help fix things. That doesn't really fit with the "protect users by keeping security issues secret for three months" approach corporations prefer. Eventually the maintainers stepped down.
Neither Google nor Apple were willing to sponsor a fork of the project but clearly they can't risk unmaintained dependencies in their billion dollar product, so they're rushing to pull the plug.
"Who knows what's lurking there" is a good argument to minimize attack surface, but Google has only been adding more attack surface over the past couple of years. I find it hard to defend that processing a structured document should be outside of a browser's feature set, but Javascript USB drivers and serial ports are necessary to drive the web. The same way libxml2 was never intended to be security critical, many GPU drivers were never written to protect from malicious programs, yet WebGPU and similar technology i being pushed hard and fast.
If we're deleting code to improvve against theoretical security risks, I know plenty of niche APIs that should probably be axed.
> As a general rule, simplifying and removing code is one of the best things you can do for security.
Sure, but that’s not what they’re doing in the big picture. XSLT is a tiny drop in the bucket compared to all the surface area of the niche, non-standard APIs tacked onto Chromium. It’s classic EEE.
There are security issues in the C implementation they currently use. They could remove this without breaking anything by incorporating the JS XSLT polyfill into the browser. But they won't because money.
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
It's true that there are security issues, but it's also true that they don't want to put any resources into making their XSLT implementation secure. There is strong unstated subtext that a huge motivation is that they simply want to rip this out of Chrome so they don't have to maintain it at all.
I'd never written any XSL before last week when I got the crazy idea of putting my resume in XML format and using stylesheets to produce both resume and CV forms, redacted and not, with references or not. And when I had it working in HTML, I started on the Typst stylesheet. xsltproc, being so old, basically renders the results instantly. And Typst, being so new, does as well.
XSLT seem like it could be something implemented with WebAssembly (and/or JavaScript), in an extension (if the extension mechanism is made suitable; I think some changes might be helpful to support this and other things), possibly one that is included by default (and can be overridden by the user, like any other extension should be); if it is implemented in that way then it might avoid some of the security issues. (PDF could also be implemented in a similar way.)
(There are also reasons why it might be useful to allow the user to manually install native code extensions, but native code seems to be not helpful for this use, so to improve security it should not be used for this and most other extensions.)
The lead dev driving the Chrome deprecation built a wasm polyfill https://github.com/mfreed7/xslt_polyfill. Multiple people proposed in the Github discussions leading up to this that Google simply make the polyfill ship with Chrome as an on-by-default extension that could be disabled in settings, but he wouldn't consider it.
Removing HTML and CSS would also make the browser more secure - but I would argue also very counterproductive for users.
Most read-only content and light editing can be achieved with raw data + XSLT.
The web has become a sledgehammer for cracking a nut.
For example, with XSLT you could easily render read-only content without complex and expensive office apps. That is enough for academia, government, and small businesses.
If they really cared about "security" they would remove JS or try to encourage minimising its use. That is a huge attack surface in comparison, but they obviously want to keep it so they can shove in more invasive and hostile user-tracking and controlling functionality.
I do all of my browsing with Javascript disabled. I've done this for decades now, as a security precaution mainly, but I've also enjoyed some welcome side-effects where paywalls disappeared and ads became static and unobtrusive. I wasn't looking for those benefits but I'll take 'em. In stride.
I've also witnessed a welcome (but slow) change in site implementations over the years: there are fewer sites completely broken by the absence of JS. Still, some give blank screens and even braindead :hidden attributes thrown into the <noscript> main page to needlessly forbid access... but not as many as back in the day when JS first became the rage.
I don't know much about XSLT other than the fact that my Hiawatha web server uses it to make my directory listings prettier, and I don't have to add CSS or JS to get some style. I hate to see a useful standard abandoned by the big boys, but what can I do about it?
I bristle when I encounter pages with a few hundred words of content surrounded by literally megabytes of framework and flotsam, but that's the gig, right, wading through the crap to find the ponies.
It's a shame the browser developers are making an open, interoperable, semantic web more difficult. It's not surprising, though. Browsers started going downhill after they removed the status bar and the throbber and made scrollbars useless.
> Didn't this effort start with Mozilla and not Google?
Maybe round one of it like ten years ago did? From what I understand, it's a Google employee who opened the "Hey, I want to get rid of this and have no plans to provide a zero-effort-for-users replacement." Github Issue a few months back.
It started with Mozilla, Apple, and Opera jumping ship and forming WHATWG. That stopped new XML related technologies from being adopted in browsers twenty years ago. Google is just closing the casket and burying the body.
Why would I forget about XSLT, a really good technology pushed to the wayside by bad-faith actors? Why would I forget Mason Freed, a person dedicating themselves to ruining perfectly good technology that needs a little love?
Do you have some sort of exclusively short-term memory or something where you can't remember someone's name? Bizarre reply. Other people may have had a similarly lazy idea, but Mason is the one pushing and leading the charge.
It seems maybe you want me to blame this on Google as a whole but that would mean bypassing blame and giving into their ridiculous bs.
Blame Apple and Mozilla, too, then. They all agreed to remove it.
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
This is only repeated by people who have never used it.
XSLT is still a great way of easily transforming xml-like documents. It's orders of magnitude more concise than transforming using Javascript or other general programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
Exactly what you can abuse in XSLT (without non-standard extensions) in order to do anything security relevant? (DoS by infinite recursion or memory exhaustion does not count, you can do the same in JS...)
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Only Google is pushing forward and twisting that message.
This is the problem with any C/C++ codebase; using Rust instead would have been a better solution than just removing web standards from what is supposed to be a web browser.
> When that solution isn't wanted, the polyfill offers another path.
A solution is only a solution if it solves the problem.
This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.
The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.
Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?
> So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right?
As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
> As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
This is a (poor) attempt at gaslighting/retconning.
The phrase "Don't break the Web" is not original to this thread.
(I can't say I look forward to your follow-up reply employing sleights of hand like claims about how stuff like Flash that was never standardized, or the withdrawal of experimental APIs that weren't both stable/finalized and implemented by all the major browsers, or the long tail of stuff on developer.mozilla.org that is marked "deprecated" (but nonetheless still manages to work) are evidence of your claim and that browser makers really do have a history of doing this sort of thing. This is in fact the first time something like this has actually happened—all because there are engineers working on browsers at Google (and Mozilla and Apple) that are either confused about how the Web differs from, say, Android and iOS, or resentful of their colleagues who get to work on vendor SDKs where the API surface area is routinely rev'd to remove whatever they've decided no longer aligns with their vision for their platform. That's not what the Web is, and those engineers can and should go work on Android and iOS instead of sabotaging the far more important project of attending to the only successful attempt at a vendor-neutral, ubiquitous, highly accessible, substrate for information access that no one owns and that doesn't fuck over the people who rely on it being stable.)
"The reality is that for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise.
It's even worse than that, actually, because all of the things we've built aren't just not doing what we want, they're holding developers back. People build their applications on frameworks that _abstract out_ all the APIs we build for browsers, and _even with those frameworks_ developers are hamstrung by weird limitations of the web."
I think in the context of that link, they see React as a failing of the web. If the W3C/WHATWG/browser vendors had done a reasonable job of moving web technology forward, things like React would be far less necessary. But they spent all their time and energy working on things like <aside> and web components instead of listening to web developers and building things that are actually useful in a web developer’s day-to-day life. Front-end frameworks like React, on the other hand, did a far better job of listening to developers and building what they needed.
So basically browsers had this [..] the question now is there is no investment in this. None. And there hasn't been for a really long time from the browser's perspectives.
XSLT then shows up to be a very robust technology that has already survived the test of time - for decades (!), despite the lack of support and investment, with key browser bugs left unfixed on purpose and the implementation stuck at version 1.0 - it's still being used where it works, and where it is used it holds up well and lasts. In the meantime, elsewhere:
XPath And XSLT continue to evolve. They've really continued to evolve. And people are currently working on an XSLT-4.
And because it's a declarative way of transforming trees and collections of trees. And declarative means you don't say how to do it. You say, 'This is what I want'..
.. it's timeless: an _abstracted definition_ to which imperative solutions could, in the best case, be reduced
- with authors unaware of that repeatedly trying (and soon having to try) to reimplement that "not needed" ( - already abstracted out! - ) part ( ex. https://news.ycombinator.com/item?id=45183624 ) - in more or less common or compatible ways
- so, better keep it - as not everybody can afford expensive solutions, and there are nonprofits too that don't depend on a % of the money wasted repeating the same work and like to KISS !
panos: next item, removing XSLT. There are usage numbers.
stephen: I have concerns. I kept this up to date historically for Chromium, and I don't trust the use counters based on my experience. Total usage might be higher.
dan: even if the data were accurate, not enough zeros for the usage to be low enough.
mason: is XSLT supported officially?
simon: supported
mason: maybe we could just mark it deprecated in the spec, to make the statement that we're not actively working on it.
brian: we could do that on MDN too. This would be the first time we have something baseline widely available that we've marked as removed.
dan: maybe we could offer helpful pointers to alternatives that are better, and why they're better.
panos: maybe a question for olli. But I like brian's suggestion to mark it in all the places.
dan: it won't go far unless developers know what to use instead.
brian: talk about it in those terms also. Would anyone want to come on the podcast and talk about it? I'm guessing people will have objections.
emilio: we have a history of security bugs, etc.
stephen: yeah that was a big deal
mason: yeah we get bugs about it and have to basically ignore them, which sucks
brian: people do use it and some like it
panos: put a pin in it, and talk with olli next time?
As for the rest of your [working for Google] comment: to put it simply, you come off as someone inexperienced. Maybe I'm wrong and you have a big list of features you've successfully removed and public discussions you had in the process; if so, there's probably something to learn from how those differed from this one.
Your response is like seeing the cops going to the wrong house to kick in your neighbors door, breaking their ornaments in their entry way, and then saying to yourself, "Good. I hate yellow, and would never have any of that tacky shit in my house."
As the first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't (and hasn't) resulted in you being forced to use it in your projects.
Yes, but software, and especially browser, complexity has ballooned enormously over the years. And while XSLT probably plays a tiny part in that, it's likely embedded in every Electron app that could do in 1 MB what it takes 500 MB to do, it makes it incrementally harder to build and maintain a competing browser, etc., etc. It's not zero cost.
I do tend to support backwards compatibility over constant updates and breakage, and needless hoops to jump through as e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant XML-for-everything, semantic-web 1000-page specification, OOP AbstractFactoryTemplateManagerFactory era, I'm glad to put some of that behind us.
Unquestionably the right move. From the various posts on HN about this, it's clear that (A) not many people use it, (B) it increases security vulnerability surface area, and (C) the few people who do claim to use it have nothing to back up the claim.
The major downside to removing this seems to be that a lot of people LIKE it. But eh, you're welcome to fork Chromium or Firefox.
Chrome and other browsers could virtually completely mitigate the security issues by shipping, in the browser, the polyfill they're suggesting all sites depending on XSLT deploy. By doing so, their XSLT implementation would become no less secure than their JavaScript implementation (and fat chance they'll remove that). The fact that they've rejected doing so is a pretty clear indication that security is just an excuse, IMO.
I wish more people would see this. They know exactly how to sandbox it, they’re telling you how to, they’re even providing and recommending a browser extension to securely restore the functionality they’re removing!
The security argument can be valid motivation for doing something, but is utterly illegitimate as a reason for removing. They want to remove it because they abandoned it many years ago, and it’s a maintenance burden. Not a security burden, they’ve shown exactly how to fix that as part of preparing to remove it!
I recently had an interesting chat with Liam Quin (who was on W3C's XML team) about XML and CDATA on Facebook, where he revealed some surprising history!
Liam Quin in his award-winning weirdest hat, also Microsoft's Matthew Fuchs' talk on achieving extensibility and reuse for XSLT 2.0 stylesheets, and Stephan Kesper's simple proof that XSLT and XQuery are Turing complete using μ-recursive functions, and presentations about other cool stuff like Relax/NG:
How do we communicate the idea that declarative markup is a good idea? Declarative markup is where you identify what is there, not what it does. This is a title, not, make this big and bold. This is a part number, not, make this blink when you click it - sure, you can do that to part numbers, but don't encode your aircraft manual that way.
But this idea is hard to grasp, for the same reason that WYSIAYG word processors (the A stands for All, What you see is all you get) took over from descriptive formatting in a lot of cases.
For an internal memo, for an insurance letter to a client, how much matters? Well, the insurance company has to be able to search the letters for specific information for 10, 20, 40, 100 years. What word processor did you use 40 years ago? Wordstar? Magic Wand? Ventura?
Liam Quin: hahaha i actually opposed the inclusion of CDATA sections when we were designing XML (by taking bits we wanted from SGML), but they were already in use by the people writing the XML spec! But now you’ve given me a reason to want to keep them. The weird syntax is because SGML supported more keywords, not only CDATA, but they were a security fail.
Don Hopkins: There was a REASON for the <![SYNTAX[ ]]> ?!?!? I thought it was just some kind of tribal artistic expressionism, like lexical performance art!
At TomTom we were using xulrunner for the cross platform content management tool TomTom Home, and XUL abused external entities for internationalizing user interface text. That was icky!
For all those years programming OpenLaszlo in XML with <![CDATA[ JavaScript code sections ]]>, my fingers learned how to type that really fast, yet I never once wondered what the fuck ADATA or BDATA might be, and why not even DDATA or ZDATA? What other kinds of data are there anyway? It sounds kind of like quantum mechanics, where you just have to shrug and not question what the words mean, because it's just all arbitrarily weird.
Liam Quin: haha it’s been 30 years, but, there’s CDATA (character data), replaceable character data (RCDATA) in which `é` entity definitions are recognised but not `<`, IGNORE and INCLUDE, and the bizarre TEMP which wraps part of a document that might need to be removed later. After `<!` you could also have comments, <!-- .... --> for example (all the delimiters in SGML could be changed).
Don Hopkins: What is James Clark up to these days? I loved his work on Relax/NG, and that Dr. Dobb's interview "The Triumph of Simplicity".
Note: James Clark is arguably the single most important engineer in XML history:
- Lead developer of SGMLtools, expat, and Jade/DSSSL
- Co-editor of the XML 1.0 specification
- Designer of XSLT 1.0 and XPath 1.0
- Creator of Relax NG, one of the most elegant schema languages ever devised
He also wrote the reference XSLT implementation XT, used in early browsers and toolchains before libxslt dominated.
James Clark’s epic 2001 Doctor Dobb's Journal "A Triumph of Simplicity: James Clark on Markup Languages and XML" interview captures his minimalist design philosophy and his critique of standards and committee-driven complexity (which later infected XSLT 2.0).
It touches on separation of concerns, simplicity as survival, a standard isn't one implementation, balance of pragmatism and purity, human-scale simplicity, uniform data modeling, pluralism over universality, type systems and safety, committee pathology, and W3C -vs- ISO culture.
He explains why XML is designed the way it is, and reframes the XSLT argument: his own philosophy shows that when a transformation language stops being simple, it loses the very quality that made XML succeed.
Question, how hard is it going to be to add XSLT back with WASM? I have built a few stylesheets for clients to view their raw XML in browser. I even add charts for data tables with XSLT.
Nice find — interesting to see browsers moving to drop XSLT support.
I used XSLT once for a tiny site and it felt like magic—templating without JavaScript was freeing.
But maybe it’s just niche now, and browser vendors see more cost than payoff.
Curious: have any of you used XSLT in production lately?
I lead a team that manage trade settlements for hedge funds; data is exported from our systems as XML and then transformed via XSLT into whatever format the prime brokers require.
All the transforms are maintained by non-developers, business analysts mainly. Because the language is so simple we don't need to give them much training: just get IntelliJ installed on their machine, show them a few samples and let them work away.
Good, XSLT was crap. I wrote an RSS feed XSLT template. Worst dev experience ever. No one is/was using XSLT. Removing unused code is a win for browsers. Every anti bloat HNer should be cheering
The first few times you use it, XSLT is insane. But once something clicks, you figure out the kinds of things it’s good for.
I am not really a functional programming guy. But XSLT is a really cool application of functional programming for data munging, and I wouldn’t have believed it if I hadn’t used it enough for it to click.
Right. I didn't use it much on the client side so I am not feeling this particular loss so keenly.
But server side, many years ago I built an entire CMS with pretty arbitrary markup regions that a designer could declare (divs/TDs/spans with custom attributes basically) in XSLT (Sablotron!) with the Perl binding and a customised build of HTML Tidy, wrapped up in an Apache RewriteRule.
So designers could do their thing with dreamweaver or golive, pretty arbitrarily mark up an area that they wanted to be customisable, and my CMS would show edit markers in those locations that popped up a database-backed textarea in a popup.
What started off really simple ended up using Sablotron's URL schemes to allow a main HTML file to be a master template for sub-page templates, merge in some dynamic functionality etc.
And the thing would either work or it wouldn't (if the HTML couldn't be tidied, which was easy enough to catch).
The Perl around the outside changed very rarely; the XSLT stylesheet was fast and evolved quite a lot.
XSLT's matching rules allow a 'push' style of transform that's really neat. But you can actually do that with any programming language such as Javascript.
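(For anyone who hasn't seen the "push" style outside XSLT, here is a rough TypeScript sketch of the same idea. The rule table and the applyTemplates() helper are illustrative stand-ins for XSLT match patterns and <xsl:apply-templates>, not an API from any library, and matching on element name only is a simplification of what XSLT patterns can do.)

```ts
// Push-style processing sketch: rules are keyed by element name, and
// applyTemplates() pushes each node through whichever rule matches it,
// falling back to walking the children (like XSLT's built-in templates).
type Apply = (nodes: Iterable<Element>) => string;
type Rule = (el: Element, apply: Apply) => string;

const rules: Record<string, Rule> = {
  // Roughly <xsl:template match="channel">
  channel: (el, apply) => `<ul>${apply(el.querySelectorAll("item"))}</ul>`,
  // Roughly <xsl:template match="item">
  item: (el) => `<li>${el.querySelector("title")?.textContent ?? ""}</li>`,
};

function applyTemplates(nodes: Iterable<Element>): string {
  let out = "";
  for (const el of nodes) {
    const rule = rules[el.localName];
    out += rule ? rule(el, applyTemplates) : applyTemplates(el.children);
  }
  return out;
}

// Usage (feedDoc being an already-parsed XML Document):
// applyTemplates(feedDoc.querySelectorAll("channel"))
```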
Actually a transformation system can reduce bloat, as people don't have to write their own crappy JavaScript versions of it.
Being XML, the syntax is a bit convoluted, but behind that is a good functional (in the sense of functional programming, not merely "functioning") system which can be used for templating etc.
The XML made it a bit hard to get started, and the anti-XML spirit reduced motivation to get into it, but once you know it, it beats most bloaty JavaScript stuff in that realm by a lot.
I'm always puzzled by statements like this. I'm not much of a programmer and I wrote a basic XSLT document to transform rss.xml into HTML in a couple of hours. I didn't find it very hard at all (anecdotes are not data, etc)
Although it's sad to see an interesting feature go, they're not wrong about security. It's more important to have a small attack surface if this was maintained by one guy in Nebraska and he doesn't maintain it any more.
No, XSLT isn't required for the open web. Everything you can do with XSLT, you can also do without XSLT. It's interesting technology, but not essential.
Yes, this breaks compatibility with all the 5 websites that use it.
One extremely important XSLT use-case is for RSS/Atom feeds. Right now, clicking on a link to a feed brings up a wall of XML (or worse, a download link). If the feed has an XSLT stylesheet, it can be presented in a way that a newcomer can understand and use.
I realize that not that many feeds are actually doing this, but that's because feed authors are tech-savvy and know what to do with an RSS/Atom link.
But someone who hasn't seen/used an RSS reader will see a wall of plain-text gibberish (or a prompt to download the wall of gibberish).
XSLT is currently the only way to make feeds into something that can still be viewed.
I think RSS/Atom are key technologies for the open web, and discovery is extremely important. Cancelling XSLT is going in the wrong direction (IMHO).
I've done a bunch of things to try to get people to use XSLT in their feeds: https://www.rss.style/
You can see it in action on an RSS feed here (served as real XML, not HTML: do view/source): https://www.fileformat.info/news/rss.xml
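(For anyone wondering what "has an XSLT stylesheet" means in practice: the feed only needs one extra processing instruction after its XML declaration, pointing at a stylesheet the same server hosts. The snippet below is a hypothetical build-step sketch with made-up file names, not anything taken from rss.style; the referenced /feed.xsl is an ordinary XSLT 1.0 stylesheet you still have to supply.)

```ts
// Hypothetical sketch: add an xml-stylesheet processing instruction to an
// existing rss.xml so browsers render it through /feed.xsl instead of showing
// raw XML. File names are illustrative.
import { readFileSync, writeFileSync } from "node:fs";

const PI = `<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>`;
const feed = readFileSync("rss.xml", "utf8");

if (!feed.includes("<?xml-stylesheet")) {
  // Insert the PI right after the <?xml ... ?> declaration.
  writeFileSync("rss.xml", feed.replace(/(<\?xml[^?]*\?>\s*)/, `$1${PI}\n`));
}
```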
> One extremely important...
Not to downplay what you think is important, but I think it's pretty important that governments and public bodies use XSLT.
https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih....
https://www.govinfo.gov/content/pkg/BILLS-119hr400ih/xml/BIL...
https://www.weather.gov/xml/current_obs/KABE.xml
https://www.europarl.europa.eu/politicalparties/index_en.xml
https://apps.tga.gov.au/downloads/sequence-description.xml
https://cwfis.cfs.nrcan.gc.ca/downloads/fwi_obs/WeatherStati...
https://converters.eionet.europa.eu/xmlfile/EPRTR_MethodType...
They don't put ads on their sites, so I'm not surprised Google doesn't give a fuck about them...
> “They don't put ads on their sites, so I'm not surprised…”
Similarly, Chrome regularly breaks or outright drops support for web features used only in private enterprise networks. Think NTLM or Kerberos authentication, private CA revocation list checking, that kind of thing.
Again, nobody uses Google Ads on internal apps!
Many governments and public bodies used Flash, ActiveX and Java applets, but I'm certainly glad we got rid of those.
We do the same with our feeds at Standard Ebooks: https://standardebooks.org/feeds/rss/new-releases
The page is XML but styled with XSLT.
FWIW the original post explicitly mentioned this use case and offered two ways to work around it.
Gotta love the reference to the <link> header element. There used to be an icon in the browser URL bar when a site had a feed, but they nuked that too.
IIRC, all of the proposed workarounds involved updating the sites using XSLT, which may not always be particularly easy, or even something publishers will realize they need to do.
Here's a 3rd option :)
For RSS/Atom feeds presented as links in a site (for convenience to users), developers can always offer a simple preview for the feed output using: https://feedreader.xyz/
Just URL-encode the feed like so: https://feedreader.xyz/?url=https%3A%2F%2Fwww.theverge.com%2...
...and you get a nice preview that's human readable.
Another use case I discovered and implemented many years ago was styling a sitemap.xml for improved UX / aesthetics.
> not that many feeds are actually doing this
Isn't this kind of an argument for dropping it? Yeah it would be great if it was in use but even the people who are clicking and providing RSS feeds don't seem to care that much.
You are probably right, but it is depressing how techies don't see the big picture & don't want to provide an on-ramp to the RSS/Atom world for newcomers.
I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
"XSLT is currently the only way to make feeds into something that can still be viewed."
You could use content negotiation just fine. I just hit my personal rss.xml file, and the browser sent this as the Accept header:
except it has no newline, which I added for HN.
You can easily ship out an HTML rendering of an RSS file based on this. You can have your server render an XSLT if you must. You can have your server send out some XSLT implemented in JS that will come along at some point.
To a first approximation, nobody cares enough to use content negotiation any more than anyone cares about providing XML stylesheets. The tech isn't the problem, the not caring is... and the not caring isn't actually that big a problem either. It's been that way for a long time and we aren't actually all that bothered about it. It's just a "wouldn't it be nice" that comes up on those rare occasions like this when it's the topic of conversation and doesn't cross anyone's mind otherwise.
You've been in the RSS world since the beginning and never seen a stylized feed?
I've not been in the RSS world very much. I don't use news readers. And even I have seen a stylized RSS in the wild.
Our individual experiences are of course anecdotal, I'm just surprised at how different they are given your background.
> nor have I seen one.
Once upon a time, nice in-browser rendering of RSS/Atom feeds complete with search and sorting was a headliner feature of Safari.
https://www.askdavetaylor.com/how_do_i_subscribe_to_rss_feed...
Another point: it is shocking how many feeds have errors in them. I analyzed the feeds of some of the top contributors on HN, and almost all had something wrong with them.
Even RSS wizards would benefit from looking at a human-readable version instead of raw XML.
I ended up writing a feed analyzer that you can try on your feed: https://www.rss.style/feed-analyzer.html
1 reply →
I think it used to be more popular in the early days. At one point I think Firefox was styling RSS feeds by default, so people stopped using XSLT as much.
You can still style them with CSS if you want. I don't really see the point; RSS is for machines to read, not humans.
There's a fairly good chance that you simply haven't noticed, because it was working as intended, e.g. https://news.ycombinator.com/item?id=45824952
That's my point: you know all about RSS & feeds and don't need it. But what about someone who hasn't been using them since the beginning?
I think every page with an RSS feed should have a link to the feed in the html body. And it should be friendly to people who are not RSS wizards.
The phrase you are looking for to describe this discourse is “concern trolling.”
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
Maybe it's more for people who have no idea what RSS is and click on the intriguing icon. If they weren't greeted with a load of what seems like nonsense for nerds there could have been broader adoption of RSS.
2 replies →
> I've been involved in the RSS world since the beginning and I've never clicked on an RSS link and expected it to be rendered in a "nice" way, nor have I seen one.
So that excludes you from the "someone who hasn't seen/used a RSS reader" demographic mentioned in the comment you are replying to.
I never realized styling RSS feeds was an option. Now looking at some of the examples, I wonder how many times I've clicked on "Feed", then rolled my eyes and closed it because I thought it wasn't RSS. More than zero, I'm sure.
It's the right direction if you're Google. This is why Google should not be allowed to control the web. Support Firefox, dump Google.
> Cancelling XSLT is going in the wrong direction (IMHO).
XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away. The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
It seems like something an extension ought to be capable of, and if not, fix the extension API so it can. In firefox I think it would be a full-blown plugin, which is a lower-level thing than an extension, but I don't know whether Chromium even has a concept of such a thing.
> XSLT isn't going anywhere: hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away.
Not having it available from the browser really reduces the ability to use it in many cases, and lots of the nonbrowser XSLT ecosystem relies on the same insecure, unmaintained implementation. There is at least one major alternative (Saxon), and if browser support was switching backing implementation rather than just ending support, “XSLT isn’t going anywhere” would be a more natural conclusion, but that’s not, for whatever reason, the case.
8 replies →
Or perhaps the multi-billion-dollar corporations could stop piggy-backing on volunteers and invest in maintaining the Web platform?
6 replies →
So... you want newbies to install an extension/plugin before they get a human-readable view of a feed???
That's about as new-user-hostile as I can imagine.
1 reply →
> XSLT isn't going anywhere
XSLT as a feature is being removed from web browsers, which is pretty significant. Sure it can still be used in standalone tools and libraries, but having it in web browsers enabled a lot of functionality people have been relying on since the dawn of the web.
> hardwiring into the browser an implementation that's known to be insecure and is basically unmaintained is what's going away
So why not switch to a better maintained and more secure implementation? Firefox uses TransforMiix, which I haven't seen mentioned in any of Google's posts on the topic. I can't comment on whether it's an improvement, but it's certainly an option.
> The people doing the wailing/rending/gnashing about the removal of libxslt needed to step up to fix and maintain it.
Really? How about a trillion dollar corporation steps up to sponsor the lone maintainer who has been doing a thankless job for decades? Or directly takes over maintenance?
They certainly have enough resources to maintain a core web library and fix all the security issues if they wanted to. The fact they're deciding to remove the feature instead is a sign that they simply don't.
And I don't buy the excuse that XSLT is a niche feature. Their HTML bastardization AMP probably has even less users, and they're happily maintaining that abomination.
> It seems like something an extension ought to be capable of
I seriously doubt an extension implemented with the restricted MV3 API could do everything XSLT was used for.
> and if not, fix the extension API so it can.
Who? Try proposing a new extension API to a platform controlled by mega-corporations, and see how that goes.
It's encouraging to see browsers actually deprecate APIs; I think a lot of the problems with the Web, and Web security in particular, are that people start using new technologies too fast but don't stop using old ones fast enough.
That said, it's also pretty sad. I remember back in the 2000s writing purely XML websites with stylesheets for display, and XML+XSLT is more powerful, more rigorous, and arguably more performant now in the average case than JSON + React + vast amounts of random collated libraries which has become the Web "standard".
But I guess LLMs aren't great at generating XSLT, so it's unlikely to gain back that market in the near future. It was a good standard (though not without flaws), I hope the people who designed it are still proud of the influence it did have.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Yup, "been there, done that" - at the time I think we were creating reports in SQL Server 2000, hooked up behind IIS.
It feels this is being deprecated and removed because it's gone out of fashion, rather than because it's actually measurably worse than whatever's-in-fashion-today... (eg React/Node/<whatever>)
Yeah it was great. You could sort/filter/summarize in different ways all without a server round-trip. At the time it seemed magical to users.
100%. I’ve been neck deep over the past few months in developing a bunch of Windows applications, and it’s convinced me that never deprecating or removing anything in the name of backwards compatibility is the wrong approach. There’s a balance to be struck like anything, but leaving these things around means we continue to pay for them in perpetuity as new vulnerabilities are found or maintenance is required.
What about XML + CSS? CSS works the exact same on XML as it does on HTML. Actually, CSS works better on XML than HTML because namespace prefixes provide more specific selectors.
The reason CSS works on XML the same as on HTML is that CSS doesn't style tags; it assigns visual properties to nodes in the DOM.
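A minimal sketch of that, with a made-up feed.css: the XML references a plain CSS file and the selectors match XML element names directly, no XSLT involved:

  <?xml-stylesheet type="text/css" href="feed.css"?>
  <feed xmlns="http://www.w3.org/2005/Atom">
    <title>Example feed</title>
    <entry>
      <title>First post</title>
      <summary>Hello world.</summary>
    </entry>
  </feed>

and feed.css:

  /* XML elements default to display: inline, so block them out explicitly */
  feed  { display: block; font-family: sans-serif; }
  title { display: block; font-weight: bold; }
  entry { display: block; margin: 1em 0; }

The trade-off versus XSLT is that CSS can only restyle the existing tree; it can't turn element content into clickable links, reorder entries, or generate new elements.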
Agreed on API deprecation, the surface is so broad at this point that it's nearly impossible to build a browser from scratch. I've been doing webdev since 2009 and I'm still finding new APIs that I've never heard of before.
> I remember back in the 2000s writing purely XML websites with stylesheets for display
Awesome! I made a blog using XML+XSLT back in high school. It was worth it just to see the flabbergasted look on my friends' faces when I told them to view the source code of the page, and it was just XML with no visible HTML or CSS[0].
[0] https://www.w3schools.com/xml/simplexsl.xml - example XML+XSLT page from w3schools
Some people seem to think XSLT is used for the step from DOM -> graphics. This is not the first time I have seen a comment implying that, but it is wrong. XSLT is for the step from 'normalized data' -> DOM. And I like that this can be done in a declarative way.
The "severe security issue" in libxml2 they mention is actually a non-issue and the code in question isn't even used by Chrome. I'm all for switching to memory-safe languages but badmouthing OSS projects is poor style.
It is also kinda a self-burn. Chromium an aging code base [1]. It is written in a memory unsafe language (C++), calls hundreds of outdated & vulnerable libraries [2] and has hundreds of high severity vulnerabilities [3].
People in glass houses shouldn't throw stones.
[1] https://github.com/chromium/chromium/commits/main/?after=c5a...
[2] https://github.com/chromium/chromium/blob/main/DEPS
[3] https://www.cvedetails.com/product/15031/Google-Chrome.html?...
Given Google's resources, I'm a little surprised they haven't created an LLM that would rewrite Chromium into Go/Rust and replace all the stale libraries.
Google is too cheap to fund or maintain the library they've built their browser with for more than a decade, so after its hobbyist maintainers got burnt out, they're ripping out the feature.
Their whole browser is made up of unsafe languages, and their attempt to sort of make C++ safer has yet to produce a usable proof-of-concept compiler. This is a fat middle finger in the face of all the people whose free work they grabbed to collect billions for their investors.
Nobody is badmouthing open source. It's the core truth, open source libraries can become unmaintained for a variety of reasons, including the code base becoming a burden to maintain by anyone new.
And you know what? That's completely fine. Open source doesn't mean something lives forever
The issue in question is just one of the several long-unfixed vulnerabilities we know about, from a library that doesn't have that many hands or eyes on it to begin with.
And why doesn’t Google contribute to fixing and maintaining code they use?
13 replies →
If that were the case, they would switch to Xee (a Rust XPath/XSLT implementation).
Sounded like the maintainers of libxml2 have stepped back, so there needs to be a supported replacement, because it is widely used. (Or if you are worried about the reputation of "OSS", you can volunteer!)
Where's the best collection or entry point to what you've written about Chrome's use of Gnome's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?
This has been chewed on ad nauseum on HN already, to the point I won't even try to make a list of the articles but just link a search result: https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal...
KPIs of the hottest languages on the web show that XSLT has become hotter than Java and C++ in the last few days. XML is back, dudes!
Best thread IMHO, lots of thoughtful root-level comments:
"Remove mentions of XSLT from the html spec" https://news.ycombinator.com/item?id=44952185
To anyone who says to use JS instead of XSLT: I block JS because it is also used for ads, tracking and bloat in general. I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all).
I think being able to do client-side templating without JS is an important feature and I hope that since browser vendors are removing XSLT they will add some kind of client-side templating to replace it.
> I block JS
The percentage of visitors who block JS is extremely small. Many of those visits are actually bots and scrapers that don’t interpret JS. Of the real users who block JS, most of them will enable JS for any website they actually want to visit if it’s necessary.
What I’m trying to say is that making any product decision for the extremely small (but vocal) minority of users who block JS is not a good product choice. I’m sorry it doesn’t work for your use case, but having the entire browser ecosystem cater to JS-blocking legitimate users wouldn’t make any sense.
I block JS, too. And so does about 1-2% of all Web users. JavaScript should NOT be REQUIRED to view a website. It makes web browsing more insecure and less private, makes page load times slower, and wastes energy.
21 replies →
Of note here is that the segment we're talking about is actually an intersection of two very small cohorts; the first, as you note, are people who don't own a television errr disable Javascript, and the second is sites that actually rely on XSLT, of which there are vanishingly few.
XSLT is being exploited right now for security vulnerabilities, and there is no solution on the horizon.
The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
As they say, security is not a product, it’s a process. The process we have for existing browser technologies is better. That process is better because more people use it.
But even if we were to try to consider the technologies in isolation, and imagine a timeline where things were different? I doubt whether XML+XSLT is the superior platform for security. If it had won, we’d just have a different nightmare of intermingled content and processing. Maybe more stuff being done client-side. I expect that browser and OS manufacturers would be warping content to insert their own ads.
>You imagine that XSLT is more secure but it’s not. It’s never been. Even pure XSLT is quite capable of Turing-complete tomfoolery, and from the beginning there were loopholes to introduce unsafe code.
Are there examples of this?
> The browser technologies that people actually use, like JavaScript, have active attention to security issues, decades of learnings baked into the protocol, and even attention from legislators.
Yes, they also have much more vulnerabilities, because browsers are JIT compiling JS to w+x memory pages. And JS continues to get more complex with time. This is just fundamentally not the case with XSLT.
We're comparing a few XSLT vulnerabilities to hundreds of JIT compiler exploits.
4 replies →
> I don't block XSLT because I haven't come across malicious use of XSLT before (though to be fair, I haven't come across much use of XSLT at all)
Recent XSLT parser exploits were literally the reason this whole push to remove it was started, so this change will specifically be helping people in your shoes.
So it's a parser implementation problem, not XSLT per se.
I feel like there's a bias here due to XSLT being neglected and hence not receiving the same powers as JS. If it did get more development in the browser, I'm pretty sure it would get the same APIs that we hate JS for, and since it's already Turing complete, chances are people would find ways to misuse it and bloat websites.
Makes me kind of sad. I started my career back in the days when XHTML and co were lauded as the next thing. I worked with SOAP and WSDLs. I loved that one can express nearly everything in XML. And namespaces… Then came JSON, and apart from it being easier for humans to read, I wondered why we switched from this one great exchange format to this half-baked one. But maybe I'm just nostalgic. But every time I deal with JSON parsers for type serialization and the question of how to express HashMaps and sets, how to provide type information, etc etc, I think back to XML and the way that everything was available on board. Looked ugly as hell though :)
JSON is sort of Gresham's law ("bad money drives out the good") but for tech: lazy and forgiving technologies drive out the better but stricter ones.
Bad technology seems to make life easier at the beginning, but that's why we now have sloppy websites that are an unorganized mess of different libraries, several MB in size without reason, and an absolute usability and accessibility nightmare.
XHTML and XML were better, as was the idea of separating syntax from presentation, but they were too intelligent for our own good.
> lazy and forgiving technologies drive out the better but stricter ones.
JSON is not "lazy and forgiving" (seriously, go try adding a comment to it).
It was just laser-focused on what the actual problem was that needed to be solved by many devs in day-to-day practice.
Meanwhile XML wanted to be an entire ecosystem, its own XML Cinematic Universe, where you had to adopt it all to really use it.
It's not surprising to me that JSON won out, but it's not because it's worse, it's actually much better than XML for the job it ended up being used for (a generic format to transfer state between running programs supporting common data structures with no extraneous add-ons or requirements).
XML is better for a few other things, but those things are far less commonly needed.
2 replies →
I like XSLT, and I’ve been using the browser-based APIs in my projects, but I must say that the XSLT ecosystem has been in a sad state:
- Browsers have only supported XSLT 1.0, for decades, which is the stone age of templating. XSLT 3.0 is much nicer, but there’s no browser support for it.
- There are only two cross-platform libraries built for it: libxslt and Saxon. Saxon seriously lacks ergonomics to say the least.
One option for Google as a trillion dollar company would be to drive an initiative for “better XSLT” and write a Rust-based replacement for libxslt with maybe XSLT 3.0 support, but killing it is more on-brand I guess.
I also dislike the message “just use this [huge framework everyone uses]”. Browser-based template rendering without loading a framework into the page has been an invaluable boon. It will be missed.
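To illustrate the 1.0-versus-3.0 gap mentioned above: the identity transform that almost every XSLT 1.0 stylesheet carries around collapses into a single declaration in 3.0 (a minimal sketch):

  <!-- XSLT 1.0: copy everything not handled by a more specific template -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- XSLT 3.0: the same default behaviour as one declaration -->
  <xsl:mode on-no-match="shallow-copy"/>

None of the 2.0/3.0 conveniences ever reached the browsers, which is a large part of why in-browser XSLT feels frozen in time.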
If you are using XSLT to make your RSS or atom feeds readable in a browser should somebody click the link you may find this post by Jake Archibald useful: https://jakearchibald.com/2025/making-xml-human-readable-wit... - it provides a JavaScript-based alternative that I believe should work even after Chrome remove this feature.
I think that's sad. XSLT is, from my point of view, a very misunderstood technology. It gets hated on a lot. I wonder if this hate comes from people who actually used and understood it, though. In any case, more often than not it comes from people who in the same sentence endorse JavaScript (which, by any objective way of measuring, is a language far more poorly designed).
IMO XSLT was just too difficult for most webdevs. And IMO this created a political problem where the 'frontend' folks needed to be smarter than the 'backend' generating the XML in the first place.
XSLT might make sense as part of a processing pipeline. But putting it in front of your website was just an unnecessary and inflexible layer, so that's why everyone stopped doing it (except RSS feeds and the like).
Iterating over JSON in raw JS is much sexier than learning XSLT. (At least JS allows breakpoints in the Chrome debugger.)
What does XSLT provide that you cannot achieve with plain JS?
I'm not much of a programmer, but XSLT being declarative means that I can knock out a decent-looking template without having to do a whole lot of programming work.
It can style RSS without enabling JavaScript and all that spyware and malware.
Au contraire: the more you understand and use XSLT, the more you hate it. People who don't understand it and haven't used it don't have enough information and perspective to truly hate it properly. I and many other people don't hate XSLT out of misunderstanding at all: just the opposite.
XSLT is like programming with both hands tied behind your back, or pedaling a bicycle with only one leg. For any non-trivial task, you quickly hit a wall of complexity or impossibility, then the only way XSLT is useful is if you use Microsoft's non-standard XSLT extensions that let you call out to JavaScript, then you realize it's so easy and more powerful to simply do what you want directly in JavaScript there's absolutely no need for XSLT.
I understand XSLT just fine, but it is not the only templating language I understand, so I have something to compare it with. I hate XSLT and vastly prefer JavaScript because I've known and used both of them and other worse and better alternatives (like Zope Page Templates / TAL / METAL / TALES, TurboGears Kid and Genshi, OpenLaszlo, etc).
https://news.ycombinator.com/item?id=16227249
>My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
>Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you being to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
Counterpoint: the more I used XSLT, the more I liked it, and the more I was frustrated that the featureset that ships in browsers is frozen in 1999
5 replies →
Well said. I wrote an XSLT based application back in the early 2000s, and I always imagined the creators of XSLT as a bunch of slavering demented sadists. I hate XSLT with a passion and would take brainfuck over it any day.
Hearing the words Xalan, Xerces, FOP makes me break out in a cold sweat, 20 years later.
That's upsetting. Being able to do templating without using JavaScript was a really cool party trick.
I've used it in an unfinished website where all data was stored in a single XML file and all markup was stored in a single XSLT file. A CGI one-liner then made path info available to XSLT, and routing (multiple pages) was achieved by doing string tests inside of the XSLT template.
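Roughly like this, as a sketch (parameter and element names are invented, not the original site's): the CGI passes the request path in as an xsl:param and the stylesheet branches on it with string tests:

  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- set from PATH_INFO by the CGI wrapper -->
    <xsl:param name="path" select="'/'"/>
    <xsl:template match="/site">
      <html>
        <body>
          <xsl:choose>
            <xsl:when test="$path = '/about'">
              <xsl:apply-templates select="page[@id = 'about']"/>
            </xsl:when>
            <xsl:otherwise>
              <xsl:apply-templates select="page[@id = 'home']"/>
            </xsl:otherwise>
          </xsl:choose>
        </body>
      </html>
    </xsl:template>
    <xsl:template match="page">
      <h1><xsl:value-of select="title"/></h1>
      <xsl:copy-of select="body/node()"/>
    </xsl:template>
  </xsl:stylesheet>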
"Removing established open standards for a more walled garden" -> Fixed
To those who saw a chrome.com link and got triggered:
> The Firefox[^0] and WebKit[^1] projects have also indicated plans to remove XSLT from their browser engines.
[^0]: https://github.com/mozilla/standards-positions/issues/1287#i...
[^1]: https://github.com/whatwg/html/issues/11523#issuecomment-314...
In my opinion this is not “we agree lets remove it”. This is “we agree to explore the idea”.
Google and Freed are using this as a go-ahead because the Mozilla guy pasted a polyfill. However, it is very clearly NOT an endorsement of removal, even though bad actors are stating so.
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support1. If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
Freed et al also explicitly chose to ignore user feedback for their own decision and not even try to improve XSLT security issues at the cost of performance.
Last I heard, for WebKit, removing it was the only outcome they saw.
40 replies →
So XPath locators won't be available in Playwright and Selenium in Chrome? This could be huge for QA and RPA.
They are still keeping the XPath APIs so XPath locators will still work.
Ah, so this is removing libxslt. For a minute I thought XSLT processing was provided by libxml2, and I remembered seeing that the Ladybird browser project just added a dependency on libxml2 in their latest progress update https://ladybird.org/newsletter/2025-10-31/.
I'm curious to see what happens going forward with these aging and under-resourced—yet critical—libraries.
As someone who built an XSLT renderer and remembers having an awful time with the spec: good riddance.
Data and its visualisation should be strictly separate, and not require an additional engine in your environment of choice.
There is no way to make JavaScript so limited in scope as XSLT is.
But all I want is XSLT on live DOM nodes when editing: a simple, good templating engine that stays put, not fancy stuff (re-Reduxes ad infinitum).
Those are capabilities that processing-oriented people will never get close to, capabilities that document-oriented people (users) transparently had and that are about to be lost.
The World Wide Web, invented at CERN in 1989 by Tim Berners-Lee, is a system of interlinked hypertext _documents_ - not interlinked programs (opaque and superior to take control over any data).
I know it makes me an old and I am biased because one of the systems in my career I am most proud of I designed around XSLT transformations, but this is some real bullshit and a clear case why a private company should not be the de facto arbiter of web standards. Have a legacy system that depends on XSLT in the browser? Sucks to be you, one of our PMs decided the cost-benefit just wasn't there so we scrapped it. Take comfort in the fact our team's velocity bumped up for a few weeks.
And yes I am sour about the fact as an American I have to hope the EU does something about this because I know full-well it's not happening here in The Land of the Free.
I don't use XSLT and don't object to this, but seeing "security" cited made me realize how reflexively distrustful I've become of them using that justification for a given decision. Is this one actually about security? Who knows!
Didn't this come pretty directly after someone found some security vulns? I think the logic was, this is a huge chunk of code that is really complex which almost nobody uses outside of toy examples (and rss feeds). Sure, we fixed the issue just reported, but who knows what else is lurking here, it doesn't seem worth it.
As a general rule, simplifying and removing code is one of the best things you can do for security. Sure you have to balance that with doing useful things. The most secure computer is an unplugged computer but it wouldn't be a very useful one; security is about tradeoffs. There is a reason though that security is almost always cited - to some degree or another, deleting code is always good for security.
The vulnerabilities themselves often didn't really affect Chrome, but by the maintainers' own admission the code was never intended to be security critical. They got burned out after a series of vulnerability reports with publication deadlines, and decided to just treat security bugs like normal bugs so the community could help fix things. That doesn't really fit with the "protect users by keeping security issues secret for three months" approach corporations prefer. Eventually the maintainers stepped down.
Neither Google nor Apple were willing to sponsor a fork of the project but clearly they can't risk unmaintained dependencies in their billion dollar product, so they're rushing to pull the plug.
"Who knows what's lurking there" is a good argument to minimize attack surface, but Google has only been adding more attack surface over the past couple of years. I find it hard to defend that processing a structured document should be outside of a browser's feature set, but Javascript USB drivers and serial ports are necessary to drive the web. The same way libxml2 was never intended to be security critical, many GPU drivers were never written to protect from malicious programs, yet WebGPU and similar technology i being pushed hard and fast.
If we're deleting code to guard against theoretical security risks, I know plenty of niche APIs that should probably be axed.
4 replies →
> As a general rule, simplifying and removing code is one of the best things you can do for security.
Sure, but that’s not what they’re doing in the big picture. XSLT is a tiny drop in the bucket compared to all the surface area of the niche, non-standard APIs tacked onto Chromium. It’s classic EEE.
https://developer.chrome.com/docs/web-platform/
1 reply →
There are security issues in the C implementation they currently use. They could remove this without breaking anything by incorporating the JS XSLT polyfill into the browser. But they won't because money.
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
— https://www.offensivecon.org/speakers/2025/ivan-fratric.html
— https://www.youtube.com/watch?v=U1kc7fcF5Ao
> libxslt -- unmaintained, with multiple unfixed vulnerabilities
— https://vuxml.freebsd.org/freebsd/b0a3466f-5efc-11f0-ae84-99...
It's true that there are security issues, but it's also true that they don't want to put any resources into making their XSLT implementation secure. There is strong unstated subtext that a huge motivation is that they simply want to rip this out of Chrome so they don't have to maintain it at all.
Especially when Google largely has the money to maintain the allegedly insecure library... of course it's an excuse to break the web once again.
XSLT is fantastic. You just feed it an XML file and it can change it into HTML, without any need for javascript.
> example.com needs to review the security of your connection before proceeding.
This text from Cloudflare challenge pages is just a flat-out lie.
Its "Security" when they want to do a thing, its "WebCompat" when they don't.
Previous discussion https://news.ycombinator.com/item?id=44952185
I'd never written any XSL before last week when I got the crazy idea of putting my resume in XML format and using stylesheets to produce both resume and CV forms, redacted and not, with references or not. And when I had it working in HTML, I started on the Typst stylesheet. xsltproc, being so old, basically renders the results instantly. And Typst, being so new, does as well.
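For anyone curious how the redacted and unredacted variants can come out of one stylesheet, a sketch along those lines (the parameter and element names here are invented, not the parent's actual files): a single xsl:param gates the personal details, and xsltproc sets it from the command line with --stringparam:

  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:param name="redacted" select="'no'"/>
    <!-- omit contact details entirely when producing the redacted variant -->
    <xsl:template match="contact">
      <xsl:if test="$redacted != 'yes'">
        <p><xsl:value-of select="email"/> / <xsl:value-of select="phone"/></p>
      </xsl:if>
    </xsl:template>
  </xsl:stylesheet>

Invoked with something like: xsltproc --stringparam redacted yes resume.xsl resume.xml > resume-redacted.html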
- Chrome 155 (Nov 17, 2026): XSLT stops functioning on Stable releases, for all users other than Origin Trial and Enterprise Policy participants.**
- Chrome 164 (Aug 17, 2027): Origin Trial and Enterprise Policy stop functioning. XSLT is disabled for all users.**
Not the first time I've seen Google's pages use asterisks that lack the corresponding footnotes.
I to this day think the move from DSSSL to XSLT was the biggest mistake in the SGML-to-XML evolution.
They went from a clean Scheme-based standard to a human-unreadable "use a GUI tool" syntax.
Syntext Serna was a WYSIWYG XML-FO rendering editor: throw in XSL stylesheets and some input XML, and WYSIWYG-edit away.
But XSLT-based editing didn't take off, nor did derived products.
When do we see the headline: Removing Javascript for a more secure browser?
So the honest title should have been: Removing XSLT because it cannot serve ads
(Yes, the underlying implementation might be insecure. But how secure would Javascript be with the same amount of maintenance in the last 20 years?)
XSLT seem like it could be something implemented with WebAssembly (and/or JavaScript), in an extension (if the extension mechanism is made suitable; I think some changes might be helpful to support this and other things), possibly one that is included by default (and can be overridden by the user, like any other extension should be); if it is implemented in that way then it might avoid some of the security issues. (PDF could also be implemented in a similar way.)
(There are also reasons why it might be useful to allow the user to manually install native code extensions, but native code seems to be not helpful for this use, so to improve security it should not be used for this and most other extensions.)
The lead dev driving the Chrome deprecation built a wasm polyfill https://github.com/mfreed7/xslt_polyfill. Multiple people proposed in the Github discussions leading up to this that Google simply make the polyfill ship with Chrome as an on-by-default extension that could be disabled in settings, but he wouldn't consider it.
Just.. and for so long:
XSLT is a W3C standard, JavaScript is not (it's an ECMA standard), and there is no JavaScript specification on the W3C pages.
( https://www.w3.org/wiki/JavaScript )
Shouldn't JavaScript become a web standard first, before being used to "replace" an already-standard solution?
It wasn't clear to me from reading this whether styling XML with CSS (via <?xml-stylesheet?>) will also stop being supported. There's no need to deprecate that, surely?
Pour one out for @vgr-land https://news.ycombinator.com/item?id=45006098
Removing html and css would also make the browser more secure - but I would argue also very counter productive for users.
Most read-only content and light editing can be achieved with raw data + XSLT.
The web has become a sledgehammer for cracking a nut.
For example, with XSLT you could easily render read-only content without complex and expensive office apps. That is enough for academia, government, and small businesses.
If they really cared about "security" they would remove JS or try to encourage minimising its use. That is a huge attack surface in comparison, but they obviously want to keep it so they can shove in more invasive and hostile user-tracking and controlling functionality.
I do all of my browsing with Javascript disabled. I've done this for decades now, as a security precaution mainly, but I've also enjoyed some welcome side-effects where paywalls disappeared and ads became static and unobtrusive. I wasn't looking for those benefits but I'll take 'em. In stride.
I've also witnessed a welcome (but slow) change in site implementations over the years: there are few sites completely broken by the absence of JS. Still some give blank screens and even braindead :hidden attributes thrown into the <noscript> main page to needlessly forbid access... but not as many as back in the day when JS first became the rage.
I don't know much about XSLT other than the fact that my Hiawatha web server uses it to make my directory listings prettier, and I don't have to add CSS or JS to get some style. I hate to see a useful standard abandoned by the big boys, but what can I do about it?
I bristle when I encounter pages with a few hundred words of content surrounded by literally megabytes of framework and flotsam, but that's the gig, right, wading through the crap to find the ponies.
It's a shame the browser developers are making an open, interoperable, semantic web more difficult. It's not surprising, though. Browsers started going downhill after they removed the status bar and the throbber and made scrollbars useless.
Destroying the open web instead of advocating to fix one of the better underutilized browser technologies for a more Profitable Google.
I will not forget the name Mason Freed, destroyer of open collaborative technology.
Didn't this effort start with Mozilla and not Google? I think you will in fact forget the name Mason Freed, just like most of us forgot about XSLT.
> Didn't this effort start with Mozilla and not Google?
Maybe round one of it like ten years ago did? From what I understand, it's a Google employee who opened the "Hey, I want to get rid of this and have no plans to provide a zero-effort-for-users replacement." Github Issue a few months back.
8 replies →
It started with Mozilla, Apple, and Opera jumping ship and forming WHATWG. That stopped new XML related technologies from being adopted in browsers twenty years ago. Google is just closing the casket and burying the body.
Why would I forget about XSLT, a really good technology pushed to the wayside by bad-faith actors? Why would I forget Mason Freed, a person dedicating themselves to ruining perfectly good technology that needs a little love?
Do you have some sort of exclusive short term memory or something where you can’t remember someone’s name? Bizarre reply. Other people may have had a similarly lazy idea, but Mason is the one pushing and leading the charge.
It seems maybe you want me to blame this on Google as a whole but that would mean bypassing blame and giving into their ridiculous bs.
5 replies →
> Destroying the open web instead of advocating to fix one of the better underutilized browser technologies for a more Profitable Google.
Google, Mozilla and Apple do not care if it doesn't make them money, unless you want to pay them billions to keep that feature?
> I will not forget the name Mason Freed, destroyer of open collaborative technology.
This is quite petty.
So is blatantly ignoring pushback against removing a feature like this. Eye for an eye.
Blame Apple and Mozilla, too, then. They all agreed to remove it.
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
This is only repeated by people who have never used it.
XSLT is still a great way of easily transforming xml-like documents. It's orders of magnitude more concise than transforming using Javascript or other general programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
9 replies →
> Security? MUCH worse.
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
Exactly what you can abuse in XSLT (without non-standard extensions) in order to do anything security relevant? (DoS by infinite recursion or memory exhaustion does not count, you can do the same in JS...)
3 replies →
How is it worse than JS? It's a different thing...
> They all agreed to remove it.
All those people suck, too.
Were you counting on a different response?
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
5 replies →
They did not agree to remove it. Judging from the public posts I can see, that is a spun lie. They agreed to explore removing it but preferred to keep it for good reasons.
Only Google is pushing forward and twisting that message.
4 replies →
Good. Finally getting rid of this security and usage nightmare. There's a polyfill and an extension, so even the diehards will be pleased.
This is the problem with any C/C++ codebase; using Rust instead would have been a better solution than just removing web standards from what is supposed to be a web browser.
I wrote a bunch of stuff with XSLT back in the day that I thought was pretty cool but I can't for the life of me remember what it was...
That's our freedom of not being forced to use JavaScript for everything being taken away !
> When that solution isn't wanted, the polyfill offers another path.
A solution is only a solution if it solves the problem.
This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.
The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.
Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?
> So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right?
As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
And that's the issue with XSLT: it won't.
> As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."
This is a (poor) attempt at gaslighting/retconning.
The phrase "Don't break the Web" is not original to this thread.
(I can't say I look forward to your follow-up reply employing sleights of hand like claims about how stuff like Flash that was never standardized, or the withdrawal of experimental APIs that weren't both stable/finalized and implemented by all the major browsers, or the long tail of stuff on developer.mozilla.org that is marked "deprecated" (but nonetheless still manages to work) are evidence of your claim and that browser makers really do have a history of doing this sort of thing. This is in fact the first time something like this has actually happened—all because there are engineers working on browsers at Google (and Mozilla and Apple) that are either confused about how the Web differs from, say, Android and iOS, or resentful of their colleagues who get to work on vendor SDKs where the API surface area is routinely rev'd to remove whatever they've decided no longer aligns with their vision for their platform. That's not what the Web is, and those engineers can and should go work on Android and iOS instead of sabotaging the far more important project of attending to the only successful attempt at a vendor-neutral, ubiquitous, highly accessible, substrate for information access that no one owns and that doesn't fuck over the people who rely on it being stable.)
13 replies →
"The reality is that for all of the work that we've put into HTML, and CSS, and the DOM, it has fundamentally utterly failed to deliver on its promise.
It's even worse than that, actually, because all of the things we've built aren't just not doing what we want, they're holding developers back. People build their applications on frameworks that _abstract out_ all the APIs we build for browsers, and _even with those frameworks_ developers are hamstrung by weird limitations of the web."
- https://news.ycombinator.com/item?id=34612696#34622514
I find it so weird that browser devs can point to the existence of stuff like React and not feel embarrassed.
> I find it so weird that browser devs can point to the existence of stuff like React and not feel embarrassed.
Sorry, I don't follow. What's embarrassing about React?
I think in the context of that link, they see React as a failing of the web. If the W3C/WHATWG/browser vendors had done a reasonable job of moving web technology forward, things like React would be far less necessary. But they spent all their time and energy working on things like <aside> and web components instead of listening to web developers and building things that are actually useful in a web developer’s day-to-day life. Front-end frameworks like React, on the other hand, did a far better job of listening to developers and building what they needed.
1 reply →
Would it be possible to move it to an add-on for those who still want it? Do WebExtensions support third-party libs?
https://www.igalia.com/chats/xslt-liam
XSLT turns out to be a very robust technology that has already survived the test of time: for decades (!), despite lack of support and investment, with key browser bugs deliberately left unfixed and the implementation stuck at version 1.0, it is still being used where it works, and where it is used it holds up well and lasts. Meanwhile, elsewhere:
or Xee: A Modern XPath and XSLT Engine in Rust (https://news.ycombinator.com/item?id=43502291).
And because it's a declarative way of transforming trees and collections of trees. And declarative means you don't say how to do it. You say, 'This is what I want'..
.. it's timeless: an _abstracted definition_ to which imperative solutions can, at best, be reduced. Authors unaware of that keep trying (and will soon have to try) to reimplement that "not needed" part, which had already been abstracted away (e.g. https://news.ycombinator.com/item?id=45183624), in more or less common or compatible ways
- so better keep it, as not everybody can afford expensive solutions, and there are nonprofits too that don't profit from money wasted repeating the same work and like to KISS!
https://github.com/whatwg/html/issues/11146#issuecomment-275...
.. just like that, but: https://github.com/whatwg/html/issues/11582#issuecomment-321...
TIL: Chrome supports XSLT.
Good riddance I guess - it and most of the tech from the "XML era" was needlessly overcomplicated.
XSLT is really powerful and it is declarative, like CSS, but can both push and pull.
It's a loss, if you ask me, to remove it from client-side, but it's one I worked through years ago.
It's still really useful on the server side for document transformation.
I imagine a WASM XSLT interpreter wouldn't be too hard to compile?
1 reply →
Your response is like seeing the cops going to the wrong house to kick in your neighbors door, breaking their ornaments in their entry way, and then saying to yourself, "Good. I hate yellow, and would never have any of that tacky shit in my house."
As the first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't (and hasn't) resulted in you being forced to use it in your projects.
Yes, but software, and especially browser, complexity has ballooned enormously over the years. And while XSLT probably plays a tiny part in that, it's likely embedded in every Electron app that could do in 1MB what it takes 500 MB to do, makes it incrementally harder to build and maintain a competing browser, etc., etc. It's not zero cost.
I do tend to support backwards compatibility over constant updates and breakage, and needless hoops to jump through as e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant XML-for-everything, semantic-web 1000-page specification, OOP AbstractFactoryTemplateManagerFactory era, I'm glad to put some of that behind us.
If that makes me some kind of Gestapo, so be it.
6 replies →
Perhaps, but isn't the contemporary tech stack orders of magnitude more complicated? Doesn't feel like a strong motivating argument.
Unquestionably the right move. From the various posts on HN about this, it's clear that (A) not many people use it, (B) it increases security vulnerability surface area, and (C) the few people who do claim to use it have nothing to back up the claim.
The major downside to removing this seems to be that a lot of people LIKE it. But eh, you're welcome to fork Chromium or Firefox.
Chrome and other browsers could virtually completely mitigate the security issues by shipping, in the browser, the polyfill they're suggesting all sites depending on XSLT deploy. By doing so, their XSLT implementation would become no less secure than their JavaScript implementation (and fat chance they'll remove that). The fact that they've rejected doing so is a pretty clear indication that security is just an excuse, IMO.
I wish more people would see this. They know exactly how to sandbox it, they’re telling you how to, they’re even providing and recommending a browser extension to securely restore the functionality they’re removing!
The security argument can be valid motivation for doing something, but is utterly illegitimate as a reason for removing. They want to remove it because they abandoned it many years ago, and it’s a maintenance burden. Not a security burden, they’ve shown exactly how to fix that as part of preparing to remove it!
1 reply →
By definition, XSLT is more secure than JavaScript.
2 replies →
"[Y]ou're welcome to fork Chromium or Firefox" is the software developer equivalent of saying "you're welcome to go fuck yourself."
What exactly is the security concern with XSLT?
It parses untrusted input, the library is basically unmaintained, it’s not often audited but anytime someone looks they find a CVE.
This is answered in the article.
XSLT the idea contains few (but not zero) unavoidable security flaws.
libxslt the library is a barely-maintained dumpster fire of bad practices.
They should audit LLMs.
I recently had an interesting chat with Liam Quin (who was on W3C's XML team) about XML and CDATA on Facebook, where he revealed some surprising history!
Liam Quin in his award-winning weirdest hat, also Microsoft's Matthew Fuchs' talk on achieving extensibility and reuse for XSLT 2.0 stylesheets, and Stephan Kepser's simple proof that XSLT and XQuery are Turing complete using μ-recursive functions, and presentations about other cool stuff like Relax/NG:
https://www.cafeconleche.org/oldnews/news2004August5.html
Liam Quin's post:
https://www.facebook.com/liam.quin/posts/pfbid0X6jE58zjcEK5U...
#XML people!
How do we communicate the idea that declarative markup is a good idea? Declarative markup is where you identify what is there, not what it does. This is a title, not, make this big and bold. This is a part number, not, make this blink when you click it - sure, you can do that to part numbers, but don't encode your aircraft manual that way.
But this idea is hard to grasp, for the same reason that WYSIAYG word processors (the A stands for All, What you see is all you get) took over from descriptive formatting in a lot of cases.
For an internal memo, for an insurance letter to a client, how much matters? Well, the insurance company has to be able to search the letters for specific information for 10, 20, 40, 100 years. What word processor did you use 40 years ago? Wordstar? Magic Wand? Ventura?
#markupMonday #declarativeMarkup
Don Hopkins: I Wanna Be <![CDATA[
https://donhopkins.medium.com/i-wanna-be-cdata-3406e14d4f21
Liam Quin: hahaha i actually opposed the inclusion of CDATA sections when we were designing XML (by taking bits we wanted from SGML), but they were already in use by the people writing the XML spec! But now you’ve given me a reason to want to keep them. The weird syntax is because SGML supported more keywords, not only CDATA, but they were a security fail.
Don Hopkins: There was a REASON for the <![SYNTAX[ ]]> ?!?!? I though it was just some kind of tribal artistic expressionism, like lexical performance art!
At TomTom we were using xulrunner for the cross platform content management tool TomTom Home, and XUL abused external entities for internationalizing user interface text. That was icky!
For all those years programming OpenLaszlo in XML with <![CDATA[ JavaScript code sections ]]>, my fingers learned how to type that really fast, yet I never once wondered what the fuck ADATA or BDATA might be, and why not even DDATA or ZDATA? What other kinds of data are there anyway? It sounds kind of like quantum mechanics, where you just have to shrug and not question what the words mean, because it's just all arbitrarily weird.
Liam Quin: haha it’s been 30 years, but, there’s CDATA (character data), replaceable character data (RCDATA) in which `é` entity definitions are recognised but not `<`, IGNORE and INCLUDE, and the bizarre TEMP which wraps part of a document that might need to be removed later. After `<!` you could also have comments, <!-- .... --> for example (all the delimiters in SGML could be changed).
Don Hopkins: What is James Clark up to these days? I loved his work on Relax/NG, and that Dr. Dobb's interview "The Triumph of Simplicity".
https://web.archive.org/web/20020224025029/http://www.ddj.co...
Note: James Clark is arguably the single most important engineer in XML history:
- Lead developer of SGMLtools, expat, and Jade/DSSSL
- Co-editor of the XML 1.0 specification
- Designer of XSLT 1.0 and XPath 1.0
- Creator of Relax NG, one of the most elegant schema languages ever devised
He also wrote the reference XSLT implementation XT, used in early browsers and toolchains before libxslt dominated.
James Clark’s epic 2001 Doctor Dobb's Journal "A Triumph of Simplicity: James Clark on Markup Languages and XML" interview captures his minimalist design philosophy and his critique of standards and committee-driven complexity (which later infected XSLT 2.0).
It touches on separation of concerns, simplicity as survival, a standard isn't one implementation, balance of pragmatism and purity, human-scale simplicity, uniform data modeling, pluralism over universality, type systems and safety, committee pathology, and W3C -vs- ISO culture.
He explains why XML is designed the way it is, and reframes the XSLT argument: his own philosophy shows that when a transformation language stops being simple, it loses the very quality that made XML succeed.
More like google forgot how to write secure code and wings it, or just doesn't give a fuck.
lol, talking about "secure browser" on chrome dot com
Question, how hard is it going to be to add XSLT back with WASM? I have built a few stylesheets for clients to view their raw XML in browser. I even add charts for data tables with XSLT.
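The chart trick is less exotic than it sounds, since SVG is just another XML output target. A rough sketch (element and attribute names assumed for illustration, not taken from those client stylesheets):

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:svg="http://www.w3.org/2000/svg">
    <!-- turn <data><item name="..." value="..."/>...</data> into a bar chart -->
    <xsl:template match="/data">
      <svg:svg width="400" height="{count(item) * 24}">
        <xsl:apply-templates select="item"/>
      </svg:svg>
    </xsl:template>
    <xsl:template match="item">
      <svg:rect x="0" y="{(position() - 1) * 24}" height="20" width="{@value * 4}"/>
      <svg:text x="4" y="{(position() - 1) * 24 + 14}"><xsl:value-of select="@name"/></svg:text>
    </xsl:template>
  </xsl:stylesheet>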
Nice find — interesting to see browsers moving to drop XSLT support. I used XSLT once for a tiny site and it felt like magic—templating without JavaScript was freeing. But maybe it’s just niche now, and browser vendors see more cost than payoff.
Curious: have any of you used XSLT in production lately?
Yes. It's used heavily in the publishing and standards industries that store the documents in JATS and other XML-based formats.
Because browsers only support XSLT 1.0 the transform to HTML is typically done server side to take advantage of XSLT 2.0 and 3.0 features.
It's also used by the US government:
1. https://www.govinfo.gov/bulkdata/BILLS
2. https://www.govinfo.gov/bulkdata/FR/resources
I lead a team that manages trade settlements for hedge funds; data is exported from our systems as XML and then transformed via XSLT into whatever format the prime brokers require.
All the transforms are maintained by non-developers, business analysts mainly. Because the language is so simple we don't need to give them much training: just get IntelliJ installed on their machine, show them a few samples, and let them work away.
We couldn't have managed with anything else.
Good, XSLT was crap. I wrote an RSS feed XSLT template. Worst dev experience ever. No one is/was using XSLT. Removing unused code is a win for browsers. Every anti bloat HNer should be cheering
The first few times you use it, XSLT is insane. But once something clicks, you figure out the kinds of things it’s good for.
I am not really a functional programming guy. But XSLT is a really cool application of functional programming for data munging, and I wouldn’t have believed it if I hadn’t used it enough for it to click.
Right. I didn't use it much on the client side so I am not feeling this particular loss so keenly.
But server side, many years ago I built an entire CMS with pretty arbitrary markup regions that a designer could declare (divs/TDs/spans with custom attributes basically) in XSLT (Sablotron!) with the Perl binding and a customised build of HTML Tidy, wrapped up in an Apache RewriteRule.
So designers could do their thing with dreamweaver or golive, pretty arbitrarily mark up an area that they wanted to be customisable, and my CMS would show edit markers in those locations that popped up a database-backed textarea in a popup.
What started off really simple ended up using Sablotron's URL schemes to allow a main HTML file to be a master template for sub-page templates, merge in some dynamic functionality etc.
And the thing would either work or it wouldn't (if the HTML couldn't be tidied, which was easy enough to catch).
The Perl around the outside changed very rarely; the XSLT stylesheet was fast and evolved quite a lot.
XSLT's matching rules allow a 'push' style of transform that's really neat. But you can actually do that with any programming language such as Javascript.
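For anyone who hasn't seen the two styles: in push style you declare a rule per node type and let apply-templates push the children through, instead of pulling values out with nested for-each/value-of. A minimal sketch:

  <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/doc">
      <html><body><xsl:apply-templates/></body></html>
    </xsl:template>
    <!-- each element type gets its own rule; document order takes care of itself -->
    <xsl:template match="section">
      <div class="section"><xsl:apply-templates/></div>
    </xsl:template>
    <xsl:template match="section/title">
      <h2><xsl:apply-templates/></h2>
    </xsl:template>
    <xsl:template match="para">
      <p><xsl:apply-templates/></p>
    </xsl:template>
  </xsl:stylesheet>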
> Every anti bloat HNer should be cheering
Actually a transformation system can reduce bloat, as people don't have to write their own crappy JavaScript versions of it.
Being XML, the syntax is a bit convoluted, but behind that is a good functional (in the sense of a functional programming language, not merely "functioning") system which can be used for templating etc.
The XML made it a bit hard to get started, and the anti-XML spirit reduced motivation to get into it, but once you know it, it beats most bloated JavaScript stuff in that realm by a lot.
> No one is/was using XSLT.
Ah, when ignorance leads to arrogance: it is massively utilised by many large enterprises and state administrations in some countries.
E.g. if you're American, the Library of Congress uses it to show all legislative text.
I'm always puzzled by statements like this. I'm not much of a programmer and I wrote a basic XSLT document to transform rss.xml into HTML in a couple of hours. I didn't find it very hard at all (anecdotes are not data, etc)
XSLT is complete and utter garbage. Good riddance.
Removing JavaScript for a more secure browser.
Although it's sad to see an interesting feature go, they're not wrong about security. A small attack surface matters even more when the feature was maintained by one guy in Nebraska and he doesn't maintain it any more.
No, XSLT isn't required for the open web. Everything you can do with XSLT, you can also do without XSLT. It's interesting technology, but not essential.
Yes, this breaks compatibility with all the 5 websites that use it.