U.S. legislation is delivered on the web as XSLT-styled XML and depends on web browsers continuing to provide native support for client-side XSLT. Examples:
https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih.... https://www.govinfo.gov/content/pkg/BILLS-119hr400ih/xml/BIL...
I am reminded of when Google attempted to kill MathML a few years ago because it was complex and no one was using it (… because Chromium didn’t support it—all others did). There was widespread rebellion, and it actually led to Igalia implementing it and Google accepting and shipping it.
It wouldn’t surprise me to see a resurgence of interest in XSLT after this, if only for formatting Atom/RSS feeds.
(BTW, prefer Atom unless you’re operating in podcasting; it’s far more sane in ways that occasionally actually matter, and everything supports it, except in podcasting, which Apple ruined. If you want a featureful stylesheet to look at for reference, mine is the best I know of: https://chrismorgan.info/atom.xsl, https://temp.chrismorgan.info/2022-05-10-rss.xsl.)
I'd like to genuinely ask: what's the benefit of providing a visually appealing feed? I thought feeds were meant for programs. Do you/people directly browse individual feeds? Nice feed look BTW!
Most of the time you wouldn't even notice that a web page is the result of a client-side XSLT transformation, for example:
https://xmpp.org/extensions/xep-0182.xml
Some people aren’t familiar with auto discovery or feeds in general, so it’s desirable to have a link in your navigation that goes to your feed. When people click that link, they should see something useful and not just markup or a download prompt.
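To make that concrete: auto discovery is just a <link> element in the page's head that feed readers (not humans) look for, while the visible navigation link is an ordinary anchor. A rough sketch, with /feed.xml and the title as placeholders:

    <head>
      <!-- auto discovery: feed readers pick this up; browsers don't display it -->
      <link rel="alternate" type="application/atom+xml"
            title="Example Site feed" href="/feed.xml">
    </head>
    <body>
      <nav>
        <!-- the human-visible link, where an XSLT-styled feed beats a wall of raw markup -->
        <a href="/feed.xml">Feed</a>
      </nav>
    </body>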
From the GitHub thread (https://github.com/whatwg/html/issues/11523#issuecomment-315...)
> I don't think there's a strong "Open Web" argument to be made here. XML data files being able to be reformatted into HTML is somewhat of an accident of history; we don't have similar functionality for any other data type, despite, for example, JSON files being vastly more common on the web than XML.
Ironically, we would have support for transforming JSON if XSLT in the browser had been kept up-to-date.
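To make that concrete: XSLT 3.0 (which no browser ships; they're all frozen at 1.0) can consume JSON directly via json-to-xml(). A rough sketch that runs in a standalone 3.0 processor such as Saxon; items.json and its title fields are invented for illustration:

    <xsl:stylesheet version="3.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <!-- xsl:initial-template is the 3.0 entry point for a run with no source document -->
      <xsl:template name="xsl:initial-template">
        <!-- json-to-xml() maps a JSON array of objects onto fn:array/fn:map elements -->
        <xsl:variable name="data" select="json-to-xml(unparsed-text('items.json'))"/>
        <ul>
          <xsl:for-each select="$data/fn:array/fn:map">
            <li><xsl:value-of select="fn:string[@key='title']"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>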
Interestingly, "XML data files being able to be reformatted into HTML" was a deliberate choice to support and encourage the separation of content and presentation. CSS was introduced for the same reason but to address a slightly different (but complementary) aspect of the same ambition, and it's flourished. Imagine how widely CSS would be used if browser support were still stuck at CSS 1.0.
I tried to comment on this just now, but I was blocked, as the thread has been limited to collaborators (their word, not mine). Wow.
So I guess I'll put it here assuming someone reads.
Given the various comments people have made about the dependencies that various standards, applications and workflows have on XSLT: I believe that the default display of XML documents in Chrome, Firefox, and I suppose Edge is handled by an XSLT.
It used to be that MSXML shipped with an XSLT (and even earlier a wd-xsl document) as a resource that was used to style any XML document for display if the XML document did not have an associated stylesheet when you opened it in IE or other views that used IE for rendering.
I believe this same thing is done by the browsers mentioned, or at least it used to be
https://stackoverflow.com/questions/9463402/default-xml-styl...
which is why, when you open an XML document without styling information in those browsers, it is presented as a tree view with expandable/collapsible nodes.
I suppose they will just implement a default rendering for XML using some other code rather than running an XSLT over it, but the same applies to all these edge cases of handling RSS feeds etc.
Safari does not have a stylesheet rendering for unstyled XML which is why it just shows the text nodes of the document and nothing else.
So the libxml/libxslt unpaid volunteer maintainer wants to stop doing 'disclosure embargo' of reported security issues: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
Shortly after that, Google Chrome wants to remove XSLT support.
Coincidence?
The quality of the GitHub comments: accusing developers of being dictators, being overly emotional, the hate towards people who actually made the web happen (Smaug, Anne, Emilo, etc…), the "why not just…" or "hire more people" remarks...
For a browser developer, this is depressing. I've worked on Gecko for 10+ years, and we were constantly called names for absolutely any change we would do. Insulted and accused of the worst intentions.
I see it hasn't changed.
Personally I've found Google's responses to be very rude; they've asked for feedback, and people have come back saying "We're still using this, please don't just remove it", and despite that they seem to be completely uncompromising on removing it without any adjustments, like shipping the WASM polyfill instead of native code.
It kind of baffles me that they could even consider this. Maybe I'm just naive, but the web's greatest strength has always been its backwards compatibility; I can fire up a page written 30 years ago and it still renders (assuming it wasn't built in Flash, lol). Breaking the user experience like this and saying "well, the owners need to update their site" doesn't work - a lot of these pages won't be actively maintained or under the control of someone who can make changes.
The unwillingness to ship the polyfill as a substitute for the native code really puts the lie to any notion that this is really about security/attack surface angle that some people have latched onto, too.
Hiding any negative comment is especially childish.
> the hate towards people who actually made the web happen (Smaug, Anne, Emilo, etc…)
The vast majority of comments on https://github.com/whatwg/html/issues/11523 are polite and respectful.
Also, "Smaug, Anne, Emilo" did not "make the web happen." They have influenced how the web has developed, in particular favouring functionality and uses that are dependent on Javascript, and neglecting to ensure parity of opportunity for other approaches to flourish.
In my own personal experience on dealing with him, Anne is an arrogant douche who quite frankly hasn't been yelled at enough
A lot of the hate is actually justified (with the exception of Hixie)
Or you could try having more empathy and being less tone-deaf yourself.
Users don't like it when you take functionality away from them. This is an appropriate response to a proposal to break part of the web just to make things a bit easier for browser developers (who are meanwhile adding a gazillion other things that are much more complex and actively hurt users' interests).
As someone who is an open source dev (but not for anything this prominent):
Sometimes you have to remove features to make a product good. It's sad, but if your product includes the kitchen sink, it's not a good product, and it drags everything down.
The users yell at me too sometimes.
> I've worked on Gecko for 10+ years, and we were constantly called names for absolutely any change we would do. Insulted and accused of the worst intentions.
I think if Gecko crashed less that'd be great.
And if Gecko starts selling me a VPN service while the parent org gets busy doing a bunch of real-estate investments, I have to wonder whether you're making a web browser anymore.
> the hate towards people who actually made the web happen (Smaug, Anne, Emilo, etc…)
I'm sorry I disagree.
I am hearing them say they can't make the web happen, because it's hard and they're not very good at programming, they put so many bugs in their code they just can't fix it, and it's really interfering with their efforts to add another privacy-impacting feature that they can use to sell more ads.
I think if every one of them got hit by a bus tomorrow absolutely nothing would change on the web except maybe we'd keep XSLT for another six months.
I want to appreciate anything you've done for Gecko, but it's hard if you don't realise that people like me made the web happen too: I've been building web applications since 1994, and my applications have run on billions of devices at this point, and paid for my house, and some twenty years ago I used XSLT.
Do you really think I should bail them out by rewriting my fully working code so they don't have to fix their smelly broken code? You really think I have no standing to be a little bit annoyed by that attitude?
This usually happens when a lot of people were forced to use a library or tool by their boss as part of a company mandate... They're already frustrated about not having a say about their tools and so when something goes wrong with the mandated tool/library, they just explode with rage.
I've participated in both kinds of projects (voluntary and mandated adoption), so I can see a clear difference in user behaviors.
Coerced users are particularly hateful, especially when the library or tool has serious flaws.
It sucks, people just doing their job should be treated with respect.
That said, I can also see how the technocratic decision-making process means some people aren't given any voice or choice. It's whatever the US tech giants want that decides for the rest of us.
Why shouldn't people be overly emotional? Humans are emotional - and that's a good thing.
This change would make people sad because things they like would stop working.
It would cause them stress because they would have to work hard to fix or replace things.
It would cause them anger because some unaccountable people would be making decisions without considering them.
It would make them afraid that those same people might destroy something else which is useful.
These are all valid and useful emotional responses. Telling someone "if you do this it will make me sad" should be useful feedback.
Web developers aren't Vulcans. We have and use emotions.
It’s OK to have emotions; even as adults, we all have feelings. However, it’s important to be kind to other humans and to treat them with respect. Even on the internet, even when people are proposing removing features from a browser. Now, it can be difficult to voice opposition without coming off as rude, but it’s definitely an important skill for a professional to have.
I think this is especially true on GitHub, where people are using their real professional identities. I’m honestly shocked that anyone can just comment on these proposals, given how toxic it gets. Imagine if this were your day-to-day work environment - you’re trying to improve the web, which is already a tremendously difficult thing, while all of these keyboard warriors are insulting you and your efforts. I wouldn’t wish that on anyone.
> Web developers aren't Vulcans. We have and use emotions.
You might find that the people on their end, too, have and use emotions.
Acknowledging and voicing your emotional and mental position is one thing, that alone doesn't make it overly emotional. What does is being so taken by them, that it ends up trampling on others'.
> Why shouldn't people be overly emotional?
By definition overly emotional is bad – that’s what separates “overly emotional” from just “emotional”.
Regardless, having emotions is not the problem, lashing out at others because of those emotions is the problem.
> These are all valid and useful emotional responses. Telling someone "if you do this it will make me sad" should be useful feedback.
The person you are responding to said:
> we were constantly called names for absolutely any change we would do. Insulted and accused of the worst intentions.
Why are you misrepresenting this as “it will make me sad”?
It's disheartening that this Chrome dev in the comments shows himself to be completely unaware of the RSS ecosystem...
Quite frankly, I think if browsers had better default display of RSS then it wouldn't really matter.
I still can't see how browser support is useful for that.
Now that I think of it, I'd like to see one useful example of its usage. Haven't seen one in a long, long time.
Take a look at this feed:
https://interconnected.org/home/feed
You’ll see if you view source that it’s an RSS (XML) file that your browser doesn’t know how to render. But at the top, there is this:
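(an xml-stylesheet processing instruction; the href below shows the general shape, the real one is whatever path the feed declares)

    <?xml-stylesheet type="text/xsl" href="/pretty-feed-v3.xsl"?>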
Your browser loads that XSLT and uses it to transcode the XML into HTML, which your browser can now render. The source is here:
https://github.com/genmon/aboutfeeds/blob/main/tools/pretty-...
The problem with browsers is that they rely on other standards. If a browser needs to maintain backward compatibility and requires an evolving standard that ALSO requires backward compatibility, this acts like a multiplier on implementation complexity. This also means it becomes increasingly difficult to spin up a new browser engine, and the fewer of those there are, the easier it is for the big ones to just add what they want and have it become a standard, reinforcing the issue.
XSLT adds ~100k lines of specialized code to browser engines with near-zero telemetry usage (<0.01% of page loads), making it a prime example of the maintenance burden vs. utility problem you're describing.
> near-zero telemetry usage (<0.01% of page loads),
Perhaps without realising it, you are still describing numbers in the hundreds of millions perhaps billions, which isn't even close to zero when you're talking about atoms:
Google does $200 billion USD in annual ad revenue: at even a mere $1 CPM, that's 200 trillion impressions a year.
They put 5 ads on every page and we're at 40 trillion page loads a year that Google knows about.
You tell me that even 0.001% of that is XML/XSLT and we're still talking hundreds of millions of page loads every year.
And that's just the ones Google knows about: pages without Google ads, like say https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih...., definitely won't be included in that telemetry at all, indicating the value should be much higher.
https://maya.land/blogroll.opml
My blogroll's better-known than my actual site, and it's feed-reader compatible OPML that I've just made fun with additional attributes and XSLT. A server-side transform to vend duplicate OPML and HTML would be a bummer. I'm not an Important Web Platform user or anything, but I wish more people would share their feed-reader exports – and I've thought about trying to share tooling to extend/display them like this. It'll be sad if that ends up impossible.
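For anyone curious what that looks like, here's a rough, invented sketch of the idea: a standard OPML outline with an xml-stylesheet processing instruction on top and an extra attribute that only the stylesheet cares about (blogroll.xsl and the "mood" attribute are made up; feed readers simply ignore what they don't recognise):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="/blogroll.xsl"?>
    <opml version="2.0">
      <head><title>Blogroll</title></head>
      <body>
        <!-- readers use the standard attributes; the extra "mood" is there for the XSLT to render -->
        <outline text="interconnected.org" type="rss"
                 xmlUrl="https://interconnected.org/home/feed"
                 htmlUrl="https://interconnected.org/home/"
                 mood="thoughtful"/>
      </body>
    </opml>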
Even skechers.com doesn't use it any more [1]
[1] https://thedailywtf.com/articles/Sketchy-Skecherscom
Why is it always Google employees that want to kill things?
It's more interesting and controversial to hear about
I'd consider JS and WASM quite a bit more risky from a security standpoint, which is why everyone should refuse loading and executing those by default.
Securing an XSLT 3.0 implementation would be much easier.
This could be debatable if browsers had any UI at all to display XML. It's incomprehensible that if you open the open web solution for subscribing to web content (RSS) you're greeted with a wall of unformatted text. Right now, XSLT is the poor fix for that basic inability of browsers.
Firefox at least did have a sensible default style sheet for RSS at some point.
I always kind of liked XSLT but it clearly has not caught on.
I used it everyday for the first 5 years of my career, got my first big job off the back of a shorter graduate job that exposed me to it.
It will always hold a special place in my heart, too.
However, in the way it was used in my roles at least, I found it enforced far too rigid a separation between the data and the presentation.
There were multiple times a backend could not perform some function or transformation of the data, for various and always non-technical reasons.
That left it to the xslt developers to figure out a solution, and sometimes due to the limits of the language that involved writing a custom java function / xslt plugin.
Things that were incredibly simple when some sort of scripting language is available in your frontend web app could be incredibly convoluted when all you had was an xslt processor.
I think most people just aren't aware it's available in the browser.
Would you say the same about CSS if browsers only supported version 1.0 while it was being used in many other contexts and version 4.0 was being worked on?
That polyfill is 46 megabytes in size. It would suck so much to suddenly have this in the build.
Some recent discussion on XSLT here:
XSLT – Native, zero-config build system for the Web – 27th June 2025 (328 comments):
https://news.ycombinator.com/item?id=44393817
The only useful thing I have seen it do in the past couple of decades has been to style Atom/RSS feeds. I haven’t personally used it in 25 years. The complexity and attack surface area isn’t justified by its utility, so it’s hard to make the case for keeping it.
> so it’s hard to make the case for keeping it.
How about “not breaking stuff” which cannot be upgraded? Like old sites/services without active maintainers but still useful. Or hardware appliances that still work but will never get a firmware update. Not to mention RSS feeds, brought up multiple times in the linked thread.
Looks like a built-in polyfill (similar to pdf.js in FF) would do. But Google seems reluctant to do it.
When’s a reasonable time to pull the plug on out-of-fashion legacy stuff? Things can’t always remain backwards compatible forever. I think the places where this is still in use can build contingencies where required.
Let's remind ourselves that thanks to Google we also did not get WebGL 2.0 Compute; it was too much for the Chrome team to split their resources between WebGL 2.0 Compute and WebGPU.
How great that five years later WebGPU is something we can rely on in a portable way. /s
The Wikipedia API has an option where you can add an XSLT stylesheet to its output.
When I was young and stupid and learning to program, I made an XSLT stylesheet to extract dictionary definitions from the API.
It was meant to be combined with a bookmarklet that, when you double-clicked a word, opened it in an iframe.
It was terrible, but I was so proud of it at the time.
It seems like it stopped working at some point; I guess browsers are probably more strict with MIME types now. https://en.wiktionary.org/w/api.php?action=parse&format=xml&...
Sorry if this is too off topic, it just triggered some memories
> It was terrible, but I was so proud of it at the time.
Terrible why? Bookmarklets and XSLT (and things like Greasemonkey userscripts) were some of the things that made the web more "read-write" in the '00s: anyone could remix web content however they saw fit, optimizing it for their personal use cases, and they often attracted kids and "normies" to coding.
Even today, they can be used to do stuff that most people find magical. Unfortunately they are unfashionable, so they're slowly getting strangled by the big players who want to have all the control all the time.
Maybe off topic, but I was really inspired by this article and started thinking about other ways to have "in-browser templating". And then realized that the old function document.write() is actually very convenient for this. Tried to write up the pros and cons here: https://vladimirslepnev.me/write
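The trick, for anyone who hasn't used it this way, is that a classic <script src> executes synchronously during parsing, so document.write() emits its markup exactly where the script tag sits, which lets one shared file stamp the same fragment into every page. A minimal sketch; header.js and the nav markup are made up:

    // /header.js -- a hypothetical shared include, pulled into every page with
    // <script src="/header.js"></script>. Because classic scripts execute
    // synchronously during parsing, document.write() emits this markup exactly
    // where the script tag sits.
    document.write('<nav><a href="/">Home</a> <a href="/feed.xml">Feed</a></nav>');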
Amusingly, just like XSLT, document.write is in the process of being deprecated: https://developer.mozilla.org/en-US/docs/Web/API/Document/wr...
How is the answer to any question "Should we remove X from the web platform?", where X wasn't introduced in the last week and has actual users, not a resounding "No, WTF is wrong with you"?
Easy. Just use the GNOME argument: round a small enough minority down to "nobody" and then confidently claim "nobody actually uses that, and we don't want to maintain it, so we're removing it."
In order to reach that point faster, never improve the feature, and make it increasingly cumbersome to use, so as to turn potential users off it.
That was the actual answer, but those comments got hidden and the discussion locked.
Because not everything should be kept in browsers for all eternity? Thankfully it isn't like that and things do get removed.
They absolutely should and this should be considered when adding new features. That's called having a stable platform that others can build upon.
Can't they make a memory safe XSLTProcessor by just compiling libxml to WebAssembly?
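In principle, yes, and the native API surface is small enough that a shim could slot in. A rough sketch of the shape; xslt-wasm.js stands in for a hypothetical wrapper around libxslt compiled to WebAssembly (e.g. via Emscripten), and only the method names mirror the real XSLTProcessor API:

    // Rough sketch: "transform" comes from a hypothetical WASM-backed libxslt wrapper.
    import { transform } from './xslt-wasm.js';

    if (!('XSLTProcessor' in window)) {
      window.XSLTProcessor = class {
        #xslSource = null;

        // native signature: takes the parsed stylesheet document/node
        importStylesheet(xslNode) {
          this.#xslSource = new XMLSerializer().serializeToString(xslNode);
        }

        // serialize the input, run the transform inside the WASM sandbox,
        // and reparse the result for the caller
        transformToDocument(sourceNode) {
          const xml = new XMLSerializer().serializeToString(sourceNode);
          const html = transform(this.#xslSource, xml);
          return new DOMParser().parseFromString(html, 'text/html');
        }
      };
    }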
Obligatory xkcd: https://xkcd.com/1172/
I love XKCD, but this particular strip has been used to justify all sorts of arbitrary and negative decisions. XSLT support is not a bug.
Same could be said for many other strips.
It’s always about balance. I think I read, somewhere, that Microsoft actually reintroduced bugs into their OS, because so many people depended on workarounds and the buggy behavior.
Isn't it a shame that we can only have one program installed on our computers?
Imagine (if you can!) being able to have two programs, one (program A) that supports JS and all that shite, and another (program B) that supports XSLT and all that shite. If you're still with me, imagine that program A could just call program B when it detects stuff that it doesn't support, and vice versa.
I know, I know, a measly 16 core CPU with 32Gi of memory is not going to be capable of such feats, but one can dream...
And how does a "web" work which links "hypertext" between those places using XSLT and those which don't?
And well, the browser tries to be the only program one uses, where all applications run inside it.