Google is killing the open web

3 months ago (wok.oblomov.eu)

> in 2023, Google renames their chatbot from Bard to [Gemini][gemini] thereby completely eclipsing the 4-year-old independent protocol by the same name; this is possibly coincidental, which would make it the only unintentional attack on the open web by Google in the last 15 or so years —and at this point even that is doubtful;

So the theory is that Google chose the name of its AI -- easily one of the hardest and most revenue-impacting naming decisions it's made in years -- in order to create a name collision with a protocol nobody's heard of that's trying to revive GOPHER?

This is so obviously false that you have to re-read the rest of the article with the knowledge that the author is misunderstanding what they're seeing.

Much of what the author describes is increasing security and not wanting to work with XML.

  • I suppose the definition of intentional is a bit murky here.

    Yeah, you're right that Google probably didn't look at a list of open web technologies that they disagree with and choose one for their new tool. I guess I'll call that "malicious intention".

    I'm sure that, however the name was picked, Google's lawyers looked for prior uses of the name. I'm sure it came up, and Google shrugged its shoulders in indifference. Maybe someone brought up the fact this would hurt some open standard or whatever, but nobody in power cared. Is this the same kind of malicious? Probably not, but it still shows that Google doesn't care about the open web and the collateral damage they cause.

  • Did you at least read the excerpt you posted? It says the opposite of your conclusion; this is not even the worst interpretation of what was said, it's plainly false.

I love the little historical overview in the post. With more than 25 years of hindsight, the push against user-centred standards is so obvious. W3C is always better than whatever kool-aid-du-jour the big corps want you to drink because (at the very least) someone actually thinks "how is this going to affect the people using it?", as opposed to Google's/Apple's approach of "how is this going to affect our revenue?".

  • To be honest, in my recollection, in 2013 what the W3C was doing was actually seen as user hostile and HTML5 was seen as a good thing for users.

    Part of the community really hated XHTML and its strictness. I remember Mozilla being at the vanguard then rather than Google.

    I think the situation was and is a lot more messy and complicated than what the article presents but presenting it fully would make for a less compelling narrative.

    As is I don’t really buy it personally.

    • Hostile actions would be IE’s strategy for monopolizing the browser in the '90s, Google paying Apple and Mozilla to monopolize search starting in 2004, and killing off Reader in 2013.

      Taking over standards groups is a gray area with tradeoffs. It helped Google preserve monopoly in search but clearly devs and the web benefited as well.

      XHTML2 was panned because it was super strict without clear benefits. Keeping HTML backwards compatible is clearly a very good thing. I don’t fully understand the author’s passion for XSLT: it’s cumbersome and it wasn’t popular with devs.

      I agree with the headline and some aspects but XML is a bad hill to die on and much of the writing is hyperbolic and more than a little out of touch.

    • I do agree reality was a lot more messy, but I also think it still paints a compelling case that Google in particular acted how it did (mostly) to shape the web to its own best interests.

      That it wasn't literally "Google railroaded WHATWG/W3C/everyone else to get what it wanted" doesn't mean Google didn't take advantage of the situation to kill open web standards to its own benefit. I imagine Mozilla, for instance, went along with as much as they did because Google accounted for most of their revenue.

    • I was there at the time, and the pushback against XHTML always struck me as disingenuous. XHTML was not at all difficult to write! The only real argument against it was that it wasn't always valid HTML, and browsers didn't want to support it specifically, so when people published XHTML pages they would sometimes break if the browser tried to interpret them as HTML. But they have broken HTML backwards compatibility so much worse many times since then...

    • > Part of the community really hated XHTML and its strictness.

      A big part of this is that people were concatenating XML together manually, to predictable disaster.

      Nowadays they use JSX and TypeScript, far more strict than XML ever was, and absolutely love it.
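      A minimal Python sketch of the "concatenating XML manually" disaster mentioned above (the `title` value is made up for illustration): any user string containing `&` or `<` silently yields an ill-formed document unless it is escaped first.

```python
from xml.sax.saxutils import escape
import xml.etree.ElementTree as ET

# Hypothetical user input containing XML-special characters.
title = "Tom & Jerry <3"

# Naive string concatenation: the raw "&" and "<" make the result ill-formed.
naive = "<title>" + title + "</title>"

# Escaping first keeps the document well-formed.
safe = "<title>" + escape(title) + "</title>"

def parses(xml_text):
    """Return True if the string is well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(parses(naive))  # False
print(parses(safe))   # True
```

      Strict tooling (XHTML-era validators, or JSX compilers today) turns this silent corruption into an immediate error, which is arguably the point of the comparison.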

      1 reply →

  • Nobody pays for anything with user centric standards. If software were free to produce and services were free to run this would work, but it doesn’t. Software in particular is incredibly time consuming and expensive, especially if you want to make it usable.

    • People definitely do pay for it when it's available. Even more to the point, at the core of this issue is that people would prefer alternatives that are open, where their data can be easily ported to some competitor's service if it's better, which directly affects the bottom line of companies that push against open standards.

      I think you got it clearly reversed in your mind...

      3 replies →

The article is about the intentional killing of XSLT/XML in the browser. I think it is evolutionary: devs switched to JSON, and AI agents don't care at all (they can handle anything); XML just lost naturally, like GOPHER.

  • The problem is not XML vs. JSON. This is not about choosing the format to store a node app's configuration. This is about an entire corpus of standards, protocols that depend on this. The root problem for me is:

    1) Google doing whatever they want with matters that affect every single human on the planet.

    2) Google running a farce of a "public feedback" program where they don't actually listen, or, in this case, ask for feedback after the fact.

    3) Google not being truthful or introspective about the reasons for such a change, especially when standardized alternatives have existed for years.

    4) Honestly, so much standard, interoperable "web tech" has been lost to Chrome's "web atrocities", and IE's before that... you'd think we'd have learned the lesson to "never again" put a dominant browser engine in the hands of a for-profit corp.

    • The narrative would be more compelling to me if Google didn’t fail to impose their technology on the web so many times.

      NaCl? Mozilla won this one. Wasm is a continuation of asm.js.

      Dart? It now compiles to Wasm but has mostly failed to replace js while Typescript filled the niche.

      Sure, Google didn’t care much for XML. They had a proper replacement for communication and simple serialisation internally in protobuf, which they never actually tried to push for web use. Somehow JSON ended up becoming the standard.

      I personally don’t give much credit to the theory of Google as a mastermind patiently undermining the open web for years via the standards.

      Now if we talk about how they have been pushing Chrome through their other dominant products and how they have manipulated their own products to favour it, I will gladly agree that there is plenty to be said.

      2 replies →

    • Yes, this is the real issue, and it is a pity so many comments delve into JSON vs. XML rather than the title's claim that "Google is killing the open web". A new stage of the web is forming where Big Tech AI isn't just chatbots but has matured to offer fully operational end-to-end services, all AI-operated and served, up to tailor-made domain-specific UI. Then the corporations, winners in their market, don't need the open web anymore to slurp data from. With all open web data absorbed, fresh human creativity now flows in exclusively via these services, directly feeding the AI systems.

      1 reply →

  • > XML just lost naturally, like GOPHER

    Lost? The format is literally everywhere and a few more places. Hard to say something lost when it's so deeply embedded all over the place. Sure, most developers today reach for JSON by default, but I don't think that means every other format "lost".

    Not sure why there is always such a focus on who is the "winner" and who is the "loser"; things can co-exist just fine.

  • +1. I think XML "lost" some time ago. I really doubt anyone would choose to use it for anything new these days.

    I think, from my experience at least, that we keep getting these "component reuse" things coming around: "oh, you can use Company X's schema to validate your XML!", "oh, you can use Company X's custom web components in your web site!", etc., yet it rarely if ever seems to be used. It very rarely feels like components/schemas/etc. can be reused outside of their original intended use cases, and if they can, they are either so trivially simple it's hardly worth the effort, or so verbose, cumbersome, and abstracted from trying to be all things to all people that they're a real pain to work with. (And for the avoidance of doubt, I don't mean things like Tailwind et al. here.)

    I'm not sure who keeps dreaming these things up with this "component reuse" mentality but I assume they are in "enterprise" realms where looking busy and selling consulting is more important than delivering working software that just uses JSON :)

    • It may be that nobody would choose XML as the base for their new standard. But there are a ton of existing standards built around XML that are widely used and important today. RSS, GPX, XMPP, XBRL, XSLT, etc. These things aren't being replaced with JSON-based open standards. If they die, we will likely be left without any usable open standards in their respective niches.

      1 reply →

    • Probably nobody would choose it for anything new because the sweet spots for XML usage have all already been taken. That said, if someone were to say "hey, we need to redo some of these standards", they could of course find ways to make JSON work for some standards that are XML today. But for a lot of them JSON would be the absolute worst choice, and if you were redoing them you would use XML again.

      Three obvious examples of formats that should never be JSON:

      TEI: https://tei-c.org/
      EAD: https://www.loc.gov/ead/
      DocBook: https://docbook.org/

      Basically, anything that needs to combine structured and unstructured data and switch between the two at different parts of your tree is probably better represented as XML.
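      A small Python sketch of that mixed-content point (the `<p>`/`<em>` fragment is a made-up example): prose and inline markup interleave naturally in XML, while a JSON encoding would need an ad-hoc list-of-nodes convention to record where each text fragment sits.

```python
import xml.etree.ElementTree as ET

# Mixed content: structured markup embedded mid-sentence in unstructured prose.
doc = ET.fromstring(
    "<p>The <em>quick</em> fox jumps over the <em>lazy</em> dog.</p>"
)

# itertext() walks the interleaved text fragments in document order,
# recovering the prose exactly -- something a plain key/value mapping
# of the same document could not do without extra conventions.
flat = "".join(doc.itertext())
print(flat)  # The quick fox jumps over the lazy dog.
```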

      6 replies →

    • > I really doubt anyone would chose to use it for anything new these days.

      I use it to store complex 3D objects. It works surprisingly well.

    • XML might have “lost” but it’s still a format being used by many legacy and de novo projects. Transform libraries are also alive and well, some of them coming with hefty price tags.

    • "I really doubt anyone would chose to use it for anything new these days."

      Funny how we went from "use it for everything" (no matter how suitable) to "don't use it for anything new" in just under two decades.

      To me XML as a configuration file format never made sense. As a data exchange format it has always been contrived.

      For documents, together with XSLT (using the excellent XPath) and the well thought out schema language RelaxNG it still is hard to beat in my opinion.

    • LLMs produce much more consistent XML than JSON because JSON is a horrible language that can be formatted in 30 different ways with tons of useless spaces everywhere, making for terrible next token prediction.

      1 reply →

    • > I really doubt anyone would chose to use it for anything new these days.

      Use the correct tool for the job. If that tool is XML, then I use it instead of $ShinyThing.

      1 reply →

  • XML is not only a file format. It's a complete ecosystem built around that format: protocols, validators, and file formats built on top of XML.

    You can get XML and convert it to everything. I use it to model 3D objects for example, and the model allows for some neat programming tricks while being efficient and more importantly, human readable.

    Apart from being small, JSON is the worst of both worlds: a hacky K/V store, at best.

    • Calling XML human readable is a stretch. It can be with some tooling, but JSON is easier to read both with tooling and without. There's some level at which the schema affects how human readable the serialization is, but I know significantly fewer people who can parse an XML file by sight than JSON.

      Efficient is also... questionable. It requires full Turing-machine power even to validate, IIRC (it surely does to fully parse). By which metric is XML efficient?

      10 replies →

    • I mean, at least JSON has a native syntax to indicate an array, unlike XML which requires that you tack on a schema.

      <MyRoot>
          <AnElement>
              <Item></Item>
          </AnElement>
      </MyRoot>

      Serialize that to a JavaScript object, then tell me, is "AnElement" a list or not?

      That's one of the reasons why XML is completely useless on the web. The web is full of XML that doesn't have a schema because writing one is a miserable experience.
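      The ambiguity is easy to reproduce: a schema-less XML-to-object converter (the `naive_to_obj` helper below is a hypothetical sketch, not a real library) has no way to know whether a repeated element is "always a list", so the output shape changes with the element count.

```python
import xml.etree.ElementTree as ET

def naive_to_obj(elem):
    """Hypothetical schema-less XML -> dict conversion: a lone child
    becomes a scalar, repeated siblings become a list."""
    children = list(elem)
    if not children:
        return elem.text
    out = {}
    for child in children:
        value = naive_to_obj(child)
        if child.tag in out:
            # Second occurrence: retroactively promote scalar to list.
            if not isinstance(out[child.tag], list):
                out[child.tag] = [out[child.tag]]
            out[child.tag].append(value)
        else:
            out[child.tag] = value
    return out

one = naive_to_obj(ET.fromstring("<r><item>a</item></r>"))
two = naive_to_obj(ET.fromstring("<r><item>a</item><item>b</item></r>"))

print(one)  # {'item': 'a'}          -- scalar
print(two)  # {'item': ['a', 'b']}   -- list: shape depends on count
```

      Without a schema saying "<item> is always a list", every consumer has to special-case the one-element shape.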

      5 replies →

  • > AI agents don't care at all

    And I don't care at all about the feelings of AI agents. That a tool that's barely existed for 15 minutes doesn't need a feature is irrelevant when talking about whether or not to continue supporting features that have been around for decades.

  • Agreed. Having actually built and deployed an app that could render entirely from XML with XSLT in the browser: I wouldn't do it again.

    Conceptually it was beautiful: We had a set of XSL transforms that could generate RSS, Atom, HTML, and a "cleaned up" XML from the same XML generated by our frontend, or you could turn off the 2-3 lines or so of code used to apply the XSL on the server side and get the raw XML, with the XSLT linked so the browser would apply it.

    Every URL became an API.

    I still like the idea, but hate the thought of using XSLT to do it. Because of how limited it is, we ended up having e.g. multiple representations of dates in the XML because trying to format dates nicely in XSLT for several different uses was an utter nightmare. This was pervasive - there was no realistic prospect of making the XML independent of formatting considerations.
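    A rough Python sketch of that date workaround (the element and attribute names are made up, not our actual code): since XSLT 1.0 has no date-formatting functions, the server emits every rendering of a date that the stylesheets might need.

```python
import datetime
import xml.etree.ElementTree as ET

published = datetime.datetime(2013, 6, 1, 9, 30)

# One logical date, several pre-formatted representations, because the
# XSLT 1.0 stylesheets downstream cannot reformat dates themselves.
entry = ET.Element("entry")
d = ET.SubElement(entry, "published")
d.set("iso", published.strftime("%Y-%m-%dT%H:%M:%S"))         # for Atom
d.set("rfc822", published.strftime("%a, %d %b %Y %H:%M:%S"))  # for RSS
d.set("display", published.strftime("%d %B %Y"))              # for HTML

print(ET.tostring(entry, encoding="unicode"))
```

    This is exactly the formatting leakage described above: the XML can never be independent of presentation concerns.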

    • XSLT is much nicer to use if you just create a very simple templating language that compiles to XSLT. A subset of XSLT already has the structure of a typical templating language. It can even be done with regexps.

      Then simplicity becomes a feature. You can write your page in pretty much pure HTML, or even entirely pure HTML if you use comments or custom tags for block markers. Each template is simple and straightforward to write and read.

      And while a different date format seems like a one-off thing you'd prefer to deal with as late as possible in the stack, if you think broader, like addressing a global audience in their respective languages and cultures, you want to support that on the server, so the data (dates, numbers, labels) lands on the client in the correct language and culture. Then doing just dates and perhaps numbers in the browser is inconsistent.

      If browsers implemented https://en.m.wikipedia.org/wiki/Efficient_XML_Interchange the web would get double digit percent lighter and faster and more accessible to humans and ai.

      But that would let you filter out ads orders of magnitude easier. So it won't happen.

      4 replies →

  • Ironically LLMs are actually better at processing and especially outputting correct XML than they are at JSON.

  • I think JSON is generally better than XML (XML is better for some things, but mostly it isn't), but JSON is not so good either; I think the DER format is much better.

  • The only reason AI agents don't care about XML is because the developers decided, yet again, to attempt to recreate the benefits of REST on top of JSON.

    That's been tried multiple times over the last two decades and it just ends up with a patchwork of conventions and rules defining how to jam a square peg into a round hole.

    • Many years ago I tried very hard to go all-in on XML. I loved the idea of serving XML files that contain the data and an XSLT file that defined the HTML templates that would be applied to that XML structure. I still love that idea. But the actual lived experience of developing that way was a nightmare and I gave up.

      "Developers keep making this bad choice over and over" is a statement worthy of deeper examination. Why? There's usually a valid reason for it. In this instance JSON + JS framework of the month is simply much easier to work with.

      10 replies →

    • People have been building things differently for the last 10 years, using JSON/gRPC/GraphQL (which is why replacing complex formats like XML/WSDL/SOAP with just JSON is a bad idea), so why train (i.e., spend money on) AI for legacy tech?

Coincidentally spotted this source code in some Microsoft Windows Server® 2022 Remote Desktop Web Access thingy yesterday:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="../Site.xsl"?>
    <?xml-stylesheet type="text/css" href="../RenderFail.css"?>

    <RDWAPage 
        helpurl="http://go.microsoft.com/fwlink/?LinkId=141038" 
        (…)

So I doubt XSLT is going away any time soon.

I also think Google is doing many bad things with it (although many of these things are not specific to Google, they are doing most of it): removing stuff, and also adding stuff that just makes it worse.

Many of the things they add, or that other things are replaced with, seem to mostly benefit Google (and sometimes Cloudflare) rather than actually helping you. This is as true of the new Web Authentication systems as of other things. (And they seem to want to make you use bloated JavaScript even if neither the author nor the reader wants to.)

> in 2025 Google announces a change in their Chrome Root Program Policy that within 2026 they will stop supporting certificate with an Extended Key Usage that includes any usage other than server

I agree that Google should not have done that, but it is often more useful to use different certificates for clients anyways.

While I think XML is generally not as good as other formats (I think DER is generally better), it works better than some other formats for some things. This is not a reason to get rid of XSLT, though; it is useful. There are other reasons not to require it (e.g. to simplify implementations, though those are currently too complicated mainly due to the newer stuff anyway), but that does not mean that it cannot be used, implemented, etc. (For example, a static site generator might convert XML+XSLT to HTML if you need it, while also providing the original XML+XSLT files to anyone who wants them, thereby making both server-side and client-side work.)

  • I find it really interesting how much effort everyone is putting into nagging about a bad thing rather than ignoring it and working on alternatives. I guess everyone is doing the bad decision-maker a favor. People are attracted to negative phenomena, and here you go, another one, and it's rewarding the OP.

    • Yes, it would be better to work on alternatives, and I have done some of these things (and so have some other people). However, that won't fix WWW (or Chrome or Google); it just means there is an alternative (which is still a good thing to have, though). However, sometimes they even try to prevent any alternative that is actually good (or sometimes just do so as a consequence of existing specifications, rather than deliberately trying to).

They have been killing open anything for a long time. Very similar to Microsoft. As an example, they have the power to block emails for a large portion of the Internet. This is used for good, like blocking spam and scams, but also for bad, like blocking political viewpoints they don't like.

The same can be said about their search engine. This most likely has already altered the outcomes of elections and should have been investigated years ago.

Coders have this tendency to value ideology over practicality. What matters is something that works and people use, not a theoretical picture of how it could have worked in an alternative timeline.

  • Actually, control means practicality. Linux won the server wars and that was a combination of ideology AND practicality.

    If a company breaks something so only their path works, it's short-term practicality to use it and long-term practicality to fight for an alternative that keeps control in the developers' hands.

    Monopolies are terrible for software developers. Quality and customisation tend to go down, which means less value for the Devs.

  • > Coders have this tendency to value ideology over practicality.

    It would be a horrible existence to value anything else. What reason is there to get up in the morning if you think things couldn't be better?

Google is evil, but man, I never missed XSLT. I'm old enough to remember it and it gives me war flashbacks.

The good thing is that it makes you strong and resilient to pain over time. It's painfully unreadable. It's verbose (ask ChatGPT to write a simple if statement). Loops? Here's your for-each, and that's all we have. Debugging is for the weak; stdout is your debugger.

It's just shit tech, period. I hope devs that write soul harvesting surveillance software at Google go to hell where they are forced to write endless xslt's. Maybe that's the reason they want to remove it from Chrome.

  • I don't really get the hatred for XSLT. It's not the most beautiful language, I'll give you that, but it's really not as bad as people make it out to be.

    I can't imagine wanting to use anything more complex than a for-each loop in XSLT. You can hack your way into doing different loops but that's like trying to implement do/while in Haskell.

    Is it that I've grown too comfortable with thinking in terms of functional programming? Because the worst part of XSLT I can think of is the visual noise of closing brackets.

    • Probably you never _worked_ with XSLT (which is good for you). Very simple things quickly become 1K of unreadable text.

      E.g. showing the last element of the list with different styling:

          <xsl:for-each select="items/item">
            <xsl:choose>
              <xsl:when test="position() = last()">
                <span style="color:red;">
                  <xsl:value-of select="."/>
                </span>
              </xsl:when>
              <xsl:otherwise>
                <span style="color:blue;">
                  <xsl:value-of select="."/>
                </span>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:for-each>

      Or ask ChatGPT to compute the total weight of a shipment from XML items that each have a weight. I did, and the result is too long to paste here.

      > It's not the most beautiful language, I'll give you that, but it's really not as bad as people make it out to be.

      TBH I can say that about any language or platform that I ever touched. The ZX Spectrum is not that bad, although it has its limits. That 1960s 29-bit machine is not that bad, it just takes time to get used to it. C++ is not that bad for web development; it's totally doable too.

      The thing is that some technologies are more suitable for modern tasks than others; you'll just do much, much more (and better) with a JSON model and JS code than with XSLT.

It was named Gemini because it was developed by the twin teams at Google, Google Brain and DeepMind. That's the only reason.

It has already been the ChromeOS Application Platform for quite some time now.

Every Chrome installation or related fork, plus Electron shipments, counts.

Getting rid of XSLT from the browser would be a mistake, no doubt about it.

You can see it clear as day in the GitHub thread that they weren't asking permission; they were doing it no matter what, all their concerns about security just being a pretext.

It would have been more honest of them to just tell everyone to go fuck themselves.

  • > their concerns about security just being the pretext.

    It seems entirely reasonable to be concerned about XSLT’s effects on security:

    > Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.

    https://www.offensivecon.org/speakers/2025/ivan-fratric.html

    https://www.youtube.com/watch?v=U1kc7fcF5Ao

    • AFAIK browsers rely on an old version of the XSLT libraries and haven't upgraded to newer versions.

      They also seem to be putting pressure on the library maintainer, resulting in them saying they're not going to embargo security bugs.

  • What do you think their real reason for wanting to remove XSLT is, if not what they claim?

    • They don't want to support it (because of their perceived cost-benefit ratio for what they're interested in developing/maintaining), and hence if it is removed from the browser standards then they aren't required to support it (as opposed to driving people to other browsers). One could ask why they do WebUSB and similar "standards", given those would seem (to me) to be a much greater security issue.

    • There are other implementations of XSLT available besides libxslt, some even in JavaScript. Security is something that could be overcome, and they wouldn't need to break styling on RSS feeds or anything; it could be something like how Firefox uses a JS library for dealing with PDFs.

      It doesn't need to be some big conspiracy: they see the web as an application runtime instead of being about documents and information, don't give a fuck about XML technologies, don't use them internally and don't feel anyone else needs to.

Google is a corporation maximizing shareholder value. That this goal is not aligned with serving the greater good and freedom should come as no surprise.

What can we do to stop Google killing the open web other than complaining?

One way is to tell everyone to use Firefox (uBlock Origin works there).

It is an issue that the Mozilla Foundation is still 80% funded by Google, though, so this needs to be solved first.

Somehow Firefox needs to be moved away from Mozilla if they cannot find an alternative funding source other than Google.

  • You can donate to some project like ladybird or servo if they take donations. Or contribute.

    • Ladybird looks promising, but I don't see any donation form for this, only sponsorships.

      If that is the case, we need to come together and donate thousands to ladybird en masse.

      It might take around ~30 years for adoption but it is a start.

      1 reply →

  • Use the open web yourself, build on it, align with standards, help them mature, and participate in open standards bodies.

    • How long will this take us?

      Don't you think Google and the other big tech companies already have massive influence in the W3C and web standards?

  • Stop using Chrome and Electron, but of course no one will, it was devs that made them what they are today.

  • If you read the article you will see that Mozilla supports the removal of XSLT. So switching to Firefox, which also turned off RSS support several years ago, is hardly a good choice.

  • You need to admit to yourself that maintaining a critical piece of software like a web browser costs a lot in labor and money, and start figuring out where and how you'll fund people who do more than complain.

    Developing software is hard - and OSS hasn't found a way to do hard things yet.

Other anecdotal experiences with Google and their specific attitude towards user/developer needs:

- Stable Array.sort (2008–2018): of course it doesn't have to be stable, the spec does not dictate it right now, it is good for performance, and some other browser even started to do it like we do: http://crbug.com/v8/90

- Users don't userstyle (2015–): of course we absolutely can and will remove this feature from the core, despite it being mandated by several specifications: https://bugs.chromium.org/p/chromium/issues/detail?id=347016

- The SMIL murder attempt was addressed in the OP article (I think they keep a similar sentiment towards MathML too) but luckily was eventually retracted. I guess/hope this XSLT affair will have a similar "storm in a teacup" trajectory.
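For readers unfamiliar with the first item, a quick illustration of what sort stability guarantees (shown here in Python, whose sort is stable; the same guarantee was only added to V8's Array.sort around 2018): elements that compare equal keep their original input order.

```python
# Records with duplicate sort keys: two rows with key 1, two with key 2.
rows = [("b", 2), ("a", 1), ("c", 2), ("d", 1)]

# A stable sort orders by key but preserves input order within each key:
# ("a", 1) stays before ("d", 1), and ("b", 2) before ("c", 2).
by_num = sorted(rows, key=lambda r: r[1])
print(by_num)  # [('a', 1), ('d', 1), ('b', 2), ('c', 2)]
```

An unstable sort is free to swap equal-keyed rows, which is why chained or multi-pass sorts silently broke on old V8.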

XML for document content (like, the whole point of markup) = awesome.

XML for app configuration or basic data transfer formats = horrible.

Unfortunately I fear so many people got burned by the latter issues they forgot (or missed entirely) all the greatness of the first.

  • I agree that XML is much better for document content and markup than for configuration and other stuff. I still think there are problems with XML, although they can be avoided (and often are, now, since JSON is often used instead; but JSON has its own problems, too).

  • P.S. Google has a bazillion dollars but can't figure out how to maintain a new secure XSLT library, or update to a newer one which already exists? The usage argument is dumb... maybe a lot more sites would find good uses for XML/XSLT if this stuff was actually maintained and promoted properly!

    • Why would Google want to bother? Who actually uses XSLT today for making webpages? Why should browsers spend effort on supporting XML+XSLT-based pages in addition to HTML+CSS-based pages?

“The reason implementations are riddled with CVEs is neglect”

IMO this misses the point a bit. If it is neglected, is going to keep producing bugs, and not many people are developing with it, then maybe it makes sense to kill it.

This also means new browsers won’t have to implement it, maybe?

  • Because those neglecting it are the same that want to remove it. So it's not “we want to remove it because it's neglected”, but “we want to remove it so we'll neglect it”. This is a pretty standard M.O. for the destruction of the commons.

    If you look at the WHATWG GH issue, you'll see that two distinct, modern, maintained implementations of XSLT, one of them in Rust (so considerably less likely to be affected by memory bugs), have been proposed as alternatives to what's currently used in WebKit and Blink. The suggestions have been ignored without a stated motivation, because the neglect is the point.

To me, when Google renamed Bard to Gemini, they should have been dragged into court. But the Gemini people have no funds for this, so big money wins. But at the least a trademark complaint could have been filed.

In any case, I do not use Google at all unless forced. My old Gmail address is a "dump": if a site asks for an email, they get that one. I only log into Gmail to delete the "spam" I get.

Site not loading? Maybe the open web isn’t all it’s cracked up to be? /s

  • Might not load if you’re in Ukraine and the server you’re trying to access is located in Russia: blocked at the country level. That works very well for me; I can easily distinguish someone pretending to be in the EU while actually being in Russia. I have no idea whether this is the case here, but it does not load for me either.