Great to see that somebody else is creating a true open source XSLT 3 and XPATH 3 implementation!
I worked on projects which refused to use anything more modern than XSLT & XPATH 1.0 because of the lack of support outside the Java/.NET world (1.0 = tech from 1999). Kudos to Saxon though, it was and is great, but I wish there were more implementations of XSLT 2.0 & XPATH 2.0 and beyond in the open source world... both are so much more fun and easier to use in the 2.0+ versions.
For that reason I've never touched XSLT 3.0 (because I stuck to Saxon B 9.1 from 2009). I have no doubt it's a great spec, but there should be ways other than Saxon HE to run it in an open source way.
It's like we have an amazing modern spec but only one browser engine to run it ;)
Well, it's not as if this is the first free alternative. Here is a wonderful, incredibly powerful tool, not written in Java but in Free Pascal, which is probably too often underestimated: Xidel[1]. Just have a look at the features and check its GitHub page[2]. I've often been amazed at its capabilities and, apart from web scraping, I mainly use it for XQuery execution - so far the latest version 0.9.9 has also implemented XPath/XQuery 3.1 perfectly for my requirements. Another insider tip is that XPath/XQuery 3.1 can also be used to transform JSON wonderfully - JSONiq is therefore obsolete.
[1] https://www.videlibri.de/xidel.html
[2] https://github.com/benibela/xidel
Forgot to add: for the latest XQuery, up to 4.0, there is also BaseX [1] - this time a Java program. It has a great GUI/IDE for rapid XQuery prototyping.
[1] https://basex.org/basex/xquery/
Interesting, did not know about that one! Thanks. One (small) "but": XSLT is not covered by it, which is unfortunately my main usage of XPATH.
I will do some experiments with using newer XPATH on JSON... that could be interesting.
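To make that last point concrete, here is a small sketch of XPath 3.1 over JSON using the standard fn:json-doc function and the map/array lookup operator; the file name and JSON shape are invented for illustration:

    (: products.json: { "items": [ { "name": "widget", "price": 12.5 }, ... ] } :)
    json-doc('products.json')?items?*[?price gt 10]?name

The ?key and ?* lookups walk maps and arrays much like path steps walk elements, which is what makes the "JSONiq is obsolete" argument plausible.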
I've worked on archive projects with complex TEI xml files (which is why when people say xml is bad and it should be all json or whatever, I just LOL), and fortunately, my employer will pay for me to have an editor (Oxygen) that includes the enterprise version of Saxon and other goodies. An open-source xml processing engine that wasn't decades out of date would be a big deal in the digital humanities world.
I don't think people realize just how important XML is in this space (complex documentary editing, textual criticism, scholarly full-text archives in the humanities). JSON cannot be used for the kinds of tasks to which TEI is put. It's not even an option.
Nothing could compel me to like XSLT. I admire certain elements of its design, but in practice it just seems needlessly verbose. I really love XPath, though.
My hope is that we can get a little collective together that is willing to invest in this tooling, either with time or money. I didn't have much hope before, but after seeing the positive response today I have more than I did.
Oxygen was such a clunky application back when I used it for DH. But very powerful and the best tool in the game. Would love to see a modern tool that doesn't get in the way for all those poorly paid, overworked DH research assistants caffeinated in the dead of night banging out the tedious, often very manual, TEI-XML encoding work...
> I worked on projects which refused to use anything more modern than XSLT & XPATH 1.0 because of lack of support in the non Java/Net World (1.0 = tech from 1999).
For some things that may just be down to how uses are specified. For YANG, the spec calls out XPath 1.0 as the form in which constraints (must and when statements) must be expressed.
So one is forced to learn and use XPath 1.0.
There are many humongous XML sources. E.g. the Wikipedia archive is 42GB of uncompressed text. Holding a fully parsed representation of it in memory would take even more, perhaps even >100GB, which immediately puts this size of document out of reach.
The obvious solution is streaming, but streaming appears to not be supported, though it is listed under Challenging Future Ideas: https://github.com/Paligo/xee/blob/main/ideas.md
How hard is it to implement XML/XSLT/XPATH streaming?
Anything could be supported with sufficient effort, but streaming hasn't been my priority so far and I haven't explored it in detail. I want to get XSLT 3.0 working properly first.
There's a potential alternative to streaming, though - succinct storage of XML in memory:
https://blog.startifact.com/posts/succinct/
I've built a succinct XML library named Xoz (not integrated into Xee yet):
https://github.com/Paligo/xoz
The parsed in memory overhead goes down to 20% of the original XML text in my small experiments.
There are a lot of questions about how this performs in the real world, but this library also has very interesting properties, like "jump to the descendant with this tag without going through intermediaries".
> I want to get XSLT 3.0 working properly first
May I ask why? I used to do a lot of XSLT in 2007-2012 and stuck with XSLT 2.0. I don't know what's in 3.0 as I've never actually tried it, but I never felt there was some feature missing from 2.0 that prevented me from doing something.
As for streaming, an intermediary step would be the ability to cut up a big XML file in smaller ones. A big XML document is almost always the concatenation of smaller files (that's certainly the case for Wikipedia for example). If one can output smaller files, transform each of them, and then reconstruct the initial big file without ever loading it in full in memory, that should cover a huge proportion of "streaming" needs.
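A sketch of the splitting side in XSLT 2.0+, using xsl:result-document to write one file per record. The element names are invented, and note that this particular approach still parses the whole input document, so it only removes the need to hold a huge output in memory:

    <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/dump">
        <xsl:for-each select="record">
          <!-- write each record to its own small file -->
          <xsl:result-document href="record-{position()}.xml">
            <xsl:copy-of select="."/>
          </xsl:result-document>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>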
0.2x of the original size would certainly make big documents more accessible. I've heard of succinct storage, but not in the context of xml before, thanks for sharing!
"How hard is it to implement XML/XSLT/XPATH streaming?"
It's actually quite annoying in the general case. It is completely possible to write an XPath expression that matches a super early tag based on a condition on an arbitrarily distant later tag.
In another post in this thread I mention how I think it's better to think of it as a multicursor, and this is part of why. XPath doesn't limit itself to just "descending", you can freely cursor up and down in the document as you write your expression.
So it is easy to write expressions where you literally can't match the first tag, or be sure you shouldn't return it, until the whole document has been parsed.
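Two toy expressions that show the difference (element names invented):

    /log/entry/message
        forward-only; each match can be emitted as soon as it is parsed
    //chapter[following::appendix]/title
        whether an early <chapter> matches depends on elements that may appear
        arbitrarily later, so nothing can be emitted until the whole document is read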
I think that, on the grammar side, XPath made some decisions that make it really hard to implement efficiently in general. About 10 years ago I was looking into binary XML systems and compiling stuff down for embedded systems, and realized that it is really hard to e.g. create efficient transducers (in/out pushdown automata) for XSLT due to the complexity of XPath.
Would it be possible to transform a large XML document into something on-disk that could be queried like a database by the XPath evaluator?
100GB doesn't sound that out of reach. It's expensive in a laptop, but in a desktop that's about $300 of RAM, and it's supported by many consumer mainboards. Hetzner will rent me a dedicated server with that amount of RAM for $61/month.
If the payloads in question are in that range, the time spent to support streaming doesn't feel justified compared to just using a machine with more memory. Maybe reducing the size of the parsed representation would be worth it though, since that benefits nearly every use case.
I just pulled the 100GB number out of nowhere, I have no idea how much overhead parsed xml consumes, it could be less or it could be more than 2.5x (it probably depends on the specific document in question).
In any case I don't have $1500 to blow on a new computer with 100GB of ram in the unsubstantiated hope that it happens to fit, just so I can play with the Wikipedia data dump. And I don't think that's a reasonable floor for every person that wants to mess with big xml files.
I used to work for a NoSQL company whose product was more or less an XQuery engine. We used Wikipedia as a test data set, so the standing joke among those of us dealing with really big volumes was to complain about "only testing Wikipedia"-sized things. Good times.
StackExchange also; not necessarily streamable, but records are newline-delimited, which makes it easier to sample (at least the last time I worked with the Data Dump).
Is that all in one big document?
We regularly parse ~1GB XML documents at work, and got laughed at by someone I know who worked with bulk invoices when I called it a large XML file. Not sure how common 100GB files are, but I can certainly imagine that being the norm in certain niches.
This, thirty years later, is the best pitch for XML I’ve read. Essentially, it’s a slow moving, standards-based approach to data interoperability.
I hated it the minute I learned about it, because it missed something I knew I cared about, but didn’t have a word for in the 90s - developer ergonomics. XML sucks shit for someone who wants to think tersely and code by hand. Seriously, I hate it with a fiery passion.
Happily, to my mind, the easier-for-creators economics won out: web browsers and rendering engines either just DEAL with weird HTML, or else people are pushed toward terse data specs like JSON. And we have a better and more interesting internet because of it.
However, I’m old enough now to appreciate there is a place for very long-standing standards in the data and data transformation space, and if the XML folks want to pick up that banner, I’m for it. I guess another way to say it is that XML has always seemed to be a data standard which is intended to be what computers prefer, not people. I’m old enough to welcome both, finally.
> XML has always seemed to be a data standard which is intended to be what computers prefer, not people.
On one hand, you aren't wrong: XML has in fact been used for machine-to-machine communication mostly. OTOH, XML was just introduced as a subset of SGML doing away with the need for vocabulary-specific markup declarations for mere parsing, in favor of always requiring explicit start- and end-element tags. Whereas HTML is chock full of SGMLisms such as tag inference (for example inferring paragraph ends on block elements), empty ("self-closing") elements and enumerated ("boolean") attributes driven by per-element declarations.
One can argue to death whether the web should work as a mere document delivery network with rigid markup a la XML, or that browsers should also directly support SGML authoring idioms such as the above shortform mechanisms. SGML also has text macros/shared fragments (entities) and even allows defining own parsing tokens for markdown, math, CSV, or custom syntaxes. HTML leans towards SGML in that its documentation portrays HTML as an authoring language, but browsers are lacking even in basic SGML features such as entities.
That’s a flame war that’s been raging for decades for sure.
I do wonder what web application markup would look like today if designed from scratch. It is kind of amazing that HTML and CSS can be used for creating beautiful documents viewable on pretty much any device with a screen AND also for creating dynamic applications with pixel-perfect rendering, special effects, integrations with the device’s hardware, and even external peripherals.
If there was ever scope creep in a project this would be it. And given the recent discussion on here of curses based interfaces it reminded me just how primitive other GUI application layout tools can be while still achieving amazing results. Even something like GTK does not need the intense level of layout engine support and yet is somehow considered richer in some ways and probably more performant for a lot of stuff that’s done with it.
So I am curious what web application development would look like today if it wasn’t for HTML being “good enough”.
"This, thirty years later, is the best pitch for XML I’ve read."
I wish someone would write "XML - The Good Parts".
Others might argue that this is JSON but I'd disagree:
- No comments is a non-starter
- No proper integers
- No date format
- Schema validation is a primitive toy compared to what we had for XML
- Lack of allowed trailing commas
YAML ain't better. I hated whitespace handling in XML, it's a miracle how YAML could make it even worse.
XML is from an era long past and I certainly don't want to go back there, but it had its good parts, and I feel we have not really learned a lot from its mistakes.
In the end maybe it is just that developer ergonomics is largely a matter of taste and no language will ever please everyone.
It's funny to hear people in the comments here talk about XML in the past tense.
I know it's passé in the web dev world, but in my work we still work with XML all the time. We even have work in our queue to add support for new data sources built on XML (specifically QIF https://qifstandards.org/).
It's fine with me... I've come to like XML. It's nice to have a standard, easy way to do schemas, validators, processors, queries, etc. It can be overdone and it's not for every use case, but it's pretty good at what it does.
Developer ergonomics is drastically underappreciated, even in modern times. Since we're talking about textual data formats, I'll go out on a limb here and say that I hate YAML. Double checking exactly how many spaces are present on each line is tedious. It manages to make a simple task like copy-pasting something from a different file (at a different indentation level) into an error-prone process. I'll take angle brackets any day.
You haven’t felt hate until you’ve counted spaces in your Helm templates in order to know what value to put after `nindent`. The punchline is that k8s doesn’t even speak yaml, the protocol is all json and it’s the tooling that inflicts yaml on us.
I can live with yaml as a config format, but once logic starts creeping in, give me anything else.
Working with large YAML documents is incredibly annoying and shows the benefit of closing tags.
JSON5 is a real sweet spot for me. Closing brackets, but I don't have to type every tag twice. Comments and trailing commas.
> Developer ergonomics is drastically underappreciated, even in modern times.
When was the last time you had an editor that wouldn't just auto-close the current tag with "</"? I mean, it's a godsend for knowing where you are in a large structure. You aren't scrolling to the top to find which tag you are in.
>XML has always seemed to be a data standard which is intended to be what computers prefer, not people
Interesting take, but I'm always a little hesitant to accept any anthropomorphizing of computer systems.
Isn't it always about what we can reason and extrapolate about what the computer is doing? Obviously computers have no preference so it seems like you're really saying
"XML is a poor abstraction for what it's trying to accomplish" or something like that.
Before jQuery, Chrome, and Web 2.0, I was building XSLT-driven web pages that transformed XML in an early NoSQL doc store into HTML, and it worked quite beautifully and allowed us to skip a lot of schema work that we definitely weren't ready or knowledgeable enough to do.
EDIT: It was the perfect abstraction and tool for that job. However the application was very niche and I've never found a person or team who did anything similar (and never had the opportunity to do anything similar myself again)
I did this for many years at a couple different companies. As you said it worked very well especially at the time (early 2000’s). It was a great way to separate application logic from presentation logic especially for anything web based. Seems like a trivial idea now but at the time I loved it.
In fact the RSS reader I built still uses XSLT to transform the output to HTML as it’s just the easiest way to do so (and can now be done directly in the browser).
Re xslt based web applications - a team at my employer did the same circa 2004. It worked beautifully except for one issue: inefficiency. The qps that the app could serve was laughable because each page request went through the xslt engine more than once. No amount of tuning could fix this design flaw, and the project was killed.
Another reason was the overall XML ecosystem grew unwieldy and difficult to navigate: XPath, XSLT, SOAP, WSDL, XPointer, XLink, XForms... They all made sense in their own way, but it was difficult to master them all. That complexity, plus the poor ergonomics, is what paved the way for JSON to become preferred.
I quite liked it when it first came out, I'd been dealing with a ton of bespoke formats up until then. Pretty much every one was ambiguous and painful to deal with. It was a step forward being able to push people towards a standard for document transfer.
I suspect it was SOAP and WSDL that killed it for a lot of people though. That was a typical example of a technical solution looking for a problem and complete overkill for most people.
The whole namespace thing was probably a step too far as well.
You should try using a LISP like Racket for XML. Because XML can be expressed directly as S-expressions, XML and LISP go together like peanut butter and jelly.
In my experience, at least with Clojure, it's much more convenient to serialize XML into a map-like structure. With your example, the data structure would look like so.
Some people use namespaced keywords (e.g. :xml/tag) to help disambiguate keys in the map. This kind of data structure tends to be more convenient than dealing with plain sexps or so-called "Hiccup syntax". i.e.
The above syntax is convenient to write, but it's tedious to manipulate. For instance, one needs to dispatch on types to determine whether an element at some index is an attribute map or a child. By using the former data structure, one simply looks up the :attrs or :content key. Additionally, the map structure is easier to depth-first search; it's a one-liner with the tree-seq function.
I've written a rudimentary EPUB parser in Clojure and found it easier to work with zippers than any other data structure to e.g. look for <rootfile> elements with a <container> ancestor.
Zippers are available in most programming languages, thankfully, so this advantage is not really unique to Clojure (or another Lisp). However, I will agree that something like sexps (or Hiccup) is more convenient than e.g. JSX, since you are dealing with the native syntax of the language rather than introducing a compilation step and non-standard syntax.
This looks like it loses the distinction between attributes and nested tags?
As in, I don't see a difference between `(attr "val")` which expresses an attribute key/value pair and `(thing "world")` which expresses a tag/content relationship. Even if I thought the rule might be "if the first element of the list is a list itself then it should be interpreted as a set of attribute key value pairs", then it would still be ambiguous with:
(foo (bar "baz") "content")
which could serialize to either:
<foo bar="baz">content</foo>
or:
<foo><bar>baz</bar>content</foo>
In fact, this ambiguity between attributes and children has always been one of the head scratching things for me about XML. Well, the thing I've always disliked the most is namespaces but that is another matter.
I used to do a lot of XSLT coding, by hand, in text editors that weren't proper IDEs, and frankly it wasn't very hard to do.
There's something very zen-like with this language; you put a document in a kind of sieve and out comes a "better" document. It cannot fail; it can be wrong, full of errors, of course (although if you're validating the result against a schema it cannot be very wrong); but it will almost never explode in your face.
And then XSLT work kind of disappeared; I miss it a lot.
I'm gonna be honest, I find terseness to be highly overrated by programmers. I value it in moderation, but for a lot of people they say things like "this language is verbose" like that is a problem unto itself. If verbosity is gaining you something (generally clarity), then I think that's a reasonable cost to pay. Terseness is not, in my opinion, a goal unto itself (though many programmers certainly treat it as such). It's something you should seek only to the extent that it makes a language easier to use.
And not only does the XML format have bad developer ergonomics, most XML parsers are equally terrible to use. There are many things I like about XML: namespaces, schemas, XPath, to some degree even XSLT. But the typical XML developer experience is terrible on every layer.
YAML is great. For simple configuration files. For anything more complex it gets gnarly quick, but honestly? If I need a config file for a script I'm writing I will reach for YAML every time. It really is amazing for that use case.
> XML sucks shit for someone who wants to think tersely and code by hand. Seriously, I hate it with a fiery passion.
At the risk of glibly missing the main point of your comment, take a look at KDL. Unlike JSON/TOML/YAML, it features XML-style node semantics. Unlike XML, it's intended to be human-readable and writeable by hand. It has specifications for both a query language and a schema language as well as implementations in a bunch of languages. https://kdl.dev/
The main thing I hate about XML (apart from the tedious syntax and terrible APIs - who thought SAX was a sane idea?) is that the data model is wrong for 99% of use cases.
XML gives you an object soup where text objects can be anywhere and data can be randomly stored in tags or attributes.
It just doesn't at all match the object model used by basically all programming languages.
I think that's a big reason JSON is so successful. It's literally the object model used by JavaScript. There's no weird impedance mismatch between the data represented on disk and in your program.
Then someone had to go and screw things up with YAML...
I can’t say this with certainty, but I have some reason to suspect I might be partially to blame for this fun fact!
A couple years ago, I stumbled on a discussion considering deprecation/removal of XSLT support in Chrome. At some point in the discussion, they mentioned observing a notable uptick in usage—enough of an uptick (from a baseline of approximately zero) that they backed out.
The timing was closely correlated with work I’d done to adapt a library, which originally used XSLT via native Node extensions, to browser XSLT APIs. The project isn’t especially “popular” in the colloquial sense of the term, but it does have a substantial niche user base. I’m not sure how much uptake the browser adaptation of this library has had since, but some quick napkin math suggested it was at least plausible that the uptick in usage they saw might have been the onslaught of automated testing I used to validate the change while I was working on it.
Being interested in archaic technologies, I built a website using XML/XSLT not that long ago. The site was an archive of a band I was in, which made it fundamentally data oriented: We recorded multiple albums, with different tracks, and a different lineup of musicians each time. There are lots of different databases I could build a static site generator around, but what if the browser could render the page straight from the data? That's what's cool about XML/XSLT. On paper, I think it's actually a pretty nice idea: The browser starts by loading the actual data, and then renders it into HTML according to a specific stylesheet. Obviously the history of browser tech forked in a different direction, but the idea remains good. What if there was native browser support for styling JSON into HTML?
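For anyone who hasn't seen the XML/XSLT version of this, the mechanism is a single processing instruction at the top of the XML file. The names below are invented, but the shape is exactly this:

    <?xml-stylesheet type="text/xsl" href="band.xsl"?>
    <discography>
      <album year="2003">
        <track n="1">Opening Song</track>
      </album>
    </discography>

The browser fetches band.xsl, runs the (XSLT 1.0) transform, and renders the resulting HTML instead of the raw XML.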
The fact it can be compiled to WASM is a good thing, given the Chrome team was considering removing libxml and XSLT support a few years back. The reasons cited were mostly about security (and share of users).
It's another proof that working on fundamental tools is a good thing.
Very cool! I recently wrote an XSLT 2 transpiler for js (https://github.com/egh/xjslt) - it's nice to see some options out there! Writing the xpath engine is probably the hard part (I relied on fontoxpath). I'm going to be looking into what you have done for inspiration!
XPath is a very nice language for querying over XML. Most places pitch it as a "declarative" syntax, but as I am quite skeptical of "declarative" as a concept, you can also look at the vast majority of the XPath standard as a way to imperatively drive a multicursor over an XML document, diving in out and out nodes and extracting bits of text and such, without having to write the equivalent code in your language to do so, which will be inevitably quite a bit more verbose. When you need it, it's really useful.
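As a small illustration of that up-and-down movement (element names invented), a single expression can dive down to a node and then climb back out of it:

    //figure[contains(caption, 'latency')]/ancestor::chapter/preceding-sibling::chapter[1]/title

That is: find any figure whose caption mentions "latency", walk up to its enclosing chapter, then over to the title of the chapter just before it.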
In my very opinionated opinion, XPath is about 99% of the value of XSLT, and XSLT itself is a misfire. Embedding an XML language in XML, rather than being an amazing value proposition, is actually a huge and really annoying mistake, in much the same way and for much the same reason as anyone who has spent much time around shell scripting has found trying to embed shell strings in shell strings (and, if the situation is particularly dire, another third or fourth level of such nesting) is quite unpleasant. Imagine trying to deal with bash, except you have to first quote all the command lines as bash strings like you're using bash -c, all the time. I think "XPath + your favorite language" has all the power of XSLT and, generally, better ergonomics and comprehensibility. Once you've got the selection of nodes in hand, a general-purpose programming language is a better way to deal with their contents than what XSLT provides. Hence why it has always languished.
XQuery is the best of both worlds - you get almost all the benefits of XSLT like e.g. the ability to define your own functions, but with non-XML-based syntax that is a superset of XPath.
Basically the only thing missing in XQuery vs XSLT is template rules and their application; but IMO simple ones are just as easy to write explicitly, and complex rulesets are hard to reason about and maintain anyway.
It’s been a while since I’ve had to deal with XML, but I remember finding it fairly convenient to restructure XML documents with XSLT. Modifying the data in those documents, much less so. I think there’s a sweet spot.
To someone who hasn’t worked much with XML, this seems like a reasonable take!
For cases where a host system wants to execute user-defined data transformations safely, XSLT seems like it might be useful. When they mature, maybe WASM and WASI will fill the same niche with better developer ergonomics?
Interesting take about XSLT. But I agree... XSLT could be something much simpler (and not XML in itself), combined with XPATH. It feels like a lot of boilerplate to write XSLT.
XPATH+XSLT is SQL for XML, declarative selection and transformation.
Using an XML library to iterate through an entire XML document without XPATH is like looping through entire database tables without a JOIN filter or a WHERE clause.
XSLT is the SELECT, transforming XML output with a new level of crazy for recursion.
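A rough side-by-side of that analogy, with an invented schema; the SQL equivalent is in the comment:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <!-- roughly: SELECT name FROM customer WHERE country = 'SE' -->
      <xsl:template match="/">
        <xsl:for-each select="/orders/customer[country = 'SE']">
          <xsl:value-of select="name"/>
          <xsl:text>&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>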
XPath is a superb query language for XML (or anything that you can structure as a DOM) --- it is also, with some obscure exceptions, the only query language with serious adoption, so it's an easy choice and readily available in XML tools. The only caveat is there are various spec versions and most never added support for newer versions.
Let's look at JSON by comparison. Hmm, let's see: JSONPath, JMESPath, jq, jsonql, ...
JQ is the most feature-rich of the bunch. It's the de facto standard and I usually just default to it because it offers so much - assignment, various builtins such as base64 encoding.
The disadvantage is that it's not easily embeddable in your own programs - so programs use JSONPath / Go templates often.
I manage a team who build and maintain trading data reports for brokers; we have everything generated in a fairly standard format and customize it to those brokers' exact needs with XSLT. Hundreds of reports, couldn't manage without it.
E.g. massive XML documents with complexity which you need transformed into other structured XML. Or if you need to parse complex XML. Some people hate XSLT and XPATH with a passion and would rather write much more complex lxml code. It has a steep learning curve, but once you understand the fundamentals you can transform XML more easily, and above all more predictably and reliably, than ever.
Another example: if you have very large XML that you cannot even fit into memory, you can still stream-process it with XSLT.
It makes you the master of XML transformations and fetching information out of complex XML ;)
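For the curious, streaming in XSLT 3.0 is opt-in per mode and needs a processor that actually implements it (Saxon-EE being the usual example). A minimal sketch with invented element names, pulling titles out of an arbitrarily large file:

    <xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:mode streamable="yes" on-no-match="shallow-skip"/>
      <!-- emit each title as it streams past, without ever building the full tree -->
      <xsl:template match="record/title">
        <xsl:value-of select="."/>
        <xsl:text>&#10;</xsl:text>
      </xsl:template>
    </xsl:stylesheet>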
What alternatives exist for extracting structured data from the web? I have several ETL pipelines that use htmltidy to turn tag soup into something approximately valid and xmlstarlet to transform it into tabular data.
I have used it when scraping some data from web pages using the Scrapy framework. It's a reliable way to extract something from web pages compared to regex.
Love to see stuff outside the Java space, since I really like doing stuff in XSLT. Question: does this work on a textual XML representation, or can you plug in different XML readers? I have had really great fun in the past using http://www.ananas.org/xi/ to transform arbitrarily formatted files using XSLT. Also, it is really important today that an XML reader has error-correction capabilities, since lots of tools don't write well-formed XML, which in my experience is often a showstopper for employing transforms.
I wonder if this could perhaps some day be used in Wine, for the MSXML implementations. Maybe not, since those implementations need to be bug-compatible where applications depend on said bugs; but the current implementation(s) are also not fantastic. I believe it is still using libxml2.
(Aside: A long time ago, I had written an alternate XPath 1.1 implementation for Wine during GSoC, but rather shamefully, I never actually got it merged. Life became very hectic for me during that time period and I never really looped back to it. Still feel pretty bad about it all these years later.)
Nice!
I have a scraper using XPath/XSLT extensively, and 90% of the XPath selectors have worked for years without a change.
With CSS selectors I've had more problems...
CSS selectors have spent the last few decades reinventing XPath. XPath introduced right from the beginning the notion of axes, which allow you to navigate down, up, preceding, following, etc. as makes sense. XPath also always had predicates, even in version 1.0. CSS just recently started supporting :has() and :is(), in particular. Eventually, CSS selectors will match XPath's query abilities, although with worse syntax.
The problem with CSS selectors (at least in scrapers) is also that they change relatively often compared to the (HTML) document structure; that's why XPath selectors last longer.
But you are right, CSS selectors compared to 20-year-old XPath are really worse.
- XPath literally didn't exist when CSS selectors were introduced
- XPath's flexibility makes it a lot more challenging to implement efficiently, even more so when there are thousands of rules which need to be dynamically reevaluated at each document update
- XPath is lacking conveniences dedicated to HTML semantics, and handrolling them in xpath 1.0 was absolutely heinous (go try and implement a class predicate in xpath 1.0 without extensions)
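For reference, the CSS selector div.hero has to be spelled out in XPath 1.0 roughly like this:

    //div[contains(concat(' ', normalize-space(@class), ' '), ' hero ')]

(tokenize(@class, '\s+') = 'hero' is much nicer, but that needs XPath 2.0+.)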
I have a service that extracts <meta> tags in webpages and to do that I'm currently using (and need) three different dependencies: html5ever, markup5ever_rcdom, markup5ever. I don't like those to be honest, the documentation is quite bad and it was difficult to understand how I should have used the libraries to achieve such a simple task.
XPath on the other hand makes this extremely easy in comparison, I wonder how this will perform compared to my current solution.
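Once the tag soup has been coaxed into a DOM (the part the html5ever stack does today), the extraction itself collapses to expressions like these, namespace headaches aside:

    //meta[@charset]/@charset
    //meta[@property = 'og:title']/@content
    //meta[@name = 'description']/@content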
Unfortunately at this point there's no HTML parser frontend for Xee (and its underlying library Xot) yet (HTML 5 parser serialization is supported at least in code). It shouldn't be too hard to add at least HTML 5 support using something like html5ever.
I always hate it when license files have "yes, but" language in them because if the license file differs in some non-obvious way, now I have to pay lawyers to interpret it
Doesn’t look like “yes, but” language to me. Looks like the code is plain old MIT and the author is doing their due diligence with respect to vendored content in the repository subject to different licensing. Seems like they are being paid by a company to work on this, so it’s not surprising that they actually pay attention to copyright.
The fact that many project maintainers forget about vendored content and haphazardly slap the MIT license (or whatever) verbatim into a LICENSE file doesn’t actually give you a get-out-of-paying-lawyers-free card! If anything, Xee’s COPYRIGHT file gives me more confidence in my legal footing than an unadulterated LICENSE file would. It indicates the maintainer at least has a basic understanding of how copyright applies to their project.
Nice! I tried using XQuery (superset of XPath 3) for a while through the BaseX implementation. It's pretty nice, but you have to face XML problems like namespaces, document order, attributes vs nodes, you don't know if you can have 0, 1 or more nodes, etc. Something I wish was more readily available would be to run XPath against JSON, yaml, etc. It's a nicer language than say jq, but its ties to XML sometimes make it hard to transfer.
Another pain point with XML is the lack of an inline schema, so the languages around it, like XPath, have to work with arbitrary structures, unlike say JSON where you at least have basic primitives like map/dict, numbers, bool, etc.
I recently had the pleasure of using XSLT after never having seen it before. I used it to transform a huge 130K line XML manifest with MAPI property metadata into C# source code. It was so simple, readable, and intuitive to use.
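A sketch of what that kind of transform tends to look like - text output mode, with the MAPI manifest element and attribute names invented here:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="/manifest">
        <xsl:for-each select="property">
          <xsl:text>public const int </xsl:text>
          <xsl:value-of select="@name"/>
          <xsl:text> = </xsl:text>
          <xsl:value-of select="@tag"/>
          <xsl:text>;&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>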
I learnt XSLT in university back in the early/mid part of the first decade of this century. I didn't much enjoy it. I've never used it since, but all my career I've had to deal with terrible ad hoc templating languages. I recently had total freedom to choose what terrible ad hoc templating language to use, and I chose XSLT. I actually totally liked it: it seemed to have everything I've needed. In previous jobs, there were always tickets that amounted to "make a fork of the terrible ad hoc templating language and hack it until it does this", but I reckon XSLT could do everything and then some.
This is fantastic to see! I've used XML off and on since it was the red hot tech of the early 2000s. I wouldn't choose it today for a green field project, but it's still around in so many places, so we definitely need a high-performance, high-quality library written in Rust for this.
This could become a great foundation for a typed, (mostly) etree-compatible Python library. I've used lxml for years and it's still my go-to, but there are lots of places where it could be modernized.
This is great, I’ve been looking for performant and safe XML processing to replace IBM stuff (websphere/datapower) that we really only keep around for hw accelerated payload processing. At our scale, lxml and others + BYO gateway tech has a similar run cost even considering IBM licensing. I hate running their crap, which requires k8s at a version that’s some hair-thin slice above the minimum supported EKS version, it’s almost like they want us to live in 24/7 fear of being OOS.
This is really good news, I am looking forward to trying it out! Is XQuery also planned as an additional frontend? By the way, there is also χrust, a rust project working towards pretty similar goals (XPath 3.1, XQuery 3.1 and XSLT 3.0). At first glance, the architecture also seems quite similar, it is not as far along, though. Have you had any contact with them?
Just want to say that Microsoft has some sort of implementation of an XML application feature in Microsoft Word. But I have struggled to find examples I can use; for a long time I have been trying to convert an office repository of corporate resumes to XML.
I miss the XHTML and XSL times. A time when the Web would have been more prepared for AI consumption: less dynamic nonsense, and more focus on the actual content. Time has shown that all those Flash and Java gimmicks died off.
NCBI still emits XML from their most prominent databases (e.g., PubMed). I'm looking forward to adopting this library into some of my production code that interfaces with PubMed!
XSLT was not popular for its original intended application - which is to say, serving XML data from web servers and translating it to HTML (or XSL:FO, or ...) on the client as needed. However, it was used plenty for XML processing outside of that particular niche.
New projects these days rarely have to process complicated XML to begin with. But when you do, I'd say XSLT (or perhaps better yet, XQuery) is a very useful tool to have in your toolbox.
Syntext Serna was such an engineering marvel: a WYSIWYG XML editor that used XSLT-to-FO to specify your rendering. It was built in the context of DocBook and DITA but worked for any XSD with an XSLT-to-FO stylesheet. Amazing technology, ahead of its time. And then came JSON :-(
As opposed to what for cooking "PDF via XML" files? Because I can assure you that feeding rando.odt into $(libreoffice -pdf $TMPDIR/ohgawd) is 100% not the same as $(fop -fo $TMPDIR/my.fo -pdf $TMPDIR/out.pdf)
There are a lot of APIs out there that are still XML-based, especially from enterprise suppliers.
Equifax and Experian’s APis immediately come to mind as documents that generate complex results that people often want to turn into some type of visual representation with XSLT.
I see a lot of XML APIs and formats around me, it is true. But they are machine-to-machine formats or complex configuration file formats which don't need visualization. They need schema support and tooling, but not visualization or transformation. They are more like serialization formats for complex object trees, and all processing is done on those object trees, not on the XML itself.
Nice work. XPath is a beast. Obvious why Paligo would be interested too. There must be a lot of commercial documentation out there where the best representation they can get looks a bit XMLish.
I yearn for the day when people will stop considering the main advertising bullet point feature that their software was written in Rust. Rust 1.0 was released a decade ago, plenty of time for its alleged technical advantages to become apparent.
It's like a handbag whose main claim to being a premium product isn't workmanship or materials, but that it has Gucci on its side.
> It's like a handbag whose main claim to being a premium product isn't workmanship or materials, but that it has Gucci on its side.
Knockoffs aside, the latter is intended to serve as a proxy for the former. I too will be happy when Rust is the boring everyday choice, but in 2025 we still see new buffer overflows every day. And if I'm picking a library, I still want to know if it's in the same language as the app it's going into.
An xpath/xslt engine is something you might want to include in other software, the programming language used might be an important information for this purpose.
Does it preserve whitespace? Something that I always found asinine about XSLT is that it wipes out whitespace when transforming. Imagine you have thousands of corporate XML files in source control, and you want to transform them all, performing some simple mutation. XSLT claims to be fit for this job, but in practice your diff is going to be full of unintentional whitespace mangling.
XSLT will perform the transformations that you instruct it to do. It does not wipe out whitespace just on its own. Do you mean that you'd like facilities to nicely reindent the output?
> It does not wipe out whitespace just on its own.
Sounds nice but doesn't match my lived experience with both Chrome's built-in XSLT processor and `xsltproc`. (I was using XSLT 1.0, for legacy reasons, so maybe this is an XSLT 1.0 issue?)
> Do you mean that you'd like facilities to nicely reindent the output?
No, I do mean preserve whitespace (i.e., formatting), such as between elements and between attributes.
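For concreteness, the plain identity transform below copies every text node - whitespace included - unchanged; loss of whitespace between elements usually comes from xsl:strip-space, indent="yes", or a processor's defaults rather than from XSLT itself. Whitespace between attributes is a different story: it isn't part of the data model at all, so no XSLT processor can round-trip it.

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="no"/>
      <xsl:preserve-space elements="*"/>
      <!-- identity transform: copy every node, whitespace-only text nodes included -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
    </xsl:stylesheet>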
It's interesting to see the slow rehabilitation of XML and its tooling now that there's a new generation of developers who have not grown up in the shadow of XML's prime in the late 90s / early 2000s, and who have not heard (or did not buy into) the anti-XML crowd's ranting --- even though some of their criticisms were legitimate.
I've always liked XML, and especially XPath, and even though there were a large number of missteps in the heyday of XML, I feel it has always been unfairly maligned. Look at all the people who reinvent XML tooling but for JSON, but not nearly as well. Luckily, people who value XML can still use it, provided the fit is right. But it's nice to see the tides turning.
It’s the “slope of enlightenment” phase of the Gartner hype cycle, where people are able to make sober assessments of technologies without undue influence from hype or its backlash. We’re long past the days where XML is used for everything, even when it’s inappropriate, and we’re also past the “trough of disillusionment” phase where people sought alternatives to XML.
I think XML is good for expressing document formats and for configuration settings. I prefer JSON for data serialization, though.
I made extensive use of XPath and XSL(T) back in their heyday and in general was fine with them but the architect astronauts who love showing off how clever they are with artificial complexity had a tendency to make use of XML tech to complicate things unnecessarily. Think that might be where many people's dislike of it came from, especially those whose first exposure wasn't learning through simple structures when XML was new but were thrown into the type of morass that develops when a tech is climbing the maturity curve.
I manage a team of business analysts and accountants who use XSLT for generating reports for banks, XSLT is usually their first experience programming outside some linkedin learning courses. Not once has one of them ever complained about namespaces, or verbosity or anything like it, this is something I only see on HN or the programming subreddits.
The vast, vast majority of devs' only experience of XML is what they hear second-hand. I'm sure a lot more would like it if they tried it.
My complaints about XML remain pretty much unchanged since 10 years ago.
- Not including self-closing tags, there should only be one close tag: </>
- Elements are for data. Attributes are evil
- XPath indexing should be 0-based
- Documents without a schema should not make your tools panic or complain
- An xml document shouldn't have to waste its time telling you it's an xml document in xml
I maintain that one of the reasons JSON got so popular so quickly is because it does all of the above. The problem with JSON is that you lose the benefits of having a schema to validate against.
Microsoft seems to be especially obsessed with making as much as possible into attributes. Makes me wonder if there is some hidden historical reason for that like an especially powerful evangelist inside the company that loved attributes during the early days of adopting XML.
This is like, your opinion, man... ;-) You can devise your schema any way you want. Attributes are great, and they exist in HTML in the form of datasets, which, as usual, are a poorly-specified and ill-designed rethinking of XML attributes
> Documents without a schema should not make your tools panic or complain
They don't. You absolutely don't need a schema. If you declare a schema, it should exist. If not, no problem?
There have been proposals a long time ago, including by Tim Bray, for an XML 2.0 that would remove some warts. But there was no appetite in the industry to move forward.
XML/XPath are very useful but I've definitely lived through their abuses. Still, abusus non tollit usum, and I've had many positive experiences with XPath especially. XmlStarlet has been especially useful, also xmllint. I welcome more tooling like this. The major downside to XML is the verbosity and cognitive load. Tooling that manages that is a godsend.
XML is still a huge mistake for most stuff. It's fine for _documents_ but not as a data storage solution. Bloat, ambiguities, virtually impossible to canonicalise.
XPath is cute, but if you don't mind bloat, text-only data and a lack of ergonomics anyway, then Conjunctive Regular Path Queries and RDF are miles ahead of XML as a data storage solution. (Not serialised as XML please xD)
Curiously, one of the driving forces behind renewed interest in XML is that language models seem to handle large XML documents better than JSON. I suspect this has something to do with it being more redundant - e.g. closing tags including the element name - making it easier for the model to keep track of structure.
XML, and other X[x] standards, are just horrible to read. On top of that, XML was made 10x worse by wrapping things in SOAP and the like over the wire, back in the day.
XSD, XPath, XSLT are all domains where I'd argue that ease of reading and reasoning about the code is way more important.
When troubleshooting an issue, I don't mind scanning XML for a few data points so I can confirm what values are being communicated, but when I need to figure out how/why a specific value came to be, I don't want the logic spread throughout a giant text file wrapped in attribute value strings, and other non-debuggable "code". I'd rather it just be in a proper programming language.
The specifications are certainly not easy to read, and I wouldn't recommend them to learn about XML. But from the perspective of someone implementing them they are quite useful!
As someone who has used many programming languages and who went through the process of implementing this one I have many opinions about XPath and XSLT as programming languages. I myself am more interested in implementing them for others who value using them than using them myself. I do recognize there is a sizeable community of people who do use these tools and are passionate about them - and that's interesting to see and more power to them!
That depends on what I'm doing. Most of what I'm doing is simple, and so XML is just way too complex for the task. However, when I need something complex, XML can handle things that the others cannot - at the expense of being really complex to work with.
Maybe for a similar reason that people deploy a 100-requests-a-week microservice on multiple Kubernetes clusters across 3 AZs to make sure it is highly available.
or like watching someone lovingly restore a fax machine with carbon fiber casing and a neural net to optimize transmission speed. I’m torn between admiration and existential despair.
Great to see that somebody else creates a true open source XSLT 3 and XPATH 3 implementation!
I worked on projects which refused to use anything more modern than XSLT & XPATH 1.0 because of lack of support in the non Java/Net World (1.0 = tech from 1999). Kudos to Saxon though, it was and is great but I wished there were more implementations of XSLT 2.0 & XPATH 2.0 and beyond in the open source World... both are so much more fun and easier to use in 2.0+ versions. For that reason I've never touched XSLT 3.0 (because I stuck to Saxon B 9.1 from 2009). I have no doubt it's a great spec but there should be other ways than only Saxon HE to run it in an open source way.
It's like we have an amazing modern spec but only one browser engine to run it ;)
Well, it's not as if this is the first free alternative. Here is a wonderful, incredibly powerful tool, not written in Java, but in Free Pascal, which is probably too often underestimated: Xidel[1]. Just have a look at the features and check its Github page[2]. I've often been amazed at its capabilities and, apart from web scraping, I mainly use it for XQuery executions - so far the latest version 0.9.9 has also implemented XPath/XQuery 3.1 perfectly for my requirements. Another insider tip is that XPath/XQuery 3.1 can also be used to transform JSON wonderfully - JSONiq is therefore obsolete.
[1] https://www.videlibri.de/xidel.html
[2] https://github.com/benibela/xidel
Forget to add, for latest XQuery up to 4.0, there is also BaseX [1] — this time a Java program. It has a great GUI/IDE for XQuery rapid prototyping.
[1] https://basex.org/basex/xquery/
2 replies →
interesting, did not know about that one! Thanks. (Small) but XSLT is not covered by it which is my main usage of XPATH unfortunately.
I will do some experiments with using newer XPATH on JSON... that could be interesting.
I've worked on archive projects with complex TEI xml files (which is why when people say xml is bad and it should be all json or whatever, I just LOL), and fortunately, my employer will pay for me to have an editor (Oxygen) that includes the enterprise version of Saxon and other goodies. An open-source xml processing engine that wasn't decades out of date would be a big deal in the digital humanities world.
I don't think people realize just how important XML is in this space (complex documentary editing, textual criticism, scholarly full-text archives in the humanities). JSON cannot be used for the kinds of tasks to which TEI is put. It's not even an option.
Nothing could compel me to like XSLT. I admire certain elements of its design, but in practice, it just seems needlessly verbose. But I really love XPath, though.
23 replies →
My hope is that we can get a little collective together that is willing to invest in this tooling, either with time or money. I didn't have much hope, but after seeing the positive response today more than before.
Oxygen was such a clunky application back when I used it for DH. But very powerful and the best tool in the game. Would love to see a modern tool that doesn't get in the way for all those poorly paid, overworked DH research assistants caffeinated in the dead of night banging out the tedious, often very manual, TEI-XML encoding work...
> I worked on projects which refused to use anything more modern than XSLT & XPATH 1.0 because of lack of support in the non Java/Net World (1.0 = tech from 1999).
For some things that may just be down to how uses are specified. For YANG, the spec calls out XPath 1.0 as the form in which constrains (must and when statements) must be expressed.
So one is forced to learn and use XPath 1.0.
There are many humongous XML sources. E.g. the Wikipedia archive is 42GB of uncompressed text. Holding a fully parsed representation of it in memory would take even more, perhaps even >100GB which immediately puts this size of document out of reach.
The obvious solution is streaming, but streaming appears to not be supported, though is listed under Challenging Future Ideas: https://github.com/Paligo/xee/blob/main/ideas.md
How hard is it to implement XML/XSLT/XPATH streaming?
Anything could be supported with sufficient effort, but streaming hasn't been my priority so far and I haven't explored it in detail. I want to get XSLT 3.0 working properly first.
There's a potential alternative to streaming, though - succinct storage of XML in memory:
https://blog.startifact.com/posts/succinct/
I've built a succinct XML library named Xoz (not integrated into Xee yet):
https://github.com/Paligo/xoz
The parsed in memory overhead goes down to 20% of the original XML text in my small experiments.
There's a lot of questions on how this functions in the real world, but this library also has very interesting properties like "jump to the descendant with this tag without going through intermediaries".
> I want to get XSLT 3.0 working properly first
May I ask why? I used to do a lot of XSLT in 2007-2012 and stuck with XSLT 2.0. I don't know what's in 3.0 as I've never actually tried it but I never felt there was some feature missing from 2.0 that prevented me to do something.
As for streaming, an intermediary step would be the ability to cut up a big XML file in smaller ones. A big XML document is almost always the concatenation of smaller files (that's certainly the case for Wikipedia for example). If one can output smaller files, transform each of them, and then reconstruct the initial big file without ever loading it in full in memory, that should cover a huge proportion of "streaming" needs.
2 replies →
0.2x of the original size would certainly make big documents more accessible. I've heard of succinct storage, but not in the context of xml before, thanks for sharing!
2 replies →
"How hard is it to implement XML/XSLT/XPATH streaming?"
It's actually quite annoying on the general case. It is completely possible to write an XPath expression that says to match a super early tag on an arbitrarily-distant further tag.
In another post in this thread I mention how I think it's better to think of it as a multicursor, and this is part of why. XPath doesn't limit itself to just "descending", you can freely cursor up and down in the document as you write your expression.
So it is easy to write expressions where you literally can't match the first tag, or be sure you shouldn't return it, until the whole document has been parsed.
I think from a grammar side, XPath had made some decisions that make it really hard to generally implement it efficiently. About 10 years ago I was looking into binary XML systems and compiling stuff down for embedded systems realizing that it is really hard to e.g. create efficient transducers (in/out pushdown automata) for XSLT due to complexity of XPath.
2 replies →
Would it be possible to transform a large XML document into something on-disk that could be queried like a database by the XPath evaluator?
4 replies →
100GB doesn't sound that out of reach. It's expensive in a laptop, but in a desktop that's about $300 of RAM and our supported by many consumer mainboards. Hetzner will rent me a dedicated server with that amount of ram for $61/month.
If the payloads in question are in that range, the time spent to support streaming doesn't feel justified compared to just using a machine with more memory. Maybe reducing the size of the parsed representation would be worth it though, since that benefits nearly every use case
I just pulled the 100GB number out of nowhere, I have no idea how much overhead parsed xml consumes, it could be less or it could be more than 2.5x (it probably depends on the specific document in question).
In any case I don't have $1500 to blow on a new computer with 100GB of ram in the unsubstantiated hope that it happens to fit, just so I can play with the Wikipedia data dump. And I don't think that's a reasonable floor for every person that wants to mess with big xml files.
5 replies →
I used to work for a NoSQL company that was more or less an XQUERY engine. One of the things we would complain about is we did use wikipedia as a test data set, so the standing joke was for those of us dealing with really big volumes we'd complain about 'only testing Wikipedia' sized things. Good times.
StackExchange also, not necessarily streamable but records are newline delimited which makes it easier to sample (at least the last time I worked with the Data Dump).
Is that all in one big document?
We regularly parse ~1GB XML documents at work, and got laughed at by someone I know who worked with bulk invoices when I called it a large XML file.
Not sure how common 100GB files are but I can certainly image that being the norm in certain niches.
This, thirty years later, is the best pitch for XML I’ve read. Essentially, it’s a slow moving, standards-based approach to data interoperability.
I hated it the minute I learned about it, because it missed something I knew I cared about, but didn’t have a word for in the 90s - developer ergonomics. XML sucks shit for someone who wants to think tersely and code by hand. Seriously, I hate it with a fiery passion.
Happily to my mind the economics of easier-for-creators -> make web browsers and rendering engines either just DEAL with weird HTML, or else force people to use terse data specs like JSON won out. And we have a better and more interesting internet because of it.
However, I’m old enough now to appreciate there is a place for very long-standing standards in the data and data transformation space, and if the XML folks want to pick up that banner, I’m for it. I guess another way to say it is that XML has always seemed to be a data standard which is intended to be what computers prefer, not people. I’m old enough to welcome both, finally.
> XML has always seemed to be a data standard which is intended to be what computers prefer, not people.
On one hand, you aren't wrong: XML has in fact been used for machine-to-machine communication mostly. OTOH, XML was just introduced as a subset of SGML doing away with the need of vocabulary-specific markup declarations for mere parsing in favor of always requiring explicit start- and end-element tags. Whereas HTML is chock full of SGMLisms such as tag inference (for example inferring paragraph ends on block elements), empty ("self-closing") elements and enumerated ("boolean") attributes driven by per-element declarations.
One can argue to death whether the web should work as a mere document delivery network with rigid markup a la XML, or that browsers should also directly support SGML authoring idioms such as the above shortform mechanisms. SGML also has text macros/shared fragments (entities) and even allows defining own parsing tokens for markdown, math, CSV, or custom syntaxes. HTML leans towards SGML in that its documentation portrays HTML as an authoring language, but browsers are lacking even in basic SGML features such as entities.
That’s a flame war that’s been raging for decades for sure.
I do wonder what web application markup would look like today if designed from scratch. It is kind of amazing that HTML and CSS can be used for creating beautiful documents viewable on pretty much any device with a screen AND also for creating dynamic applications with pixel-perfect rendering, special effects, integrations with the device’s hardware, and even external peripherals.
If there was ever scope creep in a project this would be it. And given the recent discussion on here of curses based interfaces it reminded me just how primitive other GUI application layout tools can be while still achieving amazing results. Even something like GTK does not need the intense level of layout engine support and yet is somehow considered richer in some ways and probably more performant for a lot of stuff that’s done with it.
So I am curious what web application development would look like today if it wasn’t for HTML being “good enough”.
"This, thirty years later, is the best pitch for XML I’ve read."
I wish someone would write "XML - The Good Parts".
Others might argue that this is JSON but I'd disagree:
- No comments is a non-starter
- No proper integers
- No date format
- Schema validation is a primitive toy compared to what we had for XML (see the sketch below)
- Lack of allowed trailing commas
YAML ain't better. I hated whitespace handling in XML, it's a miracle how YAML could make it even worse.
XML is from era long past and I certainly don't want to go back there, but it had its good parts and I feel we have not really learned a lot from its mistakes.
In the end maybe it is just that developer ergonomics is largely a matter of taste and no language will ever please everyone.
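To make the schema point concrete, here is a minimal sketch of the kind of typed validation XML Schema gives you out of the box (the invoice vocabulary is made up for illustration):

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- hypothetical invoice vocabulary, just to show typed validation -->
      <xs:element name="invoice">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="issued" type="xs:date"/>     <!-- a real date type -->
            <xs:element name="total"  type="xs:decimal"/>  <!-- a real decimal type -->
            <xs:element name="note"   type="xs:string" minOccurs="0"/>
          </xs:sequence>
          <xs:attribute name="id" type="xs:ID" use="required"/>
        </xs:complexType>
      </xs:element>
    </xs:schema>

JSON Schema can express some of this, but in practice dates and decimals still ride on top of strings and floating-point numbers.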
It's funny to hear people in the comments here talk about XML in the past tense.
I know it's passé in the web dev world, but in my work we still work with XML all the time. We even have work in our queue to add support for new data sources built on XML (specifically QIF https://qifstandards.org/).
It's fine with me... I've come to like XML. It's nice to have a standard, easy way to do schemas, validators, processors, queries, etc. It can be overdone and it's not for every use case, but it's pretty good at what it does.
Developer ergonomics is drastically underappreciated, even in modern times. Since we're talking about textual data formats, I'll go out on a limb here and say that I hate YAML. Double checking exactly how many spaces are present on each line is tedious. It manages to make a simple task like copy-pasting something from a different file (at a different indentation level) into an error-prone process. I'll take angle brackets any day.
You haven’t felt hate until you’ve counted spaces in your Helm templates in order to know what value to put after `nindent`. The punchline is that k8s doesn’t even speak yaml, the protocol is all json and it’s the tooling that inflicts yaml on us. I can live with yaml as a config format, but once logic starts creeping in, give me anything else.
Working with large YAML documents is incredibly annoying and shows the benefit of closing tags.
JSON5 is a real sweet spot for me. Closing brackets, but I don't have to type every tag twice. Comments and trailing commas.
> Developer ergonomics is drastically underappreciated, even in modern times.
When was the last time you had an editor that wouldn't just auto close the current tag with "</" ? I mean it's a god-send for knowing where you are at in large structure. You aren't scrolling to the top to find which tag you are in.
>XML has always seemed to be a data standard which is intended to be what computers prefer, not people
Interesting take, but I'm always a little hesitant to accept any anthropomorphizing of computer systems.
Isn't it always about what we can reason and extrapolate about what the computer is doing? Obviously computers have no preference so it seems like you're really saying
"XML is a poor abstraction for what it's trying to accomplish" or something like that.
Before jQuery, Chrome, and Web 2.0, I was building XSLT-driven web pages that transformed XML in an early NoSQL doc store into HTML, and it worked quite beautifully and allowed us to skip a lot of schema work that we definitely weren't ready or knowledgeable enough to do.
EDIT: It was the perfect abstraction and tool for that job. However the application was very niche and I've never found a person or team who did anything similar (and never had the opportunity to do anything similar myself again)
I did this for many years at a couple different companies. As you said it worked very well especially at the time (early 2000’s). It was a great way to separate application logic from presentation logic especially for anything web based. Seems like a trivial idea now but at the time I loved it.
In fact the RSS reader I built still uses XSLT to transform the output to HTML as it’s just the easiest way to do so (and can now be done directly in the browser).
Re xslt based web applications - a team at my employer did the same circa 2004. It worked beautifully except for one issue: inefficiency. The qps that the app could serve was laughable because each page request went through the xslt engine more than once. No amount of tuning could fix this design flaw, and the project was killed.
Names withheld to protect the guilty. :)
> developer ergonomics
That was a huge reason JSON took over.
Another reason was that the overall XML ecosystem grew unwieldy and difficult to navigate: XPath, XSLT, SOAP, WSDL, XPointer, XLink, XForms... They all made sense in their own way, but it was difficult to master them all. That complexity, plus the poor ergonomics, is what paved the way for JSON to become preferred.
I quite liked it when it first came out, I'd been dealing with a ton of bespoke formats up until then. Pretty much every one was ambiguous and painful to deal with. It was a step forward being able to push people towards a standard for document transfer.
I suspect it was SOAP and WSDL that killed it for a lot of people though. That was a typical example of a technical solution looking for a problem and complete overkill for most people.
The whole namespace thing was probably a step too far as well.
You should try using a LISP like Racket for XML. Because XML can be expressed directly as S-expressions, XML and LISP go together like peanut butter and jelly.
In my experience, at least with Clojure, it's much more convenient to serialize XML into a map-like structure, with :tag, :attrs, and :content keys for each element.
Some people use namespaced keywords (e.g. :xml/tag) to help disambiguate keys in the map. This kind of data structure tends to be more convenient than dealing with plain sexps or so-called "Hiccup syntax".
The Hiccup-style syntax is convenient to write, but it's tedious to manipulate. For instance, one needs to dispatch on types to determine whether an element at some index is an attribute map or a child. By using the map structure, one simply looks up the :attrs or :content key. Additionally, the map structure is easier to depth-first search; it's a one-liner with the tree-seq function.
I've written a rudimentary EPUB parser in Clojure and found it easier to work with zippers than any other data structure to e.g. look for <rootfile> elements with a <container> ancestor.
Zippers are available in most programming languages, thankfully, so this advantage is not really unique to Clojure (or another Lisp). However, I will agree that something like sexps (or Hiccup) is more convenient than e.g. JSX, since you are dealing with the native syntax of the language rather than introducing a compilation step and non-standard syntax.
This looks like it loses the distinction between attributes and nested tags?
As in, I don't see a difference between `(attr "val")` which expresses an attribute key/value pair and `(thing "world")` which expresses a tag/content relationship. Even if I thought the rule might be "if the first element of the list is a list itself then it should be interpreted as a set of attribute key/value pairs", it would still be ambiguous: the same form could serialize either as an attribute on the parent element or as a nested child element containing text.
In fact, this ambiguity between attributes and children has always been one of the head scratching things for me about XML. Well, the thing I've always disliked the most is namespaces but that is another matter.
a lisp... like dsssl ? ;-)
I used to do a lot of XSLT coding, by hand, in text editors that weren't proper IDEs, and frankly it wasn't very hard to do.
There's something very zen-like with this language; you put a document in a kind of sieve and out comes a "better" document. It cannot fail; it can be wrong, full of errors, of course (although if you're validating the result against a schema it cannot be very wrong); but it will almost never explode in your face.
And then XSLT work kind of disappeared; I miss it a lot.
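That sieve mental model is easy to see in the classic identity transform: copy everything through unchanged, then override only the rules you care about (the internal-note element below is a made-up example):

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- copy every node and attribute through unchanged -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
      <!-- the one rule that differs: drop internal notes on the way through -->
      <xsl:template match="internal-note"/>
    </xsl:stylesheet>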
I'm gonna be honest, I find terseness to be highly overrated by programmers. I value it in moderation, but for a lot of people they say things like "this language is verbose" like that is a problem unto itself. If verbosity is gaining you something (generally clarity), then I think that's a reasonable cost to pay. Terseness is not, in my opinion, a goal unto itself (though many programmers certainly treat it as such). It's something you should seek only to the extent that it makes a language easier to use.
And not only does the XML format have bad developer ergonomics, most XML parsers are equally terrible to use. There are many things I like about XML: namespaces, schemas, XPath, to some degree even XSLT. But the typical XML developer experience is terrible on every layer.
XML is a big improvement over YAML.
There, I said it.
YAML is great. For simple configuration files. For anything more complex it gets gnarly quick, but honestly? If I need a config file for a script I'm writing I will reach for YAML every time. It really is amazing for that use case.
1 reply →
CSV encoded in EBCDIC is an improvement over YAML. God what an awful format...
> XML sucks shit for someone who wants to think tersely and code by hand. Seriously, I hate it with a fiery passion.
At the risk of glibly missing the main point of your comment, take a look at KDL. Unlike JSON/TOML/YAML, it features XML-style node semantics. Unlike XML, it's intended to be human-readable and writeable by hand. It has specifications for both a query language and a schema language as well as implementations in a bunch of languages. https://kdl.dev/
The main thing I hate about XML (apart from the tedious syntax and terrible APIs - who thought SAX was a sane idea?) is that the data model is wrong for 99% of use cases.
XML gives you an object soup where text objects can be anywhere and data can be randomly stored in tags or attributes.
It just doesn't at all match the object model used by basically all programming languages.
I think that's a big reason JSON is so successful. It's literally the object model used by JavaScript. There's no weird impedance mismatch between the data represented on disk and in your program.
Then someone had to go and screw things up with YAML...
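One concrete case where the mismatch cuts the other way is mixed content, where text and child elements interleave and order matters. XML represents it directly; there's no natural JSON shape for it without inventing a convention:

    <!-- text nodes and elements interleaved, order significant -->
    <p>Call <code>parse()</code> before <code>render()</code>, or it <em>will</em> fail.</p>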
JSON5 is the way.
Fun fact: XSLT still enjoys broad support across all major browsers: https://caniuse.com/?search=xslt
I can’t say this with certainty, but I have some reason to suspect I might be partially to blame for this fun fact!
A couple years ago, I stumbled on a discussion considering deprecation/removal of XSLT support in Chrome. At some point in the discussion, they mentioned observing a notable uptick in usage—enough of an uptick (from a baseline of approximately zero) that they backed out.
The timing was closely correlated with work I’d done to adapt a library, which originally used XSLT via native Node extensions, to browser XSLT APIs. The project isn’t especially “popular” in the colloquial sense of the term, but it does have a substantial niche user base. I’m not sure how much uptake the browser adaptation of this library has had since, but some quick napkin math suggested it was at least plausible that the uptick in usage they saw might have been the onslaught of automated testing I used to validate the change while I was working on it.
And this, kids, is one more reason to use testing while developing.
This is true only of XSLT 1.0. The current standard is 3.0.
Oh, a shame. Is there any way to track browser version adoption on caniuse, or any other site?
Also, is it up to browser implementations, or does WHATWG expect browsers to stay at XSLT 1.0?
Being interested in archaic technologies, I built a website using XML/XSLT not that long ago. The site was an archive of a band I was in, which made it fundamentally data oriented: we recorded multiple albums, with different tracks, and a different lineup of musicians each time. There are lots of different databases I could build a static site generator around, but what if the browser could render the page straight from the data? That's what's cool about XML/XSLT. On paper, I think it's actually a pretty nice idea: the browser starts by loading the actual data, and then renders it into HTML according to a specific stylesheet. Obviously the history of browser tech forked in a different direction, but the idea remains good. What if there was native browser support for styling JSON into HTML?
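For anyone who hasn't tried it, the whole trick is one processing instruction; the browser fetches the raw data and applies the stylesheet itself. A sketch with invented file and element names:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="discography.xsl"?>
    <discography>
      <album year="2003">
        <title>First Record</title>
        <musician role="drums">A. Drummer</musician>
      </album>
    </discography>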
The fact it could be compiled in WASM is a good thing, given the Chrome team was considering removing libxml and XSLT support a few years back. The reasons cited were mostly about security (and share of users).
It's another proof that working on fundamental tools is a good thing.
not WASM but there is also https://www.npmjs.com/package/saxon-js
This is pretty slow compared to libxslt.
Very cool! I recently wrote an XSLT 2 transpiler for js (https://github.com/egh/xjslt) - it's nice to see some options out there! Writing the xpath engine is probably the hard part (I relied on fontoxpath). I'm going to be looking into what you have done for inspiration!
What problems are {elegantly, neatly, best} solved by using XPath and XSLT today that would make them reasonable choices over alternatives?
XPath is a very nice language for querying over XML. Most places pitch it as a "declarative" syntax, but as I am quite skeptical of "declarative" as a concept, you can also look at the vast majority of the XPath standard as a way to imperatively drive a multicursor over an XML document, diving in and out of nodes and extracting bits of text and such, without having to write the equivalent code in your language to do so, which will inevitably be quite a bit more verbose. When you need it, it's really useful.
In my very opinionated opinion, XPath is about 99% of the value of XSLT, and XSLT itself is a misfire. Embedding an XML language in XML, rather than being an amazing value proposition, is actually a huge and really annoying mistake, in much the same way and for much the same reason as anyone who has spent much time around shell scripting has found trying to embed shell strings in shell strings (and, if the situation is particularly dire, another third or fourth level of such nesting) is quite unpleasant. Imagine trying to deal with bash, except you have to first quote all the command lines as bash strings like you're using bash -c, all the time. I think "XPath + your favorite language" has all the power of XSLT and, generally, better ergonomics and comprehensibility. Once you've got the selection of nodes in hand, a general-purpose programming language is a better way to deal with their contents than what XSLT provides. Hence why it has always languished.
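A few XPath expressions over a hypothetical book catalog give the flavor of that multicursor - each one walks down, up, or sideways through the tree with no explicit loop code:

    //book[@lang = 'en']/title                      (: titles of English books :)
    //chapter[.//footnote]/ancestor::book/@isbn     (: ISBNs of books whose chapters contain footnotes :)
    count(//section[not(para)])                     (: how many sections have no paragraphs :)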
XQuery is the best of both worlds - you get almost all the benefits of XSLT like e.g. the ability to define your own functions, but with non-XML-based syntax that is a superset of XPath.
Basically the only thing it's missing in XQuery vs XSLT is template rules and their application; but IMO simple ones are just as easy to write explicitly, and complex rulesets are hard to reason about and maintain anyway.
It’s been a while since I’ve had to deal with XML, but I remember finding it fairly convenient to restructure XML documents with XSLT. Modifying the data in those documents, much less so. I think there’s a sweet spot.
To someone who hasn’t worked much with XML, this seems like a reasonable take!
For cases where a host system wants to execute user-defined data transformations safely, XSLT seems like it might be useful. When they mature, maybe WASM and WASI will fill the same niche with better developer ergonomics?
Interesting take about XSLT. But I agree... XSLT could be something much simpler (and non-XML in itself), combined with XPath. It feels like a lot of boilerplate to write XSLT.
XPATH+XSLT is SQL for XML, declarative selection and transformation.
Using an XML library to iterate through an entire XML document without XPATH is like looping through entire database tables without a JOIN filter or a WHERE clause.
XSLT is the SELECT, transforming XML output with a new level of crazy for recursion.
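Roughly, with invented element names and the SQL each expression corresponds to in comments:

    //customer[country = 'NL']/name          (: SELECT name FROM customer WHERE country = 'NL' :)
    sum(//order[@status = 'paid']/total)     (: SELECT SUM(total) FROM orders WHERE status = 'paid' :)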
XPath is a superb query language for XML (or anything that you can structure as a DOM) --- it is also, with some obscure exceptions, the only query language with serious adoption, so it's an easy choice and readily available in XML tools. The only caveat is there are various spec versions and most never added support for newer versions.
Let's look at JSON by comparison. Hmm, let's see: JSONPath, JMESPath, jq, jsonql, ...
jq is the most feature-rich of the bunch. It's the de facto standard, and I usually just default to it because it offers so much - assignment, various builtins such as base64 encoding.
The disadvantage is that it's not easily embeddable in your own programs - so programs use JSONPath / Go templates often.
Recently discovered Jsonata thanks to AWS adding it to Step Functions. Feel free to add it to your enumeration
I manage a team who build and maintain trading data reports for brokers. We have everything generated in a fairly standard format and customize it to those brokers' exact needs with XSLT. Hundreds of reports; couldn't manage without it.
E.g. massive XML documents with complexity that you need transformed into other structured XML. Or if you need to parse complex XML. Some people hate XSLT and XPath with a passion and would rather write much more complex lxml code. It has a steep learning curve, but once you understand the fundamentals you can transform XML more easily - and above all more predictably and reliably - than ever.
Another example: if you have very large XML that you cannot even fit into memory, you can still stream-process it with XSLT.
It makes you the master of XML transformations and fetching information out of complex XML ;)
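For the streaming case, XSLT 3.0 lets you declare a streamable mode and process the document in a single pass. A minimal sketch (element names invented; the guaranteed-streamability rules are subtle in practice):

    <xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:mode streamable="yes"/>
      <xsl:output method="text"/>
      <!-- single pass over a document too big to hold in memory -->
      <xsl:template match="/records">
        <xsl:value-of select="sum(record/amount)"/>
      </xsl:template>
    </xsl:stylesheet>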
What alternatives exist for extracting structured data from the web? I have several ETL pipelines that use htmltidy to turn tag soup into something approximately valid and xmlstarlet to transform it into tabular data.
I have used it when scraping some data from web pages with the Scrapy framework. It's a more reliable way to extract something from web pages compared to regex.
don't overlook the ability to mix and match them, because each "axis" is good at its own things
The .css() flavor gets compiled down into .xpath(), but there is no contest about their expressivity: https://github.com/scrapy/parsel/blob/v1.9.1/parsel/csstrans...
Love to see stuff outside the Java space, since I really like doing stuff in XSLT. Question: does this work on a textual XML representation, or can you plug in different XML readers? I have had really great fun in the past using http://www.ananas.org/xi/ to transform arbitrarily formatted files using XSLT. Also, it is really important today that an XML reader has error-correction capabilities, since lots of tools don't write well-formed XML, which in my experience is often a showstopper for employing transforms.
I wonder if this could perhaps some day be used in Wine, for the MSXML implementations. Maybe not, since those implementations need to be bug-compatible where applications depend on said bugs; but the current implementation(s) are also not fantastic. I believe it is still using libxml2.
(Aside: A long time ago, I had written an alternate XPath 1.1 implementation for Wine during GSoC, but rather shamefully, I never actually got it merged. Life became very hectic for me during that time period and I never really looped back to it. Still feel pretty bad about it all these years later.)
Nice! I have a scraper using XPath/XSLT extensively, and 90% of the XPath selectors have worked for years without a change. With CSS selectors I've had more problems...
CSS selectors have spent the last few decades reinventing XPath. XPath introduced right from the beginning the notion of axes, which allow you to navigate down, up, preceding, following, etc. as makes sense. XPath also always had predicates, even in version 1.0. CSS just recently started supporting :has() and :is(), in particular. Eventually, CSS selectors will match XPath's query abilities, although with worse syntax.
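A couple of hypothetical side-by-side examples of what axes and predicates buy you, with the rough CSS equivalent noted where one now exists:

    //section[.//img]              (: CSS: section:has(img), only recently available :)
    //h2/following-sibling::p      (: CSS: h2 ~ p :)
    //li[position() = last()]      (: CSS: li:last-of-type :)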
The problem with CSS selectors (at least in scrapers) is also that they change relatively often compared to the (HTML) document structure; that's why XPath lasts longer. But you are right, CSS selectors compared to 20-year-old XPath are really worse.
On the other hand:
- XPath literally didn't exist when CSS selectors were introduced
- XPath's flexibility makes it a lot more challenging to implement efficiently, even more so when there are thousands of rules which need to be dynamically reevaluated at each document update
- XPath is lacking conveniences dedicated to HTML semantics, and handrolling them in xpath 1.0 was absolutely heinous (go try and implement a class predicate in xpath 1.0 without extensions)
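For reference, the XPath 1.0 idiom for matching a single class token (what CSS writes simply as .nav-item, an invented class name here) really is this mouthful:

    //*[contains(concat(' ', normalize-space(@class), ' '), ' nav-item ')]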
> CSS selectors have spent the last few decades reinventing XPath
YES! This is so true! And ridiculous! It's a mystery why we didn't simply reuse XPath for selectors... it's all in there!!
I will definitely try this out!
I have a service that extracts <meta> tags in webpages and to do that I'm currently using (and need) three different dependencies: html5ever, markup5ever_rcdom, markup5ever. I don't like those to be honest, the documentation is quite bad and it was difficult to understand how I should have used the libraries to achieve such a simple task.
XPath on the other hand makes this extremely easy in comparison, I wonder how this will perform compared to my current solution.
Thanks!
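Once the document is parsed, the extraction itself tends to be one XPath expression per field; for example (the exact property names depend on what your pages actually use):

    //meta[@property = 'og:title']/@content
    //meta[@name = 'description']/@content
    //link[@rel = 'canonical']/@href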
Unfortunately at this point there's no HTML parser frontend for Xee (and its underlying library Xot) yet (HTML 5 parser serialization is supported at least in code). It shouldn't be too hard to add at least HTML 5 support using something like html5ever.
I always hate it when license files have "yes, but" language in them because if the license file differs in some non-obvious way, now I have to pay lawyers to interpret it
https://github.com/Paligo/xee/blob/xee-v0.1.5/COPYRIGHT
And that goes double for when there is a separate LICENSE file in the repo https://github.com/Paligo/xee/blob/xee-v0.1.5/LICENSE-MIT
Doesn’t look like “yes, but” language to me. Looks like the code is plain old MIT and the author is doing their due diligence with respect to vendored content in the repository subject to different licensing. Seems like they are being paid by a company to work on this, so it’s not surprising that they actually pay attention to copyright.
The fact that many project maintainers forget about vendored content and haphazardly slap the MIT license (or whatever) verbatim into a LICENSE file doesn’t actually give you a get-out-of-paying-lawyers-free card! If anything, Xee’s COPYRIGHT file gives me more confidence in my legal footing than an unadulterated LICENSE file would. It indicates the maintainer at least has a basic understanding of how copyright applies to their project.
Nice! I tried using XQuery (superset of XPath 3) for a while through the BaseX implementation. It's pretty nice, but you have to face XML problems like namespaces, document order, attributes vs nodes, you don't know if you can have 0, 1 or more nodes, etc. Something I wish was more readily available would be to run XPath against JSON, yaml, etc. It's a nicer language than say jq, but its ties to XML sometimes make it hard to transfer.
Another pain point with XML is the lack of an inline schema, so the languages around it, like XPath, have to work with arbitrary structures - unlike, say, JSON, where you at least have basic primitives like map/dict, numbers, bool, etc.
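For what it's worth, XPath 3.1 can run directly against JSON via parse-json()/json-doc() plus maps, arrays, and the ? lookup operator; a tiny example:

    parse-json('{"users": [{"name": "Ada"}, {"name": "Grace"}]}')?users?*?name
    (: evaluates to the sequence ("Ada", "Grace") :)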
I recently had the pleasure of using XSLT after never having seen it before. I used it to transform a huge 130K line XML manifest with MAPI property metadata into C# source code. It was so simple, readable, and intuitive to use.
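If anyone is curious what that looks like, the core trick is text output, so templates simply emit source code. A sketch assuming a manifest shaped like <property name="PR_SUBJECT" id="0x0037"/> (the actual manifest structure will differ):

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <!-- emit one C# constant per property element -->
      <xsl:template match="/manifest">
        <xsl:text>public static class MapiProperties {&#10;</xsl:text>
        <xsl:apply-templates select="property"/>
        <xsl:text>}&#10;</xsl:text>
      </xsl:template>
      <xsl:template match="property">
        <xsl:text>    public const int </xsl:text>
        <xsl:value-of select="@name"/>
        <xsl:text> = </xsl:text>
        <xsl:value-of select="@id"/>
        <xsl:text>;&#10;</xsl:text>
      </xsl:template>
    </xsl:stylesheet>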
I learnt XSLT in university back in the early/mid part of the first decade of this century. I didn't much enjoy it. I've never used it, but all my career I've had to deal with terrible ad hoc templating languages. I recently had total freedom to choose what terrible ad hoc templating language to use, and I chose XSLT. I actually totally liked it: and it seemed to have everything I've needed. In previous jobs, there were always tickets that amounted to "make a fork of the terrible ad hoc templating language and hack it until it does this", but I reckon XSLT could do everything and then some.
This is fantastic to see! I've used XML off and on since it was the red hot tech of the early 2000s. I wouldn't choose it today for a green field project, but it's still around in so many places, so we definitely need a high-performance, high-quality library written in Rust for this.
This could become a great foundation for a typed, (mostly) etree-compatible, python library built on top of this. I've used lxml for years and it's still my goto, but there are lots of places where it could be modernized.
This is great, I’ve been looking for performant and safe XML processing to replace IBM stuff (websphere/datapower) that we really only keep around for hw accelerated payload processing. At our scale, lxml and others + BYO gateway tech has a similar run cost even considering IBM licensing. I hate running their crap, which requires k8s at a version that’s some hair-thin slice above the minimum supported EKS version, it’s almost like they want us to live in 24/7 fear of being OOS.
I miss the declarative purity of XSLT as an HTML templating layer. I'd love to know if there is a similar system for more popular/current web stack.
This is really good news, I am looking forward to trying it out! Is XQuery also planned as an additional frontend? By the way, there is also χrust, a rust project working towards pretty similar goals (XPath 3.1, XQuery 3.1 and XSLT 3.0). At first glance, the architecture also seems quite similar, it is not as far along, though. Have you had any contact with them?
Fun fact: A decade ago the designer of HAML and Sass created a modern alternative to XSLT. https://en.wikipedia.org/wiki/Tritium_(programming_language)
> XML is now niche technology, but it's a bigger niche than you might think, and it's not going to go away any time soon.
When you consider that .docx, .pptx, and .xlsx files are zipped XML archives, "niche" seems a misnomer.
especially .xlsx which is some "hold my beer" for someone trying to encode a dataframe in .xml :-(
Openpyxl is a great library.
Just want to say that Microsoft has some sort of implementation of an XML application in Microsoft Word. But I have struggled to find examples I can use; for a long time I have been trying to convert an office repository of corporate resumes to XML.
XSLT is great for nerd cred, when someone selects "view source" on your page and there's not an HTML tag in sight. I did this once.
Maybe it's good for compression, but probably not by a factor much bigger than gzip/brotli/zstd.
I miss the XHTML and XSL times. A time when the Web would have been more prepared for AI consumption, with less dynamic nonsense and more focus on the actual content. Time has shown that all those Flash and Java gimmicks died off.
NCBI still emits XML from their most prominent databases (e.g., PubMed). I'm looking forward to adopting this library into some of my production code that interfaces with PubMed!
Is XSLT still used in new projects? I have the impression that it was not popular even when XML was.
For example, Apache HTTPD never had an official module to serve XML via XSLT transformation.
And XSL:FO looks even more obscure.
XSL:FO is dead for all practical purposes.
XSLT was not popular for its original intended application - which is to say, serving XML data from web servers and translating it to HTML (or XSL:FO, or ...) on the client as needed. However, it was used plenty for XML processing outside of that particular niche.
New projects these days rarely have to process complicated XML to begin with. But when you do, I'd say XSLT (or perhaps better yet, XQuery) is a very useful tool to have in your toolbox.
Syntext Serna was such an engineering marvel: a WYSIWYG XML editor that used XSLT-to-FO to specify your rendering. It was built in the context of DocBook and DITA, but it worked for any XSD that had an XSLT to FO. Amazing technology, ahead of its time. And then came JSON :-(
> XSL:FO is dead for all practical purposes.
As opposed to what for cooking "PDF via XML" files? Because I can assure you that feeding rando.odt into $(libreoffice -pdf $TMPDIR/ohgawd) is 100% not the same as $(fop -fo $TMPDIR/my.fo -pdf $TMPDIR/out.pdf)
There are a lot of APIs out there that are still XML-based, especially from enterprise suppliers.
Equifax and Experian's APIs immediately come to mind; they generate complex results that people often want to turn into some type of visual representation with XSLT.
I see a lot of XML APIs and formats around me, it is true. But those are machine-to-machine formats or complex configuration file formats, which don't need visualization. They need schema support and tooling, but not visualization or transformation. They are more like serialization formats for complex object trees, and all processing is done on those object trees, not on the XML itself.
But of course, I see only a part of the picture.
Schematron / Peppol / electronic invoices inside the EU (since some of it is based on AS2/AS4 - stuff like SAML but for sending invoices).
Nice work. XPath is a beast. It's obvious why Paligo would be interested too. There must be a lot of commercial documentation out there where the best representation they can get looks a bit XML-ish.
I hope this will be packaged into shared libraries at some point so that languages other than Rust will get access to it.
The author mentions Python bindings in the post.
I yearn for the day when people will stop considering the main advertising bullet point feature that their software was written in Rust. Rust 1.0 was released a decade ago, plenty of time for its alleged technical advantages to become apparent.
It's like a handbag whose main claim to being a premium product isn't workmanship or materials, but that it has Gucci on its side.
> It's like a handbag whose main claim to being a premium product isn't workmanship or materials, but that it has Gucci on its side.
Knockoffs aside, the latter is intended to serve as a proxy for the former. I too will be happy when Rust is the boring everyday choice, but in 2025 we still see new buffer overflows every day. And if I'm picking a library, I still want to know if it's in the same language as the app it's going into.
An xpath/xslt engine is something you might want to include in other software, the programming language used might be an important information for this purpose.
Personally I consider the programming language used for a piece of software to be similar to the materials used for a handbag.
This sounds fantastic! Thank you for your work. Now I gotta go learn Rust :-).
Throwback shoutout to Steve Muench and his genius method of grouping elements in XSLT 1.0.
So good it has its own Wikipedia page!
https://en.wikipedia.org/wiki/XSLT/Muenchian_grouping
I mean, talk about hacker cred.
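For anyone who hasn't seen it, the XSLT 1.0 trick is a key plus a generate-id() comparison to pick one representative node per group (element names invented):

    <xsl:key name="by-cat" match="item" use="@category"/>

    <xsl:template match="/items">
      <!-- visit only the first item of each category group -->
      <xsl:for-each select="item[generate-id() = generate-id(key('by-cat', @category)[1])]">
        <group name="{@category}">
          <xsl:copy-of select="key('by-cat', @category)"/>
        </group>
      </xsl:for-each>
    </xsl:template>

XSLT 2.0 and later replace all of this with xsl:for-each-group, but in 1.0 this was the way.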
eXcellent, it's good to see new work on XSLT. Reviled by some, it's actually great tech and useful in all sorts of places.
Does it preserve whitespace? Something that I always found asinine about XSLT is that it wipes out whitespace when transforming. Imagine you have thousands of corporate XML files in source control, and you want to transform them all, performing some simple mutation. XSLT claims to be fit for this job, but in practice your diff is going to be full of unintentional whitespace mangling.
XSLT will perform the transformations that you instruct it to do. It does not wipe out whitespace just on its own. Do you mean that you'd like facilities to nicely reindent the output?
> It does not wipe out whitespace just on its own.
Sounds nice but doesn't match my lived experience with both Chrome's built-in XSLT processor and `xsltproc`. (I was using XSLT 1.0, for legacy reasons, so maybe this is an XSLT 1.0 issue?)
> Do you mean that you'd like facilities to nicely reindent the output?
No, I do mean preserve whitespace (i.e., formatting), such as between elements and between attributes.
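In my understanding two different things are usually going on here: formatting inside start tags (between attributes) isn't part of the data model at all, so no processor can round-trip it; whitespace-only text nodes between elements, on the other hand, are controlled by a few knobs (the metadata element name below is just an example):

    <xsl:output method="xml" indent="no"/>   <!-- don't let the serializer re-indent -->
    <xsl:preserve-space elements="*"/>       <!-- keep whitespace-only text nodes (the default) -->
    <xsl:strip-space elements="metadata"/>   <!-- strip only where you explicitly ask for it -->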
It's interesting to see the slow rehabilitation of XML and its tooling now that there's a new generation of developers who have not grown up in the shadow of XML's prime in the late 90s / early 2000s, and who have not heard (or did not buy into) the anti-XML crowd's ranting --- even though some of their criticisms were legitimate.
I've always liked XML, and especially XPath, and even though there were a large number of missteps in the heyday of XML, I feel it has always been unfairly maligned. Look at all the people who reinvent XML tooling but for JSON, but not nearly as well. Luckily, people who value XML can still use it, provided the fit is right. But it's nice to see the tides turning.
Most fashions really are cyclical.
It’s the “slope of enlightenment” phase of the Gartner hype cycle, where people are able to make sober assessments of technologies without undue influence from hype or its backlash. We’re long past the days where XML is used for everything, even when it’s inappropriate, and we’re also past the “trough of disillusionment” phase where people sought alternatives to XML.
I think XML is good for expressing document formats and for configuration settings. I prefer JSON for data serialization, though.
For phone users https://en.m.wikipedia.org/wiki/Gartner_hype_cycle
I made extensive use of XPath and XSL(T) back in their heyday and in general was fine with them, but the architect astronauts who love showing off how clever they are with artificial complexity had a tendency to use XML tech to complicate things unnecessarily. I think that might be where many people's dislike of it came from, especially those whose first exposure wasn't learning through simple structures when XML was new, but who were thrown into the type of morass that develops when a tech is climbing the maturity curve.
Oh, and just to pile on to my own post:
If you like React's JSX; enjoy its strictures and clean, readable "HTML"; then good news, you're writing XML (but without namespacing).
See also: ECMAScript for XML (E4X) https://ecma-international.org/wp-content/uploads/ECMA-357_2...
I manage a team of business analysts and accountants who use XSLT for generating reports for banks; XSLT is usually their first experience programming outside some LinkedIn Learning courses. Not once has one of them ever complained about namespaces, or verbosity, or anything like it; this is something I only see on HN or the programming subreddits.
The vast vast majority of Devs only experience of XML is what they hear second hand, I'm sure a lot more would like it if they tried it.
My complaints about XML remain pretty much unchanged since 10 years ago.
- Not including self-closing tags, there should only be one close tag: </>
- Elements are for data. Attributes are evil
- XPath indexing should be 0-based
- Documents without a schema should not make your tools panic or complain
- An XML document shouldn't have to waste its time telling you it's an XML document in XML
I maintain that one of the reasons JSON got so popular so quickly is because it does all of the above. The problem with JSON is that you lose the benefits of having a schema to validate against.
Microsoft seems to be especially obsessed with making as much as possible into attributes. Makes me wonder if there is some hidden historical reason for that like an especially powerful evangelist inside the company that loved attributes during the early days of adopting XML.
> Elements are for data. Attributes are evil
This is like, your opinion, man... ;-) You can devise your schema any way you want. Attributes are great, and they exist in HTML in the form of datasets, which, as usual, are a poorly-specified and ill-designed rethinking of XML attributes
> Documents without a schema should not make your tools panic or complain
They don't. You absolutely don't need a schema. If you declare a schema, it should exist. If not, no problem?
There have been proposals a long time ago, including by Tim Bray, for an XML 2.0 that would remove some warts. But there was no appetite in the industry to move forward.
So how do I specify the font of a word without attributes?
XML/XPath are very useful but I've definitely lived through their abuses. Still, abusus non tollit usum, and I've had many positive experiences with XPath especially. XmlStarlet has been especially useful, also xmllint. I welcome more tooling like this. The major downside to XML is the verbosity and cognitive load. Tooling that manages that is a godsend.
XML is still a huge mistake for most stuff. It's fine for _documents_ but not as a data storage solution. Bloat, ambiguities, virtually impossible to canonicalise.
XPath is cute, but if you don't mind bloat, text-only formats, and a lack of ergonomics anyway, then Conjunctive Regular Path Queries and RDF are miles ahead of XML as a data storage solution. (Not serialised as XML, please xD)
Curiously, one of the driving forces behind renewed interest in XML is that language models seem to handle large XML documents better than JSON. I suspect this has something to do with it being more redundant - e.g. closing tags including the element name - making it easier for the model to keep track of structure.
XML, and other X[x] standards, are just horrible to read. On top of that, XML was made 10x worse by wrapping things in SOAP and the like over the wire, back in the day.
XSD, XPath, XSLT are all domains where I'd argue that reading/reasoning about are way more important.
When troubleshooting an issue, I don't mind scanning XML for a few data points so I can confirm what values are being communicated, but when I need to figure out how/why a specific value came to be, I don't want the logic spread throughout a giant text file wrapped in attribute value strings, and other non-debuggable "code". I'd rather it just be in a proper programming language.
The specifications are certainly not easy to read, and I wouldn't recommend them to learn about XML. But from the perspective of someone implementing them they are quite useful!
As someone who has used many programming languages and who went through the process of implementing this one I have many opinions about XPath and XSLT as programming languages. I myself am more interested in implementing them for others who value using them than using them myself. I do recognize there is a sizeable community of people who do use these tools and are passionate about them - and that's interesting to see and more power to them!
It's only a sample of one, but I'm really unhappy with the issues and limitations that JSON and YAML have, and I welcome XML if it has good tools.
That depends on what I'm doing. Most of what I'm doing is simple, and so XML is just way too complex for the task. However, when I need something complex, XML can handle things that the others cannot - at the expense of being really complex to work with.
Maybe similar reason as people deploy 100 requests a week micro service on multiple kubernetes clusters across 3 AZs to make sure it is highly available.
or like watching someone lovingly restore a fax machine with carbon fiber casing and a neural net to optimize transmission speed. I’m torn between admiration and existential despair.
> I was at XML Prague, an XML conference
There's an XML conference?!
There are at least 5:
https://www.xmlprague.cz/ https://www.balisage.net/ https://declarative.amsterdam/ https://markupuk.org/ https://xmlsummerschool.org/
There's some interesting papers in the balisage archives.
XML is to data formats what OOP is to programming languages: often overcomplicated, hard to follow, full of footguns.