Comment by faassen
5 months ago
Anything could be supported with sufficient effort, but streaming hasn't been my priority so far and I haven't explored it in detail. I want to get XSLT 3.0 working properly first.
There's a potential alternative to streaming, though - succinct storage of XML in memory:
https://blog.startifact.com/posts/succinct/
I've built a succinct XML library named Xoz (not integrated into Xee yet).
In my small experiments, the in-memory overhead of the parsed document goes down to 20% of the original XML text.
There are a lot of open questions about how this behaves in the real world, but the library also has some very interesting properties, like "jump to the descendant with this tag without going through intermediaries".
> I want to get XSLT 3.0 working properly first
May I ask why? I used to do a lot of XSLT in 2007-2012 and stuck with XSLT 2.0. I don't know what's in 3.0, as I've never actually tried it, but I never felt there was a feature missing from 2.0 that prevented me from doing something.
As for streaming, an intermediate step would be the ability to cut up a big XML file into smaller ones. A big XML document is almost always the concatenation of smaller documents (that's certainly the case for Wikipedia, for example). If one can emit the smaller files, transform each of them, and then reconstruct the initial big file without ever loading it fully into memory, that should cover a huge proportion of "streaming" needs.
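That split/transform/rejoin idea can be sketched with the Python standard library's `iterparse`. This is a hypothetical illustration, not anything from Xee: the `record_tag` name and the `transform` callback are placeholders, and the sketch ignores root attributes and namespaces.

```python
import xml.etree.ElementTree as ET

def split_transform_join(src_path, dst_path, transform, record_tag="page"):
    """Stream src_path, transform each record-sized subtree, write dst_path.

    Only one record is held in memory at a time; the big file is never
    fully loaded. Assumes all records are direct children of the root.
    """
    context = ET.iterparse(src_path, events=("start", "end"))
    _, root = next(context)  # the first event is the start of the root element
    with open(dst_path, "w", encoding="utf-8") as out:
        out.write(f"<{root.tag}>")
        for event, elem in context:
            if event == "end" and elem.tag == record_tag:
                transform(elem)  # edit the small tree in place
                out.write(ET.tostring(elem, encoding="unicode"))
                root.remove(elem)  # drop the finished record to bound memory
        out.write(f"</{root.tag}>")
```

The key design point is `root.remove(elem)`: without it, the already-processed records would keep accumulating under the root element and memory use would grow with the file after all.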
XSLT has been a goal of this project from the start, as my customer uses it. XSLT 3.0 simply because that's the latest specification. What tooling do you use for XSLT 2.0?
Saxon's free version, which IIRC only implemented 2.0.
0.2x of the original size would certainly make big documents more accessible. I'd heard of succinct storage, but not in the context of XML before; thanks for sharing!
I myself actually had no idea succinct data structures existed until last December, but then I found a paper that used them in the context of XML. Just to be clear: it's 120% of the original size; as it stands, this library still uses more memory than the original document, just not a lot of overhead. Normal tree libraries, even if the tree is immutable, store a parent pointer, a first-child pointer, and next- and previous-sibling pointers per node. Even though some nodes can be stored more compactly, it does add up.
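To make that pointer overhead concrete, here is a back-of-envelope comparison, assuming 64-bit pointers and the roughly 2 bits per node that balanced-parentheses succinct representations need to encode tree topology. These are illustrative numbers, not measurements of Xoz.

```python
POINTER_BYTES = 8      # assumption: 64-bit pointers
POINTERS_PER_NODE = 4  # parent, first child, next sibling, previous sibling

def pointer_overhead_bytes(n_nodes: int) -> int:
    """Topology overhead of a conventional pointer-based tree."""
    return n_nodes * POINTERS_PER_NODE * POINTER_BYTES

def succinct_topology_bytes(n_nodes: int) -> float:
    """~2 bits per node in a balanced-parentheses representation."""
    return n_nodes * 2 / 8

n = 1_000_000
print(pointer_overhead_bytes(n))   # 32000000 bytes, ~32 MB of pointers alone
print(succinct_topology_bytes(n))  # 250000.0 bytes, ~0.25 MB
```

For a million-node document, the pointers alone cost about 32 MB, versus roughly a quarter of a megabyte for the succinct topology; the remaining memory in either case goes to tag names and text, which is why the total still lands near 120% of the original.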
I suspect that with the right FM-index, Xoz might be able to store huge documents in less space than the original, but that's an experiment for the future.
Would you be able to parse it in a streaming fashion, store just the structure of the document in memory with offsets for all of the string locations, and then re-read those from disk as needed?
With modern SSDs and disk caches, that's likely enough to be plenty performant without having to store the whole document in memory at once.
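That idea can be sketched with the standard library's expat bindings: keep the element structure plus `(offset, byte_length)` spans for text in memory, and seek back into the file only when text is actually needed. This is a hypothetical illustration, not Xee/Xoz code, and it assumes UTF-8 input without entity references or CDATA (where the decoded text would differ from the raw bytes on disk).

```python
import xml.parsers.expat

class LazyTextIndex:
    """Holds only structure and (offset, byte_length) spans; text stays on disk."""

    def __init__(self, path):
        self.path = path
        self.nodes = []   # (tag, [(offset, byte_len), ...]) in document order
        self._stack = []
        self._parser = xml.parsers.expat.ParserCreate()
        self._parser.StartElementHandler = self._start
        self._parser.EndElementHandler = self._end
        self._parser.CharacterDataHandler = self._text
        with open(path, "rb") as f:
            self._parser.ParseFile(f)

    def _start(self, name, attrs):
        node = (name, [])
        self._stack.append(node)
        self.nodes.append(node)

    def _end(self, name):
        self._stack.pop()

    def _text(self, data):
        if self._stack:
            # CurrentByteIndex points at the first byte of this text chunk.
            offset = self._parser.CurrentByteIndex
            self._stack[-1][1].append((offset, len(data.encode("utf-8"))))

    def read_text(self, node):
        # Re-read the stored spans from disk instead of keeping strings in memory.
        with open(self.path, "rb") as f:
            parts = []
            for offset, length in node[1]:
                f.seek(offset)
                parts.append(f.read(length).decode("utf-8"))
        return "".join(parts)
```

Note that expat may deliver one text node as several chunks, which is why each element keeps a list of spans rather than a single one.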