Comment by tomnicholas1
8 hours ago
The generalized form of this range-request-based streaming approach looks something like my project VirtualiZarr [0].
Many of these scientific file formats (HDF5, netCDF, TIFF/COG, FITS, GRIB, JPEG and more) are essentially just contiguous multidimensional array ("tensor") chunks embedded alongside metadata about what's in the chunks. Fetching these efficiently from object storage is really just about fetching the metadata up front, so you know where the chunks you want live [1].
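The core trick is just an HTTP range request per chunk. A minimal sketch in Python, where the object URL and the byte offset/length are made up and would really come from the file's own metadata:

    # Sketch only: url, offset, and length are hypothetical placeholders;
    # in practice they come from the file's parsed metadata / chunk index.
    import requests

    url = "https://example-bucket.s3.amazonaws.com/scan_0001.tiff"
    chunk_offset, chunk_length = 4096, 262_144

    resp = requests.get(
        url,
        headers={"Range": f"bytes={chunk_offset}-{chunk_offset + chunk_length - 1}"},
    )
    chunk_bytes = resp.content  # one tile/chunk, without downloading the whole file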
The data model of Zarr [2] generalizes this pattern pretty well, so that when backed by Icechunk [3], you can store a "datacube" of "virtual chunk references" that point at chunks anywhere inside the original files on S3.
This allows you to stream data out as fast as the S3 network connection allows [4], and then you're free to pull that directly, or build tile servers on top of it [5].
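To make that concrete, here's roughly what the workflow looks like in Python, via VirtualiZarr plus a kerchunk-style reference filesystem. Treat it as a hedged sketch: the paths are made up and the exact method names have moved around between releases, so check the current docs:

    # Hedged sketch - file paths are hypothetical and the API has evolved.
    import fsspec
    import xarray as xr
    from virtualizarr import open_virtual_dataset

    # Parse only the metadata of a legacy file, producing virtual chunk references
    vds = open_virtual_dataset("s3://my-bucket/model_level_data.nc")

    # Persist the references (kerchunk JSON here; Icechunk is the other target)
    vds.virtualize.to_kerchunk("virtual_store.json", format="json")

    # Readers open the virtual store and stream chunks straight out of the
    # original files via range requests
    fs = fsspec.filesystem("reference", fo="virtual_store.json", remote_protocol="s3")
    ds = xr.open_zarr(fs.get_mapper(""), consolidated=False)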
In the Pangeo project and at Earthmover we do all this for weather and climate science data. But the underlying OSS stack is domain-agnostic, so it works for all sorts of multidimensional array data, and VirtualiZarr has a plugin system for parsing different scientific file formats.
I would love to see if someone could create a virtual Zarr store pointing at this WSI data!
[0]: https://virtualizarr.readthedocs.io/en/stable/
[1]: https://earthmover.io/blog/fundamentals-what-is-cloud-optimi...
[2]: https://earthmover.io/blog/what-is-zarr
[3]: https://earthmover.io/blog/icechunk-1-0-production-grade-clo...
[4]: https://earthmover.io/blog/i-o-maxing-tensors-in-the-cloud
Sounds like an approach that would also work for ML model weight files, which are just another kind of multidimensional array with metadata.
I wonder what exactly the big multi-model AI companies are doing to optimize model cold-start latency, and how much it just looks like Zarr on top of on-prem object storage.
People have literally used Zarr for this - at one point Gemini used Zarr for checkpointing model weights. Not sure what the current fashion in that space is though.
It's definitely one of many fields that see convergent evolution towards something that just looks like Zarr. In fact you can use VirtualiZarr to parse HuggingFace's "SafeTensors" format [0].
[0]: https://github.com/zarr-developers/VirtualiZarr/pull/555
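For anyone wondering why that mapping is so direct: a .safetensors file is just an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/byte ranges, and then the raw tensor bytes. A quick sketch (file name hypothetical):

    # Sketch: parse the SafeTensors header to recover per-tensor byte ranges,
    # which map one-to-one onto virtual chunk references.
    import json
    import struct

    with open("model.safetensors", "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))

    data_start = 8 + header_len
    for name, info in header.items():
        if name == "__metadata__":  # optional free-form metadata entry
            continue
        begin, end = info["data_offsets"]
        # (file offset, length) of this tensor's raw bytes
        print(name, info["dtype"], info["shape"], data_start + begin, end - begin)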
Thanks for sharing! I agree that designers of newer scientific formats will need to think deeply about how their data can be read directly from cloud storage.
IMO Zarr is that newer format. It abstracts over the features of all these other formats so neatly that it can literally subsume them.
I feel that we no longer really need TIFF etc. For scientific use cases in the cloud, Zarr is all that's needed going forward. The other file formats become just archival blobs that are either converted to Zarr or pointed at by virtual Zarr stores.
Thanks for sharing!
> Many of these scientific file formats (HDF5, netCDF, TIFF/COG, FITS, GRIB, JPEG and more) are essentially just contiguous multidimensional array(/"tensor") chunks
Yeah, a recurring thought is that these should all condense into Apache Arrow queried by DuckDB, but there must be some reason that hasn't already happened.