Comment by HelloNurse
5 years ago
It's primarily a problem of inflexibility handicapping performance, not of "cache misses" and clever algorithms.
For example, imagine a word processor opening a document and showing you the first page: you could load 50 MB of kitchen-sink XML and 250 embedded images from a zip file and only then start doing something with the resulting canonical representation, or you could load the bare minimum of metadata (e.g. page size) from the appropriate tables, plus the content that goes on the first page from carefully indexed tables of objects. Which variant is likely to load faster? Which one is guaranteed to load useless data? And which one can save the document more quickly and efficiently when you edit text (rewriting one paragraph instead of a whole document or a messy update log)?
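A minimal sketch of the second variant, using an in-memory SQLite database as the indexed store (the schema, table names, and functions here are illustrative assumptions, not any real document format):

```python
import sqlite3

def make_doc():
    # Hypothetical indexed document store: metadata and paragraphs
    # live in separate tables, keyed so single pages can be fetched.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE metadata (key TEXT PRIMARY KEY, value TEXT)")
    db.execute("CREATE TABLE paragraphs ("
               "page INTEGER, idx INTEGER, text TEXT, "
               "PRIMARY KEY (page, idx))")
    db.execute("INSERT INTO metadata VALUES ('page_size', 'A4')")
    # A 100-page document with 10 paragraphs per page.
    db.executemany("INSERT INTO paragraphs VALUES (?, ?, ?)",
                   [(p, i, f"page {p} para {i}")
                    for p in range(1, 101) for i in range(10)])
    db.commit()
    return db

def open_first_page(db):
    # Read only the metadata needed for layout plus page-1 content;
    # the other 99 pages are never deserialized.
    (page_size,) = db.execute(
        "SELECT value FROM metadata WHERE key = 'page_size'").fetchone()
    paras = [t for (t,) in db.execute(
        "SELECT text FROM paragraphs WHERE page = 1 ORDER BY idx")]
    return page_size, paras

def edit_paragraph(db, page, idx, new_text):
    # Saving an edit rewrites one row, not the whole document.
    db.execute("UPDATE paragraphs SET text = ? WHERE page = ? AND idx = ?",
               (new_text, page, idx))
    db.commit()
```

The zip-of-XML variant, by contrast, has to unpack and parse everything before it can answer even the "what is the page size?" question, and rewriting the archive is its only way to persist a one-word edit.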
Ah okay, incremental loading seems essential and I hadn't considered it. Thanks for explaining :)