
Comment by Filligree

9 hours ago

It does add complexity, and the optimal solution is probably not to use it. Consider what happens if a 4 kB page contains only a single unique word: you'd still need to load it into memory to read the string; it just isn't accounted against your process (maybe).

I would have expected something like this:

- Scan the file serially.

- For each word, find and increment a hash table entry.

- Sort and print.

Technically this does require slightly more memory, but only a tiny amount: a copy of each unique word, and if this is natural language there aren't very many of those. Meanwhile, OOP's approach puts massive pressure on the page cache once you get to the "print" step, which is going to be the bulk of the runtime.

It’s not even a full copy of each unique word, actually, because you’re trading it off against the size of the string pointers. That’s… sixteen bytes minimum. A lot of words are smaller than that.

That is a valid solution, but what I/O block size should you use for the best performance? And what if you end up reading half a word at the end of a chunk?
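For what it's worth, the chunk-boundary case is usually handled by carrying the partial word over to the next block; a sketch (function names are illustrative, and "word" here means a run of alphabetic bytes):

```cpp
#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

// Split one block of the file into words, carrying any partial word at
// the end of the block over to the next call via `carry`. With this in
// place the block size is a tuning knob, not a correctness concern.
void scan_chunk(const char* chunk, std::size_t len, std::string& carry,
                std::vector<std::string>& words) {
    for (std::size_t i = 0; i < len; ++i) {
        unsigned char c = static_cast<unsigned char>(chunk[i]);
        if (std::isalpha(c)) {
            carry.push_back(static_cast<char>(c));  // still inside a word
        } else if (!carry.empty()) {
            words.push_back(carry);  // word ended inside this chunk
            carry.clear();
        }
    }
    // A word cut off at the end of the chunk simply stays in `carry`
    // and is completed by the next chunk (or flushed at EOF).
}
```

At end of file, the caller flushes `carry` as the final word if it's non-empty.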

Handling that is, in my opinion, far more complex than letting the kernel figure it out via mmap. The kernel knows much more than you do about the underlying block devices, and you can use madvise with MADV_SEQUENTIAL to indicate that you will read the whole file sequentially. (That might free pages prematurely if you keep references into the data rather than copying the first occurrence of each word, though, so perhaps it's not ideal in this scenario.)
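A rough sketch of the mmap + madvise route (POSIX-only; error handling abbreviated, and counting alphabetic bytes stands in for the real word-counting scan):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cctype>

// Map a file read-only, hint sequential access, and scan it front to back.
long count_alpha_mmap(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return 0; }
    char* data = static_cast<char*>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (data == MAP_FAILED) { close(fd); return -1; }
    // Tell the kernel the access pattern: it can read ahead aggressively
    // and drop pages behind the scan. Because pages behind us may be
    // evicted, copy each first occurrence of a word rather than keeping
    // pointers into the mapping.
    madvise(data, st.st_size, MADV_SEQUENTIAL);
    long alpha = 0;
    for (off_t i = 0; i < st.st_size; ++i)
        if (std::isalpha(static_cast<unsigned char>(data[i]))) ++alpha;
    munmap(data, st.st_size);
    close(fd);
    return alpha;
}
```

No block-size choice and no boundary handling: the file is one contiguous range of bytes as far as the scan is concerned.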