Comment by griffindor

10 hours ago

Nice!

> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

I wish I knew the input size when attempting to estimate, but I suppose part of the challenge is also estimating the runtime's startup memory usage too.

> Compute the result into a hash table whose keys are string views, not strings

If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM. Is that included in the memory usage figures?

Nonetheless, it's a nice optimization that the kernel chooses which hash table keys to keep hot.

The other perspective on this is that we sought out languages like Python/Ruby when developer time was expensive relative to hardware. Hardware is now a bigger share of the bill, and development has gotten cheaper, so the trade-off has shifted.

The takeaway: expect more of a push towards efficiency!

>> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

At this point I'd make two observations:

- How big is the text file? I bet it's about a megabyte, isn't it? Because the "naive" way to do it is to read the whole thing into memory.

- all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte. It gets more interesting when the file doesn't fit into RAM at all.

The state of the art here is: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times... , wherein our hero finds the terrible combination of putting the whole file in a single string and then running strlen() over the entire remaining buffer for every item parsed.

  • > all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte.

    I have to disagree. Bad performance is often death by a thousand cuts. This function might be one among countless similarly inefficient library calls, programs, and so on.

    • If you're not putting a representative amount of data through the test, you have no idea whether the resource usage you're seeing scales with the amount of data or is just a fixed overhead of the runtime.

> If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM.

Not so much, because you only need some fraction of that memory while the program is actually running; the OS is free to evict file-backed pages as soon as it needs the RAM for something else. Non-file-backed memory can only be evicted by swapping it out, and that's way more expensive.