
Comment by tomcam

1 year ago

I've been wondering something for ages. Where did you get the 24-byte number, and how does it compare in Unicode terms? That is, did you analyze a large corpus and determine that 24 bytes was the right size for the largest number of strings? And does it come out to, say, 10 Unicode characters? Whenever I think about designing a new language, this very issue pops up.

To add some more detail to sibling's answer:

The optimal size will depend on the application. It's certainly reasonable that in some applications many or most strings would be under 24 bytes, so the small string optimization in many of these implementations would be beneficial. Perhaps in some other application strings are closer to 32 bytes (or something else), and then a larger inline buffer would be warranted. And in yet other applications strings are large, so no small string optimization will make any difference; if anything it will slow the application down with unnecessary bookkeeping.

I do find it surprising that none of the implementations linked in the various comments in this thread seem to provide user-tunable sizes; or at least I haven't seen one. Because I can certainly imagine cases where the optimal size is > 24.
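For illustration, a user-tunable inline size could be expressed with const generics. This is a hypothetical sketch (the type name and layout are made up, and it ignores the packing tricks real crates use to stay at 24 bytes), not the API of any crate mentioned in the thread:

```rust
// Strings of at most N bytes (N <= 255, since len is a u8) live inline;
// longer ones fall back to an ordinary heap-allocated String.
enum TunableString<const N: usize> {
    Inline { len: u8, buf: [u8; N] },
    Heap(String),
}

impl<const N: usize> TunableString<N> {
    fn new(s: &str) -> Self {
        if s.len() <= N {
            let mut buf = [0u8; N];
            buf[..s.len()].copy_from_slice(s.as_bytes());
            TunableString::Inline { len: s.len() as u8, buf }
        } else {
            TunableString::Heap(s.to_owned())
        }
    }

    fn as_str(&self) -> &str {
        match self {
            // We only ever copy in a complete &str, so the inline bytes
            // are valid UTF-8 and this unwrap cannot fail.
            TunableString::Inline { len, buf } => {
                std::str::from_utf8(&buf[..*len as usize]).unwrap()
            }
            TunableString::Heap(s) => s.as_str(),
        }
    }
}
```

An application that knows its strings cluster around 32 bytes could then pick `TunableString<32>`, at the cost of a bigger type everywhere it is stored.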

This style of small string optimization tries to take up the same amount of space on the stack as a "normal" heap-allocated string. On 64-bit platforms that is 24 bytes: 8 bytes for the pointer to the heap allocation, 8 bytes for the length (the number of bytes in the string), and 8 bytes for the allocation capacity.

It's quite possible to make the small string buffer larger, but that comes at the cost of the large string representation taking up more space than necessary on the stack. IIRC libstdc++ does this, which makes its std::string take up 32 bytes on the stack.
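As a sketch, the "normal" representation being matched is conceptually something like this (field names are illustrative; the real `String`/`Vec` internals are private):

```rust
use std::mem::size_of;

// Conceptual layout of a heap-allocated growable string on a 64-bit
// target. Field names are illustrative, not Rust's actual internals.
struct RawString {
    ptr: *mut u8,    // 8 bytes: where the heap buffer lives
    len: usize,      // 8 bytes: how many bytes are currently in use
    capacity: usize, // 8 bytes: how many bytes the buffer can hold
}

fn main() {
    assert_eq!(size_of::<RawString>(), 24);
    assert_eq!(size_of::<String>(), 24); // the real type matches
}
```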

  • > It's quite possible to make the small string buffer larger, but that comes at the cost of the large string representation taking up more space than necessary on the stack. IIRC libstdc++ does this, which makes its std::string take up 32 bytes on the stack.

    Though to follow through on that, 24 bytes is also more than necessary. You don't have to be very clever to shrink your string size to 16 bytes (6 bytes for the pointer, 6 for the size, and 4 to store either the capacity or the spare capacity as a floating-point value).
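    For the curious, a 16-byte layout along those lines might look like the sketch below. This is a hypothetical packing (the type name is made up), assuming a 64-bit target where user-space pointers fit in 48 bits; it is not the representation of any crate linked here:

    ```rust
    use std::mem::size_of;

    // Hypothetical 16-byte string header: a 48-bit pointer and a 48-bit
    // length packed into 12 bytes, plus an approximate capacity stored
    // as an f32. Assumes user-space addresses fit in the low 48 bits,
    // which holds on current x86-64 and AArch64.
    #[repr(C)]
    struct PackedString {
        ptr_and_len: [u8; 12], // 6 bytes of pointer | 6 bytes of length
        capacity: f32,         // capacity (or spare capacity) as a float
    }

    const _: () = assert!(size_of::<PackedString>() == 16);
    ```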

  • Oh! So the string itself is still on the heap? I assumed it was all on the stack.

    • No, let me try to explain it differently. If `compact_str` were not used, your normal `String` would take 24 bytes of stack space (regardless of the string's size) plus heap space. What `compact_str` does is avoid the heap entirely when the string content is at most 24 bytes.
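      Roughly, with `compact_str`'s own constructor (a sketch; the exact inline capacity, 24 bytes on 64-bit targets, comes from the crate's documentation):

      ```rust
      use compact_str::CompactString;
      use std::mem::size_of;

      fn main() {
          // Short contents are stored entirely inline: no heap allocation.
          let short = CompactString::new("hello");
          // Longer contents spill to the heap, just like a normal String.
          let long = CompactString::new("a string well over twenty-four bytes long");

          // Either way, the struct itself stays the same size as String.
          assert_eq!(size_of::<CompactString>(), size_of::<String>());
          assert_eq!(short.as_str(), "hello");
          assert!(long.len() > 24);
      }
      ```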


Not sure why the downvotes? This is a sincere question.

  • For a lot of people it will seem obvious that String (Rust's standard library growable string type) is 24 bytes. CompactString is 24 bytes so that it is exactly the same size as String; that's the main idea. That's why they may not have believed you were sincere in asking.

    Why is Rust's String 24 bytes? Well, it is a growable string type, so we need to store 1) where on the heap our growable buffer is, 2) how big that heap space is ("Capacity"), and 3) how much of it we're already using to store the string ("Size" or "Length"). On modern 64-bit computers it's reasonable to use a 64-bit (8-byte) value for each of these three facts, and 3 times 8 = 24.

    In fact Rust (unlike C++) insists on doing this in the simplest and fastest way possible, so the String type is more or less literally Vec<u8> (a growable array of bytes) plus a guarantee that those bytes are UTF-8 encoded text. Vec<u8> is likewise 24 bytes.
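    That equivalence is visible in the standard library API itself (nothing hypothetical here):

    ```rust
    use std::mem::size_of;

    fn main() {
        // String is essentially Vec<u8> plus a UTF-8 guarantee: you can
        // convert between the two without copying the bytes.
        let bytes: Vec<u8> = String::from("héllo").into_bytes();
        let back: String = String::from_utf8(bytes).unwrap(); // re-checks UTF-8
        assert_eq!(back, "héllo");

        // And both headers are the same 24 bytes on a 64-bit target.
        assert_eq!(size_of::<String>(), size_of::<Vec<u8>>());
    }
    ```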

    The rationale is that being simple is a good fundamental design, and (as CompactString illustrates) people can build more sophisticated types if they want to unlock specific optimisations which may suit their application.