Very often if you have text, which this does, you can make huge savings by being intelligent with the text.
Rust intentionally provides the simplest possible growable string buffer String, which is literally (under the hood, you can't poke this legitimately) Vec<u8> plus the promise that this is UTF-8 text.
But you might find your needs better served by one (or several) of:
Box<str> -- you don't need capacity, so, don't store it => length == capacity (see the sketch after this list)
CompactString -- use the entire 24 bytes for SSO, up to 24 bytes of UTF-8 inline, obviously doesn't make sense if all or the vast majority of your strings are 25 bytes or longer
ColdString -- same idea but for 8 bytes, and also not storing capacity; this only makes sense over Box<str> if you have plenty of <= 8 byte strings
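A quick way to see the layout difference on a 64-bit target -- a minimal std-only sketch (CompactString lives in the third-party compact_str crate, so it's only mentioned in a comment here):

```rust
use std::mem::size_of;

fn main() {
    // String: pointer + length + capacity.
    assert_eq!(size_of::<String>(), 24);
    // Box<str>: pointer + length only -- capacity == length by construction.
    assert_eq!(size_of::<Box<str>>(), 16);

    // into_boxed_str shrinks the buffer to fit, then drops the capacity field.
    let s: Box<str> = String::from("hello").into_boxed_str();
    assert_eq!(&*s, "hello");

    // compact_str::CompactString is also 24 bytes, but stores up to
    // 24 bytes of UTF-8 inline with no heap allocation at all.
}
```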
There's really an endless list of these optimizations. A few I've used (though not necessarily in rust):
Atoms: Each string can be referenced with a single u32 or even u16, and they're inherently deduplicated (a minimal interner sketch follows this list).
Bump allocator: your strings are &str, allocation is super fast with limited fragmentation.
Single pointer strings (this has a name, I can't think of it right now): you store the length inside the allocation instead of in each reference, so your strings are a single pointer.
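To make the atom idea concrete, a minimal grow-only interner sketch (names made up for illustration; real interners avoid storing each string twice):

```rust
use std::collections::HashMap;

/// Each distinct string gets a u32 handle, and equal strings always
/// map to the same handle -- deduplication falls out for free.
#[derive(Default)]
struct Interner {
    map: HashMap<String, u32>,
    strings: Vec<String>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> u32 {
        if let Some(&id) = self.map.get(s) {
            return id;
        }
        let id = self.strings.len() as u32;
        self.strings.push(s.to_owned());
        self.map.insert(s.to_owned(), id);
        id
    }

    fn resolve(&self, id: u32) -> &str {
        &self.strings[id as usize]
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("hello");
    let b = interner.intern("hello");
    assert_eq!(a, b); // same handle, one copy of the string
    assert_eq!(interner.resolve(a), "hello");
}
```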
Atoms: is this similar to interned strings?
> There's really an endless list of these optimizations.
These aren't really optimizations. They are specialized implementations that introduce design and architectural tradeoffs.
For example, Rust's Atom represents a string that has been interned, and it's actually an implementation of a design pattern popular in the likes of Erlang/Elixir. This is essentially a specialized implementation of the old Flyweight design pattern, where managing N independent instances of an expensive read-only object is replaced with a singleton instance that's referenced through a key handle.
I would hardly call this an optimization. It actually represents a significant change to a system's architecture. You have to introduce a set of significant architectural constraints into your system to leverage a specific tradeoff. This isn't just a tweak that makes everything run magically leaner and faster.
> String, which is literally (under the hood, you can't poke this legitimately) Vec<u8>
`String::as_mut_vec` kinda implies that, since it gives you access to that underlying `Vec`, which must then exist somewhere.
I looked it up: https://doc.rust-lang.org/std/string/struct.String.html#meth...
In case anyone else was wondering: yes, it's `unsafe`.
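For reference, a small sketch of that escape hatch; it's unsafe because the caller must keep the bytes valid UTF-8:

```rust
fn main() {
    let mut s = String::from("hello");
    unsafe {
        // Writing non-UTF-8 bytes here would break String's invariant,
        // hence the unsafe.
        let v: &mut Vec<u8> = s.as_mut_vec();
        v[0] = b'H'; // ASCII, so still valid UTF-8
    }
    assert_eq!(s, "Hello");
}
```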
CompactStr doesn't have any additional runtime overhead iirc, right? So in theory you can drop it in everywhere, even when you expect > 25 chars. Maybe an extra branch in the > 25 char case?
SSO does have overhead. Firstly, on every access you have a branch. Secondly, and more severely, the "most general" umbrella type that all string methods are defined on is a string slice, and whereas conversion from `String` to `&str` is literally a no-op, SSO strings require work to be done to convert them to string slices. Furthermore, note that in the (surprisingly common) case where the string is zero-length, String already skips the allocation, same as an SSO string.
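To make that branch concrete, here's a toy SSO layout (a hand-rolled sketch for illustration, not how CompactString is actually implemented):

```rust
/// Toy small-string type: short strings live inline, long ones on the heap.
enum SsoString {
    Inline { len: u8, buf: [u8; 22] },
    Heap(String),
}

impl SsoString {
    /// Unlike `String::as_str` (a no-op), every access must first
    /// branch on which representation is in use.
    fn as_str(&self) -> &str {
        match self {
            SsoString::Inline { len, buf } => {
                // Safe to unwrap in this sketch: we only ever store UTF-8.
                std::str::from_utf8(&buf[..*len as usize]).unwrap()
            }
            SsoString::Heap(s) => s.as_str(),
        }
    }
}

fn main() {
    let mut buf = [0u8; 22];
    buf[..2].copy_from_slice(b"hi");
    let short = SsoString::Inline { len: 2, buf };
    let long = SsoString::Heap(String::from("a string longer than 22 bytes..."));
    assert_eq!(short.as_str(), "hi");
    assert!(long.as_str().len() > 22);
}
```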
I wish Box and Option got specialized shorthand syntax in Rust, say `^`/`?` or something like that.
Box<T> used to be ~T early on in Rust (then it became a `box` keyword, before being removed entirely). They got rid of it because they wanted to move more things into libraries and have a less opinionated compiler.
I think I agree though, especially with Option. Swift’s option syntax (and kotlin’s which is similar) is so much better, a simple question mark in the type. Options are important enough that dedicated syntax makes so much sense. Rust blew their chance here with ? meaning “maybe early return”, it would have been a lot more useful as an Option indicator.
If anyone's doing this kind of optimization, dhat-rs is worth a look: it shows you exactly which fields and call sites are eating memory, instead of just a total. Saves a lot of guessing about where to start.
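A minimal setup sketch, going from memory of the dhat crate's documented heap-profiling entry points (the workload here is made up):

```rust
// Cargo.toml: dhat = "0.3"

// Route all allocations through dhat's wrapping allocator.
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

fn main() {
    // Profiles until dropped, then writes dhat-heap.json for DHAT's
    // viewer, broken down by allocation site.
    let _profiler = dhat::Profiler::new_heap();

    // Hypothetical workload: mostly-None optional boxes.
    let _data: Vec<Option<Box<[u8; 1024]>>> = (0..1_000)
        .map(|i| (i % 100 == 0).then(|| Box::new([0u8; 1024])))
        .collect();
}
```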
thanks for this :>
So there are now two ways to represent the same state: None or Some(struct whose fields are all None). Even though one of these representations is never produced by the deserialization routine, anyone could construct it if the constructor is public. And even if they don't, the different representations will show up in pattern matching as separate paths for every access to the field. This looks like a good opportunity to make these types (optimized for storage) private, and to define public view objects/accessors (optimized for usage) on top of them that merge equivalent representations.
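A sketch of that split, with made-up names -- the storage type stays private and a public accessor merges the two equivalent representations:

```rust
// Private storage shape, optimized for the common all-None case.
#[derive(Default)]
struct ExtrasStorage {
    nickname: Option<String>,
    motto: Option<String>,
}

pub struct Record {
    // None and Some(all-None) mean the same thing; keeping this field
    // private stops outside code from observing the difference.
    extras: Option<Box<ExtrasStorage>>,
}

impl Record {
    /// Public view: both representations collapse into one answer.
    pub fn nickname(&self) -> Option<&str> {
        self.extras.as_deref().and_then(|e| e.nickname.as_deref())
    }
}

fn main() {
    let a = Record { extras: None };
    let b = Record { extras: Some(Box::new(ExtrasStorage::default())) };
    // Equivalent states give equivalent answers through the view.
    assert_eq!(a.nickname(), b.nickname());
}
```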
tbh "trait" feels like a very problematic name for that type, for this kind of educational purpose - `trait` is already an established concept and keyword: https://doc.rust-lang.org/book/ch10-02-traits.html
It's especially problematic because in most cases traits don't have the kind of memory behavior this article describes - by default they're unsized, because a trait is a description of behavior, not data, and you can't even use one as a struct field without extra work.
Like, replace "trait" in here with "box" and see how confusing it would be to describe how you saved memory by boxing your box, because Option doesn't box its contents like many other languages do.
Are there any tools that help finding these kinds of things? Like a profiler that says "80% of the allocated bytes are objects of this type, with 95% of those having that field set to None"
It would be super useful since I think this is pretty likely to be surprising to many users. But the profiler would need to be a particularly-specific refinement of even that: you need to make it obvious that it's not "95% of your Option<Thing>s are None, and your Option<Thing>s are using X bytes", but that "95% of the bytes used for your Option<Thing>s are used for None versions." Otherwise you could just assume that your non-None ones are just that chunky, or you have that many of them... I haven't seen a profiler with that level of insight, unfortunately.
Perhaps because this feels like a fairly rust-specific gotcha. Especially if you're coming from languages where there's often not much syntactical distinction made between "this is a pointer because I don't want to be copying it" and "this is a pointer because it's optional."
For instance, it's not until now that I actually understood what the sibling comment about the Enum type size discrepancy lint meant: "This lint obviously cannot take the distribution of variants in your running program into account. It is possible that the smaller variants make up less than 1% of all instances, in which case the overhead is negligible and the boxing is counter-productive. Always measure the change this lint suggests." I had always accidentally read this backwards, thinking it meant something more to the effect of "if most of the instances are actually small, then it's not a problem here, but be aware that some of them are much larger so some of your calls to things with this could end up passing much larger types."
"You have 400 megabytes of zeros in <this type>" is probably a pretty easy heuristic to add.
The closest I am aware of is clippy (`cargo clippy` in a standard Rust project will run it with default configurations).
Clippy is essentially a linter, and one of its checks catches cases where different enum variants have significantly different sizes, with a suggestion to Box the larger variant.
Since this is just a linter, it doesn't actually have any knowledge of how frequently each variant is actually used. It also doesn't address the situation in the article at all.
Specifically this lint: https://rust-lang.github.io/rust-clippy/master/index.html#la...
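For reference, the shape of code that lint (large_enum_variant) flags, with the boxed fix it suggests:

```rust
use std::mem::size_of;

// Flagged: every Message is 512+ bytes, even though Ping carries nothing.
enum Message {
    Ping,
    Payload([u8; 512]),
}

// Suggested fix: box the large variant. Costs an allocation and an
// indirection for Payload, but the enum shrinks to pointer size.
enum SlimMessage {
    Ping,
    Payload(Box<[u8; 512]>),
}

fn main() {
    assert!(size_of::<Message>() > 512);
    // In practice rustc uses the Box's non-null niche for the tag,
    // making this 8 bytes on 64-bit (not a language guarantee for
    // arbitrary enums, unlike Option<Box<T>>).
    assert!(size_of::<SlimMessage>() <= 16);
}
```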
I've personally found heaptrack[1] pretty good for this task: very straightforward to use, and the info is detailed enough. Though it'll only tell you where allocations are happening (e.g. the allocation rate for Box::new), not exactly what type they are, since that info isn't available at runtime. Usually that kind of insight is reserved for GC-based languages, which keep track of counts for each object type.
1: https://github.com/kde/heaptrack
I'm a huge fan of perfetto. It requires some manual steps to get working with Linux, but it's a great tool: https://perfetto.dev/docs/data-sources/native-heap-profiler#...
I think the number of instances should be a clue that you need to look at the layout.
Small correction:
> a lot of boxes means a fragmented heap. In such case it's not a problem but this might be worth keeping in mind.
A good malloc will be able to handle this without issue, due to various optimizations that specifically fight fragmentation. The default Linux malloc (glibc) may have issues, but I did say good malloc (and even glibc generally shouldn't struggle with the pattern described, I think).
I quite often have this issue with async. You get a state machine that is huge because of how Rust builds it.
This clippy lint does a good job of warning you when this might happen: https://rust-lang.github.io/rust-clippy/master/index.html?se...
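A sketch of the pattern: anything alive across an .await is stored in the compiler-generated state machine, so a large local inflates the whole future. The types and sizes here are illustrative:

```rust
// This future is at least 16 KiB, because buf must survive the suspension.
async fn chunky() -> u8 {
    let buf = [0u8; 16 * 1024];
    std::future::ready(()).await; // buf is held across this .await
    buf[0]
}

// Boxing the large local keeps the state machine small; clippy's
// large_futures lint instead suggests boxing the whole future.
async fn slim() -> u8 {
    let buf = Box::new([0u8; 16 * 1024]);
    std::future::ready(()).await;
    buf[0]
}

fn main() {
    // No runtime needed: we measure the futures without polling them.
    println!("chunky: {} bytes", std::mem::size_of_val(&chunky()));
    println!("slim:   {} bytes", std::mem::size_of_val(&slim()));
}
```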
std::alloc::Allocator when?
Box<str> is still two words (length and pointer). That's better than the 3 words (length, pointer, capacity) of String, but Box<String> is one word (not including the heap allocation).
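Concretely, on a 64-bit target:

```rust
use std::mem::size_of;

fn main() {
    assert_eq!(size_of::<String>(), 24);     // ptr + len + cap
    assert_eq!(size_of::<Box<str>>(), 16);   // ptr + len
    assert_eq!(size_of::<Box<String>>(), 8); // ptr only...
    // ...but the 24-byte String header still exists, just on the heap,
    // so reads go pointer -> String header -> bytes (double indirection).
}
```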
I wonder from time to time whether you can decide the best "schema shape" beforehand, i.e. before you can run real workloads that stress the memory implications of such things. This can be very useful if you are trying to decide the boundary of some public facing API, but for whatever reason can't run benchmarks (lack of impl, data, time, etc).
Without that, if you try to suggest a transformation like this when the schema is first conceived, it will likely be considered premature optimization.
TLDR: use a nullable pointer (Option<Box<T>>, which the niche optimization keeps pointer-sized) instead of inlining rarely-populated nested structs, to save memory.
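The trick in miniature, with a made-up struct (sizes are for a 64-bit target):

```rust
use std::mem::size_of;

struct Metadata {
    a: Option<String>,
    b: Option<String>,
    c: Option<String>,
}

fn main() {
    // Inlined: every instance pays for every field, even when all are None.
    assert_eq!(size_of::<Metadata>(), 72);
    // Behind a nullable pointer: Box is never null, so None is encoded as
    // the null pointer and the Option wrapper costs nothing extra.
    assert_eq!(size_of::<Option<Box<Metadata>>>(), 8);
}
```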