Comment by pron
19 hours ago
> Forcing function to avoid use-after-free
Doesn't reusing memory effectively allow for use-after-free, only at the program level (even with a borrow checker)?
19 hours ago
> Forcing function to avoid use-after-free
Doesn't reusing memory effectively allow for use-after-free, only at the program level (even with a borrow checker)?
Yes, kind of. In the same sense that Vec<T> in Rust with reused indexes allows it.
Notice that this kind of use-after-free is a ton more benign though. This milder version upholds type-safety and what happens can be reasoned about in terms of the semantics of the source language. Classic use-after-free is simply UB in the source language and leaves you with machine semantics, usually allowing attackers to reach arbitrary code execution in one way or another.
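To make that concrete, here is a minimal Zig sketch of the milder failure mode (all names invented, assuming a recent Zig for the multi-item `for` syntax): a stale index into a fixed pool never touches freed memory, it just observes whichever object currently occupies the reused slot.

```zig
const std = @import("std");

const Session = struct { user_id: u32 };

// A fixed pool whose slots are reused. A stale index never reads freed
// memory; it just sees whatever Session currently occupies the slot --
// a logic bug you can reason about in the source language, not UB.
const SessionPool = struct {
    slots: [4]Session = undefined,
    live: [4]bool = [_]bool{false} ** 4,

    fn acquire(pool: *SessionPool, user_id: u32) usize {
        for (&pool.live, 0..) |*live, i| {
            if (!live.*) {
                live.* = true;
                pool.slots[i] = .{ .user_id = user_id };
                return i;
            }
        }
        unreachable; // toy example: a real pool would return an error here
    }

    fn release(pool: *SessionPool, index: usize) void {
        pool.live[index] = false;
    }

    fn get(pool: *SessionPool, index: usize) *Session {
        return &pool.slots[index];
    }
};

pub fn main() void {
    var pool = SessionPool{};
    const alice = pool.acquire(1);
    pool.release(alice); // `alice` is now a dangling *index*...
    _ = pool.acquire(2); // ...and the slot is reused for another user.
    // "Use-after-free" at the program level: a valid Session, just the wrong one.
    std.debug.print("user_id = {}\n", .{pool.get(alice).user_id});
}
```

Generation counters on the slots are the usual mitigation: a stale handle then fails an explicit check instead of silently aliasing the new occupant.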
The fact that what happens can be reasoned about in the semantics of the source language, as opposed to being UB, doesn't necessarily make the problem "a ton more benign". After all, a program written in Assembly has no UB and all of its behaviours can be reasoned about in the source language, but I'd hardly trust Assembly programs to be more secure than C programs [1]. What makes the difference isn't that it's UB but, as you pointed out, the type safety. But while the less deterministic nature of a "malloc-level" UAF does make it more "explosive", it can also make it harder to exploit reliably. It's hard to compare the danger of a less likely RCE with a more likely data leak.
On the other hand, the more empirical, though qualitative, claim made by matklad in the sibling comment may have something to it.
[1]: In fact, take any C program with UB, compile it, and get a dangerous executable. Now disassemble the executable, and you get an equally dangerous program, yet it doesn't have any UB. UB is problematic, of course, partly because at least in C and C++ it can be hard to spot, but it doesn't, in itself, necessarily make a bug more dangerous. If you look at MITRE's top 25 most dangerous software weaknesses, the top four (in the 2025 list) aren't related to UB in any language (by the way, UAF is #7).
> In fact, take any C program with UB, compile it, and get a dangerous executable. Now disassemble the executable, and you get an equally dangerous program, yet it doesn't have any UB.
I'd put it like this:
Undefined behavior is a property of an abstract machine. When you write any high-level language with an optimizing compiler, you're writing code against that abstract machine.
The goal of an optimizing compiler for a high-level language is to be "semantics-preserving", such that whatever assembly code gets spit out at the end of the process guarantees certain things about the runtime behavior of the program.
When you write high-level code that exhibits UB for a given abstract machine, what happens is that the compiler can no longer guarantee that the resulting assembly code is semantics-preserving.
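As an assumed illustration (not from the thread), the classic case looks like this in Zig; once the abstract machine says the read is undefined, the compiler has no obligation to preserve any particular behavior for it in the emitted machine code:

```zig
const std = @import("std");

pub fn main() !void {
    const gpa = std.heap.page_allocator;

    const counter = try gpa.create(u32);
    counter.* = 42;
    gpa.destroy(counter);

    // Use-after-free: undefined behavior at the language level. Whatever the
    // executable does here (crash, print stale bytes, something stranger)
    // can no longer be reasoned about in terms of the source program.
    std.debug.print("{}\n", .{counter.*});
}
```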
>If you look at MITRE's top 25 most dangerous software weaknesses, the top four (in the 2025 list) aren't related to UB in any language (by the way, UAF is #7).
FWIW, I don't find this argument logically sound, in context. This is data aggregated across programming languages, so it could simultaneously be true that, conditioned on using a memory-unsafe language, you should worry mostly about UB while, at the same time, UB doesn't matter much in the grand scheme of things, because hardly anyone is using memory-unsafe programming languages.
There were reports from Apple, Google, Microsoft and Mozilla about vulnerabilities in browsers/OSes (so, C++ stuff), and I think UB there hovered between 50% and 80% of all security issues?
And the present discussion does seem overall conditioned on using a manually-memory-managed language :0)
There's some reshuffling of bugs for sure but, in my experience, there's also a very noticeable reduction! It seems there's no law of conservation of bugs.
I would say the main effect here is that a global allocator often leads to ad-hoc, "shotgun" resource management all over the place, and that's hard to get right in a manually memory managed language. Most Zig code that deals with allocators has resource management bugs (including TigerBeetle's own code at times! Shoutout to https://github.com/radarroark/xit as the only code base I've seen so far where finding such a bug wasn't trivial). E.g., in the OP, memory is leaked on allocation failures.
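For anyone who hasn't hit that bug class, a hypothetical sketch (function names invented) of how the leak happens, and the usual `errdefer` fix:

```zig
const std = @import("std");

// Leaky version: if the second allocation fails, the early error return
// skips any cleanup and `a` is never freed.
fn makeBuffersLeaky(gpa: std.mem.Allocator) ![2][]u8 {
    const a = try gpa.alloc(u8, 4096);
    const b = try gpa.alloc(u8, 4096); // an error here leaks `a`
    return .{ a, b };
}

// Fixed version: pair each acquisition with an errdefer release.
fn makeBuffers(gpa: std.mem.Allocator) ![2][]u8 {
    const a = try gpa.alloc(u8, 4096);
    errdefer gpa.free(a); // runs only if a later `try` fails
    const b = try gpa.alloc(u8, 4096);
    return .{ a, b };
}
```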
But if you manage resources statically, you just can't do that: you are forced to centralize the codepaths that deal with resource acquisition and release, and that drastically reduces the amount of bug-prone code. You _could_ apply the same philosophy to allocating code, but static allocation _forces_ you to do that.
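A toy sketch of what that forced centralization can look like (sizes and names invented): every block the subsystem will ever use is declared up front, and the only way to obtain or return one is through two functions, so the acquisition/release logic lives in exactly one place.

```zig
const std = @import("std");

const block_size = 512;
const block_count = 64;

// All the memory this subsystem will ever use, allocated statically.
var blocks: [block_count][block_size]u8 = undefined;
var block_acquired: [block_count]bool = [_]bool{false} ** block_count;

// The only way to obtain a block.
fn block_acquire() ?usize {
    for (&block_acquired, 0..) |*acquired, i| {
        if (!acquired.*) {
            acquired.* = true;
            return i;
        }
    }
    return null; // at the static limit: callers must handle this explicitly
}

// The only way to give one back.
fn block_release(index: usize) void {
    std.debug.assert(block_acquired[index]); // can only release what we own
    block_acquired[index] = false;
}

// Access asserts the invariant rather than trusting the caller.
fn block_data(index: usize) *[block_size]u8 {
    std.debug.assert(block_acquired[index]);
    return &blocks[index];
}
```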
The secondary effect is that you tend to just more explicitly think about resources, and more proactively assert application-level invariants. A good example here would be compaction code, which juggles a bunch of blocks, and each block's lifetime is tracked both externally:
* https://github.com/tigerbeetle/tigerbeetle/blob/0baa07d3bee7...
and internally:
* https://github.com/tigerbeetle/tigerbeetle/blob/0baa07d3bee7...
with a bunch of assertions all over the place to triple-check that each block is accounted for and is where it is expected to be:
https://github.com/tigerbeetle/tigerbeetle/blob/0baa07d3bee7...
I see a weak connection with proofs here. When you are coding with static resources, you generally have to make informal "proofs" that you actually have the resource you are planning to use, and these proofs are materialized as a web of interlocking asserts, and the web works only when it is correct as a whole. With global allocation, you can always materialize fresh resources out of thin air, so nothing forces you to build such a web of proofs.
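A compressed, invented illustration of that web (nothing like the real compaction code linked above): the same fact is tracked in two places, and every transition asserts that both views still agree, so a stale or double-released block trips an assert instead of silently corrupting state.

```zig
const std = @import("std");
const assert = std.debug.assert;

const BlockState = enum { free, owned_by_compaction, owned_by_cache };

var block_state = [_]BlockState{.free} ** 8; // internal, per-block view
var compaction_block_count: u32 = 0; // external count kept by compaction

fn compaction_take(index: usize) void {
    assert(block_state[index] == .free); // nobody else may own it
    block_state[index] = .owned_by_compaction;
    compaction_block_count += 1;
    assert(compaction_block_count <= block_state.len); // views stay consistent
}

fn compaction_give_back(index: usize) void {
    assert(block_state[index] == .owned_by_compaction); // we really own it
    assert(compaction_block_count > 0);
    block_state[index] = .free;
    compaction_block_count -= 1;
}
```

Remove any single assert and the informal "proof" weakens, which is the interlocking property described above.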
To more explicitly set the context here: the fact that this works for TigerBeetle of course doesn't mean that it generalizes, _but_ the disproportionate number of bugs we've had in the small amount of gpa-using code we do have makes me think that there's something more here than just TB's house style.
That's an interesting observation. BTW, I've noticed that when I write in Assembly I tend to have fewer bugs than when I write in C++ (and they tend to be easier to find). That's partly because I'm more careful, but also because I only write much shorter and simpler things in Assembly.