Comment by achierius

5 months ago

The other 15/16 of attempts would crash, though, and an exploit that unreliable is not practically usable in production, both because it would be obvious to the user / send diagnostics upstream, and because when you stack a few of those 15/16s together it's actually going to take quite a while to get lucky.
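
As a rough back-of-the-envelope illustration (assuming, say, three independent tag guesses in the chain, each with a 1/16 chance of going unnoticed; the numbers are illustrative, not tied to any particular exploit):

    (1/16)^3 = 1/4096, i.e. roughly a 0.024% chance the whole chain survives
    any given try, with thousands of crash-producing attempts along the way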

Typically it's 14/15, since a tag is normally reserved for metadata, free data, etc. The Linux kernel reserves multiple tags for internal kernel use, since the feature was introduced upstream as more of a hardware-accelerated debugging feature, even though it's very useful for hardening.
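
To spell out where the 14/15 comes from (assuming 16 possible tag values, as the 15/16 above implies):

    16 tag values - 1 reserved = 15 usable tags
    P(a blind guess trips the tag check) = 14/15 ≈ 93%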

  • It's more complicated than that; I just use 15/16 to gesture at the general idea. E.g., some strategies for ensuring adjacent objects' tags don't collide include splitting the tag range in half and tagging from one half or the other based on the parity of an object's index within its slab allocation region (rough sketch below). But even 1/7 is still solid.
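
    A minimal sketch of that parity-split idea, assuming 4-bit tags with tag 0 reserved; the function name and layout are hypothetical, not the Linux kernel's actual implementation:

        #include <stdint.h>
        #include <stddef.h>

        /* Even-indexed objects in a slab draw tags from 1..7, odd-indexed
           objects from 8..15, so two adjacent objects can never share a tag. */
        uint8_t pick_tag(size_t slab_index, uint64_t rnd)
        {
                if (slab_index % 2 == 0)
                        return (uint8_t)(1 + rnd % 7);   /* 7 possible tags */
                return (uint8_t)(8 + rnd % 8);           /* 8 possible tags */
        }

    With that split, a guess in the wrong half always faults, and a guess in the right half succeeds at best 1 time in 7, which is where the 1/7 above comes from.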

I get that. That's why I'm adding the caveat that this doesn't protect you against attackers who are in a position to try multiple times.

  • Detection is 14/15ths of the battle. Forcing attackers to produce a brand-new exploit chain every few weeks massively increases attack cost, which could make it uneconomical except for national-security targets.

    • It will be really interesting to see how well that part of the story works out!

      What we're essentially saying is that evading detection is now 14/15 of the battle, from the attacker's perspective. Those people are very clever.