
Comment by indolering

3 days ago

CHERI is undeniably on the rise. Adapting existing code generally requires rewriting less than 1% of the codebase. It offers speedups for existing languages as well as for new languages designed with the hardware in mind. I expect to see it everywhere in about a decade.
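
For a sense of what that "less than 1%" usually looks like, here's a minimal sketch (hypothetical code, not from any real project, assuming a C codebase that round-trips pointers through plain integers): most CHERI porting work is fixing exactly this kind of pointer/integer confusion, because on CHERI a pointer is a tagged capability that a 64-bit long can't hold.

    #include <stdint.h>
    #include <stdio.h>

    /* Pre-CHERI pattern: a pointer smuggled through `long`.  On a CHERI
     * target this strips the capability's tag and bounds, so dereferencing
     * the result faults. */
    static void *decode_legacy(long token)
    {
        return (void *)token;
    }

    /* CHERI-clean version: `uintptr_t` is capability-sized on CHERI, so the
     * pointer's metadata survives the round trip.  This one-type change is
     * representative of the small diffs porting usually needs. */
    static void *decode(uintptr_t token)
    {
        return (void *)token;
    }

    int main(void)
    {
        int x = 42;
        int *p = decode((uintptr_t)&x);
        printf("%d\n", *p);
        (void)decode_legacy;  /* kept only for contrast */
        return 0;
    }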

There's a big 0->1 jump required for it to actually be used by 99% of consumers -- x86 and ARM both have to make a pretty fundamental shift. Do you see that happening? I don't, really.

  • Tbh I can imagine this catching on if one of the big cloud providers endorses it -- including hardware support in a future version of AWS Graviton, or in Azure, with a bunch of foundational software already developed to work with it. If one of those hyperscalers puts in the work, it could get to the point where you can launch a simple container running Postgres or whatever, with the full stack adapted to work with CHERI.

    • CHERI on its own does not fix many of the side-channels, which would need something like "BLACKOUT: Data-Oblivious Computation with Blinded Capabilities", but as I understand it, there is no consensus/infra on how to do efficient capability revocation (potentially in hardware), see https://lwn.net/Articles/1039395/.

      On top of that, as I understand it, CHERI has no widespread concept of how to allow disabling/separation of workloads for ultra-low-latency/high-throughput applications in practical mixed-criticality systems. The only system I'm aware of with practical timing guarantees that also allows virtualization is seL4, but again there are no practical guides with trade-offs in numbers yet.