Comment by codelion

2 days ago

Interesting point. It's almost a form of "design by verifiability" – prioritizing architectures that lend themselves to easier reasoning, even if other architectures might offer marginal performance gains.

The cool thing is that you can sometimes "retcon" an existing system using this approach.

In that case you're not making any performance concessions at all: the actual running code isn't changing. You're just imposing an abstraction on top of it for the purposes of analysis, and that analysis can still bear fruit (e.g., quashing a bug or simply increasing understanding/confidence). That sounds like what Hillel did with the A, B, C example in the blog post.
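One way to picture this "retcon" is to write the abstract model separately and check it against behavior the real system has already exhibited, e.g., a recorded event log, without touching the production code. Here's a minimal sketch; the lock-style model and all names are hypothetical, not from the post:

```python
# "Retconning" a model onto an existing system: the running code is
# untouched; we only replay its observed behavior against an abstract
# state machine. All names here are hypothetical illustrations.

# Abstract model: a lock must alternate acquire/release, starting free.
VALID_TRANSITIONS = {
    ("free", "acquire"): "held",
    ("held", "release"): "free",
}

def check_trace(events):
    """Replay a recorded event trace against the abstract model.

    Returns the index of the first event the model forbids, or None
    if the whole trace conforms.
    """
    state = "free"
    for i, event in enumerate(events):
        nxt = VALID_TRANSITIONS.get((state, event))
        if nxt is None:
            return i  # the real system did something the model forbids
        state = nxt
    return None

# A trace scraped from the existing system's logs (hypothetical):
trace = ["acquire", "release", "acquire", "acquire"]
print(check_trace(trace))  # → 3: the double-acquire is caught by the model
```

Even this toy version shows the payoff: a mismatch either exposes a real bug or sharpens the model, and either outcome increases confidence without changing the deployed code.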

Another sort of funny point is that this basic trick is most often reached for precisely when performance concessions aren't acceptable: it provides a modicum of stability/debuggability to an ostensibly uber-optimized system which, once deployed, turns out to be kinda fragile.