Comment by CuriouslyC

1 day ago

Formal verification is a nice idea but it's a big hill to climb from where we're at. Most people can't even get agents to robustly E2E QA code, which is a much smaller hill to climb for (probably) larger benefits. I'm sure this area will improve over time though, since it is an eventual unlock for fully autonomous engineering.

I think for most complex systems, robust E2E QA is a waste of money. A small handful of E2E smoke tests and thoughtful application of smaller tests is usually enough. Though to be fair, agents aren't good at that either.
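To be concrete about the scale I mean: a couple of checks that a running instance can still serve its critical paths, with everything else pushed down to smaller tests. Rough sketch below (pytest/requests assumed; the base URL and endpoints are made up):

    # Minimal E2E smoke suite: hit a running instance and confirm the
    # critical paths still respond. Endpoints and payloads are hypothetical.
    import requests

    BASE = "http://localhost:8080"

    def test_health():
        # Service is up and reports healthy.
        r = requests.get(f"{BASE}/health", timeout=5)
        assert r.status_code == 200

    def test_order_happy_path():
        # One representative user journey, end to end.
        r = requests.post(f"{BASE}/api/orders",
                          json={"sku": "demo", "qty": 1}, timeout=5)
        assert r.status_code in (200, 201)
        assert "id" in r.json()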

I think the section on AI in Zero to QED (a guide to writing proofs in the Lean language) gives a sober path forward from the perspective of market makers and trading:

"Imagine market infrastructure where agents must prove, before executing, that their actions satisfy regulatory constraints, risk limits, fairness properties, and eventually machine-checkable proofs of Pareto efficiency of market mechanisms. This is a big, hairy, ambitious goal. Not “we reviewed the code” but “the system verified the proof.” The agent that cannot demonstrate compliance cannot act."

https://sdiehl.github.io/zero-to-qed/20_artificial_intellige...
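To sketch what "the system verified the proof" could look like mechanically: in Lean you can make the execution path accept only actions that come bundled with a machine-checked proof of the constraint, so an agent that can't discharge the proof obligation simply can't act. Toy example (the order type and risk limit are my own invention, not from the guide):

    -- Toy model: an action can only be constructed together with a
    -- proof that it respects a risk limit.
    structure Order where
      qty : Nat

    def riskLimit : Nat := 1000

    -- An order paired with a machine-checked compliance proof.
    structure CheckedOrder where
      order     : Order
      compliant : order.qty ≤ riskLimit

    -- Execution only accepts CheckedOrder: no proof, no trade.
    def execute (o : CheckedOrder) : String :=
      s!"executed {o.order.qty}"

    -- The proof obligation is discharged by `decide` for concrete values.
    def ok : CheckedOrder :=
      { order := { qty := 5 }, compliant := by decide }

    #eval execute ok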

  • I dream of a future where, before any software is released, we can predict 100 years into the future what effect it will have on every living thing, and not release it if the unhappiness delta for some living thing exceeds a certain threshold.

    • That would require making the behavior of the living things part of the formal specification, which isn’t really possible.