Comment by userbinator

1 day ago

That is something a good static analyser or even optimising compiler can find ("opaque predicate detection") without the need for AI, and belongs in the category of "warning" and nowhere near "exploitable". In fact a compiler might've actually removed the unreachable code completely.

Well yeah, it’s a toy example to illustrate a point in an HN discussion :).

Imagine “silly mistake” is a parameter: rename it “error_code” (passed by reference), put a label named “cleanup” right before the if statement, and throw in a ton of “goto cleanup” statements until the function’s control flow is hard to follow — if you want it to model real code ever so slightly more closely.

It will be interesting to see the bugs it’s actually finding.

It sounds like they will fall into the lower CVE scores - real problems but not critical.

  • That's what I'm saying; a static analyser will be able to determine whether the code and/or state is reachable without any AI, and it will be completely deterministic in its output.

    • You cannot tell if code is actually reachable if it depends on runtime input.

      Those really evil bugs are the ones that exist in code paths that only trigger 0.001% of the time.

      Often, the code path is not triggerable at all with regular input. But with malicious input, it is, so you can only find it through fuzzing or human analysis.

  • Why hasn't it then? The Linux kernel must be among the most heavily-audited pieces of software in existence, and yet these bugs were still there.