
Comment by falcor84

18 hours ago

As I see it, the focus should not be on the coding but on the testing, and particularly the security evaluation. For critical infrastructure especially, I would want us to have a testing approach so reliable that it wouldn't matter who or what wrote the code.

I don't think that will ever be possible.

At some point security becomes: the program does what the human asked it to do, without the human realizing that it isn't what they actually wanted.

No amount of testing can fix logic bugs due to bad specification.
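To make that concrete, here is a hypothetical sketch (the function, numbers, and the "spec" are all invented for illustration) of a specification-level bug that no amount of testing catches, because the tests are derived from the same flawed spec as the code:

```python
def apply_discount(total: float) -> float:
    """Spec as written: '10% off orders over $100'."""
    if total > 100:
        return round(total * 0.90, 2)
    return total

# Tests written from the same spec all pass...
assert apply_discount(150.00) == 135.00
assert apply_discount(50.00) == 50.00

# ...but the stakeholder actually meant '$100 or more':
# apply_discount(100.00) returns 100.00, not 90.00.
# No test flags this, because the boundary in the spec
# itself was wrong, and the tests encode the same spec.
```

The code is "correct" against its specification; the defect lives entirely in the gap between what was written down and what was wanted.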

  • AI as advanced fuzz-testing is ridiculously helpful though - hardly any bug you can find in this sort of advanced system is a specification logic bug. It's low-level security-based stuff: finding ways to DoS a local process, or work around OS-level security restrictions, etc.

    • I'm kind of doubtful that AI is all that great at fuzz testing. Putting that aside, though, we are talking about web browsers here. Security issues from bad specification, or from misunderstanding the specification, are relatively common.

    • Re-read the thread you are replying to.

      Each of the last 4 comments in your thread (including yours) is conflating what it means by AI.

  • Well, yes, agreed - that is the essential domain complexity.

    But my argument is that we can work to minimize the time we spend on verifying the code-level accidental complexity.

    • Sure, but that is what we've been doing since the early 2000s (e.g. ASLR, read-only stacks, static analysis, etc).

      And we've had some successes, but I wouldn't expect any game-changing breakthroughs any time soon.
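As an aside, the mechanical bug-hunting discussed above can be sketched as a bare-bones random fuzzer. The length-prefixed parser here is a hypothetical stand-in for a real target; production fuzzers (AFL, libFuzzer) are coverage-guided rather than purely random, but the shape of the loop is the same:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser (hypothetical): first byte is a length, rest is payload."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:]
    if len(payload) < n:
        raise ValueError("truncated payload")
    return payload[:n]

def fuzz(iterations: int = 10_000, seed: int = 0) -> list[bytes]:
    """Throw random byte strings at the parser; collect any input that
    raises something other than the documented ValueError."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_length_prefixed(data)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception:
            crashes.append(data)  # anything else is a bug worth keeping
    return crashes
```

Note what this does and doesn't find: it shakes out crashes and unexpected exceptions, i.e. exactly the low-level accidental-complexity bugs, while saying nothing about whether the parser's format was the right format to begin with.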

I have been thinking about this lately, and aren't testing and security evaluation a much harder problem than designing and carefully implementing new features? I think that vibecoding automates the easiest step in SW development while making the more challenging/expensive steps harder. How are we supposed to debug complex problems in critical infrastructure if no one understands the code? It is possible that in the future agents will be able to do that, but it feels to me that we are not there yet.

I disagree. Thorough testing provides some level of confidence that the code is correct, but there's immense value in having infrastructure which some people understand because they wrote it. No amount of process around your vibe slop can provide that.

  • That's just status quo, which isn't really holding up in the modern era IMO.

    I'm sure we'll have vibed infrastructure and slow infrastructure, and one of them will burn down more frequently. Only time will tell who survives the onslaught and who gets dropped, but I personally won't be making any bets on slow infrastructure.

  • I somewhat agree, but even then I would argue that the proper level at which this understanding should reside is that of the architecture and data-flow invariants, rather than the code itself. And these can actually be enforced quite well as tests against human-authored diagrammatic specs.

    • If you don't fully understand the code, how do you know it implements your architecture exactly, and without doing so in a way that has implications you hadn't thought of?

      As a trivial example, I just found a piece of irrelevant crap in some code I generated a couple of weeks ago. It worked in the simple cases, which is why I never spotted it, but it would have had some weird effects in more complicated ones. Perhaps my prompting didn't explain things well enough, but how was I to know I had failed without reading the code?
