Comment by ethbr1
5 hours ago
'Anthropic is / isn't lying about Mythos's capabilities' is the less interesting conversation.
The more interesting one is:
1. Assuming even incremental AI coding intelligence improvements
2. Assuming increased AI coding intelligence enables it to uncover new zero day bugs in existing software
3. Then the open source vs. closed source tradeoff and security/patch timelines will all need to change fundamentally
Whether or not Mythos qualifies as (1), as long as (2) is true, it seems there will eventually be a model with sufficient improvements, which leads to (3) anyway.
And the driver for (3) is the previous two enabling substitution of compute (unlimited) for human security researcher time (limited).
Which raises questions about whether closed source will provide any protection (it doesn't appear so, given how capable AI tools already are at disassembly?), whether model rollouts now need a responsible disclosure period built in before public release, and how geopolitics plays into this (is Mythos access being offered to the Chinese government?).
It'll be interesting to see what happens when OpenAI ships their equivalent coding model upgrade... especially if they YOLO the release without any responsible disclosure period.
> Which raises questions about whether closed source will provide any protection (it doesn't appear so, given how capable AI tools already are at disassembly?)
Disassembly implies that you're still distributing binaries, which isn't the case for web-based services. Of course, these models can still likely find vulnerabilities in closed-source websites, but probably not to the same degree, especially if you're trying to minimize your dependency footprint.
You're still in a position where any disclosure of your binary, known or unknown, puts you at risk. At best it's a false sense of security.
> it doesn't appear so, given how capable AI tools already are at disassembly?
If that's your concern, the shareware industry developed tools to obfuscate assembly even against the most brilliant hackers.
That's not true. They did use obfuscation, but the main sneaky thing they did was to make hackers think they had found all of the checks, and then hide checks that would only trigger halfway through the game. That kind of obfuscation is also not relevant to security vulnerabilities.
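The delayed-check trick described above can be sketched in a few lines. This is a toy illustration, not code from any real shareware title; all names, the key, and the level threshold are made up:

```python
import hashlib

# Hypothetical valid key for this toy example.
EXPECTED = hashlib.sha256(b"VALID-KEY").hexdigest()

def startup_check(key: str) -> bool:
    """The obvious check -- the first thing a cracker finds and patches."""
    return hashlib.sha256(key.encode()).hexdigest() == EXPECTED

class Game:
    def __init__(self, key: str):
        if not startup_check(key):
            raise SystemExit("invalid key")
        self.key = key
        self.level = 0

    def advance_level(self) -> None:
        self.level += 1
        # The hidden check: re-verifies the key only once the player is
        # halfway in, long after a cracker believes every check has been
        # found. It quietly resets progress instead of exiting, which
        # hides the cause and makes the crack look "mostly working".
        if self.level == 5 and not startup_check(self.key):
            self.level = 0
```

The point of failing quietly (resetting state rather than exiting) is that a cracked copy appears to work in testing and only misbehaves deep into play.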
AI is already superhuman at reading and understanding assembly and decompilation output, especially for obfuscated binaries. I have tried giving the same binary with and without heavy control flow obfuscation to the same model, and it was able to understand the obfuscated one just fine.
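For a concrete picture of the kind of control flow obfuscation being discussed, here is a minimal sketch of control-flow flattening, shown in Python for readability (the function names are invented for illustration): the structured branches of a function are replaced by one dispatch loop over numbered states, which is the shape flattening obfuscators typically emit and which models nonetheless read through:

```python
def absdiff(a: int, b: int) -> int:
    # Original, straightforward control flow.
    if a > b:
        return a - b
    return b - a

def absdiff_flat(a: int, b: int) -> int:
    # Control-flow-flattened equivalent: every basic block becomes a
    # numbered state, and a single loop dispatches between them, hiding
    # the original branch structure from casual static reading.
    state, r = 0, 0
    while True:
        if state == 0:          # entry block: decide which branch
            state = 1 if a > b else 2
        elif state == 1:        # then-branch block
            r, state = a - b, 3
        elif state == 2:        # else-branch block
            r, state = b - a, 3
        else:                   # exit block
            return r
```

Both functions compute the same result; only the shape of the control flow differs, which is why behavior-level reasoning (the thing the models are good at) defeats it.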