Comment by jfkimmes
17 hours ago
They hint at their AI-augmented reversing methodology, which demonstrates one of the core strengths of current LLM agents. These models, trained extensively on code, can immensely speed up the process of understanding complex system internals.
Security research historically has two difficult components that build on one another:

1. Understanding complex system internals: uncovering the inner workings hidden by abstractions or interfaces
2. Finding vulnerabilities in these uncovered mechanisms
Sometimes both steps are equally hard. But often, once the real mechanisms are uncovered and you no longer have to rely on assumptions about the inner workings, finding the vulnerability is trivial.
CVE-2026-3854 is a case where the vulnerability is not plainly obvious after understanding the internals. Still, I am confident that this command injection would have been found quickly had it been exposed to a more traditional or accessible attack surface.
Yep, there was a training signal that helped with reverse engineering C++, and it could have been good at helping mass-port C++ to plain and simple C.
But recently that signal got somewhat scrambled, or perhaps sabotaged, by C++ fanboys: coding AIs would help get rid of the dev/vendor lock-in created by C++ syntax complexity.