Comment by malwrar
3 days ago
We should assume sophisticated attackers, AI-enabled or otherwise, from here on out, and stop giving leeway to organizations that can't secure their systems properly or keep customers safe when they are breached. Decades of warnings from the infosec community have fallen on the deaf ears of people whose attitude is "it doesn't hurt, so I'm not going to fix it", and those are exactly the people whose opinions have mattered in the places that count.
I remember, a decade or so ago at defcon, talking to a _loosely_ affiliated team where one guy would look for the app exploit, another would figure out how to pivot out of the sandbox to the OS, and another would figure out how to get root; once they all had their pieces figured out, they'd just smash them (and variants) together for a campaign. I hadn't heard of them before meeting them and haven't heard of them since, but they put a face on a silent, coordinated adversary model that must be growing in prevalence as more and more folks realize the value of computer knowledge and gain access to it through one means or another.
Open source tooling enables large-scale participation in security testing, and something about humans generally produces a distribution where a few nuts use their lighters to burn down forests but most use them to light campfires. We urgently need to design systems that can survive the era of advanced threats, at least to the point where the best an adversary can achieve is service disruption. I'd rather live in a world where we can all work toward a better future than one where we hope that limiting access will prevent catastrophe. That world assumes such limits can even be maintained, and that letting architects pretend fires can never happen in their buildings somehow frees them from obeying fire codes or installing alarms & marked exits.
Would you say the same about all people being responsible for safeguarding their own reputations against reputational attacks at scale, all communities having to protect against advanced persistent threats infiltrating them 24/7, and all people's immune systems having to protect against designer pathogens built by AI-assisted terrorists?
I think a full understanding of the spectrum of these threats will lead to the construction of robust safeguards against them. Reputational attacks at scale are a weakness of the current platforms through which we consume news, form community, and build trust. The computer attacks described in the article are caused by sloppy design/implementation from folks whose daily incentives favor delivering features over writing safe code. "Designer pathogens" were described as an accessible form of terrorism long before AI existed. All of these threats and others like them predate AI, and they would continue to exist if AI were snapped out of existence right now. The excuse for not preventing or addressing them has always been a lack of knowledge and development resources, which current generative AI tech now addresses.