Comment by stanfordkid
2 years ago
Regulatory capture in action. The real immediate risks of AI are in privacy, bias, data leakage, fraud, control of infrastructure/medical equipment, etc., not manufacturing biological weapons. This seems like a classic example of government doing something that looks good to the public, satisfies incumbents, and does practically nothing.
Current AI is already capable of designing toxic molecules.
Dual use of artificial-intelligence-powered drug discovery
https://www.nature.com/articles/s42256-022-00465-9.epdf
Interview with the lead author here: "AI suggested 40,000 new possible chemical weapons in just six hours / ‘For me, the concern was just how easy it was to do’"
https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
Chemical weapons are already a solved problem. By the mid-1920s there were already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.
Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.
I agree in general. However, much like the rise of 'script kiddies' meant that inexperienced, sometimes underage kids got involved with hacking, one can worry the same will happen with AI-enabled activities.
I've spent enough time in the shady parts of the internet to realize that people who spend significant time learning about niche/dangerous hobbies _tend_ to appreciate the seriousness of it.
My fear with bio-weapons would be some 13-year-old being handed step-by-step instructions, with almost zero effort, to create something truly dangerous. It lowers the bar quite a bit for things that have tended to be pretty niche and extreme.
A lot of knowledge is locked up in the chemical profession. The intersection between qualified chemists and crazy people is, admittedly, small. If regular people start to get access to that knowledge, it could be a problem.
> Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.
That doesn't seem right. Surely making it easier for non-state actors to do things that state actors refrain from doing only because they agreed to treaties banning them can only increase the risk that non-state actors will do those things?
Laser blinding weapons are banned by treaty, yet widespread access to lasers led to scenes like this a decade ago during the Arab Spring: https://www.bbc.com/news/av/world-middle-east-23182254
> this presents additional risk from non-state actors, but there's no fundamentally new risk here.
This is splitting hairs for no real purpose. Additional risk is new risk.
> By the mid 1920s there was already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.
Those global stockpiles continue to be controlled by state actors, though, not aggrieved civilians.
And that advantage didn't hold: by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.
We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.
Given how fast AI has improved in recent years, can we be certain no malicious group will discover a way to engineer biological weapons or pandemic-inducing pathogens using near-future AI?
Moreover, once an AI with such capabilities is open-sourced, there's practically no way to put it back in Pandora's box. Implementing proper and judicious regulations would reduce the risks for everyone.
> but there's no fundamentally new risk here
This is incredibly naive. These models unlock capabilities for previously unsophisticated actors to do extremely dangerous things in almost undetectable ways.
As someone who has worked on ADMET risk for algorithmically designed drugs, I'd call this a nothingburger.
"Potentially lethal molecules" is a far cry away from "molecule that can be formulated and widely distributed to a lethal effect." It is as detached as "potentially promising early stage treatment" is from "manufactured and patented cure."
I would argue the Verge's framing is worse. "Potentially lethal molecule" captures _every_ feasible molecule, given the age-old adage anyone who has worked on ADMET knows: the dose maketh the poison. At a sufficiently high dose, virtually any output from an algorithmic drug-design pipeline, be it combinatorial or 'AI', will be lethal.
Would a traditional, non-neural-net algorithm produce virtually the same results given the same objective function and a priori knowledge of toxic drug examples? Absolutely. You don't need a DNN for that; we've had the technology since the 90s. A minimal sketch of what I mean is below.
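For anyone wondering what "we've had the technology since the 90s" means concretely, here is a toy sketch (pure Python, no real chemistry: the "molecules" are toy strings, and toxicity_score is a hypothetical stand-in for any QSAR-style predictor). The dual use is literally one sign flip on the objective:

    import random

    ALPHABET = "CNOPSF"  # toy "atom" vocabulary, not real SMILES

    # Hypothetical per-"atom" weights standing in for a trained
    # toxicity predictor; any QSAR model would slot in here.
    WEIGHTS = {"C": 0.1, "N": 0.3, "O": 0.2, "P": 0.9, "S": 0.7, "F": 0.4}

    def toxicity_score(mol):
        return sum(WEIGHTS[a] for a in mol)

    def mutate(mol):
        # Swap one random position for a random "atom".
        i = random.randrange(len(mol))
        return mol[:i] + random.choice(ALPHABET) + mol[i + 1:]

    def optimize(steps=1000, sign=+1):
        # sign=-1 minimizes toxicity (the drug-design direction);
        # sign=+1 maximizes it. Plain 1990s-style hill climbing.
        mol = "".join(random.choice(ALPHABET) for _ in range(12))
        for _ in range(steps):
            cand = mutate(mol)
            if sign * toxicity_score(cand) > sign * toxicity_score(mol):
                mol = cand
        return mol

    print(optimize(sign=+1))

Run it and the search converges on whichever toy "atoms" the predictor weights highest; flip to sign=-1 and you get the drug-design direction. That is, roughly, the inversion the Nature paper describes, minus the generative model and all the actual chemistry.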
A grad student in systems biology with $20k in funding is capable of generating much more "interesting" things than toxic molecules (though such things have been off-limits since the 1975 Asilomar conference).
It's true that the immediate problems with AI are different, but we can hope to solve those and to have time to do so. The risks addressed in the article could leave us with less time and less ability to respond once they grow to an obvious size, so they require thinking ahead.
How does providing research grants to small independent researchers satisfy incumbents?
Doesn't it mention all those things?
Inclined to agree. Clearly Biden doesn't know the first thing about it (I would say the same about any president, BTW). So who really wrote the regulations he's announcing, and who are they listening to?