Comment by nopinsight

2 years ago

Current AI is already capable of designing toxic molecules.

Dual use of artificial-intelligence-powered drug discovery

https://www.nature.com/articles/s42256-022-00465-9.epdf

Interview with the lead author here: "AI suggested 40,000 new possible chemical weapons in just six hours / ‘For me, the concern was just how easy it was to do’"

https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...

Chemical weapons are already a solved problem. By the mid-1920s there were already enough chemical agents to kill most of the population of Europe. By the 1970s, global stockpiles were large enough to kill every human on the planet several times over.

Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.

  • I agree in general. However, much like the rise of 'script kiddies' meant that inexperienced, sometimes underage kids got involved with hacking, one can worry the same will happen with AI-enabled activities.

    I've spent enough time in the shady parts of the internet to realize that people who spend significant time learning about niche/dangerous hobbies _tend_ to appreciate the seriousness of it.

    My fear with bio-weapons would be some 13-year-old being given step-by-step instructions with almost 0 effort to create something truly dangerous. It lowers the bar quite a bit for things that tended to be pretty niche and extreme.

    • how is a 13-year-old going to get access to a DNA synthesizer, incubators, growth media, and the numerous kits for replicating and transfecting bacteria with a plasmid, or for incubating some virus, along with all the assays and such?

      even if this 13-year-old somehow found herself alone in a fully-equipped BSL-3 laboratory, it's still a fuck-ton of work. far from "almost 0 effort."

      not knowing what to do is not the bottleneck.

      1 reply →

    • I don't think the "how to make $DANGEROUS_SUBSTANCE" is any easier with AI than with a search engine. However I could see it adding risk with evasion of countermeasures: "How do I get _____ on a plane?" "How do I obtain $PRECURSOR_CHEMICAL?"

      9 replies →

  • A lot of knowledge is locked up in the chemical profession. The intersection of qualified chemists and crazy people is, mercifully, small. If regular people start to get access to that knowledge, it could be a problem.

    • I think that since most of us are software people, in mind if not profession, we have a misleading perception of where the difficulty in many things lies. The barrier is not just knowledge. In fact, there are countless papers available with quite detailed information on how to create chemical weapons. But knowledge is just a starting point. Technical skill, resources, production, manufacturing, and deployment are all major steps where, again, the barrier is not just knowledge.

      For instance, there's a pretty huge culture around building your own nuclear fusion device at home, with tremendous resources available as well as step-by-step guides on how to do it. It's still exceptionally difficult (as well as quite dangerous), because it's not like you just get the pieces, put everything together like Legos, flick on the switch, and boom, you have nuclear fusion. There are a million things that not only can but will go wrong. So in spite of the absolutely immense amount of information out there, it's still a huge achievement for any individual or group to pull off.

      And now somebody trying to do any of this sort of thing with the guidance of... chatbots? It just seems like the most probable outcome is that you end up getting yourself killed.

      3 replies →

    • >If regular people start to get access to that knowledge it could be a problem.

      so when are we going to start regulating and restricting the sale of education/textbooks?

      a knowledge portal isn't a new concept.

      6 replies →

  • > Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.

    That doesn't seem right. Surely, making it easier for non-state actors to do things that state actors only fail to do because they agreed to treaties banning it, can only increase the risk that non-state actors may do those things?

    Laser blinding weapons are banned by treaty, yet widespread access to lasers led to scenes like this a decade ago during the Arab Spring: https://www.bbc.com/news/av/world-middle-east-23182254

  • > this presents additional risk from non-state actors, but there's no fundamentally new risk here.

    This is splitting hairs for no real purpose. Additional risk is new risk.

    > By the mid 1920s there was already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.

    Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.

    Once that exclusive control slipped, we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer by the 1990s.

    We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

    • Wiki has a pretty nice article on what went into the sarin attack. [1] A brief quote:

      ---

      "The Satyan-7 facility was declared ready for occupancy by September 1993 with the capacity to produce about 40–50 litres (11–13 US gal) of sarin, being equipped with 30-litre (7.9 US gal) capacity mixing flasks within protective hoods, and eventually employing 100 Aum members; the UN would later estimate the value of the building and its contents at $30 million.[23]

      Despite the safety features and often state-of-the-art equipment and practices, the operation of the facility was very unsafe – one analyst would later describe the cult as having a "high degree of book learning, but virtually nothing in the way of technical skill."[24]"

      ---

      All of those hundreds of workers, countless experts working for who knows how many man hours, and just massive scale development culminated in a subway attack carried out on 3 lines, during rush hour. It killed a total of 13 people. Imagine if they just bought a bunch of cars and started running people over.

      Many of these things sound absolutely terrifying, but in practice they are not such a threat except when carried out at a military level of scale and development.

      [1] - https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

    • >We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

      I mean, you can make chlorine gas by mixing bleach and vinegar.

    • > by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.

      How does actual and potential harm from these incidents compare to harm from common traffic accidents / common health issues / etc? Perhaps legislation / government intervention should be based on harm / benefit? Extreme harm for example might be caused by a large asteroid impact etc so preparing for that could be worthwhile...

    • > We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

      They’d probably end up killing fewer people with a lot more effort. Chemical weapons are not really all that effective.

      2 replies →

    • >Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.

      How much death and destruction has been brought by state actors vs aggrieved civilians?

  • Given how fast AI has improved in recent years, can we be certain no malicious group will discover a way to engineer biological weapons or pandemic-inducing pathogens using near-future AI?

    Moreover, once an AI with such capability is open source, there's practically no way to put it back into Pandora's box. Implementing proper and judicious regulations will reduce the risks to everyone.

  • > but there's no fundamentally new risk here

    This is incredibly naive. These models unlock capabilities for previously unsophisticated actors to do extremely dangerous things in almost undetectable ways.

As someone who has worked on ADMET risk for algorithmically designed drugs, this is a nothing burger.

"Potentially lethal molecules" is a far cry away from "molecule that can be formulated and widely distributed to a lethal effect." It is as detached as "potentially promising early stage treatment" is from "manufactured and patented cure."

I would argue the Verge's framing is worse. "Potentially lethal molecule" captures _every_ feasible molecule, given that anyone who has worked on ADMET knows the age-old adage: the dose maketh the poison. At a sufficiently high dose, virtually any output from an algorithmic drug-design pipeline, be it combinatorial or 'AI', will be lethal.
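
The "dose maketh the poison" point can be made concrete with the standard Hill dose-response model from pharmacology. A toy sketch (the EC50 and Hill coefficient below are made-up illustrative values, not data for any real compound):

```python
def hill_response(dose: float, ec50: float, n: float = 1.0) -> float:
    """Fraction of maximal effect at a given dose (Hill equation).

    dose and ec50 share units (e.g. mg/kg); n is the Hill coefficient,
    which controls how steeply the response rises around ec50.
    """
    if dose <= 0:
        return 0.0
    return dose**n / (ec50**n + dose**n)

# Illustrative values only: a compound that does almost nothing at low
# doses approaches full effect once the dose is pushed far past its EC50.
for dose in (0.1, 1.0, 10.0, 100.0):
    print(f"dose={dose:>5}: effect={hill_response(dose, ec50=10.0, n=2.0):.3f}")
```

That curve shape is exactly why "potentially lethal at some dose" is true of nearly any bioactive molecule, and why the headline framing is so uninformative.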

Would a traditional, non-neural-net algorithm produce virtually the same results given the same objective function and a priori knowledge of toxic drug examples? Absolutely. You don't need a DNN for that; we've had the technology since the '90s.
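
To illustrate that point abstractly (a toy hill-climb over plain numbers, nothing chemical): any generate-and-score search is agnostic to what its objective rewards, so negating the score is all it takes to chase the opposite extreme. The scoring function below is a hypothetical stand-in, not anything from the paper:

```python
import random

def greedy_search(score, start=0.0, steps=200, seed=0):
    """Generic hill-climb; the search code never knows what 'score' rewards."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = best + rng.uniform(-1.0, 1.0)
        if score(candidate) > score(best):
            best = candidate
    return best

# A made-up 1-D 'property' peaking at x = 3 (stand-in for any scored property).
def prop(x):
    return -(x - 3.0) ** 2

peak_seeker = greedy_search(prop)               # climbs toward the peak
peak_fleer = greedy_search(lambda x: -prop(x))  # same code, sign flipped
```

The first search converges near the peak while the second drifts ever further from it; no new machinery was needed, only a negated objective. That is the whole trick the paper demonstrates, and it applies to classical combinatorial search just as much as to a DNN.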

A grad student in Systems Biology with $20k in funding is capable of generating much more "interesting" things than toxic molecules. (Such things have been off-limits since the 1975 Asilomar conference, though.)