
Comment by yabones

2 years ago

Chemical weapons are already a solved problem. By the mid-1920s there were already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.

Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.

I agree in general. However, much like how the rise of 'script kiddies' meant that inexperienced, sometimes underage kids got involved with hacking, one might worry that the same could happen with AI-enabled activities.

I've spent enough time in the shady parts of the internet to know that people who spend significant time learning about niche/dangerous hobbies _tend_ to appreciate the seriousness of it.

My fear with bio-weapons would be some 13-year-old being given step-by-step instructions with almost 0 effort to create something truly dangerous. It lowers the bar quite a bit for things that tended to be pretty niche and extreme.

  • how is a 13-year-old going to get access to a DNA synthesizer, incubators, growth media, and numerous kits for replicating and transfecting bacteria with a plasmid, or to incubate some virus, along with all the assays and such needed?

    even if this 13-year-old somehow found herself alone in a fully-equipped BSL-3 laboratory, it's still a fuck-ton of work. far from "almost 0 effort."

    not knowing what to do is not the bottleneck.

  • I don't think the "how to make $DANGEROUS_SUBSTANCE" is any easier with AI than with a search engine. However I could see it adding risk with evasion of countermeasures: "How do I get _____ on a plane?" "How do I obtain $PRECURSOR_CHEMICAL?"

    • AI guided step-by-steps can fill in for a lack of rudimentary knowledge, as long as one can follow instructions.

      Conversational interfaces definitely increase the accessibility of knowledge.

      And critically, SaaS AI platforms increase the availability of AI. E.g. the person who wouldn't be able to set up and run a local model, but can click a button on a website.

      It seems reasonable to preclude SaaS platforms from making it trivial to produce the worst societal harms. E.g. prevent stable diffusion services from returning celebrities or politicians, or LLMs from producing political content.

      Sure, it's still possible. But a knee high barrier at least keeps out those who aren't smart enough to step over it.


A lot of knowledge is locked up in the chemical profession. The intersection between qualified chemists and crazy people is, thankfully, small. If regular people start to get access to that knowledge, it could be a problem.

  • I think that since most of us are software people, in mind if not in profession, we get a misleading perception of where the difficulty in many things lies. The barrier there is not just knowledge. In fact, there are countless papers available with quite detailed information on how to create chemical weapons. But knowledge is just a starting point. Technical skill, resources, production, manufacturing, and deployment are all major steps where, again, the barrier is not just knowledge.

    For instance, there's a pretty huge culture around building your own nuclear fusion device at home. And there are tremendous resources available, as well as step-by-step guides on how to do it. It's still exceptionally difficult (as well as quite dangerous), because it's not like you just get the pieces, put everything together like Legos, flick on the switch, and boom, you have nuclear fusion. There are a million things that not only can but will go wrong. So in spite of the absolutely immense amount of information out there, it's still a huge accomplishment for any individual or group to achieve fusion.

    And now somebody trying to do any of these sort of things with the guidance of... chatbots? It just seems like the most probable outcome is you end up getting yourself killed.

    • What story about home-built nuclear devices would be complete without a mention of David Hahn, aka the "Nuclear Boy Scout", who at the age of seventeen built a homemade neutron source out of smoke detectors. He did not achieve fusion, but he did get the attention of the FBI, the NRC, and the EPA. He didn't have anywhere near enough material to make a dirty bomb, nor did he ever consider making a bomb in the first place*.

      Why do I bring up David Hahn if he never achieved fusion and wasn't a terrorist? Because of how far he got as a seventeen-year-old. A forty-year-old with a FAANG salary and the ideological bent of Theodore Kaczynski could do stupid amounts of damage. The first step would be to not try to build a nuclear fusion device: the difficulty of building one doesn't matter much to a would-be terrorist when every sociopath can go out, buy a gun, and head to the local mall. There were two major such incidents in the past weeks, with 12 more mass shootings from Friday to Sunday over this past Halloween weekend**. Instead of worrying about the far-fetched, we would do better to address something that killed 18 people in Maine, 19 in Texas, and 11 more across the country.

      * https://www.pbs.org/newshour/science/building-a-better-breed...

      ** https://www.npr.org/2023/10/29/1209340362/mass-shootings-hal...


  • >If regular people start to get access to that knowledge it could be a problem.

    so when are we going to start regulating and restricting the sale of education/textbooks?

    a knowledge portal isn't a new concept.

> Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.

That doesn't seem right. Surely making it easier for non-state actors to do things that state actors refrain from doing only because they agreed to treaties banning them can only increase the risk that non-state actors will do those things?

Laser blinding weapons are banned by treaty, yet widespread access to lasers led to scenes like this a decade ago during the Arab Spring: https://www.bbc.com/news/av/world-middle-east-23182254

> this presents additional risk from non-state actors, but there's no fundamentally new risk here.

This is splitting hairs for no real purpose. Additional risk is new risk.

> By the mid 1920s there was already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.

Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.

Once we lost that advantage, by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.

We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

  • Wiki has a pretty nice article on what went into the sarin attack. [1] A brief quote:

    ---

    "The Satyan-7 facility was declared ready for occupancy by September 1993 with the capacity to produce about 40–50 litres (11–13 US gal) of sarin, being equipped with 30-litre (7.9 US gal) capacity mixing flasks within protective hoods, and eventually employing 100 Aum members; the UN would later estimate the value of the building and its contents at $30 million.[23]

    Despite the safety features and often state-of-the-art equipment and practices, the operation of the facility was very unsafe – one analyst would later describe the cult as having a "high degree of book learning, but virtually nothing in the way of technical skill."[24]"

    ---

    All of those workers, countless expert man-hours, and massive-scale development culminated in a subway attack carried out on 3 lines during rush hour. It killed a total of 13 people. Imagine if they had just bought a bunch of cars and started running people over.

    Many of these things sound absolutely terrifying, but in practice they are not such a threat except when carried out at a military level of scale and development.

    [1] - https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

  • >We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

    I mean, you can make chlorine gas by mixing bleach and vinegar.

  • > by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.

    How does actual and potential harm from these incidents compare to harm from common traffic accidents / common health issues / etc? Perhaps legislation / government intervention should be based on harm / benefit? Extreme harm for example might be caused by a large asteroid impact etc so preparing for that could be worthwhile...

  • > We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

    They’d probably end up killing fewer people with a lot more effort. Chemical weapons are not really all that effective.

    • What you're saying is true but needs context. Chemical weapons aren't very effective in war because you need high concentrations spread over large areas, the wind is your enemy, full-body clothing is common, and gas masks are cheap.

      But if your target is an unsuspecting small population in an enclosed space that spends a lot of time there, the calculus changes a bit. Sarin, for example, is odorless and colorless; mustard gas can also be colorless, doesn't hit you immediately, and is unlikely to be detected by smell.

      It actually happened in Iran and it's lucky the people responsible either didn't know what they were doing or were actively trying to not kill people because they easily could have.

  • >Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.

    How much death and destruction has been brought by state actors vs aggrieved civilians?

Given how fast AI has improved in recent years, can we be certain no malicious group will discover a way to engineer biological weapons or pandemic-inducing pathogens using near-future AI?

Moreover, once an AI with such capability is open source, there's practically no way to put it back into Pandora's box. Implementing proper and judicious regulations will reduce the risks to everyone.

> but there's no fundamentally new risk here

This is incredibly naive. These models unlock capabilities for previously unsophisticated actors to do extremely dangerous things in almost undetectable ways.