Comment by BowBun

2 years ago

I agree in general. However, much like the rise of 'script kiddies' meant that inexperienced, sometimes underage, kids got involved with hacking, one can worry the same will happen with AI-enabled activities.

I've spent enough time in the shady parts of the internet to realize that people who spend significant time learning about niche/dangerous hobbies _tend_ to realize the seriousness of it.

My fear with bio-weapons would be some 13-year-old being given step-by-step instructions with almost 0 effort to create something truly dangerous. It lowers the bar quite a bit for things that tended to be pretty niche and extreme.

how is a 13-year-old going to get access to a DNA synthesizer, incubators, growth media, and the numerous kits needed for replicating and transfecting bacteria with a plasmid, or for incubating some virus, along with all the necessary assays and such?

even if this 13-year-old somehow found herself alone in a fully-equipped BSL-3 laboratory, it's still a fuck-ton of work. far from "almost 0 effort."

not knowing what to do is not the bottleneck.

I don't think the "how to make $DANGEROUS_SUBSTANCE" part is any easier with AI than with a search engine. However, I could see it adding risk with evasion of countermeasures: "How do I get _____ on a plane?" "How do I obtain $PRECURSOR_CHEMICAL?"

  • AI guided step-by-steps can fill in for a lack of rudimentary knowledge, as long as one can follow instructions.

    Conversational interfaces definitely increase the accessibility of knowledge.

    And critically, SaaS AI platforms increase the availability of AI. E.g. the person who wouldn't be able to set up and run a local model, but can click a button on a website.

    It seems reasonable to preclude SaaS platforms from making it trivial to produce the worst societal harms. E.g. prevent stable diffusion services from returning celebrities or politicians, or LLMs from producing political content.

    Sure, it's still possible. But a knee high barrier at least keeps out those who aren't smart enough to step over it.

    • I suppose you're right. I think the resistance I feel is rooted in not wanting to believe the average person is so stupid that getting a "1-2-3" list from a GPT interface will make them successful, versus the Anarchist Cookbook (which has been in publication for 52 years) or an online equivalent that merely requires a web search and a bit of navigation.

      Another factor is second-order effects (might not be the right term, maybe "network effects"): one viral vid or news article saying "someone made _____ and $EXTRAORDINARY_THING_HAPPENED" might cause a million people to imitate it, beginning with a search for "how to make _____". Then the media spins up a controversy of "should we ban AI from teaching about ______", which drives even more people to search for it (the Streisand effect). Who knows what's going to happen; I don't see much good coming out of it (this topic specifically).


    • > Conversational interfaces definitely increase the accessibility of knowledge.

      Shouldn't increasing the accessibility of knowledge be a good thing? Yet your tone seems to imply the opposite.
