Comment by WarmWash

6 days ago

While scary, information like this has been pretty accessible for 20-30 years now.

In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).

I suppose the friction is the scariest part: every year the IQ required to end the world drops by a point, but motivated and mildly intelligent people have been able to get this info for a long time now. Execution though has still steadily required experts.

Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.

AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.

It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.

On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.

On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) objective to a max(harm) one. So for people who know what they're doing and have a little experience, the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/

  • Do you know how to build a nuke? You might know the technical details of how a nuke is made, but do you know everything that's required, all the parameters and pressure values? I find that unlikely, but AI seems increasingly capable of providing such instructions from cross-referenced data.

    • That's based on a silly belief (one that's becoming more obvious with AI, but is silly in general): that just because you can read about something, you've learned it.

      Even if I gave you exact, fully detailed instructions for something non-trivial with basic power tools like grinders, saws, or routers, if you had no experience actually using them you'd be more likely to cut off body parts than achieve what you intended. There's so much fundamental stuff that you must internalize subconsciously, through trial and error, before you have enough spare mental capacity to think about the higher-level objectives.

      Actually, AI demonstrates this perfectly: once models get an RL harness for programming, they start to get better at it. Without experimentation, they can ingest all the source code/tutorials/books in the world and still produce shit.


    • Even if sources have been lying to me, which is certainly possible, I believe I understand enough to determine cross sections by experiment and, from those, critical masses. For isotopic enrichment I know about the calutron, which is meh but works and can be designed from scratch with things I know (caveat: I haven't memorised the details, just the keywords "proton mass" and "Lorentz force" and what to use them for). For the trigger, I would pick a gun-type design rather than implosion; again, meh but works, and easy.

      A few tens of millions of USD mostly spent on electricity, a surprisingly large quantity of natural uranium (because the interesting isotope is a very small percentage), and a few years, and I expect most people on this forum could make a Little Boy type bomb.

Well, the real issue is that it knocks down the knowledge barrier: giving you step-by-step guides and reiterating which parts will kill you is the important part.

Understanding the process and staying alive while producing neurochemicals are the biggest challenges here.

A depressed person with no prior knowledge could possibly figure out a way to make these chemicals without killing themselves and that's the problem.

  • A Michelin chef can give you their recipe and their ingredients, but you will still fail miserably trying to match their dish.

    It's the same with drugs, whose instructions and ingredient lists have been a google search away for decades now. Yet you still need a master chemist to produce anything. By the time AI can hand hold an idiot through the synthesis of VX agents (which would require an array of sensors beyond a keyboard and camera), we will likely have bigger issues to worry about.

    • That is completely wrong.

      Food preparation, like pharmaceutical drug fabrication, is inherently scientific and methodologically controllable.

      Look no further than the Four Thieves Vinegar Collective. Original synthesis-line construction is hard. But following an exact procedure, "add this", "turn on the stir bar", "do you see particulate? If yes, stir for another 10 minutes", etc., is not.

      And if their results are replicated, they're seeing 99.9% yields, compared to commercial practice of 99% (Sovaldi).


  • They can do that by jailbreaking models, but is that really easier and less work than getting it from Wikipedia?

    • We will only really know if (or when) it happens. We could run a study: a sample group of people attempting to create such chemicals under supervision, comparing how helpful the models truly are.

> been pretty accessible for 20-30 years now.

There was this book 20 years ago: "Secrets of Methamphetamine Manufacture" by Uncle Fester

https://www.amazon.de/-/en/Uncle-Fester-ebook/dp/B00305GTWU

(Actually, 8th edition :-D)

  • I am convinced the Uncle Fester books are some kind of performance art. "Practical LSD Manufacture" basically starts with "go find some ergot in fields" and step two is "plant and grow a plot of wheat."

    • I have no doubt. He hails from the fine countercultural tradition of literary civil disobedience, a.k.a., writing and distributing information about subjects "the government doesn't want you to know" that strongly influenced early hacker subculture.

      Cf., e.g., William Powell (The Anarchist Cookbook) and Abbie Hoffman (Steal This Book; my personal favorite. While much of the information is outdated, the style is charming, and where else can you find information about phone phreaking, hitchhiking, shoplifting, street fighting, cooking (food, not drugs), panhandling, explosives, camping, firearms, birth control, welfare fraud, and Henry Kissinger's home phone number between the same two covers? At its core, it's a book of "life hacks", disreputable and otherwise, written decades before the term was coined).

Consider two dictionaries, one in which the entries are alphabetized as usual and one in which they're randomized. Both support random access: you can turn to any page, and read any entry. Therefore both are "accessible". Only one actually supports useful, quick word lookup.
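The analogy maps directly onto data structures: a sorted list supports binary search, while a shuffled one forces a linear scan, even though both allow random access. A toy sketch in Python (my own illustration, not from the thread):

```python
import bisect
import random

# An "alphabetized dictionary": sorted, so binary search works.
words = sorted(["apple", "banana", "cherry", "dictionary", "zip"])

def lookup_sorted(entries, word):
    # O(log n) "page turns": halve the search space each step.
    i = bisect.bisect_left(entries, word)
    return i < len(entries) and entries[i] == word

def lookup_shuffled(entries, word):
    # A randomized dictionary: no ordering to exploit, so you must
    # read every entry in the worst case, O(n).
    return any(e == word for e in entries)

shuffled = words[:]
random.shuffle(shuffled)

print(lookup_sorted(words, "cherry"))     # True
print(lookup_shuffled(shuffled, "kiwi"))  # False
```

Both lookups return the same answers; the difference, as with "accessible" dangerous information, is purely how much friction stands between you and the entry you want.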

Much longer than that, and it was available way before the internet. I graduated from a STEM high school in St. Petersburg in 1981, and I had several classmates who were big fans of chemistry. What they were able to create from textbooks, school-lab ingredients, and understanding:

WWI-era poison gas, tear gas, potassium cyanide, and a bunch of explosives like acetone peroxide.

LLMs have all of that knowledge in their training data.

My username is a reference to the successor to totse. Totse was the first board I spent a lot of time on

  • Heh, I'm sure there are a few more wandering around here. Zoklet never clicked for me, but totse was my home for years.

Many of these forums exist now. Let's not enumerate them as they are one of the treasures of the internet.

I categorize this kind of stuff as a "crisis of accessibility". AI is not alone in this territory; it happens all over the place. Basically, it's a problem that's existed for ages, but the barrier to entry was high enough that we didn't care.

Think 3D printing, it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print.

You could always find info about how to make a bomb or whatnot, but you had to, like, find and open a book or read a PDF; now an LLM will spoon-feed it to you step by step, lowering the barrier.

"Crisis of accessibility" is simultaneously a legitimate concern and, in my mind, an example of "security by obscurity": relying on situational friction to protect you from malfeasance is a failure to properly address the core issue.

  • > Think 3D printing, it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print

    There were hundreds of mass shootings in America in 2025 alone [1]. None of them involved a 3D-printed weapon.

    To my knowledge, there has been one confirmed shooting with a 3D-printed gun, and it didn't uniquely enable the crime.

    [1] https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_...

    • That's mostly because they suck (for now; who knows when we'll get home metal printing), and also because it's easy to get real guns. Also, a crisis of accessibility could be predicated on the mere perception that the barrier is now too low, rather than on actual harm.

      I don't really think Photoshop, flatbed scanners, and half-decent inkjets facilitated much counterfeit currency, but there was the same panic back then, and "protections" were put in place.

> Execution though has still steadily required experts.

Where experts = the government.