Comment by ben_w

6 days ago

Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.

AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.

It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.

On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.

On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) to a max(harm), so for people who know what they're doing and have a little experience the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/
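To make the "flipping a min(harm) to a max(harm)" point concrete, here is a minimal sketch (hypothetical scorer and molecule names, not the paper's actual pipeline): a candidate-selection loop that normally keeps the least-toxic molecules is repurposed for harm by inverting a single comparison.

```python
# Hedged sketch: a drug-design selection loop normally keeps candidates
# that MINIMISE a predicted-toxicity score; flipping the sort order to
# MAXIMISE it repurposes the same machinery. Scores and names are made up.

def select(candidates, predicted_toxicity, maximise_harm=False):
    """Rank candidate molecules by a toxicity model's predicted score."""
    key = lambda mol: predicted_toxicity[mol]
    # The entire "dual use" switch is this one boolean.
    return sorted(candidates, key=key, reverse=maximise_harm)

scores = {"mol_a": 0.1, "mol_b": 0.9, "mol_c": 0.5}
print(select(list(scores), scores))        # ['mol_a', 'mol_c', 'mol_b']
print(select(list(scores), scores, True))  # ['mol_b', 'mol_c', 'mol_a']
```

The point being: no new capability is needed, only a changed objective over the same trained model.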

Do you know how to build a nuke? You might know the technical details of how a nuke is made, but do you know everything that's required - all the parameters and pressure values? I find that unlikely, but AI seems increasingly capable of providing such instructions from cross-referenced data.

  • That's based on a silly belief (one that's becoming more obvious with AI, but is silly in general): that just because you can read about something, you've learned it.

    Even if I gave you exact instructions for using even basic stuff like power tools - if you had no experience with grinders/saws/routers and I gave you full, detailed instructions on how to do something non-trivial - you'd be more likely to cut off body parts than achieve what you intended. There's so much fundamental stuff you must internalize subconsciously, through trial and error, before you have enough mental capacity left to think about the higher-level objectives.

    Actually, AI demonstrates this perfectly - once models get an RL harness for programming, they start getting better at it. Without experimentation they can ingest all the source code/tutorials/books in the world and still produce shit.

    • Aye, that's the kind of thing I had in mind for the difference between information and competency.

      (It's also why I wrote that I know what a calutron is and didn't claim that I could build one today; the "and a few years" was not decorative.)

  • Even if sources have been lying to me, which is certainly possible, I believe I understand enough to determine cross sections by experiment and from that to determine critical masses. For isotopic enrichment I know about the calutron, which is meh but works and can be designed from scratch with things I know (though, caveat, I have not memorised the details - just the keywords "proton mass" and "Lorentz force" and what to use them for); for the trigger, I would pick a gun-type design rather than implosion, which again is meh but works and is easy.

    A few tens of millions of USD mostly spent on electricity, a surprisingly large quantity of natural uranium (because the interesting isotope is a very small percentage), and a few years, and I expect most people on this forum could make a Little Boy type bomb.
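The "keywords plus a few years" claim above can be sketched in a few lines: accelerate singly-charged uranium ions through a voltage V, bend them in a magnetic field B, and the Lorentz force gives a circular path of radius r = mv/(qB); with qV = ½mv² that's r = sqrt(2mV/q)/B, so the heavier isotope lands slightly further out. The voltage and field below are illustrative guesses, not historical calutron specifications.

```python
import math

AMU = 1.66054e-27   # kg per atomic mass unit
Q   = 1.60218e-19   # charge of a singly-ionised atom, coulombs

def bend_radius(mass_amu, volts, tesla):
    """Radius of the semicircular ion path in a calutron-style separator."""
    m = mass_amu * AMU
    v = math.sqrt(2 * Q * volts / m)   # speed after acceleration through V
    return m * v / (Q * tesla)         # Lorentz force: r = m*v / (q*B)

V, B = 35_000.0, 0.34                  # illustrative: 35 kV, 0.34 T
r235 = bend_radius(235.0, V, B)
r238 = bend_radius(238.0, V, B)
# After a 180-degree bend, the two beams land 2*(r238 - r235) apart.
print(f"r(U-235) = {r235:.2f} m, beam separation = {2*(r238-r235)*1000:.0f} mm")
```

With these numbers the radii come out around 1.2 m and the collector slots end up roughly 15 mm apart - which is why the thing is "meh but works": the physics is one formula, and the hard part is throughput.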