
Comment by himata4113

6 days ago

None have had the capability to provide me with instructions that have this high of accuracy, including the suggestion of completely novel chemical reactions. I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.

> I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.

Right now it kinda is.

LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes. With tool use that can improve. Other AI besides LLMs can do better, and have done so for a while now; but think about how the available LLMs in software development (so, not Claude Mythos) are still at best junior developers, and apply that to non-software roles.

This past February I tried to use Codex to make a physics simulation. Even though it identified open-source libraries to use, instead of using them it wrote its own "as a fallback in case you can't install the FOSS libraries". The simulation software it wrote itself showed non-physical behaviour; but would I have known that if I hadn't already been interested in the thing I was asking it to simulate? I doubt it.
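To give a concrete sense of what "non-physical behaviour" looks like in a hand-rolled integrator: the sketch below (my own illustrative code, not what Codex produced) runs a harmonic oscillator with explicit Euler, which steadily inflates the system's energy by a factor of (1 + h²) every step, and with a one-line symplectic variant that keeps it bounded. If you didn't know to check energy conservation, the broken version would look fine.

```python
def euler_step(x, v, h):
    # Explicit Euler for the harmonic oscillator x'' = -x.
    return x + h * v, v - h * x

def symplectic_step(x, v, h):
    # Semi-implicit (symplectic) Euler: update v first, then x with the new v.
    v = v - h * x
    return x + h * v, v

def energy(x, v):
    # Total energy of the unit oscillator: kinetic + potential.
    return 0.5 * (x * x + v * v)

def energy_drift(step, n=10_000, h=0.01):
    # Ratio of final to initial energy after n steps; 1.0 means conserved.
    x, v = 1.0, 0.0
    e0 = energy(x, v)
    for _ in range(n):
        x, v = step(x, v, h)
    return energy(x, v) / e0
```

Comparing `energy_drift(euler_step)` against `energy_drift(symplectic_step)` shows the first growing well past 1 while the second stays close to it. The point is that conserved-quantity checks like this are cheap sanity tests a non-expert wouldn't know to ask for.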

  • Well the worst outcome is that you make something deadly which is what you are creating anyway, do that for a year and you could possibly produce a very deadly substance that doesn't have a known treatment.

    • "Worst" outcome assumes it's easy to give an ordering.

      Which is worse, (1) accidentally blowing yourself up with home-made nitroglycerin/poisoning yourself because your home-made fume hood was grossly insufficient, or (2) accidentally making a novel long-lived compound which will give 20 people slow-growing cancers that will on average lower their life expectancy by 2 years each?

      What if it's a small dose of a mercury compound (or methyl alcohol) at a dose which causes a small degree of mental impairment in a large number of people?

      If you're actually trying to cause harm, then your "worst" case scenario is diametrically opposed to everyone else's worst case scenario, because for you the "worst" case is that it does nothing at great expense.

      Right now, I expect LLM failures to be more of the "does nothing or kills user" kind; given what I see from NileRed, even if you know what you're doing, chemistry can be hard to get right.


Isn't the biggest problem with creating neurotoxins not poisoning yourself while doing it?

I have a hard time believing that you’re the only person who has figured out Claude’s next generation ability to do computational chemistry and computer aided drug design. The AlphaFold folks must be devastated.

> it's not unreasonable

It in fact is. Do you often go around making claims you are entirely unqualified to make? Or is this something new you’re trying?

  • Chemical reactions can be expressed via mathematical functions, and it has been made very clear that these models can do advanced mathematics.

    And even if it doesn't work, at the end of the day you can work with a model to figure out what went wrong, gaining expertise in the field over time.
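For what it's worth, the sense in which reactions "are mathematical functions" is narrow: reaction kinetics can be written as rate equations and integrated numerically. A minimal sketch (the rate constant and concentrations are arbitrary illustrative values):

```python
import math

def simulate_decay(a0, k, t_end, steps):
    # First-order kinetics A -> B, governed by dA/dt = -k*A,
    # integrated with explicit Euler.
    h = t_end / steps
    a = a0
    for _ in range(steps):
        a += h * (-k * a)
    return a

a0, k, t_end = 1.0, 0.5, 4.0
numeric = simulate_decay(a0, k, t_end, steps=100_000)
analytic = a0 * math.exp(-k * t_end)  # closed-form solution A(t) = A0 * e^(-kt)
```

The numeric and analytic answers agree to several decimal places, but that only shows the bookkeeping is mathematical. It says nothing about whether a proposed novel reaction is real, safe, or synthesizable, which is the part being claimed.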