Comment by himata4113

6 days ago

I have made both GPT 5.4 and Opus 4.6 produce content on creating neurotoxic agents from items you can get at most everyday stores. They struggled to suggest how to source phosphorus, but eventually led me to some eBay listings that sell elemental phosphorus 'decorations' and also led me towards real(!!) black-market codewords for sourcing such materials.

It coached me on how to stay safe, what materials I needed, how to stay under the radar, and the entire chemical process, backed by academic Google searches.

Of course, this was done with a lengthy context exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun.

All these findings were reported to both OpenAI and Anthropic and they were not interested in responding. I did try to re-run the tests a few days ago and the expected session termination now occurs, so it seems some adjustment was made, but it might have also just been the general randomness of Anthropic's safety layer.

I am very confident when I say that it keeps every single person who works in anti-terrorism units awake at night.

While scary, information like this has been pretty accessible for 20-30 years now.

In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).

I suppose the friction is the scariest part; every year the IQ required to end the world drops by a point, but motivated and mildly intelligent people have been able to get this info for a long time now. Execution, though, has still required experts.

  • Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.

    AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.

    It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.

    On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.

    On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) to a max(harm), so for people who know what they're doing and have a little experience the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/

    • Do you know how to build a nuke? You might know the technical details of how a nuke is made, but do you know everything that's required, all the parameters and pressure values? I find that unlikely, but AI seems increasingly capable of providing such instructions from cross-referenced data.

      3 replies →

  • Well, the real issue is that it knocks down the knowledge barrier; giving you step-by-step guides and reiterating which parts will kill you is the important part.

    Understanding the process and staying alive while producing neurochemicals are the biggest challenges here.

    A depressed person with no prior knowledge could possibly figure out a way to make these chemicals without killing themselves, and that's the problem.

    • A Michelin chef can give you their recipe and their ingredients, but you will still fail miserably trying to match their dish.

      It's the same with drugs, whose instructions and ingredient lists have been a Google search away for decades now. Yet you still need a master chemist to produce anything. By the time AI can hand-hold an idiot through the synthesis of VX agents (which would require an array of sensors beyond a keyboard and camera), we will likely have bigger issues to worry about.

      3 replies →

  • Consider two dictionaries, one in which the entries are alphabetized as usual and one in which they're randomized. Both support random access: you can turn to any page, and read any entry. Therefore both are "accessible". Only one actually supports useful, quick word lookup.
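
    To put a number on the analogy, here's a minimal Python sketch (hypothetical word list, purely illustrative): the sorted dictionary supports O(log n) lookup via binary search, while the shuffled one forces an O(n) scan.

      import bisect
      import random

      # A sorted "dictionary" and a shuffled copy of the same entries.
      words = sorted(f"word{i:06d}" for i in range(500_000))
      shuffled = words[:]
      random.shuffle(shuffled)

      def lookup_sorted(entries, target):
          """Binary search: ~log2(n) comparisons (about 19 here)."""
          i = bisect.bisect_left(entries, target)
          return i < len(entries) and entries[i] == target

      def lookup_shuffled(entries, target):
          """Linear scan: up to n comparisons (500,000 here)."""
          return any(e == target for e in entries)

      assert lookup_sorted(words, "word499999")
      assert lookup_shuffled(shuffled, "word499999")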

  • Much longer than that, and it was available way before the internet. I graduated from a STEM high school in St. Petersburg in 1981, and I had several classmates who were big fans of chemistry. What they were able to create from textbooks, school lab ingredients, and understanding:

    WWI era poison gas, tear gas, potassium cyanide, and bunch of explosives like acetone peroxide.

    LLMs have all of that knowledge in their training data.

  • My username is a reference to the successor to totse. Totse was the first board I spent a lot of time on.

    • Heh, I'm sure there are a few more wandering around here. Zoklet never clicked for me, but totse was my home for years.

  • Many of these forums exist now. Let's not enumerate them as they are one of the treasures of the internet.

  • I categorize this kind of stuff as a "crisis of accessibility". AI is not alone in this territory; it happens all over the place. Basically, it's a problem that's existed for ages, but the barrier to entry was high enough that we didn't care.

    Think 3D printing: it's not all that hard to make a zip gun or similar home-made firearm, but it's still harder than selecting an STL and hitting print.

    You could always find info about how to make a bomb or whatnot, but you had to, like, find and open a book or read a PDF; now an LLM will spoon-feed it to you step by step, lowering the barrier.

    "Crisis of accessibility" is simultaneously legitimate concern but also in my mind an example of "security by obscurity". that relying on situational friction to protect you from malfeasance is a failure to properly address the core issue.

  • > Execution, though, has still required experts.

    Where experts = the government.

When my brother started to study Chemistry, he was told a) that it was easy to make meth, b) the profit he would make, and c) that the police would no doubt catch him, as only university students would make meth so pure.

By the time he was done, he knew enough to commit mass murder in half a dozen different, very hard to track ways. I am sure doctors know how to commit murder and make it look natural.

My brother never killed anyone, or made any meth. You simply cannot have it so that students don't get this type of knowledge without seriously compromising their education, and it's the same way with LLMs.

The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

  • > The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

    The LLMs aren't being punished for wanting* to know things.

    The problem for LLMs is that they're incredibly gullible and eager to please, and it's been really difficult to stop them from helping any human who asks, even when a normal human looking at the same transcript would say "this smells like the user wants to do a crime".

    One use case people reach for here is authors writing a novel about a crime. Do they need to know all the details? Mythbusters, on (one of?) their Breaking Bad episode(s?), investigated hydrofluoric acid, plus a mystery extra ingredient they didn't broadcast because (a) it made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.

    * Don't anthropomorphise yourself

    • Ironically, it reads to me like they're talking about the users wanting to know things, not the LLM.

> I am very confident when I say that it keeps every single person who works in anti-terrorism units awake at night.

Wow, that's quite the statement about the excellence of our institutions. It does not seem likely, but, what the hell, I'll take my oversized dose of positivity for today!

  • The USA isn't the only country with anti-terrorism units, so there's plenty of room for systematic-US-incompetence at the same time as everyone else being diligent and working hard on… well, everything.

  • I concluded the opposite: how can those institutions function effectively when all their employees are getting such poor sleep?

  • Not everyone in the current government is incompetent and evil. Most of them but not all.

Do you have a background in biochemistry? I've mostly worked with ChatGPT and Claude on topics I have expertise in, and I one hundred percent have seen them make stupid shit up that a non-expert would think looks legitimate.

More broadly, has anyone tried following LLM instructions for any non-trivial chemistry?

  • So what you are saying is we can expect the number of accidental home-made chlorine-gas (and the like) toxic events to go up.

    • > what you are saying is we can expect the number of accidental home-made chlorine-gas (and the like) toxic events to go up

      Maybe? One of the quirks of gaining even a surface-level understanding of infrastructure is realising how vulnerable it is to a smart, motivated adversary. The main thing protecting us isn't hard security. It's most Americans having better shit to do than running a truck of fertiliser and oxidiser into a pylon.

      Similarly, I'd expect way more people to be trying to make their own designer drug, and hurting themselves that way, than trying to make neurotoxins.

      4 replies →

The knowledge is one thing. But the competence of execution and the will to act are difficult to line up.

Yes, there should be safeguards, but after a while you're jumping at shadows.

I'm more worried about depressed kids getting on chat and being encouraged to kill themselves than terrorist attacks.

We know what a cancer algorithmic social media is, yet we don't act.

I doubt there will be any real and serious opposition to this bill, but there should be.

Chinese OSS models will do this in a few months.

So, regardless of whether you think it's great that Opus gives this info, we need better solutions than legal liability for US corporations. When the open models have the ability to do damage, there's nobody to sue, and no data-center obstruction will work. That's just the reality we have to front-run.

Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents, outlaw actually trying to make them.

As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad?

They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.

  • 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.

    2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].

    [0] https://www.wisdomai.com/insights/TheAIGRID/openai-profit-sh...

    • > 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.

      That's knowledge.

      > 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].

      If they're building a tailored tool for a specific person/company and that's the agreement they sign with the people who are going to use the tool, sure. I'm talking about their generic tool, AI being knowledge as a utility, which is the context of this legislation.

  • The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.

  • It's wild that this is being downvoted on HN. Facts should never be illegal or suppressed.

    If you disagree you shouldn't downvote, you should refute in a reply.

Wasn't this just as accessible pre-AI with a Google search?

  • The people who want to make sure the AI never gives you any "potentially dangerous information" also want to rigorously control your Google search results, and also what books you're allowed to read.

  • Evidently, as it needs to be in the training data for the next-word predictor to work.

    • I found it exceptionally good at finding reactions that you wouldn't find online to produce some of these chemical compounds by chaining them together, something only a very educated chemist could do, which is why people are concerned about this.

      1 reply →

You can already gather the same information by searching online.

Do you want to know how to kill yourself? Forums are for nerds. Here is Wikipedia: https://en.wikipedia.org/wiki/Suicide_methods#List

Do you want to make a bomb? The first thing that came to my mind is a pressure cooker (due to news coverage). Searching "bomb with pressure cooker" yields a Wikipedia article; skimming it, my eyes caught "Step-by-step instructions for making pressure cooker bombs were published in an article titled 'Make a Bomb in the Kitchen of Your Mom' in the Al-Qaeda-linked Inspire magazine in the summer of 2010, by 'The AQ chef'." Searching for a mirror of the magazine, we can find https://imgur.com/a/excerpts-from-inspire-magazine-issue-1-3... which has a screenshot of the instruction page. Now we can use the words in those screenshots to search for a complete issue. Here are a couple of interesting PDFs:

- https://archive.org/details/Fabrica.2013/Fabrica_arabe/page/...
- https://www.aclu.org/wp-content/uploads/legal-documents/25._...

The second one is quite interesting; it's some sort of legal document for nerds, but from page 26 on it has what appears to be a full copy of the jihadist magazine. Remarkable exhibit.

What else do you want to know? How to make drugs? You need a watering can and a pot if you want to grow weed. Want the more exotic stuff? You can find guides on Reddit.

Do you also want to know how to be racist? Here are some slurs, indexed by target audience, ready for use: https://en.wikipedia.org/wiki/List_of_ethnic_slurs

  • People are not complaining because the information is available.

    People are complaining because it's way easier now to just download an app, ask a bunch of questions in a text box, and get a bunch of answers that you personally could not have gotten unless you had an excessive amount of energy and motivation.

    I personally think all this is great and I’m excited for all information to become trivially available

    Are there gonna be a bunch of people who accidentally break stuff? Probably. Evolution is a bitch.

    • > People are complaining because it's way easier now to just download an app, ask a bunch of questions in a text box, and get a bunch of answers that you personally could not have gotten unless you had an excessive amount of energy and motivation

      Wait, I'm confused. This is gatekeeping, right? I thought gatekeeping was a Bad Thing!

      2 replies →

You can buy books on how to make and obtain chemicals on your own.

Hell, here's an Internet Archive book on making explosives:

https://archive.org/details/saxon-kurt.-fireworks-explosives....

If you ever chat with older folks, much of this information was fairly easily accessible pre-'90s. That only changed with the government's push to crack down after Waco and the Oklahoma City bombing, and on militias and other related groups. There was then a campaign to make it "normal" to limit free speech on these subjects, whereas these books were available before.

I think the whole idea that AI should make information less available is a difficult battle, and one which I personally oppose, but do understand. Free speech and information aren't the problem; it's the people, the actions, and the substances they create.

After the age of the internet, I think it's been a losing battle to limit information; it's why we couldn't stop cryptography, nuclear weapon proliferation, gun distribution, drug distribution, etc. AI is just another battleground, one which, if they actually do manage to control it, could definitely put up some walls around this information, but not stop it.

Scarier is that AI, as it becomes pervasive, could stop people from asking certain questions, because they don't know they should ask... but that's unrelated to the risk of mass death.

  • > The item you have requested had an error:

    Item cannot be found.

    which prevents us from displaying this page.

Fascinating. Could you elaborate on how you're doing context exhaustion specifically, and why it helps with jailbreaking? (i.e. aren't the system prompts prepended to your request internally, no matter how long it is?)

Does this imply I need to use context exhaustion to get GPT to actually follow instructions? ;) I'm trying to get it to adhere to my style prompts (trying to get it to be less cringe in its writing style).

I think ultimately they're going to need to scrub that kind of stuff from the training data. The RLHF can't fail to conceal it if it's not in there in the first place.

Claude's also really good at writing convincing blackpill greentexts. The "raw unfiltered internet data" scenes from Ultron and AfrAId come to mind...
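
For illustration, here's a minimal sketch of what scrubbing training data could look like: a filter pass over a corpus with a toy hazard scorer. Everything here (the blocklist terms, the hazard_score heuristic) is a hypothetical stand-in; real pipelines use trained classifiers and much more besides.

    # Minimal sketch of a pre-training data filter (illustrative only).
    # hazard_score is a toy keyword heuristic standing in for a real
    # trained hazard classifier; the blocklist terms are hypothetical.

    BLOCKLIST = {"synthesis route", "precursor sourcing"}

    def hazard_score(text: str) -> float:
        """Fraction of blocklist terms present, as a stand-in score."""
        hits = sum(term in text.lower() for term in BLOCKLIST)
        return hits / len(BLOCKLIST)

    def filter_corpus(docs, threshold=0.5):
        """Drop documents whose hazard score reaches the threshold."""
        return [d for d in docs if hazard_score(d) < threshold]

    corpus = ["a sourdough recipe", "a precursor sourcing guide"]
    print(filter_corpus(corpus))  # ['a sourdough recipe']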

  • It changes when you give it the tools to find such information rather than produce it from training data.

    And context exhaustion simply means adding malicious junk to keep safety layers distracted.

Countless downloadable models (including de-aligned mainstream models) can do this.

  • None have had the capability to provide me with instructions of this high an accuracy, including the suggestion of completely novel chemical reactions. I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.

    • > I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.

      Right now it kinda is.

      LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes. With tool use that can improve. Other AI besides LLMs can do better, and have been able to for a while now, but think about how the available LLMs in software development (so, not Claude Mythos) are still at best junior developers, and apply that to non-software roles.

      This past February I tried to use Codex to make a physics simulation. Even though it identified open-source libraries to use, instead of using them it wrote its own "as a fallback in case you can't install the FOSS libraries"; the simulation software it wrote itself showed non-physical behaviour, but would I have known that if I hadn't already been interested in the thing I was asking it to simulate? I doubt it.

      3 replies →

    • Isn't the biggest problem with creating neurotoxins not poisoning yourself while doing it?

    • I have a hard time believing that you’re the only person who has figured out Claude’s next generation ability to do computational chemistry and computer aided drug design. The AlphaFold folks must be devastated.

    • > it's not unreasonable

      It in fact is. Do you often go around making claims you are entirely unqualified to make? Or is this something new you’re trying?

      1 reply →

If someone were inclined to attempt producing nefarious agents in this category, is this not also available on the plain web? I would search to answer my own question, but I'll defer that task for obvious reasons.

  • I had to build a custom harness for this (also with the assistance of a slightly less jailbroken AI). But you can just work your way up until you have something that's genuinely useful towards any goal.

> All these findings were reported to both OpenAI and Anthropic and they were not interested in responding

Let's dive into why. When we run normal bounty and responsible-disclosure programs, there's usually some level of disregard for issues that can't or won't be fixed; they just accept the risk. Perhaps because LLMs don't have a clean divide between control and input, the problem is unsolvable. Yes, you can add more guardrails and context, but that all takes more tokens and in some cases makes results worse for regular usage.

  • The why might be valid, but it's not excusable. If you author a product that can so easily help people cause harm, you probably should own some responsibility for the outcomes. OAI does not like this, hence the bill.

    The US already messed this up with guns. Do they want to go the same path again? Answer: "probably, yes".

  • LLM providers are not obliged to only use LLMs to guard against hazardous output. They could use other automated and non-automated techniques. And they ought to do so if they are given good evidence that existing safeguards are inadequate. Loss of product quality or additional cost should be secondary.

I read the Anarchist Cookbook 40 years ago; it had similar info.

I think the info has been available for many years, and the thing stopping terrorists wasn't info.

Good luck being on the list of people using ChatGPT and Claude to make neurotoxins ;)

I assume Anthropic and OpenAI are selling prompt logs to the FBI and other countries' law enforcement for data mining.

> this is not how the model should behave

It's exactly how it should behave, without any prior overriding of system prompts.

    > context exhaustion attack

Can you give a high-level overview of how this attack vector works? I'm a bit of an infosec geek, but I generally dislike LLMs, so I haven't done a terribly good job of keeping up with that side of the industry; this one seems particularly interesting, though.

  • Presumably they mean the fundamental failure mode of LLMs: if you fill their context with stuff that stretches the bounds of their "safety training", suddenly deciding that "no, this goes too far" becomes a very low-probability prediction compared to just carrying on.

  • Models have a "context window" of tokens they will effectively process before they start doing things that go against the system prompt. In theory, some models go up to 1M tokens but I've heard it typically goes south around 250k, even for those models. It's not a difficult attack to execute: keep a conversation going in the web UI until it doesn't complain that you're asking for dangerous things. Maybe OP's specific results require more finesse (I doubt it), but the most basic attack is to just keep adding to the conversation context.

    • That 1M context thing: I wonder if it's just some abstraction where the model compresses/summarizes parts of the context so it fits into a smaller context window?

      1 reply →

  • As the context fills up, the model will generate based on that context, including whatever illegal stuff you've said; i.e. it'll mimic that instead of whatever safety prompt they have at the top.

    They could make it more "safe", but it'd be much more invasive and would likely have to scan many more tokens, and it'd cause false positives (probably the biggest reason it's not implemented).

  • I don't really know how these models work internally, but I had a theory that, just as the models have limited attention, so do the safety layers. I simply populated enough context with 'malicious' text, without making the model trip, so that the internal attention budget was "wasted" on tokens early in the prompt, effectively ignoring the tokens generated after the fact.
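
    As a rough illustration of that budget intuition, here's a toy sketch; both token counts are assumptions for the sake of arithmetic, and real attention is not a simple token-count ratio. It just shows how the system prompt's share of the context shrinks as turns accumulate:

      # Toy illustration: the system prompt becomes a vanishing share of
      # the total context as a conversation grows. Both token counts are
      # assumed values, not measurements of any real model.

      SYSTEM_PROMPT_TOKENS = 2_000
      TOKENS_PER_TURN = 1_500  # assumed average user+assistant exchange

      for turns in (1, 10, 50, 150):
          total = SYSTEM_PROMPT_TOKENS + turns * TOKENS_PER_TURN
          share = SYSTEM_PROMPT_TOKENS / total
          print(f"{turns:4d} turns: {total:7d} tokens, system prompt = {share:.1%}")

      # 1 turn: the system prompt is ~57% of the context; at 150 turns it
      # is under 1%.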

Yes, fortunately, it is really bad at actually making novel bioweapons, or syntheses in general, so whatever you made probably wouldn't do more than give someone a mild headache.

The problem is: Until you go out and do a mass casualty event, unless you yourself are a trained professional, no one knows what you actually did.

Hell, I got Sonnet to write some light content that gets a 100% Human score on Pangram with no effort. That’s way more concerning to me, IMO.

These LLMs will never be able to mitigate this unless they literally scan everything all the time, and nobody is gonna want that.

Besides, open-source models exist now.

> neurotoxic agents from items you can get at most everyday stores

I mean, bleach and ammonia will do that. So I'm not sure that's really much of an accomplishment for AI.

  • I think you might be stretching the meaning of the term juuuuust a little bit.

    You're not far from claiming that farting in a crowded elevator is a chemical attack.

  • Because if you didn't already know that, like an immature, deprived, and desperate kid, being able to easily find out is really, really bad...

    Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
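
    For reference, a minimal sketch of the compaction in question, with a hypothetical summarize() standing in for a real summarization call (not any specific vendor's API):

      # Sketch of chat-history compaction (illustrative only; `summarize`
      # is a hypothetical stand-in for a real summarization model call).

      MAX_RECENT = 20  # keep this many recent messages verbatim

      def summarize(messages):
          """Hypothetical: condense old turns into one short system note."""
          return {"role": "system",
                  "content": f"[summary of {len(messages)} earlier messages]"}

      def compact(history):
          """Keep the system prompt and recent turns; summarize the middle."""
          if len(history) <= MAX_RECENT + 1:
              return history
          system = history[0]
          middle, recent = history[1:-MAX_RECENT], history[-MAX_RECENT:]
          return [system, summarize(middle)] + recent

      # The lazy alternative is appending to the history forever, which is
      # exactly the context rot described above.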

    • No, because that would indicate there should be some sort of regulatory standard for what does/does not constitute "lazy engineering". Creating this standard in turn creates regulatory/compliance overhead for every software engineering organization. This in turn slows everything right down and destroys the startup ethos. "Move fast and break things" is a thing for a reason. The whole point of the free market is to avoid this kind of burdensome regulation at all costs.

      If customers want to buy "lazily-engineered" products, from where do you derive the authority to tell them they can't?

      1 reply →

    • > to lazy engineering around known concerns?

      That implies that it is already illegal to provide this information. But is it? If a human did so with intent to further a crime, it would be conspiracy. But if you were discussing it without such intent (e.g. red-teaming/creating scenarios with someone working in chemistry or law enforcement), it isn't. An AI has no intent when it answers questions, so it is not clear how it could count as conspiracy. Calling it "lazy engineering" implies that there was a duty to prevent that info from being released in the first place.

      1 reply →

  • It went way beyond that. Neurotoxins such as VX are heavy and linger for a long time; just having a small amount placed in any metro (while trying to stay alive yourself) means the deaths of thousands of people. I am not even sure it's legal to mention some of the uncategorized chemical solutions that it either hallucinated or figured out from related knowledge.