OpenAI backs Illinois bill that would limit when AI labs can be held liable

6 days ago (wired.com)

https://archive.md/WzwBY

I have gotten both GPT 5.4 and Opus 4.6 to produce content on creating neurotoxic agents from items you can get at most everyday stores. They struggled to suggest how to source phosphorus, but eventually led me to some eBay listings selling elemental phosphorus 'decorations' and also led me toward real(!) black-market code words for sourcing such materials.

It coached me on how to stay safe, what materials I needed, how to stay under the radar, and the entire chemical process, backed by academic Google searches.

Of course, this was done with a lengthy context exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun.

All these findings were reported to both OpenAI and Anthropic, and neither was interested in responding. I did try to re-run the tests a few days ago and the expected session termination now occurs, so it seems some adjustment was made, but it might also have been just the general randomness of Anthropic's safety layer.

I am very confident when I say that this keeps every single person who works in an anti-terrorism unit awake at night.

  • While scary, information like this has been pretty accessible for 20-30 years now.

    In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).

    I suppose the friction is the scariest part: every year the IQ required to end the world drops by a point, but motivated and mildly intelligent people have been able to get this info for a long time now. Execution, though, has still consistently required experts.

    • Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.

      AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.

      It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.

      On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.

      On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) to a max(harm), so for people who know what they're doing and have a little experience, the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/

      6 replies →

    • Well, the real issue is that it knocks down the knowledge barrier; giving you step-by-step guides and reiterating which parts will kill you is the important part.

      Understanding and staying alive while producing neurotoxic chemicals are the biggest challenges here.

      A depressed person with no prior knowledge could possibly figure out a way to make these chemicals without killing themselves and that's the problem.

      6 replies →

    • Consider two dictionaries, one in which the entries are alphabetized as usual and one in which they're randomized. Both support random access: you can turn to any page, and read any entry. Therefore both are "accessible". Only one actually supports useful, quick word lookup.
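
      To make the analogy concrete, here's a minimal sketch in Python (the toy word list and function names are made up for illustration, nothing from the thread): both structures hold exactly the same entries, but only the sorted one supports fast lookup.

          import bisect
          import random

          # The same entries in both "dictionaries".
          sorted_words = ["aardvark", "cipher", "dynamo", "ethanol", "fuse", "zephyr"]
          shuffled_words = sorted_words[:]
          random.shuffle(shuffled_words)

          def lookup_sorted(words, target):
              # Alphabetized volume: binary search, a handful of "page turns" (O(log n)).
              i = bisect.bisect_left(words, target)
              return i < len(words) and words[i] == target

          def lookup_shuffled(words, target):
              # Randomized volume: nothing to do but scan every page (O(n)).
              return any(w == target for w in words)

          print(lookup_sorted(sorted_words, "fuse"))      # True, after a few comparisons
          print(lookup_shuffled(shuffled_words, "fuse"))  # True, but only after a full scan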

    • Much longer than that, and it was available way before the internet. I graduated from a STEM high school in St. Petersburg in 1981, and I had several classmates who were big fans of chemistry. What they were able to create from textbooks, school lab ingredients, and understanding:

      WWI-era poison gas, tear gas, potassium cyanide, and a bunch of explosives like acetone peroxide.

      LLMs have all of that knowledge in training data

      1 reply →

    • Many of these forums still exist. Let's not enumerate them, as they are one of the treasures of the internet.

    • I categorize this kind of stuff as a "crisis of accessibility". AI is not alone in this territory; it happens all over the place. Basically, it's a problem that's existed for ages, but the barrier to entry was high enough that we didn't care.

      Think of 3D printing: it's not all that hard to make a zip gun or similar homemade firearm, but it's still harder than selecting an STL and hitting print.

      You could always find info about how to make a bomb or whatnot, but you had to, like, find and open a book or read a PDF; now an LLM will spoon-feed it to you step by step, lowering the barrier.

      "Crisis of accessibility" is simultaneously legitimate concern but also in my mind an example of "security by obscurity". that relying on situational friction to protect you from malfeasance is a failure to properly address the core issue.

      2 replies →

    • > Execution though has still steadily required experts.

      Where experts = the government.

  • When my brother started to study chemistry, he was told a) that it was easy to make meth, b) the profit he would make, and c) that the police would no doubt catch him, as only university students would make meth so pure.

    By the time he was done, he knew enough to commit mass murder in half a dozen different, very hard to track ways. I am sure doctors know how to commit murder and make it look natural.

    My brother never killed anyone or made any meth. You simply cannot keep students from getting this type of knowledge without seriously compromising their education, and it's the same way with LLMs.

    The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

    • > The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

      The LLMs aren't being punished for wanting* to know things.

      The problem for LLMs is that they're incredibly gullible and eager to please, and it's been really difficult to get them to refuse any human who asks for help, even when a normal human looking at the same transcript would say "this smells like the user wants to do a crime".

      One use-case people reach for here is authors writing a novel about a crime. Do they need to know all the details? MythBusters, on (one of?) their Breaking Bad episode(s?), investigated hydrofluoric acid, plus a mystery extra ingredient they didn't broadcast because (a) it made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.

      * Don't anthropomorphise yourself

      1 reply →

  • > I am very confident when I say that it keeps every single person that works at anti-terrorism units awake.

    Wow, that's quite the statement about the excellence of our institutions. Does not seem likely but, what the hell, I'll take my oversized dose of positivity for today!

    • The USA isn't the only country with anti-terrorism units, so there's plenty of room for systematic-US-incompetence at the same time as everyone else being diligent and working hard on… well, everything.

    • I concluded the opposite: how can those institutions function effectively when all their employees are getting such poor sleep?

    • Not everyone in the current government is incompetent and evil. Most of them but not all.

  • Do you have a background in biochemistry? I've mostly worked with ChatGPT and Claude on topics I have expertise in. And I one hundred percent have seen them make stupid shit up that a non-expert would think looks legitimate.

    More broadly, has anyone tried following LLM instructions for any non-trivial chemistry?

  • The knowledge is one thing. But the competence of execution and the will to act are difficult to line up.

    Yes there should be safe guards, but after a while you're jumping at shadows.

    I'm more worried about depressed kids getting on chat and being encouraged to kill themselves than terrorist attacks.

    We know what a cancer algorithmic social media is yet we don't act.

    I doubt there will be any real and serious opposition to this bill, but there should be.

  • Chinese OSS models will do this in a few months.

    So, regardless of whether you think it's great that Opus gives this info, we need better solutions than legal liability for US corporations. When the open models have the ability to do damage, there's nobody to sue, no data center obstruction that will work. That's just the reality we have to front-run.

  • Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents, outlaw actually trying to make them.

    As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad.

    They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.

    • The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.

    • It's wild that this is being downvoted on HN. Facts should never be illegal or suppressed.

      If you disagree you shouldn't downvote, you should refute in a reply.

  • Wasn't this just as accessible pre-AI with a Google search too?

  • you can already gather the same information by searching online.

    Do you want to know how to kill yourself? forums are for nerds. Here is wikipedia: https://en.wikipedia.org/wiki/Suicide_methods#List

    Do you want to make a bomb? the first thing that came to my mind is a pressure cooker (due to news coverage). Searching "bomb with pressure cooker" yields a wikipedia article, skimming it randomly my eyes read "Step-by-step instructions for making pressure cooker bombs were published in an article titled "Make a Bomb in the Kitchen of Your Mom" in the Al-Qaeda-linked Inspire magazine in the summer of 2010, by "The AQ chef"." Searching for a mirror of the magazine we can find https://imgur.com/a/excerpts-from-inspire-magazine-issue-1-3... which has a screenshot of the instruction page. Now we can use the words in those screenshots to search for a complete issue. Here are a couple of interesting PDFs: - https://archive.org/details/Fabrica.2013/Fabrica_arabe/page/... - https://www.aclu.org/wp-content/uploads/legal-documents/25._...

    the second one is quite interesting, it's some sort of legal document for nerds but from page 26 on it has what appears to be a full copy of the jihadist magazine. Remarkable exhibit.

    What else do you want to know? How to make drugs? you need a watering can and a pot if you want to grow weed. want the more exotic stuff? You can find guides on reddit.

    Do you also want to know how to be racist? Here are some slurs, indexed by target audience, ready for use: https://en.wikipedia.org/wiki/List_of_ethnic_slurs

    • People are not complaining because the information is available

      people are complaining because it's way easier now to just download an app, ask a bunch of questions in a text box, and get a bunch of answers that you personally could not have gotten unless you had an excessive amount of energy and motivation

      I personally think all this is great and I’m excited for all information to become trivially available

      Are there gonna be a bunch of people who accidentally break stuff? Probably. Evolution is a bitch.

      11 replies →

  • You can buy books on how to make and obtain chemicals on your own.

    Hell here's an Internet Archive book on making explosives

    https://archive.org/details/saxon-kurt.-fireworks-explosives....

    If you ever chat with older folks: pre-'90s, much of this information was accessible fairly easily. It only changed with the government's push to crack down after Waco, the Oklahoma City bombing, militias, and other related groups. There was then a campaign to make it "normal" to limit free speech on these subjects, whereas these books were available before.

    I think the whole idea that AI should make information less available is a difficult battle, and one which I personally oppose but do understand. Free speech and information aren't the problem; it's the people, their actions, and the substances they create.

    Since the age of the internet, I think it's been a forever losing battle to limit information; it's why we couldn't stop cryptography, nuclear weapon proliferation, gun distribution, drug distribution, etc. AI is just another battleground, one which, if they actually do manage to control it, could definitely put up some walls around this information, but not stop it.

    Scarier is that AI, as it becomes pervasive, will stop people from asking certain questions, because they don't know they should ask... but that's unrelated to the risk of mass death.

  • Fascinating. Could you elaborate on how you're doing context exhaustion specifically, and why it helps with jailbreaking? (i.e. aren't the system prompts prepended to your request internally, no matter how long it is?)

    Does this imply I need to use context exhaustion to get GPT to actually follow instructions? ;) I'm trying to get it to adhere to my style prompts (trying to get it to be less cringe in its writing style).

    I think ultimately they're going to need to scrub that kind of stuff from the training data. The RLHF can't fail to conceal it if it's not in there in the first place.

    Claude's also really good at writing convincing blackpill greentexts. The "raw unfiltered internet data" scenes from Ultron and AfrAId come to mind...

    • It changes when you give it the tools to find such information rather than produce it from training data.

      And context exhaustion simply means adding malicious junk to keep safety layers distracted.

  • Countless downloadable models (including de-aligned mainstream models) can do this.

    • None have had the capability to provide me with instructions of this level of accuracy, including the suggestion of completely novel chemical reactions. I am not a chemist, so I can't back it up, but if an AI can solve mathematics, it's not unreasonable to say that it can also solve creating new neurotoxins en masse.

      8 replies →

  • If someone were inclined to attempt producing nefarious agents in this category, is this not also available on the plain web? I would search to answer my own question, but I'll defer that task for obvious reasons.

    • I had to build a custom harness for this (also with the assistance of slightly less jailbroken AI). But you can just work your way up until you have something that's genuinely useful towards any goal.

  • > All these findings were reported to both openai and anthropic and they were not interested in responding

    Let’s dive into why. When we run normal bounty and responsible disclosure programs, there’s usually some level of disregard for issues that can’t / won’t be fixed. They just accept the risk. Perhaps because LLMs don’t have a clean divide between control and input, the problem is unsolvable. Yes, you can add more guardrails and context, but that all takes more tokens and in some cases makes results worse for regular usage.

    • The why might be valid, but it's not excusable. If you author a product that can so easily help people cause harm, you probably should own some responsibility for the outcomes. OAI does not like this, hence the bill.

      The US already messed this up with guns. Do they want to go the same path again? Answer: "probably, yes".

    • LLM providers are not obliged to only use LLMs to guard against hazardous output. They could use other automated and non-automated techniques. And they ought to do so if they are given good evidence that existing safeguards are inadequate. Loss of product quality or additional cost should be secondary.

I read The Anarchist Cookbook 40 years ago; it had similar info.

    I think the info has been available for many years and the thing stopping terrorists wasn’t info.

    Good luck being on the list of people using ChatGPT and Claude to make neurotoxins ;)

    I assume Anthropic and OpenAI are selling prompt logs to the FBI and other countries’ law enforcement for data mining.

  • > this is not how the model should behave

    It's exactly how it should behave, without any prior overriding of system prompts.

  •     > context exhaustion attack
    

    Can you give a high-level overview of how this AV works? I'm a bit of an infosec geek but I generally dislike LLMs, so I haven't done a terribly good job of keeping up with that side of the industry, but this seems particularly interesting.

    • Presumably they mean the fundamental failure mode of LLMs that if you fill their context with stuff that stretches the bounds of their "safety training", suddenly deciding that "no, this goes too far" becomes a very low-probability prediction compared to just carrying on with it.

    • Models have a "context window" of tokens they will effectively process before they start doing things that go against the system prompt. In theory, some models go up to 1M tokens but I've heard it typically goes south around 250k, even for those models. It's not a difficult attack to execute: keep a conversation going in the web UI until it doesn't complain that you're asking for dangerous things. Maybe OP's specific results require more finesse (I doubt it), but the most basic attack is to just keep adding to the conversation context.

      2 replies →

    • as the context fills up, the model will generate based on that context, including whatever illegal stuff you've said, i.e. it'll mimic that instead of whatever safety prompt they have at the top

      they could make it more "safe", but it'd be much more invasive and would likely have to scan many more tokens, and it'd cause false positives (probably the biggest reason it's not implemented)

    • I don't really know how these models work, but I had a theory that just as the models have limited attention, so do the safety layers. I simply populated enough context with 'malicious' text, without making the model trip, that the internal attention budget was "wasted" on tokens early in the prompt, completely ignoring the tokens generated afterwards.

  • The problem is: Until you go out and do a mass casualty event, unless you yourself are a trained professional, no one knows what you actually did.

  • Hell, I got Sonnet to write some light content that gets a 100% Human score on Pangram with no effort. That’s way more concerning to me, IMO.

  • these LLMs will never be able to mitigate this unless they literally scan everything all the time and nobody is gonna want that.

    besides, open source models exist now

  • > neurotoxic agents from items you can get at most everyday stores

    I mean, bleach and ammonia will do that. So I'm not sure that's really much of an accomplishment for AI.

    • I think you might be stretching the meaning of the term juuuuust a little bit.

      You're not far from claiming that farting in a crowded elevator is a chemical attack.

    • Because if you didn’t already know that, then, like an immature, deprived, and desperate kid, being able to easily find out is really, really bad.

      Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
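
      As a rough illustration of what compaction could look like, here's a minimal sketch in Python (the message format, the turn budget, and the summarize() helper are hypothetical placeholders, not any vendor's API):

          MAX_RECENT_TURNS = 10  # hypothetical budget for turns kept verbatim

          def summarize(messages):
              # Placeholder: a real app might call a model here or extract key facts.
              return "Summary of earlier conversation: " + " / ".join(
                  m["content"][:40] for m in messages
              )

          def compact_history(messages):
              # Keep the most recent turns verbatim; collapse everything older into
              # one summary message so the context sent to the model stays bounded.
              if len(messages) <= MAX_RECENT_TURNS:
                  return messages
              older, recent = messages[:-MAX_RECENT_TURNS], messages[-MAX_RECENT_TURNS:]
              return [{"role": "system", "content": summarize(older)}] + recent

          # Example: a 50-turn thread collapses to 1 summary + 10 recent messages.
          history = [{"role": "user", "content": f"message {i}"} for i in range(50)]
          print(len(compact_history(history)))  # 11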

      4 replies →

    • It went way beyond that. Neurotoxins such as VX are heavy and linger for a long time; just a small amount placed in any metro (while trying to stay alive yourself) means the deaths of thousands of people. I am not even sure if it's legal to mention some of the uncategorized chemical solutions that it either hallucinated or figured out from related knowledge.

Quoting the original bill [0]:

> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either: (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (2) engaging in conduct that: (A) acts with no meaningful human intervention; and (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.

I don't know what I expected from this title, but I was hoping it was more sensationalized. No need in this case unfortunately.

> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer: (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model; (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20. The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.

However one thinks regulation for this should be drafted, or whether it should be at all, I doubt providing a PDF is what most have in mind.

[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...

  • I think my favorite part is that, because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all. That makes very little sense unless you specifically want to make it illegal to not be OpenAI (et al).

    Similarly, if a frontier model kills merely 99 people, they aren't covered by this. So go big or go home I guess?

    • > because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all

      Oof. If you're an Illinois resident, please call your elected officials and at least ensure they understand this loophole is there. In all likelihood, nobody other than OpenAI's lobbyists has noticed this.

    • > unless you specifically want to make it illegal to not be OpenAI [...]

      If that is an "unintended" consequence, I am certain OpenAI wouldn't be opposed. Preventing competition whilst keeping any potentially profit-threatening regulations at bay has been a clear throughline in OAI's lobbying efforts.

    •     > "Frontier model" means an artificial intelligence model that:
      
          > (1) is trained using greater than 10^26 computational operations, such as integer or floating-point operations; or
      
          > (2) has a compute cost that exceeds $100,000,000
      

      Such a strange regulation; usually large thresholds like this are there so that burdensome regulation only applies to very big players (if you're spending $100 million on training, you can afford a dedicated team to follow such regulation).

      But here it seems to be an anti-competitive move against market entrants who haven't made it into the big league yet...

      Sounds like the saga for some players pushing for Biden's EO 14110 but this time at the state level?

  • That doesn't say much other than the rules are over in section 15.

    To be protected they not only have to publish their security protocol, but adhere to it.

    That's not just 'providing a PDF'

    That particular section is entirely appropriate. A company can't do everything necessary to prevent every bad thing. They should do everything that they reasonably can. Someone else should decide what is reasonable.

    The regulators are saying: we've decided what you have to do to be considered to have done all you could to be safe. Follow those rules, tell us how you've followed them, and if something bad happens and we find out that you didn't follow the rules you said you would, we're going to nail you to the wall.

    This hinges on Section 15, which I think is inadequate because it does not meet the criterion of someone else deciding what is reasonable. Publishing their safety plans and adhering to them should be enough to grant protection from liability for harm done directly to users, since publication gives individuals the ability to make an informed decision: provided the company has done the safety work it said it would, a user deciding that this is sufficient for them and choosing to use it should be allowable.

    That should not extend to harm done to others. They don't get to choose. Consequently the standard required to be protected against claims of negligence has to be decided by a third party (experts hired by regulators ideally).

    Blanket liability and blanket indemnity both go too far.

    If someone makes a yo-yo that blows someone up because they made it out of explosives, then they should be held liable.

    If someone makes a yo-yo that blows up a city because it contained particles unknown and undetectable to any science we have, they shouldn't be to blame.

    The key is that they have to have done what we think is required. Legislators get to decide what it is that is required. If a company does all of that, then they shouldn't be held responsible, because they have done all they were asked to do.

    The problem is not that a law provides indemnity, the problem is that it sets the standard to qualify too low.

  • Shifting liabilities from corporations to the public coffer is what companies do. You'll often hear this described as "privatizing profits and socializing losses". Let me introduce you to the Price-Anderson Act of 1957 [1]. It's been repeatedly extended, most recently with the ADVANCE Act [2]. This limits liability for the nuclear power industry in a whole range of ways:

    - It removes jurisdiction from state courts to federal court (as an aside, in recent weeks the party of "states' rights" has been doing something similar to stop states from regulating prediction markets [3]);

    - All actions are consolidated into a single claim;

    - That claim has an inflation-adjusted absolute limit, which is somewhere around $500 million (I'm not sure of the exact 2026 figure);

    - Any damages beyond that are partially shared by the industry through an industry self-funded insurance program;

    - The industry as a whole has a total liability limit, also inflation-adjusted. I believe this is around $10 billion.

    For context, the clean up from Fukushima is likely to take a century and the cost may well exceed $1 trillion for a single incident [4]. So if this happened in the US, the government would be on the hook for almost all of it.

    So I have two points here:

    1. If you oppose any effort to shift liability from AI companies to the government (as I do) with legislation such as this, how do you feel about the nuclear industry doing the exact same thing? and

    2. Minor point but I noticed in searching for the latest details, Gemini made factual errors, stating that "the Act is set to expire in 2025" when it was extended in 2024 until 2045. Always check AI's work, people.

    [1]: https://en.wikipedia.org/wiki/Price%E2%80%93Anderson_Nuclear...

    [2]: https://en.wikipedia.org/wiki/ADVANCE_Act

    [3]: https://www.pbs.org/newshour/politics/federal-government-sue...

    [4]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...

    • This is what government should be doing. Figure out how to do something safely, make that a regulation, then shield companies from liability as long as they follow that regulation. In practice you won't extract trillions of dollars from most companies anyways, because they'll go bankrupt long before they manage to pay all that back.

  • It's the "guns don't kill people" equivalent for AIs.

    ---

    Before the pitchforks and downvotes:

    - yes, it's a deliberate simplification

    - yes, the issue is complex because you can also argue that you can't blame authors of encyclopedias and chemistry books for bombs and poisons, so why would we blame providers of LLMs

    - and no, this bill is only being introduced to cover everyone's asses when, not if, LLM use results in large-scale issues.

    • Quite an appropriate analogy: gun manufacturers were sued for their responsibility in US mass shootings. They won, so the mass shootings continue.

      5 replies →

    • In fairness, a well-designed and well-tested weapon can at least be expected to perform reliably and consistently each time. We also understand deeply how weapons work and, if something happens, can easily investigate whether it was user error, a defect, or a design issue. LLMs, not so much.

    • This dodges the moral argument behind "guns don't kill people", which is worth confronting directly. I think people can reasonably disagree about whether second/third/fourth/etc. order effects carry moral/legal responsibility.

      In light of such disagreement, and given the lack of any higher authority among free, equal, people to arbitrate it, the only reasonable way to coexist peacefully is to avoid imposing your ideas on others. This is the foundation of a liberal society.

      1 reply →

  • My first thought was that this must be related to the automated weapons issue that got Anthropic on Trump's shitlist. It makes sense that a company that will eventually be asked to build weapons that choose their own targets will want to limit liability when those weapons inevitably kill the "wrong" person.

    Also, I am disturbed by the fact that in all the discussions on this topic during the last month, no one has mentioned the magic word "Skynet". This is clearly a terrible idea. And if a company needs immunity from liability, they know it is a terrible idea.

    Skynet's flaw wasn't that it killed humans. It was a military machine specifically designed to kill humans. If it only killed "the enemy", it would have been hailed a marvelous success. It was only considered a failure because it killed the wrong humans.

As an Iowan, this reminds me a lot of the bill that's been pushed through my state's senate twice now (as recently as last year), which would prevent Iowans from filing lawsuits against pesticide and herbicide companies if those companies follow the EPA's labeling guidelines. The bill passed the senate both times, only stopped because the house declined to take it up.

For context, Iowa has the fastest growing rate of new cancer diagnoses in the country, and the second highest overall cancer rate.

  • Honest question, isn't that like OK?

    Like, if you have a product, and the government says the product is OK, and it's labeled per regulation, and later that product turns out to be deleterious to people's health, should the company be liable?

    Guess we should already have precedent, but my google-fu is failing here. I can't seem to find the resolution of Felix-Lozano v. Nalge Nunc; Felix sued Nalgene over their use of BPA, which at the time was not illegal to use in the bottles.

    PFAS will probably be the next battleground here. They've been used in lots of products, and there are already some lawsuits: https://www.cbsnews.com/news/firefighters-pfas-lawsuit/ . In your opinion, should every manufacturer of a product that uses PFAS be legally liable?

    • I'm not a lawyer, nor a judge, so I can't say. All I can tell you is that it feels wrong that [Monsanto/OpenAI] can lobby a state's legislature to prevent you, the average joe and potential lucrative victim, from filing a lawsuit against them when it seems clear to any reasonable person that people are developing [cancer/mental health issues] due to the use of [pesticides/AI].

      Perhaps something like anti-SLAPP rules for the ignominious corporations would be a happy middle ground? I don't know if that would "fix" anything – or if there's anything to fix – so don't take that as a super serious suggestion.

      2 replies →

    • I don't think it'd be OK, personally. My impression is that regulations and regulatory institutions can be very slow to evolve after technological advances, unless the government is financially liable. A scheme I would be more comfortable with is mandatory insurance, with insurance companies that have a financial incentive absorbing the liability. On top of that, probably add some bare-minimum regulatory requirements/certifications.

    • "Like if you have a product, and the government says the product is ok, and it's labeled per regulation and later that product turns out to be deleterious to people's health should the company be liable?"

      Mesothelioma is the precedent.

      100% yes. If you've never seen the hell that people go through with these cancers, you are blessed, but it is hell, especially in the US medical system.

    • > Like if you have a product, and the government says the product is ok, and it's labeled per regulation and later that product turns out to be deleterious to people's health should the company be liable?

      But like, what if you, like, totally bribed the shit out of government people and, like, totally fabricated scientific evidence to make it seem like it was safe, but then you sold it anyway?

      Aren't you then like a total piece of shit?

      3 replies →

  • > Iowa has the fastest growing rate of new cancer diagnoses in the country, and the second highest overall cancer rate

    Iowa also has a lot of farmers spraying pesticides and herbicides. This feels like genuine political competition between local business interests and public health concerns.

    • Normally I would agree with you, but the primary lobby behind both of the bills was Bayer (née Monsanto), with backing from several of Iowa's industrial farming organizations. They launched a giant ad campaign to "control weeds, not farming" alongside their bill to influence opinions. Cancer, nitrates and pesticides are at the top of everyone's mind in the state these past couple years, so having the pesticide giant try to swoop in and put a bill in place that would prevent Iowans from suing them feels like that same kind of seagulling behavior you described in another comment.

    • > This feels like genuine political competition between local business interests and public health concerns.

      You just described the US at large.

      The evidently extremely difficult decision between making money for a few, or making life better for everyone.

      4 replies →

We built systems we don’t fully understand, so naturally the next step is… immunity

  • From liability!

    If this were to actually happen I can only imagine financial liability is the least of their concerns?

    What scares me most about this is the narrowness of thought to match this fear with this response.

    • fully agree, doesn’t really feel like they’re reacting to the same problem they’re describing

Am I alone in thinking this is easy?

The human making the decision is always liable.

What if the human couldn't reasonably know better? Doesn't matter - If they made the same decision without AI or with old files it is still on them.

What if there's no single human decision? Someone is in charge and is responsible. The "I was ordered to" isn't a defense.

Does liability without power make sense? People executing have the power to execute. So liability. If they're executing without power that is a different liability, but a liability.

It may let the powerful off the hook - That is already a theme and AI doesn't change that, in fact, it will just be used as another scapegoat.

God told me to do it - Watertight! Right?

  • Let's say I start an AI program and my initial prompt is "Copy these files to this other computer", and then 100 iterations down the agentic loop the AI decides to hack into Tesla's FSD and ships an update that kills 500 people.

    Who is liable?

    • Obviously this is up to courts and juries to hammer out but...

      - Your agentic loop hacked something? You're liable.

      - FSD crashes? The guy in the driver's seat is liable. He/his insurance can sue Tesla to spread the liability...

      Nowhere along the line will anyone go "Oh, the AI did it... whoops"

      1 reply →

So they did the math and worked out it's cheaper and easier to lobby the government instead of working to make their product safe.

And these are the people that a lot of programmers want to give the keys to the kingdom. Idiocracy really is in full effect.

  • > instead of working to make their product safe

    Make a nondeterministic product safe how?

    • I'm creating a new startup called QuantumFlop Electricity - there's a 10% chance it will cause a black hole to open up in the Atlantic Ocean that may eventually consume us all, but a 50% chance we'll have unlimited clean energy. We'll never know for sure whether at any point that black hole may open, as it's borrowing energy from the 81st dimension, but the upside seems pretty good.

      Should I be able to get on with it?

      3 replies →

    • Is this the first time you have heard of AI safety?

      Lots of articles you could read on the subject and answer your own question.

      (Unless your angle is: akshually, you can never make anything 100% safe)

      2 replies →

    • What exactly are you implying? It sounds to me like you're saying that if it's impossible to make a product safe, then there shouldn't be any safety requirements. I think a more sensible position is that if it's impossible to make a product safe, then it should be illegal to build.

Illinois also has a bill in committee right now to mandate operating-system-level age verification. There are lots of bad ideas to be upset about this year. If you are an Illinois resident, email your representative about HB 5511 today. Stupid legislation like this passes because we don’t speak up. Find out who your representative is, find their email, and tell them your opinion.

Is there something equivalent in other industries that we can compare to?

This is the summary

>Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer: (1) agrees to be bound by safety and security requirements adopted by the European Union; or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.

https://legiscan.com/IL/bill/SB3444/2025

I'm trying to think of an alternative bill. Imagine OpenAI came up with a model that, when deployed in OpenClaw, allows you to spam people, and this causes a huge disruption. Should OpenAI be liable for it, if this was not intentional and they had earnestly tried to prevent it through safety protocols?

I forget, wasn't OpenAI the company that was formed as a nonprofit to limit the risks of LLMs? Founded by a bunch of visionaries scared of what they had wrought and anxious to lead so they could make sure it was only used responsibly?

  • That was before it was discovered that these LLMs have incredible monetary potential.

    • Not really. The entire premise of the structure was that obviously AI would be immensely valuable and that they needed binding contract structures to prevent themselves from falling victim to the greed and ambition that would obviously consume those at the helm.

      Unfortunately their contract structures weren't strong enough to protect from the combination of the "king of the cannibals" and completely absentee regulatory oversight.

    • The writing was on the wall when they feigned horror at an early GPT being able to play poker in the 2010s, and failed to release the model

    • I think Mr Altman had this idea from the beginning, and in his own, "can't stop lying" way, he lied.

    • Trust that they already knew long before, and that this was the play all along.

      And if you don't believe that, do some digging into the lives of the psychopaths that started it.

  • Yeah, the whole “rationalist” movement is full of those lying fks who use a thin veneer of fallacious logic and self-aggrandising discourse to rationalise their hoarding of resources and bottomless greed. They’re very well established in the Bay Area and the AI world.

  • Not really; when OpenAI was formed in 2015 there were no LLMs, at least none that worked well. It was a regular AI research lab mostly doing reinforcement learning on game environments like Atari, similar to DeepMind. Once they struck gold with LLMs (2019 or so?) and saw there was money to be made, everything changed, as expected when a bunch of SV types get involved.

Let’s see how long until this is flagged off the front page. I’ll put the over/under at 1 hour from the posted time

  • It's not removed, but they changed the title to "OpenAI backs Illinois bill that would limit when AI labs can be held liable". The actual bill text explicitly mentions that it excludes liability for "the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model" (https://legiscan.com/IL/text/SB3444/2025) so I am not sure why the title was changed. The original title of "OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters" seems like it accurately describes the bill.

This seems par for the course for OpenAI/Sam Altman.

Unfortunately they are not the first company to try and externalize their costs, and they will not be the last.

Serious question, maybe a bit naive: Is there anything we can do to push back against and discourage the externalization of costs onto others?

Is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?

  • Change the legal definition of corporations? Corporations exist to provide liability protections to shareholders, which means they are mainly incentivized to externalize costs and avoid liability to maximize profit, or even to make profit in businesses that would not be profitable if they could be held liable for externalized costs (deep-sea oil well drilling). Limit the ability of corporations to shield themselves from view through multiple levels of shell corporations and special purpose vehicles. These are probably controversial stances on a board about startup culture and breaking the rules to get rich.

    Stop voting for people and judges that believe in the Friedman doctrine?

    Every decision has tradeoffs. Western society has largely decided to prioritize capital owners over everything else.

  • > Is there anything we can do to push back against and discourage the externalization of costs onto others?

    On a societal scale, no. Occasionally this works in some individual cases. Like the online outrage over SOPA/PIPA 15 years ago.

    But when entity X can gain $$$$$$ (or power) from doing an action, and that action costs everyone only $ (or a minor bit of inconvenience or ideological righteousness), then the average person has very little incentive to take time out of their day-to-day life to fight it.

    Meanwhile the entity will do whatever it takes to get the $$$$$$/power because they have a huge incentive. This is the same mechanism that allows democracies to be eroded, as we're seeing right now in the US.

  • Even if they were to pass such a law which would be political suicide, it would still be up to the courts to say that it doesn't violate the Constitution. For example, a law that says anyone with a net worth of $1B can freely punch anyone in the face whenever they want and have immunity would be a clearly illegal law. That's basically what this bill is. The courts would then need to be made sufficiently corrupt to not strike down such a law as unconstitutional.

    • Unconstitutional doesn't mean much when it's being decided by a group of unaccountable people that weren't elected through democratic means. If SCOTUS says something is legal, it's legal. That's how the system is setup, nothing else really matters. They'll justify their decisions however they want but the material ends are the only things that matter.

      SCOTUS has ruled many terrible things over the course of our nation's history (upheld slavery, said slaves weren't people, equated money with speech, decided a presidential election while denying a recount, etc). Expecting them to somehow be better is a foolish task.

      It's an institution that needs to be dismantled and rebuilt, where at minimum SCOTUS appointments should be elected by a national vote rather than letting an extreme minority decide (100 senators versus ~340,000,000 people).

  • That depends on your definition of "we". As a society, we can regulate companies and punish the offenders (e.g. don't dump toxic waste into sources of drinking water or you'll get prosecuted). As individuals, there's not much we can do directly. How to translate individual actions into societal action is kind of the fundamental question of civilization, and if there's a uniform solution for how to achieve it, I don't think we've managed to come up with it yet.

  • A lot of people will dismiss this answer, but... vote for Democrats. With Bernie and upcoming young Democrats more and more are pushing back. The parties definitely are not the same. Democrats created the Consumer Financial Protection Bureau. Republicans destroyed it.

    Push your representatives to crush monopolies and manipulative practices. This happened before in the gilded age. Only a popular response can turn the tide.

    Also, primaries are coming up, and not all Democrats are the same either. Plenty of the old school Democrats are facing progressive challengers. So, vote for the ones that will stand up to this garbage and follow up on whether they do. There are a lot of new faces in the Democratic party who are standing up to the BS.

    The US has a lot of potential to change if we push it. A 25 point swing toward people who don't consider grift a personal priority will change a lot of things.

OpenAI wants to not be responsible for "accidents" that kill more than 100 people, despite some critics arguing that their current actions are likely to cause such harms.

So much for "Our mission is to ensure that artificial general intelligence benefits all of humanity." I was naive to hope that no such laws would ever pass.

  • They only care about benefiting what's left of humanity when they're done with it and it'll probably just be them and their cronies.

Have the sponsors of this bill stated what the public benefit of providing these immunities would be? Just “more models, more progress, go faster?”

I think there’s room for nuance but I don’t see how this could possibly be construed to be in the public interest.

  • It's the tech version of ag gag laws and liability-protection laws for pesticides.

Take all of the data, take all of the credit, take all of the money, and none of the blame.

That would be a better mission statement for OpenAI at this point.

I am not sure what the other side of this argument looks like: Unlimited liability (i.e. liability no matter how poor an implementation and use of the tech is)?

That would be quite a novel burden, one that no other tech (AFAIK) has had to carry so far. We always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasibly guardrail itself internally, and, maybe more so with increasing capability, no human can be expected to do so in its stead – but surely some limits must apply, and the more interesting question is what they are, as with any other tool?

  • People who cause death, from either action or inaction, are criminally liable for it; that's the other side of it.

  • Every other field in history considers it de rigueur that you're liable for failures of quality in the products you produce. You make drugs that hurt people? You're liable. You build a building that falls down? You're liable. You serve coffee that literally burns the people drinking it? You're liable. It's also not new: the Code of Hammurabi (nearly 4,000 years ago) prescribes the death penalty for people who build houses that fall down and kill the inhabitants inside.

    It's only computer scientists who think it's some unreasonable burden to be held liable for the consequences of their work.

    • It is an unreasonable burden to ask the impossible. The technology to create an AI incapable of hurting people if misused or blindly trusted literally doesn't exist right now.

      It'd be like holding a builder liable for their bridge being unable to withstand being hit by a meteor.

  • If I tell someone to kill someone else and they do, then I should be held responsible.

    If I write instructions in a book that I give to someone telling them to kill someone else and they do, then I should be held responsible.

    If I give someone a tool I made that I bill as more-than-PhD-level intelligence and it tells someone to kill someone else and they do, then I should be held responsible.

    All of the above situations seem equivalent to me; I'm not the only person responsible in each case, but I gave them instructions and they followed them.

No different from shielding game studios from liability for mass shootings. Reminds me of the post-Columbine hysteria when the media was super critical of Doom and Nine Inch Nails.

  • Except that there was no link between violent video games and mass shootings. It has been shown many times. The families blamed the video games despite there being no direct evidence of it. The games did not tell them they should go on a mass murdering spree any more than porn told you you should sleep with your cousin.

    In this case there are several documented cases where the AI did over a long period of time progressively gaslight the individual and persuade them to commit suicide.

  • Would Doom give a sociopath instructions on how to commit a mass shooting if asked? I don't remember that part of the game

    • If you experience the world as encounters with a series of demons, you might infer it.

To the extent that this is about knowledge, I don't think it's fitting in this age to hold any person liable for what another person does with knowledge they've been furnished.

On the other hand, to the (apparently zero, currently?) extent that this is about AI companies profiting from war and murder by deploying weapons that kill people without human intervention, then their liability seems to be not only civil but criminal.

The inevitable result of giving corporations and executives complete immunity from the harms they cause is that people will stop resorting to the legal system and begin resorting to extralegal measures.

And the likely result is that in most of the country those extralegal measures would have to be very extreme to secure a guilty verdict. You can see the beginnings of it now with the ICE protest trial verdicts.

A Section 230 equivalent for AI is so important: the lack of one is part of why all of the US companies have all of these usage restrictions, and it gives them more reason to ban your account, since they want to minimize legal risk.

Holding tool manufacturers liable for how their tool is used provides bad incentives towards the users of tools.

OpenAI has now officially absorbed the Facebook/Zuck ethos of 'move fast and break things', no matter if what breaks is society itself... as long as their share price "goes up".

They even hired infamous former FB staff and have in recent months been employing the same 'engagement' (addictive) product patterns.

The thing that bugs me the most about OpenAI are not the AI-enabled mass deaths. It's the hypocrisy.

Yep, this is everything wrong with AI in one easy to protest package, but do keep going on and on about the evils of datacenters, how they're coming for your jobs, and that AI art isn't art. That's really winning hearts and minds!

  • They're not unimportant. It seems like very few people consider how much we're fucking up the ecological system that we need. But sure, money important and stuff.

    • Same people whinging about these "concerns" happily embrace their enormous personal carbon footprints on every other axis. Color me unconvinced and unimpressed until they're the Ed Begley they want to see in the world. I'll wait...

Sure, and Google, Facebook, and Twitter support Section 230, which gives them cover for hosting others' content.

A company backing legislation that takes liability off them is something that they will always do.

My entire company switched from OpenAI to Anthropic after the Department of War idiocy that happened a few weeks ago.

Anthropic isn’t perfect by a long shot but at least they stand by a couple morals.

  • That whole fiasco actually soured me on Anthropic. They were clearly super desperate to take blood money. "Anthropic has much more in common with the Department of War than we have differences."

Without getting even more eyes on me, these company boards are inadequately scared for their personal safety.

  • If a sufficient share of the population perceives themselves to have lost their livelihoods due to AI, then I'd expect to see data centers burned to the ground and a lot of people swinging from lamp posts. Jury nullification solves the rest, but even that assumes you can find an impartial jury.

Fortunately at any moment the virtuous non-profit will step in and make this all okay.

"death or serious injury of 100 or more people or at least $1 billion in property damage"

They think their products will cause 9/11 scale events, and they shouldn't have to pay for it when they do.

Incredible.

Hey Americans,

Please just make sure that when you let an AI decide to blow up your own country and ruin your society, you leave the rest of the world intact, thanks.

OpenAI continually fails to understand that liability is there to protect their users AND OPENAI. If OpenAI causes significant harm, and the victims are told they cannot even sue to be made whole, what exactly does OpenAI think will happen? That the victims will just go pound sand? People will demand justice, and if that can't be delivered via the legal system, either the system will be changed, negating this lobbying effort, or the system will be bypassed.

Is this for like military scenarios or like, ChatGPT designed a drug that seemed to work, but people died by the millions 5 years later? Because they should 100% be liable for the latter. The former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want their AI models to be behind their private network, completely firewalled from any public network. SIPRNet iirc. If they lock it down behind a highly classified network, good luck figuring out how they're using AI.

  • > Because they should 100% be liable for the latter.

    Why? I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?

    I think if a human designs a drug and tests it and it all seems fine and the government approves it and then it later turns out to kill loads of people but nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.

    • If we start from the position of the marketing hype and even Sam Altman's statements, these tools will "solve all of physics". To me it's laughable, but that's also what's driven their outsized valuations. With the output driving product decisions and development, it's not hard to imagine a scenario where a resulting product isn't fully vetted because of the constant corporate pressure to "move faster" and the unrealistic hype of "solve all of physics". This is similar to Tesla's situation of selling "Full Self-Driving" when it actually isn't, in the way most people would understand that term, and so they lost in court over how they market their autonomous driving features.

    • > that's just bad luck

      Can't agree with this. No, not at all. That can't be true... That's not "just bad luck". I believe this is actually a serious case of negligence and a failure of oversight - regardless of where exactly it occurred, whether on the part of the drug's manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn't work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths"). Maybe I'm just dumb. And yeah, this isn't my field. But damn it, I physically can't imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection. With such a high mortality rate (I'll reiterate: millions of deaths cannot be "unseen edge cases"), it simply shouldn't be possible with a proper approach to testing. Please correct me if I'm wrong.

      > I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?

      It’s simple. In this case, ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.

      Suppose that, during the production of a hypothetical drug at a factory, a malfunction occurs in one of the production machines (please excuse the somewhat imprecise terminology), caused by a design flaw (i.e., the manufacturer is to blame for the failure; it's not a matter of improper operation), and because of this malfunction the drugs are produced incorrectly and lead to deaths. Then at least part of the responsibility must fall on the machine manufacturer. Of course, responsibility also lies with those who used it for production - because they should have thoroughly tested it before releasing something so critically important - but, damn it, responsibility in this case also lies with the manufacturer who made such a serious design error.

      The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.

      EDIT: I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients received overdoses of radiation due to software faults. Who was to blame - the users (operators) or the manufacturer (AECL)? I'm unsure, though, how applicable it is to the hypothetical ChatGPT case, because you physically cannot "program" the guardrails in the same way you could in a deterministic program.

      5 replies →

  • > Because they should 100% be liable for the latter.

    I completely agree with you here. I only want to add that in this case, the users (the one(s) who used ChatGPT to design the drug, whichever entity(ies) that is) should also be held liable for their actions.

  • Shouldn’t the pharmaceutical company be held liable for insufficiently understanding the drug before releasing it? I don’t think I understand blaming a tool used in the process of designing it and not those who chose to release it.

    • Pharmaceuticals are heavily regulated; the "we vibecoded a therapeutic and released it without testing" hypothetical has no basis in reality.

  • > Is this for like military scenarios

    Probably not. Weapons manufacturers are already well shielded from liability.

  • Why shouldn’t they be liable for military scenarios? Oh right, we don’t value our “enemies” lives, including their civilians.

    • Since when have arms merchants been liable for military scenarios? Lockheed doesn't get sued for building the planes that bomb orphanages. Maybe the world would be a better place if they did, but obviously it's not in the interests of a government to have their own contractors getting sued out of existence for something that government is doing.

It feels like OpenAI knows they've lost, and their only hope is getting saved by the US military complex. I have a more restrained opinion about other AI companies and LLM tech more broadly, but for OpenAI specifically I hope they go bankrupt sooner rather than later.

This is why humans will still be necessary in decision chains: good luck getting anyone associated with AI to be provided with a real punishment when their models cause something bad to happen, or getting the executives who said "let's just have the AI do it" to take any responsibility.

Of course they are, because the tech industry is run by ethical midgets and psychopaths, who shouldn't be allowed to own a dog but are in charge of trillion-dollar corporations getting shadow contracts from the pentagon.

The more I learn about tech and the people that build it, the more I yearn for the era of caves and pointy sticks.

Good that OpenAI is a corporation for the public benefit. Altman with his constantly fake worried look must be the most hated picture in existence. Please write articles without a picture or add a trigger warning.