
Comment by giancarlostoro

6 days ago

Is this for like military scenarios or like, ChatGPT designed a drug that seemed to work, but people died by the millions 5 years later? Because they should 100% be liable for the latter. The former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want their AI models to be behind their private network, completely firewalled from any public network. SIPRNet iirc. If they lock it down behind a highly classified network, good luck figuring out how they're using AI.

> Because they should 100% be liable for the latter.

Why? I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?

I think if a human designs a drug and tests it and it all seems fine and the government approves it and then it later turns out to kill loads of people but nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.

  • If we start from the marketing hype and even Sam Altman's own statements, these tools will "solve all of physics". To me that's laughable, but it's also what's driven their outsized valuations. If the output is used to drive product decisions and development, it's not hard to imagine a scenario where a resulting product isn't fully vetted because of constant corporate pressure to "move faster" and the unrealistic hype of "solve all of physics". This is similar to Tesla selling "Full Self-Driving" that isn't full self-driving in the way most people would understand the term, which is why they lost in court over how they marketed their autonomous driving features.

  • > that's just bad luck

    Can't agree with this. No, not at all. That's not "just bad luck". I believe this is actually a serious case of negligence and a failure of oversight - regardless of where exactly it occurred, whether on the part of the drug's manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn't work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths"). Maybe I'm just dumb, and this isn't my field, but I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection. With such a high mortality rate (to reinforce the point: millions of deaths cannot be "unseen edge cases"), it simply shouldn’t be possible with a proper approach to testing. Please correct me if I'm wrong.

    > I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?

    It’s simple. In this case, ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.

    Suppose that, during the production of a hypothetical drug at a factory, one of the production machines (please excuse the somewhat imprecise terminology) malfunctions because of a design flaw - i.e., the manufacturer is to blame for the failure; it's not a matter of improper operation. If, because of this malfunction, the drugs are produced incorrectly and lead to deaths, then at least part of the responsibility must fall on the machine's manufacturer. Of course, responsibility also lies with those who used it for production - they should have thoroughly tested anything so critically important before releasing it - but, damn it, responsibility in this case also lies with the manufacturer who made such a serious design error.

    The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.

    EDIT: I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients received radiation overdoses due to software faults. Who was to blame - the users (operators) or the manufacturer (AECL)? I'm unsure how applicable it is to the hypothetical ChatGPT case, though, because you physically cannot "program" guardrails into an LLM the way you can in a deterministic program.
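
    To illustrate the contrast, here's a minimal, hypothetical sketch of a deterministic guardrail in Python (the names and the dose limit are invented, and this is not actual Therac-25 logic - just the shape of the thing):

      DOSE_LIMIT_CGY = 200.0  # hard ceiling, independent of operator input

      def deliver(dose_cgy: float, mode: str) -> None:
          # Stub standing in for the real actuator.
          print(f"delivering {dose_cgy} cGy in {mode} mode")

      def fire_beam(dose_cgy: float, mode: str) -> None:
          # Deterministic guardrail: an auditable check that provably
          # runs on every code path before the beam fires.
          if mode not in ("xray", "electron"):
              raise ValueError(f"unknown mode: {mode!r}")
          if dose_cgy > DOSE_LIMIT_CGY:
              raise RuntimeError("interlock tripped: dose exceeds hard limit")
          deliver(dose_cgy, mode)

      fire_beam(180.0, "xray")      # delivers
      # fire_beam(25000.0, "xray")  # raises; the beam never fires

    An LLM has no equivalent choke point to put such a check in: its "guardrails" are statistical tendencies in the weights, not verifiable branches.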

    • > I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection.

      It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example. You can't test drugs indefinitely, at some point you need to say the test is over and it looks good. What if the downsides occur past the end of the test horizon?

      > ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.

      ChatGPT is not intended to be a drug manufacturing tool, though, is it? If you use any other random piece of software in the course of designing drugs, a bug you didn't notice that results in faulty drugs doesn't become the software developer's fault. And that's if it's even a bug! ChatGPT can give bad advice without having any bugs at all. That's just how it works.

      In the Therac-25 case, the machine was designed and marketed as a medical treatment device. If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with", then sure, they should be liable. But that would be an insane thing to claim.

      I think where there may be some confusion is if ChatGPT claims that a drug design is safe and effective. Is that a de facto statement from OpenAI that they should be held to? I don't think so. That's just how ChatGPT works. If we can't have a ChatGPT that is able to make statements that don't bind OpenAI, then I don't think we can have ChatGPT at all.


    • > it simply shouldn’t be possible with a proper approach to testing.

      The harm just has to be delayed - say, many years after the drug is taken. Or it has to trigger only under very specific and rare circumstances: unlikely in a trial, but near certain at population scale.

      Or both...

      On top of that, if I remember correctly, this kind of liability waiver also exists for vaccines.


> Because they should 100% be liable for the latter.

I completely agree with you here. I only want to add that in this case the users - whoever used ChatGPT to design the drug, whatever entity that is - should also be held liable for their actions.

Shouldn’t the pharmaceutical company be held liable for insufficiently understanding the drug before releasing it? I don’t think I understand blaming a tool used in the process of designing it and not those who chose to release it.

  • Pharmaceuticals are heavily regulated; the "we vibecoded a therapeutic and released it without testing" hypothetical has no basis in reality.

> Is this for like military scenarios

Probably not. Weapons manufacturers are already well shielded from liability.

Why shouldn’t they be liable for military scenarios? Oh right, we don’t value our “enemies’” lives, including their civilians.

  • Since when have arms merchants been liable for military scenarios? Lockheed doesn't get sued for building the planes that bomb orphanages. Maybe the world would be a better place if they did, but obviously it's not in a government's interest to have its own contractors sued out of existence for something that government is doing.