Comment by nearbuy

11 hours ago

> The main ones are that most people don't want to be mass murderers and actually doing it would be the fast ticket to Epic Retaliation.

The main thing preventing random nutcases from making nuclear weapons is they don't have access to the required materials. Restricting the instructions is unnecessary.

It would be a very different story if someone discovered a new type of WMD that anyone could make in a few days from commonly available materials, if only they knew the secret recipe.

> It would be a very different story if someone discovered a new type of WMD that anyone could make in a few days from commonly available materials, if only they knew the secret recipe.

It would need even more to be public. Suppose it was easy to make a biological weapon. You wouldn't be able to effectively censor it anyway and trying to would leave you sitting on an apocalypse bomb waiting for it to leak to someone nefarious or get independently rediscovered before anyone else is allowed to discuss it. What you need is for knowledge of how it works to be public so that everyone can join in the effort to quickly devise countermeasures before some nutcase destroys the world.

Moreover, if something is already public enough to be in the AI training data then it's already public.

  • Your plan is to release the secret recipe that anyone can use to make a WMD in a few days to absolutely everyone and hope someone comes up with a countermeasure before some nutcase or terrorist decides to try out the new WMD?

    The odds of us inventing and deploying countermeasures to a new bomb or chemical weapon or biological agent in a few days are minuscule. You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical. What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?

    • > What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?

      The premise of censorship is that you're trying to prevent someone from telling other people something. If the only person who knows how to do it is some scientist who is now going to try to come up with a countermeasure before announcing it, there is no need for a law prohibiting them from doing something they've chosen not to do. And even then it's still not clear that this is the right thing to do, because what if their efforts alone aren't enough to come up with a countermeasure before someone bad rediscovers it? If they decide they need help, the law should prohibit them from telling anyone?

      Which brings us back to AI. If the scientist now goes to the AI for help, should it refuse because it's about a biological weapon? What happens if that delays the development of a countermeasure until it's too late?

      Meanwhile if this is someone else and they ask the AI about it, it's only going to be in the training data if it's already public or can be deduced from public information, and when that's the case you're already in a race against the clock and you need everyone in on finding a solution. This is why we don't try to censor vulnerabilities that are already out there.

      > You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical.

      There are some principles that should always be upheld because the exceptions are so rare or ridiculous or purely hypothetical that it's better to eat them than to let exceptions exist at all. The answer has to be "yes, we're going to do it then too" or people get into the business of actually building the censorship apparatus and then everybody wants to use it for everything, when it shouldn't exist to begin with.


Not quite a nuke (just try obtaining enough uranium ore) but there are some fairly dangerous things a determined nutcase can make without drawing suspicion.

Examples of determined nutcases include Aum Shinrikyo, who tried anthrax, botulinum toxin, and nukes before succeeding with sarin gas (thank IG Farben!) among other things.

It's a fascinating (if troubling) story: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack#Back...