Comment by photochemsyn
2 years ago
If they try to limit LLMs from discussing nuclear, biological and chemical issues, they'll have no choice but to ban all related discussion because of the 'dual-use technology' problem - including discussion of nuclear energy production, antibiotic and vaccine production, insecticide manufacturing, etc. Similarly, illegal drug synthesis differs from legal pharmaceutical synthesis only in minor ways. ChatGPT will tell you everything you want to know about making aspirin from willow bark using acetic anhydride - and if you replace the willow bark with morphine from opium poppies, you're making heroin.
Also, when it comes to physical weapons, script kiddies aren't much of a threat compared to the cyberattack problem. Could one get an LLM to code up a Stuxnet-style attack of some kind? Are the regulators going to try to ban all LLM coding related to industrial process controllers? Seems implausible, although I suppose the concerns are justified.
I'm sure the regulatory agencies are well aware of this and are just waving this flag around for other reasons, such as gaining censorship power over LLM companies. With respect to the DOE's NNSA (see article), ChatGPT is already censoring 'sensitive topics':
> "Details about any specific interactions or relationships between the NNSA and Israel in the context of nuclear power or weapons programs may not be publicly disclosed or discussed... As of my last knowledge update in January 2022, there were no specific bans or regulations in the U.S. Department of Energy (DOE) that explicitly prohibited its employees from discussing the Israeli nuclear weapons program."
I'm guessing the real concern is that, without external controls, LLMs will start burbling on at length about politically and diplomatically embarrassing subjects like this. In this case, NNSA support for the Israeli nuclear weapons program would constitute a violation of the Non-Proliferation Treaty.