
Comment by jefftk

2 years ago

The laws here are in pretty sad shape. For example, did you know that companies that synthesize DNA and RNA are not legally required to screen their orders for known hazards, and many don't? This is bad, but it hasn't been a problem yet, in part because the knowledge needed to interact with these companies, and to figure out what you'd want to synthesize if you were trying to cause massive harm, has been limited to a relatively small number of people with better things to do. LLMs lower the bar for causing harm by opening this up to a lot more people.

Long term, limiting LLMs isn't a solution, but while we get the laws and practices around risky biology into better shape, I don't see how else we avoid engineered pandemics in the meantime.

(I'm putting my money where my mouth is: I left my bigtech job to work on detecting engineered pathogens.)

Now I know that I can order synthetic virus RNA unscreened. Should your comment be illegal or regulated?

  • This is a lot like other kinds of security: when there's a hazard out in the wild, you sometimes need to make people aware of all or part of the problem as part of fixing it. I would expect that making it illegal to talk about the holes would make us less safe, since then they'd never get fixed.

    This particular hole is not original to me, and is reasonably well known. A group trying to tackle it from a technical perspective is https://securedna.org, trying to make it easier for companies to do the right thing. I'm pretty sure there are also groups trying to change policy here, though I know less about that.

    • You seemingly dodged the question.

      In justifying your post, you actually argued against your original assertion: the information is out there, and we should talk about it to get the issue fixed. The same justification applies to avoiding LLM censorship.

      There's a sea change afoot, and having these models in the hands of a very few corporations, aligned to the interests of those corporations rather than of individuals, is a disaster in the making. Imagine the world in two years: the bulk of the internet will be served up through an AI agent buffer. That'll be the go-to interface. Web pages are soooo last decade.

      When that happens, the people controlling the agents control what you see, hear, and say in the digital realm. Who should control the alignment of those models? It's for sure not OpenAI, Microsoft, Google, Meta, or Apple.