Comment by pr337h4m
2 years ago
First Amendment hasn't been fully destroyed yet, and we're talking about large 'language' models here, so most mandates might not even be enforceable (except for requirements on selling to the government, which can be bypassed by simply not selling to the government).
Edited to add:
https://www.whitehouse.gov/briefing-room/statements-releases...
Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - they have NO POWER over general-purpose AI developers (as long as they're not government contractors)
The first point: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."
The second point: "Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."
Since the actual text of the executive order has not been released yet, I have no idea what is even meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent release of your AI model to the public would be blatantly unconstitutional, since prior restraint is prohibited under the First Amendment. The Supreme Court confirmed that the prohibition on prior restraint applies even when "national security" is invoked, in New York Times Co. v. United States (1971), the Pentagon Papers case. And the Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models. More on prior restraint here: https://firstamendment.mtsu.edu/article/prior-restraint/
Basically, this EO is toothless - have a spine and everything will be all right :)
Most restrictions probably aren't enforceable.
> After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.
https://en.wikipedia.org/wiki/Bernstein_v._United_States
Also, the Defense Production Act was never meant for anything like this, and this use of it likely won't survive if challenged, assuming the EO isn't shut down in some other way first.
Every other use of the act has been to ensure that production of something remains in the US. It might even be possible to use the act to require that the model be shared with the government, but I'm not sure how they justify using it to add "safety" requirements.
Also, any idea if this would apply to fine-tunes? It's already been shown that you can bypass many protections simply by fine-tuning a model, and fine-tuning is far more accessible than training an entire model from scratch.
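To put a rough number on "more accessible": with a low-rank adapter (LoRA-style fine-tuning), you only train a small factorized update W + BA instead of the full weight matrix, so the trainable parameter count collapses. A back-of-the-envelope sketch (the hidden size and rank below are made-up illustrative values, not from any particular model):

```python
# LoRA-style fine-tuning trains W + B @ A, where B is (d x r) and A is (r x d),
# instead of updating the full (d x d) weight matrix W.
# d and r here are hypothetical, chosen just to show the arithmetic.
d = 4096   # hidden dimension of one weight matrix
r = 8      # adapter rank

full_params = d * d            # parameters updated in full fine-tuning
lora_params = d * r + r * d    # parameters in the low-rank adapter

print(full_params)                  # 16777216
print(lora_params)                  # 65536
print(full_params // lora_params)   # 256 -> 256x fewer trainable parameters
```

So per weight matrix, the adapter touches roughly 0.4% of the parameters, which is why fine-tuning an existing model is within reach of hobbyist hardware while pretraining is not.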
On the subject of toothlessness:
>Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
So the big American companies will be guided to watermark their content. AI-enabled fraud and deception from outside the US will not be affected.
--
>developing any foundation model
I'm curious why they specified this.