Comment by marcinzm

2 years ago

Reading this, all I'm seeing is "we'll research these things," "we'll look into how to keep AIs from doing these things," and "tell the US government how you tested your foundation models." Except for the last one, none of these are really restrictions on anything or requirements for working with AI. There are a lot of fearful comments here; am I missing something?

Even the testing reports are a grey area of questionable enforceability, and there's a big question about what they actually apply to.

"In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."

It's a leap to use the Defense Production Act for this, and it seems unlikely to survive a legal challenge.

Even then, what legal test would you use to determine whether a model "poses a serious risk to national security, national economic security, or national public health and safety"?

So they paid some lip service to the "ban matrix math" crowd but otherwise ignored them. Top notch.