I'm honestly curious, how so? From what I can tell the only thing which isn't a "we'll research this area" or "this only applies to the government" is "tell the US government how you tested your foundational models."
For example, AI watermarking only applies to government communications; it may end up serving as a standard for non-government uses, but it's not required.
That last one seems like a pretty big deal though. It's not just how you tested, but "other critical information" about the model.
I imagine the government can deem any AI to be a "serious risk" and prevent it from being made public.
The EU regulation is here: https://www.europarl.europa.eu/news/en/headlines/society/202...
It is also very open-ended, but the US text reads like some compliance will start immediately, such as sharing the results of safety tests directly with the government.