Comment by thorum (1 day ago)
The practical problem I see is that unless US AI labs have perfect security (against both cyber attacks and physical espionage), which they don’t, there is no way to prevent foreign intelligence agencies from just stealing the weights whenever they want.
Of course. They're mitigations, not preventions. Few defenses are truly preventative. The point is to make it difficult. They know bad actors will try to circumvent them.
This isn't lost on the authors. It is explicitly recognized in the document:
> The risk is even greater with AI model weights, which, once exfiltrated by malicious actors, can be copied and sent anywhere in the world instantaneously.
> The point is to make it difficult.
Does it, though?
This. We put toasters on the internet and are no longer surprised when services we use send us breach notices at regular intervals. The only thing this regulation would do, as written, is add an interesting choke point for compliance regulators to obsess over.