Comment by rvz
2 years ago
> So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model?
Yes.
This is exactly what this EO is meant to do: it amplifies the fear of extremely large models for the sake of so-called "AI safety" nonsense.
The best counterweight against AI being controlled by a select few companies is making it accessible to all, including through open-source or $0 AI models.
A 'safety score' for a cloud-based AI model is hardly transparent.