Comment by DebtDeflation

2 years ago

>The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release.

So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model? Welcome to a world where only OpenAI, Anthropic, Google, and Amazon are allowed to release foundation models.

> So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model?

Yes.

This is exactly what this EO is meant to do: it amplifies fear of extremely large models for the sake of so-called "AI safety" nonsense.

The best counterweight to AI being controlled by a select few companies is making it accessible to everyone, including through open-source and free ($0) models.

A 'safety score' for a cloud-based AI model is hardly transparent.

Not necessarily.

Meta could just do a "private" release, knowing that the weights will likely show up on The Pirate Bay.

All it takes is a single hero with a USB drive to effectively release world-changing technology.