Comment by daoboy

2 years ago

Andrew Ng would be inclined to agree.

"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," he told the news outlet. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."

https://www.businessinsider.com/andrew-ng-google-brain-big-t...

When I read the original announcement, I had hoped it was more about the transparency of testing.

E.g. "What tests did you run? What results did you get? Where did you publish those results so they can be referenced?"

Unfortunately, this seems to be more targeted at banned topics.

Banning "How I make nukulear weapon?" is less interesting than "Oh, our tests didn't check whether output rental prices differed between protected classes."

Mandating open and verified test results would be an interesting, automatable, and useful regulation around ML models.
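As a rough illustration of how automatable such a check could be, here's a minimal sketch of a disparity test of the kind a mandated, published test suite might include. The function name `disparity_ratio`, the threshold, and all the numbers are illustrative assumptions, not from any real regulation:

```python
# Hypothetical sketch of an automated disparity check a "publish your
# test results" rule could mandate. All names/thresholds are illustrative.

def disparity_ratio(predictions, groups):
    """Ratio of the highest group-mean prediction to the lowest."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(means.values()) / min(means.values())

# Model-predicted monthly rents and the protected class of each applicant.
rents = [1200, 1250, 1500, 1480]
classes = ["A", "A", "B", "B"]

ratio = disparity_ratio(rents, classes)
print(f"disparity ratio: {ratio:.3f}")
# A published report would record this ratio against an agreed
# threshold (here, an assumed 1.1) so anyone could re-run and verify it.
print("PASS" if ratio <= 1.1 else "FAIL")
```

The point is that a check like this is cheap to run, produces a verifiable number, and can be re-executed by third parties against the published results.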

Perhaps ironically, limiting competition in the AI space might well be the riskier path. If the barrier to creating AI is low, a great variety of AI can be built for the purpose of fighting AI misuse.

If only a few organisations can create competitive AI, no one can challenge them if their models turn out to be less than ideal.