Comment by wolframhempel
2 years ago
I feel there is a strong interest among large incumbents in the AI space to push for this sort of regulation. Models are increasingly cheap to run and open source, and there isn't much of a defensible moat in the model itself.
Instead, existing AI companies are using the government to raise the threshold for newcomers to enter the field. A regulation requiring every AI company to run a testing regime staffed by a 20-person team is easy for incumbents to meet, but impossible for newcomers.
Now, this is not to deny that there are genuine risks in AI - but I'd argue that these will be exploited, if not by US companies, then by others. And the best weapon against AI might in fact be AI. So pulling the ladder up behind the existing companies might turn out to be a major mistake.
Yes, there are interests pushing for regulation using different arguments.
The regulation in the article is about AIs assisting in the production of weapons of mass destruction, and mentions nuclear and biological weapons. Yann LeCun posted this yesterday about the risk of runaway AIs that would decide to kill or enslave humans, but regulation based on either argument results in an oligopoly over AI:
> Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
> They are the ones who are attempting to perform a regulatory capture of the AI industry.
> You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.
> ...
> The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet.
> What does that mean for democracy?
> What does that mean for cultural diversity?
https://twitter.com/ylecun/status/1718670073391378694
I find LeCun’s argument very interesting, and the whole discussion has parallels to the early regulation of and debate around cryptography. For those of us who aren’t on Twitter and aren’t aware of all the players in this, can you tell us who he’s responding to, as well as who “Geoff” and “Yoshua” are?
As the sibling comment by idkwhatiamdoing says, Geoff is Geoffrey Hinton: «Geoffrey Hinton leaves Google and warns of danger ahead» https://www.youtube.com/watch?v=VcVfceTsD0A
Probably Geoffrey Hinton and Yoshua Bengio, who have both made major contributions to the field of AI over their scientific careers.
I feel, when it comes to pushing regulation, governments always start with the maximalist position since it is the hardest to argue against.
- the government must regulate the internet to stop the spread of child pornography
- the government must regulate social media to stop calls for terrorism and genocide
- the government must regulate AI to stop it from developing bio weapons
...etc. It's always easiest to push regulation via these angles, but then the regulation ends up covering 100% of the regulated subject rather than the 0.01% that was the "intended" target.
At the risk of sounding pedantic, it's probably worth pointing out that this executive order isn't really regulating AI.
That's Congress's job.
It's setting out some guidelines and specifying how AI is used internally in the government and by government-funded entities.
We're still free to develop AI any way we choose.
Andrew Ng would be inclined to agree.
"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," he told the news outlet. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."
https://www.businessinsider.com/andrew-ng-google-brain-big-t...
When I read the original announcement, I had hoped it was more about the transparency of testing.
E.g. "What tests did you run? What results did you get? Where did you publish those results so they can be referenced?"
Unfortunately, this seems to be more targeted at banned topics.
No "How I make nukulear weapon?" is less interesting than "Oh, our tests didn't check whether output rental prices were different between protected classes."
Mandating open and verified test results would be an interesting, automatable, and useful regulation around ML models.
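To make the idea concrete, here's a minimal sketch of the kind of automatable disparity check described above. Everything in it is a hypothetical illustration: the disparity_report helper, the predict_rent stand-in for a model under test, the toy records, and the 5% flagging threshold are all assumptions, not anything from the article or the executive order.

```python
# Hypothetical sketch of an automatable, publishable fairness test.
# `predict_rent` stands in for whatever model is under test; the data,
# group labels, and 5% threshold below are illustrative assumptions only.
from statistics import mean

def disparity_report(predict_rent, applicants, group_key="protected_class"):
    """Compare mean predicted rental prices across groups and flag large gaps."""
    by_group = {}
    for applicant in applicants:
        by_group.setdefault(applicant[group_key], []).append(predict_rent(applicant))

    group_means = {group: mean(preds) for group, preds in by_group.items()}
    baseline = mean(group_means.values())
    return {
        group: {
            "mean_prediction": m,
            "relative_gap": (m - baseline) / baseline,
            "flagged": abs(m - baseline) / baseline > 0.05,  # illustrative threshold
        }
        for group, m in group_means.items()
    }

if __name__ == "__main__":
    # Toy model and toy records, purely to show the report format.
    toy_model = lambda a: 1000 + 50 * a["bedrooms"] + (25 if a["protected_class"] == "B" else 0)
    applicants = [
        {"bedrooms": 2, "protected_class": "A"},
        {"bedrooms": 2, "protected_class": "B"},
        {"bedrooms": 3, "protected_class": "A"},
        {"bedrooms": 3, "protected_class": "B"},
    ]
    print(disparity_report(toy_model, applicants))
```

A report like this could be published alongside a model release and re-run by third parties, which is roughly what "open and verified test results" would mean in practice.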
Perhaps ironically, limiting competition in the AI space might well be riskier. If the barrier to creating AI is low, then a great variety of AIs can be built for the purpose of fighting AI misuse.
If only a few organisations can create competitive AI, no one can compete with them if they turn out to be less than ideal.
It increases the threshold to enter, but with the intention of increasing public safety and accountability. There’s also a high threshold to enter for just about every other product you can manufacture and purchase - food, pharmaceuticals, and machinery, to name obvious examples - so why should software be different if it can affect someone’s life or livelihood?
There are two things in this take that IMHO are a bit off.
People are skeptical that the regulatory threshold has anything to do with increasing public safety or accountability, and suspect it instead pulls the ladder up to stop others (or open-source models) from catching up. This is a pointless, self-destructive endeavour in either case, as no other country is going to comply with these regulations; if anything, they will view them as an opportunity to help companies local to their jurisdiction (or their national government) catch up.
The other problem is that asking why software should be different if it can affect someone's life or livelihood is quite a broad ask. Do you mean self-driving cars? Medical scanners? Diagnostic tests? I would imagine most people agree with you that this should be regulated. If you mean "it threatens my job and therefore must be stopped" then: welcome to software, automating away other people's jobs is our bread and butter.
Feels a little like getting a license from Parliament to run a printing press to catch people printing scandalous pamphlets, no?
Didn't the printing press lead to the modern idea of copyright and to the Reformation, and by extension contribute to the Eighty Years' War, and through that to Westphalian sovereignty?
Because software is protected under the First Amendment: https://www.eff.org/cases/bernstein-v-us-dept-justice
Government cannot regulate it.
Published software is protected.
Entities operating SaaS are in a much greyer area.
Agree that the best weapon against AI (in the hands of power) is equal AI access for all.
Hate to be the nitpicker but "defensible moat" implies the moat itself is what needs protecting :)
>best weapon against AI (in the hands of power) is equal AI access for all.
That assumes the threat isn't complete annihilation of humanity, which is what's being claimed. That assumption is the weak link, and is what should be attacked.
Again, if we assume that AI poses an existential risk (and to be clear, I don't think it does), then it follows that we should regulate it analogously to the way in which we regulate weapons-grade plutonium.
Power accessible by few in private contexts is ripe for hidden abuses. We have seen this time and time again. I would rather 1 billion people trying to work with AI to "change" the world than a group of elites without many to care for. The technology can be used for defense as well as it can be used for offense. Who says the people with unsafeguarded access have the best intentions? At least with equal access for all, we can be sure there are people using it with good intentions.
> Instead, existing AI companies are using the government to increase the threshold for newcomers to enter the field.
Precisely. And the same governments will make stealing your data and IP legal. I believe that’s how corruption works - pump money into politicians and they make laws that favour oligarchs.
Is there any statement in this Executive Order that increases the bar for smaller AI companies? Most of the statements are about funding new research or fostering responsible use of AI, and the only statement that would add a burden to AI companies seems to be the first one: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government." And only the most powerful AI systems have such a requirement.
Big companies making it difficult for new players to get in, in the name of safety.
Too many small players have made the jump to the big leagues already for those who don’t want competition.
Just echoing what the article said - maybe more succinctly.
If some people are going to have the tech, it will create a different kind of balance.
Tough issue to navigate.