Comment by habitue
2 years ago
There are two dominant narratives I see when AI X-Risk stuff is brought up:
- it's actually to get regulatory capture
- it's hubris, they're trying to seem more important and powerful than they are
Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI. Maybe they're wrong, but I don't think this kind of incredulous conspiratorial reaction is a useful thing to engage in.
When in doubt, take people at their word. Maybe the CEOs of these companies have some sneaky 5D chess plan, but many, many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns. They're worth taking seriously.
> Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI
This rings hollow when these companies don't practice what they preach and set an example themselves - they don't halt research or cut funding for in-house development of their own AIs.
If you believe that there’s X-Risk of AI research, there’s no reason to think it wouldn’t come from your own firm’s labs developing these AIs too.
Continuing development while telling others they need to pause seems to make “I want you to be paused while I blaze ahead” far more parsimonious than “these companies are actually scared about humanity’s future” - they won’t put their money where their mouth is to prove it.
It's a race dynamic. Can you truly imagine any one of them stopping without the others agreeing? How would they tell that the others really have stopped? I think they do believe that what they're doing is dangerous, but they would rather be the ones to build it than let somebody else get there first, because who knows what they'll do.
It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.
The best way to not get nuked is to develop nukes first. That's the gist of their usual rebuttal to this argument.
Except the argument, projected to the dimension of WMDs, is not that AI is like nukes - rather, AI is like bioweapons. Nukes are dangerous when someone is willing to drop them at someone else. Bioweapons are inherently dangerous - the more you refine them, the worse it gets; eventually, you may build one so deadly that one careless handling mistake ends the world.
If those CEOs really thought AI was as bad as nukes they would actually dissolve their companies, destroy all their data, and go churn butter with the Amish instead. The US, having developed nukes first, now has the most nuclear warheads pointed at it.
That argument doesn't hold water when they also argue that the mere existence of nukes is dangerous. I would love to hear when Hinton had this revelation, given that his life's work was to advance AI.
Apart from Japan, I'd say America is the country that has historically come closest to being nuked, with the Soviet Union a close second.
> When in doubt take people at their word.
This is not mutually exclusive with it being either hubris or regulatory capture. People see the world colored by their own interests, emotions, background, and values. It's quite possible that the person making the statement sincerely believes there's a danger to humanity, but it's actually a danger to their monopoly that their self-image will not let them label as such.
It's never regulatory capture when you're the one doing it. It's always "The public needs to be protected from the consequences that will happen if any non-expert could hang up a shingle." Oftentimes the dangers are real, but the incumbent is unable to also perceive the benefits of other people competing with them (if they could, competition wouldn't be dangerous, they'd just implement those benefits themselves).
When I see comments like these, it's clear that the commenter is probably an individual contributor who has never seen how upper management or politics actually works. Regulatory capture is probably one of the biggest wealth-generating techniques out there. It's very real.
If some rando anonymous posters could think it up, it doesn't require a CEO to play 5D chess to think it up. And many of us have witnessed these techniques being used by companies directly. Microsoft was famous for doing this sort of thing, and in a much more roundabout fashion, for instance with the SCO debacle.
It's standard business practice, not conspiracy 5D chess or whatever moniker you want to give it to be dismissive.
This is a normal way for companies to shut down competition. No cleverness required.
Many of the people making this claim are not associated with any company.
The traditional method of regulatory capture is not to purport to solve a problem that doesn't really exist, it's to go look around for whatever people are actually worried about, over-hype it if necessary, and then propose a solution which shuts out competitors whether or not it does anything about the problem. It may even reduce that specific problem while still being intentionally crafted to shut down competition.
This is not incompatible with honest people having legitimate concerns about the original problem, because the dispute is not over the existence of the problem, it's over the net benefit of the proposed solution.
You mean they are not currently employed by the well-known companies. Did they declare they divested their shares in their former employer and/or acquirer?
No, they've been sold a line by those that are, and believe it because it matches with their pre-existing assumptions.
>it's hubris, they're trying to seem more important and powerful than they are
>Both of these explanations strike me as too clever by half
This is a good point. You have to be clever to hop on a soapbox and make a ruckus about doomsday to get attention. Only savvy actors playing 5D chess can aptly deploy the nuanced and difficult pattern of “make grandiose claims for clicks”
Well, it didn't work for nuclear.
Nuclear actually ended up keeping the world mostly at peace. Unfortunately, AGI is not something you can use to create stability via MAD doctrine - it's much more like bioweapons, in that it starts as a weapon of mass annoyance, and developing it delivers spin-off tech that bolsters your economy... until you cross a threshold where a random mistake in handling it plain ends the world, just like that.
You can go back 30 years and read passages from textbooks about how dangerous an underspecified AI could be, but those were problems for the future. I'm sure there's some degree of x-risk promotion in the industry serving the purpose of hyping up businesses, but it's naive to act like this is a new or fictitious concern. We're just hearing more of it because capabilities are rapidly increasing.
> They're worth taking seriously.
1. While their contributions to AI tech are unmistakable, what do Bengio and Hinton really know about the human dangers of AI? Being an expert in one thing does not make one an expert in everything. It is unlikely that they understand the human dangers any more than any other random kook on Reddit. Why take them more seriously than the other kooks?
2. Hinton's big concern is that AI will make it easy to steal identities. Even if we assume that is true, it is already not that hard to steal identities. It is a danger that already exists even without AI and, realistically, already needs to be addressed. What's the takeaway if we are to take the message seriously? That AI will make the problems we already have more noticeable, and because of that we will finally have to get off our lazy asses and do something about those problems that we've tried to sweep under the rug? That seems like a good thing.
Getting the government to regulate your competition isn't 5d chess, it's barely even chess. If you study the birth of any technology in the last 200 years -- rail, electricity, radio, integrated circuits, etc -- you will see the same playbook put to this use. Any good tech executive must be aware of this history.
None of this requires every doomer to be disingenuous or even ill-informed, or even for specific leaders to be lying about their beliefs. It's just that those beliefs that benefit highly capitalized companies get amplified, and the alternatives not so much.
> many many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns
I respect these researchers, but I believe they are doing it to build their own brand, whether consciously or subconsciously. There's no doubt it's working. I'm not in the sub-field, but I have been following neural nets for a long time, and I hadn't heard of either Bengio or Hinton before they started talking to the press about this.
>but I believe they are doing it to build their own brand, whether consciously or subconsciously.
I am always in awe at how easily people craft unfalsifiable worldviews in service to their preconceived opinions.
As someone who has been following deep learning for quite some time as well, Bengio and Hinton would be some of the first people I think of in this field. Just search Google for "godfathers of ai" if you don't believe me.
Both Bengio and Hinton have their names plastered over many of the seminal works in deep learning.
AlexNet, the paper that arguably started it all, came out of Hinton's lab.
https://papers.nips.cc/paper_files/paper/2012/hash/c399862d3...
I really don't think they need to build any more of a brand.
> I really don't think they need to build any more of a brand.
Brand-building is an ongoing process. You'll notice even the most recognized brands on earth, like Apple and Coca-Cola, are still working on building their brand.
> When in doubt take people at their word.
Hanlon's razor works great when applied to your personal relationships, but it falls apart when billions/trillions of dollars are at stake.
Besides the point, but FYI you are misusing the term parsimonious.
It's a reference to the more apt name for Occam's razor. I happen to disagree with GP because governments always want to expand their power. When they do something that results in what they want, the parsimonious explanation is that they did it because they wanted that result.
He is not. There are multiple definitions. The other definition is to explain something using an economical/simple approach.