Comment by novok
7 hours ago
Anthropic is the AI doomer / safetyism lab, and Hinton is one of the patron saints of 'rationalist' AI doomerism.
AI doomerism is psychologically attractive to "people with autistic cognitive traits, including dichotomous (black-and-white) thinking, intolerance of uncertainty, and a tendency toward catastrophizing". They are Pascal's-mugging themselves, to ironically borrow one of their own terms. It's fundamentally a cognitive distortion.
I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
"What if AI doom is all fear-mongering, and we create AI less prone to make up dangerous stuff or mistake buggy goals for real ones" (which is what alignment is) "for nothing?"
Even if Yudkowsky is autistic, you're muddling the condition. Autistic people have a *practical* intolerance of uncertainty in the moment (everything unexpected, from a noise to a missed turn, can be a jump-scare in their day-to-day activities), but they're often absolutely fine with intellectual uncertainty, unconventional ideas, abstract ambiguity, nonconformity, etc. Indeed, one of Yudkowsky's main things is Bayesianism, i.e. being precise about uncertainty.
Yudkowsky's reported P(doom) is somewhere around 90%, which is very much in the realm of "we might eventually be able to figure this out, *but we're not even close to ready so for the love of everything slow down so we can figure this all out*"; the book title comes from a long tradition of authors noticing you need to beat readers over the head with your point for them to notice it.
Anthropic (and at least OpenAI as well) appears to think it can solve the problems that Yudkowsky has identified. They're a lot more optimistic than he is, but they take these problems seriously.
For his work on AI, Hinton got a Nobel Prize in Physics, a Turing Award, the inaugural Rumelhart Prize, a Princess of Asturias Award, a VinFuture Prize, and a Queen Elizabeth Prize for Engineering. Calling him a "patron saint" of "doomerism" is like calling Paul Krugman (Nobel laureate in Economics) a patron saint of "Trump Derangement Syndrome" on the basis of what he says on his YouTube channel. A smart person's considered opinions are worth listening to even if you haven't got time for the details, because you can be sure someone else has considered the details and will absolutely be responding to even an i missing its dot.
A Pascal's mugging would be more like S-risk (the S stands for suffering) than doom risk: https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
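For anyone unfamiliar with the term: a Pascal's mugging is when a naive expected-utility calculation gets dominated by a tiny probability attached to an astronomically large payoff. A toy sketch (all numbers purely illustrative, not anyone's actual credences):

```python
# Pascal's mugging, naively: "give me $5 or I'll cause astronomical harm."
# However small your credence in the threat, a large enough stake
# makes the expected-value arithmetic say "pay up".
p_threat_real = 1e-12        # hypothetical credence the mugger is honest
harm_if_real = -1e18         # hypothetical astronomical disutility
cost_of_paying = -5          # hand over $5

ev_refuse = p_threat_real * harm_if_real   # = -1e6
ev_pay = cost_of_paying                    # = -5

print(ev_refuse < ev_pay)  # prints True: naive EV says pay the mugger
```

The standard point is that the mugger can inflate `harm_if_real` faster than you can shrink `p_threat_real`, which is why the scenario is treated as a bug in naive expected-value reasoning rather than a sound argument.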
> I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
The people who've made the biggest contribution to creating a better world over the last 50 years have been the Chinese; powered largely by coal and petroleum. And in one of the most ironic results in the 21st century, they're now the leaders in solar panel production on the back of the largest investment in fossil fuel energy in global history.
The comic ran into the same problem as the climate change movement in general: it proposed ideas that generally made people worse off, and, measured in terms of CO2 emissions, achieved nothing except pushing wealth creation to Asia. Which, in fairness, is probably appreciated by the Asians.
That cartoon was drawn at the very end of 2009.
BYD had released the first plug-in hybrid the year before.
The Beijing Olympics had made air pollution a hot topic in China in 2007-8.
Wind power had accelerated after their 2005 Renewable Energy Law.
Solar panel production rose around this time, taking over the market from European manufacturers when the Financial Crisis hit and they pulled back investment.
So China at that time was doing all the things on the cartoon's presentation list, and has benefitted greatly from them.
Something largely forgotten now is that it used to be routine to see pictures of smoggy Chinese and Asian cities; this was a problem for them that they solved. I can't help thinking we can't get this kind of preventative action on any large scale: we need to have severe issues first, and that's not accounting for longer-term/cumulative effects.
Mm, there is that.
The unfortunate comparable here is that all the people who care about making sure their AI is safe, regardless of what they mean by that, are beaten to the market by the people who don't.
The problem is that effort spent to reduce the "risk" of creating an evil god who tortures us all for the rest of time doesn't actually produce outcomes that reduce the risk of things like widespread job loss or the hyperaggregation of influence and money.
"Oh we'll at least get some side benefit" is not actually what is coming out of the endlessly circular forums talking about the apocalypse.