Researchers at top AI labs don't consider EY to be a kook, even if they don't necessarily agree with him. EY concepts/terminology appear in Anthropic safety papers. Geoffrey Hinton takes him quite seriously and mentions him in his interviews.
And people working on the metaverse endlessly referenced Ready Player One despite it being ludicrous fiction.
Yudkowsky is obviously read a lot by some people working in AI. That doesn't make his ideas prescient.
Researchers at top AI labs also have an incentive to say whatever shit it takes to get their lab funded, reason be damned.
Just because some researchers are infected with this idiocy that EY propagates does not mean that it is legit.
Maybe they should pay more attention to real problems like the sycophantic nature of current LLMs causing psychosis in people and worry less about theoretical AGI.
They are worried about both risks.
Who are you to say? Why do you have so little regard for everyone in the field, both pro- and anti-AI development? Do you think they're colluding to deceive us?
Anthropic is the AI doomer / safetyism lab, and Hinton is one of the patron saints of 'rationalist' AI doomerism.
AI doomerism is psychologically attractive to "people with autistic cognitive traits, including dichotomous (black-and-white) thinking, intolerance of uncertainty, and a tendency toward catastrophizing". They are Pascal's-mugging themselves, to use one of their own terms ironically. It's fundamentally a cognitive distortion.
I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
"What if AI doom is all fear-mongering, and we create AI less prone to make up dangerous stuff or mistake buggy goals for real ones" (which is what alignment is) "for nothing?"
Even if Yudkowsky is autistic, you're muddling the condition. Autistic people have a *practical* intolerance of uncertainty in the moment (everything unexpected from a noise to a missed turn can be a jump-scare in their day-to-day activities), but often they're absolutely fine with intellectual uncertainty, unconventional ideas, abstract ambiguity, nonconformity, etc. Indeed, one of Yudkowsky's main things is Bayesianism, i.e. being precise about uncertainty.
Yudkowsky's reported P(doom) is somewhere around 90%, which is very much in the realm of "we might eventually be able to figure this out, *but we're not even close to ready so for the love of everything slow down so we can figure this all out*"; the book title comes from a long tradition of authors noticing you need to beat readers over the head with your point for them to notice it.
Anthropic (like OpenAI, at least) appears to think they can solve the problems Yudkowsky has raised. They're a lot more optimistic than him, but they take these problems seriously.
For his work on AI, Hinton got a Nobel Prize in Physics, a Turing Award, the inaugural Rumelhart Prize, a Princess of Asturias Award, a VinFuture Prize, and a Queen Elizabeth Prize for Engineering. Calling him a "patron saint" of "doomerism" is like calling Paul Krugman (Nobel laureate in Economics) a patron saint of "Trump Derangement Syndrome" on the basis of his YouTube channel: a smart person's considered opinions are worth listening to even if you haven't got time for the details, because you can be sure someone else has considered the details and will absolutely be responding to even an i missing its dot.
A Pascal's mugging would be more like S-risk (S stands for suffering) than doom risk: https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
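For anyone unfamiliar with the term: a Pascal's mugging is when a naive expected-value calculation gets hijacked by a tiny probability of an enormous payoff. A minimal sketch of the structure, with numbers invented purely for illustration (they're nobody's actual estimates):

    # Pascal's mugging in one expected-value calculation.
    # All numbers below are made up for illustration.
    p = 1e-12        # probability the mugger's extraordinary claim is true
    payoff = 1e20    # utility promised if the claim is true
    cost = 10.0      # utility you're asked to hand over now

    # A naive expected-value maximizer accepts: the huge payoff
    # swamps the tiny probability.
    print(p * payoff - cost)  # 99999990.0 > 0, so "take the deal"

The term only bites when p is tiny; at a reported P(doom) near 90% it's just an ordinary expected-value argument, which is part of why S-risk is the closer analogy.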
EY = Eliezer Yudkowsky
Appreciate that you made an account just for this. I was well aware of Yudkowsky, but even so I couldn't parse the "EY" initialism.
That book was written by him, so I figured the acronym was obvious. My bad!
Thank you. Like most of the world, I would assume "EY" refers to Ernst & Young, the multinational Big Four firm whose website is ey.com, and which I'm sure has opinions on AI, but nowhere near enough to be classed as expertise.