
Comment by loandbehold

20 hours ago

Researchers at top AI labs don't consider EY to be a kook, even though they may not necessarily agree with him. EY's concepts and terminology appear in Anthropic safety papers. Geoffrey Hinton takes him quite seriously and mentions him in his interviews.

Anthropic is the AI doomer / safetyism lab, and Hinton is one of the patron saints of 'rationalist' AI doomerism.

AI doomerism is psychologically attractive to "people with autistic cognitive traits, including dichotomous (black-and-white) thinking, intolerance of uncertainty, and a tendency toward catastrophizing". They are Pascal's-mugging themselves, to ironically use one of their own terms. It's fundamentally a cognitive distortion.

  • I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...

    "What if AI doom is all fear-mongering, and we create AI less prone to make up dangerous stuff or mistake buggy goals for real ones" (which is what alignment is) "for nothing?"

    Even if Yudkowsky is autistic, you're muddling the condition. Autistic people have a *practical* intolerance of uncertainty in the moment (everything unexpected from a noise to a missed turn can be a jump-scare in their day-to-day activities), but often they're absolutely fine with intellectual uncertainty, unconventional ideas, abstract ambiguity, nonconformity, etc. Indeed, one of Yudkowsky's main things is Bayesianism, i.e. being precise about uncertainty.

    Yudkowsky's reported P(doom) is somewhere around 90%, which is very much in the realm of "we might eventually be able to figure this out, *but we're not even close to ready so for the love of everything slow down so we can figure this all out*"; the book title comes from a long tradition of authors noticing you need to beat readers over the head with your point for them to notice it.

    Anthropic (like at least OpenAI) appears to think it can solve the problems that Yudkowsky has identified. They're a lot more optimistic than he is, but they take these problems seriously.

    For his work on AI, Hinton got a Nobel Prize in Physics, a Turing Award, the inaugural Rumelhart Prize, a Princess of Asturias Award, a VinFuture Prize, and a Queen Elizabeth Prize for Engineering. Calling him a "patron saint" of "doomerism" is like calling Paul Krugman (Nobel laureate in Economics) a patron saint of "Trump Derangement Syndrome" on the basis of his YouTube channel: a smart person's considered opinions are worth listening to even if you haven't got time for the details, because you can be sure someone else has considered the details and will absolutely respond to even an undotted i.

    A Pascal's mugging would be more like S-risk (S stands for suffering) than doom risk: https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering

    • Much like a lot of LLM usage burns tokens so that mediocre people can hallucinate that they're doing something brilliant, Yudkowskyism is just a lot of empty verbiage for the purpose of building a sex cult around a plump gnome. Reusing his nonsensical and poorly defined terms but failing to get the benefit of the sex cult really misses the point of the entire exercise.

    • The problem is that effort spent to reduce the "risk" of creating an evil god who tortures us all for the rest of time doesn't actually produce outcomes that reduce the risk of things like widespread job loss or hyperaggregation of influence and money.

      "Oh we'll at least get some side benefit" is not actually what is coming out of the endlessly circular forums talking about the apocalypse.


    • > I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...

      The people who've made the biggest contribution to creating a better world over the last 50 years have been the Chinese; powered largely by coal and petroleum. And in one of the most ironic results in the 21st century, they're now the leaders in solar panel production on the back of the largest investment in fossil fuel energy in global history.

      The comic ran into the same problem as the climate change movement in general: it proposed ideas that generally made people worse off, and, measured in terms of CO2 emissions, achieved nothing except pushing wealth creation to Asia. Which, in fairness, is probably appreciated by the Asians.


Just because some researchers are infected with this idiocy that EY propagates does not mean that it is legit.

Maybe they should pay more attention to real problems, like the sycophantic nature of current LLMs causing psychosis in people, and worry less about theoretical AGI.

  • Who are you to say? Why do you have so little regard for everyone in the field, both pro- and anti-AI development? Do you think they're colluding to deceive us?

    • There are billions, even trillions, of dollars on the line; why not start with the assumption that they have every incentive to deceive, even if unintentionally (i.e., deceiving themselves)?

And people working on the metaverse endlessly referenced Ready Player One despite it being ludicrous fiction.

Yudkowsky is obviously widely read among people working in AI. That doesn't make his ideas prescient.

  • Ready Player One was completely misread and misunderstood by people who thought they could make a lot of money with VR.

    It wasn't a homage to 70s/80s/90s nerd culture and a hopeful glimpse of what VR tech could be.

    It was a warning for people to get off their fucking phones and to work together at improving the real world, versus ignoring it and living out unrealistic fantasies inside a digital ecosystem that makes us all a bit less human.

    The whole point of the book is that VR and addictive tech is a red herring. It was deliberately misunderstood by Zuck and his ilk.

Researchers at top AI labs also have an incentive to say whatever shit it takes to get their lab funded, reason be damned.