Comment by reducesuffering
18 hours ago
It's because few realize how downstream most of this AI industry is of Thiel, Eliezer Yudkowsky and LessWrong.com.
Early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired and introduced the founders of Google DeepMind to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying how he is most deserving of a Nobel Peace prize when AGI goes well (by lesswrong / "rationalist" discussion prompting OpenAI). Anthropic was a more X-risk concerned fork of OpenAI. Paul Christiano inventor of RLHF was big lesswrong member. AI 2027 is an ex-OpenAI lesswrong contributor and Scott Alexander, a centerpiece of lesswrong / "rationalism". Dario, Anthropic CEO, sister is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of lesswrong / "rationalism". The origin of all this was directionally correct, but there was enough power, $, and "it's inevitable" to temporarily blind smart people for long enough.
It is very weird to wonder: what if they're all wrong? Sam Bankman-Fried was clearly just as committed to these ideas, and he crashed his company into the ground.
But clearly, if someone said something like this out of context:
"Clearly, the most obvious effect will be to greatly increase economic growth. The pace of advances in scientific research, biomedical innovation, manufacturing, supply chains, the efficiency of the financial system, and much more are almost guaranteed to lead to a much faster rate of economic growth. In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible."
I'd say that they were a snake oil salesman. All of my life experience says that there's no good reason to believe Dario's predictions here, but I'm taken in just as much as everyone else.
(they are all wrong)
A fun property of S-curves is that they look exactly like exponential curves until the midpoint. Projecting exponentials is definitionally absurd because exponential growth is impossible in the long term. It is far more important to study the carrying capacity limits that curtail exponential growth.
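A minimal numeric sketch of that point, with made-up parameters (K, r, x0 are illustrative, not drawn from any real data): a logistic curve tracks an exponential with the same initial growth rate almost perfectly early on, then flattens toward its carrying capacity.

```python
import numpy as np

# Made-up parameters for illustration only.
K = 1000.0   # carrying capacity (the limit that eventually curtails growth)
r = 0.5      # growth rate
x0 = 1.0     # initial value
A = (K - x0) / x0

def exponential(t):
    return x0 * np.exp(r * t)

def logistic(t):
    return K / (1 + A * np.exp(-r * t))

t_mid = np.log(A) / r  # time at which the logistic reaches K/2 (its midpoint)
for label, t in [("early (t=2)", 2.0), ("midpoint", t_mid), ("late (t=20)", 20.0)]:
    print(f"{label:>12}: exp={exponential(t):9.1f}  logistic={logistic(t):7.1f}")
```

Early on the two are essentially indistinguishable; by the midpoint the logistic is already half the exponential, and afterwards they diverge completely.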
> I'd say that they were a snake oil salesman.
I don't know if "snake oil" is quite demonstrable yet, but you're not wrong to question this. There are phrases in the article which are so grandiose, they're on my list of "no serious CEO should ever actually say this about their own company's products/industry" (even if they might suspect or hope it). For example:
> "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power"
LLMs can certainly be very useful, and I think that utility will grow, but Dario's making a lot of 'foom-ish' assumptions about things which have not happened and may not happen anytime soon. And even if/when they do happen, the world may have changed and adapted enough that the expected impacts, both positive and negative, are less disruptive than either the accelerationists hope or the doomers fear. Another Sagan quote that's relevant here is "Extraordinary claims require extraordinary evidence."
"In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible.""
Absolutely comical. Do you realise how much that is in absolute terms? These guys are making it up as they go along. Can't believe people buy this nonsense.
Why not? If they increase white-collar productivity by 25%, and that accounts for 50% of the economy, you'd get such a result.
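A quick sketch of that back-of-envelope arithmetic, using the parent comment's illustrative numbers (both figures are assumptions, not measurements):

```python
# Back-of-envelope version of the parent comment's arithmetic (illustrative numbers only).
productivity_gain = 0.25     # assumed 25% boost to white-collar productivity
white_collar_share = 0.50    # assumed share of the economy that is white-collar work

gdp_boost = productivity_gain * white_collar_share
print(f"Increase in total output: {gdp_boost:.1%}")  # -> 12.5%
# Whether that reads as a 10-20% *annual* growth rate depends on how quickly,
# and how repeatedly, such gains are realized.
```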
> Cant believe people buy this nonsense.
I somewhat don't disagree, and yet. It feels like more people in the world buy into it than don't? To a large degree?
I mean, once we're able to run and operate multinational corporations off-world, GDP becomes something very different indeed.
I really recommend “More Everything Forever” by Adam Becker. The book does a really good job laying out the arguments for AI doom, EA, accelerationism, and affiliated movements, including an interview with Yudkowsky, then debunking them. But it really opened my eyes to how… bizarre? eccentric? unbelievable? this whole industry is. I’ve been in tech for over a decade but don’t live in the bay, and some of the stuff these people believe, or at least say they believe, is truly nuts. I don’t know how else to describe it.
Yeah, it's a pretty blatant cult masquerading as a consensus - they're all singing from the same hymn sheet in the absence of any actual evidence to support their claims. A lot of it is heavily quasi-religious and falls apart under examination from external perspectives.
We're gonna die, but it's not going to be AI that does it: it'll be the oceans boiling and C3 carbon fixation flatlining.
> Anthropic was a more X-risk concerned fork of OpenAI.
What is X-risk? I would have inductively guessed "adult", but that doesn't sound right.
Existential