Comment by 2001zhaozhao
1 day ago
It's interesting just how many opinions Amodei shares with AI 2027's authors despite coming from a pretty different context.
- Prediction of exponential AI research feedback loops (AI coding speeding up AI R&D) which Amodei says is already starting today
- AI being a race between democracies and autocracies with winner-takes-all dynamics, with compute being crucial in this race and global slowdown being infeasible
- Mention of bioweapons and mirror life in particular being a big concern
- The belief that AI takeoff will be fast and broad enough to cause permanent, unrecoverable job losses rather than being a repeat of past disruptions (though this essay seems markedly more pessimistic than AI 2027 about inequality after those losses)
- Powerful AI in the next few years, perhaps as early as 2027
I wonder: did either work influence the other in any way, or is this just a case of "great minds think alike"?
In the AI scene, everyone knows everyone.
It used to be a small group of people who mostly just believed that AI is a very important technology overlooked by most. Now, they're vindicated, the importance of AI is widely understood, and the headcount in the industry is up x100. But those people who were on the ground floor are still there, they all know each other, and many keep in touch. And many who entered the field during the boom were those already on the periphery of the same core group.
Which is how you get various researchers and executives who don't see eye to eye anymore but still agree on many of the fundamentals - or even on things that look like extreme views to an outsider. They may have agreed on them back in 2010.
"AGI is possible, powerful, dangerous" is a fringe view in the public opinion - but in the AI scene, it's the mainstream view. They argue the specifics, not the premise.
It's because few realize how downstream most of this AI industry is of Thiel, Eliezer Yudkowsky and LessWrong.com.
Early "rationalist" community was concerned with AI in this way 20 years ago. Eliezer inspired and introduced the founders of Google DeepMind to Peter Thiel to get their funding. Altman acknowledged how influential Eliezer was by saying how he is most deserving of a Nobel Peace prize when AGI goes well (by lesswrong / "rationalist" discussion prompting OpenAI). Anthropic was a more X-risk concerned fork of OpenAI. Paul Christiano inventor of RLHF was big lesswrong member. AI 2027 is an ex-OpenAI lesswrong contributor and Scott Alexander, a centerpiece of lesswrong / "rationalism". Dario, Anthropic CEO, sister is married to Holden Karnofsky, a centerpiece of effective altruism, itself a branch of lesswrong / "rationalism". The origin of all this was directionally correct, but there was enough power, $, and "it's inevitable" to temporarily blind smart people for long enough.
It is very weird to wonder: what if they're all wrong? Sam Bankman-Fried was clearly just as committed to these ideas, and he crashed his company into the ground.
But if someone said something like this out of context:
"Clearly, the most obvious effect will be to greatly increase economic growth. The pace of advances in scientific research, biomedical innovation, manufacturing, supply chains, the efficiency of the financial system, and much more are almost guaranteed to lead to a much faster rate of economic growth. In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible."
I'd say that they were a snake oil salesman. All of my life experience says that there's no good reason to believe Dario's predictions here, but I'm taken in just as much as everyone else.
(they are all wrong)
A fun property of S-curves is that they look exactly like exponential curves until the midpoint. Projecting exponentials is definitionally absurd because exponential growth is impossible in the long term. It is far more important to study the carrying capacity limits that curtail exponential growth.
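To make that concrete, here's a minimal sketch (plain Python; the parameters are arbitrary and purely illustrative) showing that a logistic curve tracks its matching exponential almost perfectly well before the midpoint, then diverges hard:

    import math

    # Logistic (S-curve) with midpoint T0 and growth rate K,
    # plus the exponential that matches its early behaviour.
    K, T0 = 0.2, 50.0

    def logistic(t):
        return 1.0 / (1.0 + math.exp(-K * (t - T0)))

    def exponential(t):
        return math.exp(K * (t - T0))

    for t in (0, 20, 40, 50, 60):
        s, e = logistic(t), exponential(t)
        print(f"t={t:2d}  logistic={s:.5f}  exp={e:.5f}  ratio={e / s:.2f}")

    # Far from the midpoint (t=0, 20) the ratio is ~1.00: indistinguishable.
    # By t=40 they are already ~13% apart, at the midpoint (t=50) the
    # exponential is 2x the logistic, and beyond it the gap blows up.

The early data alone can't tell you which curve you're on; the carrying capacity is what decides it.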
> I'd say that they were a snake oil salesman.
I don't know if "snake oil" is quite demonstrable yet, but you're not wrong to question this. There are phrases in the article which are so grandiose, they're on my list of "no serious CEO should ever actually say this about their own company's products/industry" (even if they might suspect or hope it). For example:
> "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power"
LLMs can certainly be very useful, and I think that utility will grow, but Dario is making a lot of 'foom-ish' assumptions about things which have not happened and may not happen anytime soon. And even if/when they do happen, the world may have changed and adapted enough that the expected impacts, both positive and negative, are less disruptive than either the accelerationists hope or the doomers fear. Another Sagan quote that's relevant here is "Extraordinary claims require extraordinary evidence."
"In Machines of Loving Grace, I suggest that a 10–20% sustained annual GDP growth rate may be possible.""
Absolutely comical. Do you realise how much that is in absolute terms? These guys are making it up as they go along. Can't believe people buy this nonsense.
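For a rough sense of the absolute scale, a back-of-the-envelope compounding calculation (plain Python; the multipliers are just arithmetic, apply them to whatever GDP baseline you prefer):

    # What 10-20% sustained annual growth compounds to over one or two decades.
    for rate in (0.10, 0.15, 0.20):
        for years in (10, 20):
            print(f"{rate:.0%} for {years} years -> {(1 + rate) ** years:.1f}x")

Even the low end roughly doubles the economy every seven years; the high end is a ~38x economy in twenty years.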
I really recommend “More Everything Forever” by Adam Becker. The book does a really good job laying out the arguments for AI doom, EA, accelerationism, and affiliated movements, including an interview with Yudkowsky, then debunking them. But it really opened my eyes to how… bizarre? eccentric? unbelievable? this whole industry is. I’ve been in tech for over a decade but don’t live in the bay, and some of the stuff these people believe, or at least say they believe, is truly nuts. I don’t know how else to describe it.
> Anthropic was a more X-risk concerned fork of OpenAI.
What is X-risk? I would have guessed "adult", but that doesn't sound right.
Existential
Yeah, it's a pretty blatant cult masquerading as a consensus - they're all singing from the same hymn sheet in lieu of any actual evidence to support their claims. A lot of it is heavily quasi-religious and falls apart under examination from external perspectives.
We're gonna die, but it's not going to be AI that does it: it'll be the oceans boiling and C3 carbon fixation flatlining that does it.