Comment by dreambuffer
FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.
I don't think you can apply "lindy" to trends as if trends were static objects, but that's another conversation.
So this is not quite right: Alexander contributed to the report, but his personal estimate is more like the mid-2030s [1]. Freddie reads this as him backing down from the original statement, but he actually said it at the time the report was published, in a paragraph directly below the very quote that Freddie claims ties him to 2027:
> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out. [2]
I don't think this changes your observation that he is "personally invested" (i.e. believes this trendline will continue), but I'm pretty sure that when AGI doesn't appear in 2027, many people will believe this invalidates the arguments being made here (or in the report). The report was actually intended to give a feel for what a near-future "disaster" AGI scenario would look like, and settled on a date to give it concrete immediacy. The collective review that produced that date as possible, but not inevitable, is still ongoing (they originally pushed their best estimate out a bit further, but now, judging by the goals that are being hit, they think their scenario was a little too conservative). [3]
[1] https://freddiedeboer.substack.com/p/im-offering-scott-alexa...
[2] https://www.astralcodexten.com/p/introducing-ai-2027
[3] https://blog.aifutures.org/p/grading-ai-2027s-2025-predictio...
Mind you, he is only personally invested insofar as he's staked his reputation on it. Throughout his writing he makes the same point over and over: he desperately wants AI to slow down, he advocates for policies that would slow it down, and most likely nothing would bring him greater peace than seeing a sigmoid curve appear.
How convenient; when AGI doesn't appear in 1-2 years, his reputation stays pristine because he slowed it down.
To make that argument you'd want to show some causal link, which so far we haven't seen.
This is incorrect as written. The author contributed writing to AI 2027 but distanced himself from the underlying model. That model had 2027 as the modal year of AGI, not the median or mean. The authors of that model revised it to a later date shortly afterward and (if I recall correctly) have since done so again.
It is broadly true that Scott believes that AGI will come in the near future and from LLMs, although his reputation runs a ways deeper than that.
OK, but you can just look at the METR curve. Mythos saturated the 50% time horizon. The 80% horizon is now at 3 hours. The rate of progress is accelerating, not slowing down. There's no indication yet that this is a sigmoid!
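For what it's worth, "no indication yet" is something you can check quantitatively rather than by eyeball: fit both an exponential and a logistic (sigmoid) curve to the time-horizon series and compare the fits. A minimal sketch below, using made-up placeholder numbers rather than METR's actual data points; the takeaway is that early in a sigmoid the two fits are nearly indistinguishable, which cuts both ways.

```python
# Sketch: exponential vs. logistic fit on a time-horizon series.
# The data below are hypothetical placeholders, NOT METR's numbers;
# substitute the real series before drawing any conclusions.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 6, 12, 18, 24, 30], dtype=float)   # months since a reference date
h = np.array([4, 9, 18, 40, 85, 180], dtype=float)  # task time horizon, minutes

def exponential(t, a, k):
    return a * np.exp(k * t)

def logistic(t, cap, k, t0):
    return cap / (1 + np.exp(-k * (t - t0)))

for name, f, p0 in [("exponential", exponential, (4.0, 0.1)),
                    ("logistic", logistic, (400.0, 0.2, 24.0))]:
    params, _ = curve_fit(f, t, h, p0=p0, maxfev=10000)
    resid = h - f(t, *params)
    # Crude AIC: n*ln(RSS/n) + 2k. Before an inflection point both models
    # fit almost equally well, so "looks exponential so far" is weak
    # evidence against an eventual sigmoid.
    aic = len(t) * np.log(np.mean(resid ** 2)) + 2 * len(params)
    print(f"{name}: params={np.round(params, 3)}, AIC={aic:.1f}")
```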
AGI has become such a meaningless, nondescript term that arguing about when or how it arrives has become pointless. Even OpenAI caved and removed the AGI clause from their contract with Microsoft because they weren't fully sure we aren't there yet. The original ARC-AGI was hailed as proof that AGI is not here yet, but now that ARC 1 and 2 have been saturated, no one wants to consider that perhaps we crossed the point where average humans are getting left behind. Frontier models are primarily limited by context and modality at this point, not by intelligence.
> FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.
I mean, that's called "having an opinion".
He co-authored a report, which is something more than an opinion. It may be used to inspire policy. There should be greater reputational consequences for publishing something you spent a few months studying and writing about alongside several experts. Just my opinion.
I don't understand what you're trying to imply here. Yes, he co-authored a report. What is supposed to be dangerous or suspicious about this? What does your statement about "reputational consequences" have to do with your original comment, which implies that this somehow indicates a bias on his part?
It seems to me like you're trying to imply that writing things to convince people of what you believe is somehow nefarious? It isn't! It's what we're all doing here right now! Putting it in a format that certain people will take more seriously doesn't make it nefarious either. I am quite confused by your point of view here.
And now he's publishing more information about that same opinion he still has. How horrible.
He wrote articles arguing that pro-AI people are dismissive of risks, even suggesting they are intellectually lazy. He's taken a side. If he's wrong, I would hope he owns up to it.
> He's taken a side.
Yes, that's called "having an opinion". Typically people writing argumentative pieces are doing so because they have a belief about the matter. I'm not sure what exactly you expect here.
> if he's wrong I would hope he owns up to it
I think Scott Alexander is pretty good about that.
> He wrote articles arguing that pro-AI people are dismissive of risks or even suggesting they are intellectually lazy
I mean... this is 2026, right? You're not writing that comment from 2024 or something?
We already see massive problems: photos are simply not believable anymore, nor is audio, and not even video, with many people falling for AI-faked clips from the Gaza war, for example. And since then these tools have become MASSIVELY more powerful. Disinformation is essentially free, while the cost of truth has stayed static, meaning the "buying power" of truth has collapsed and is falling faster and faster.
Anyone who dismissed AI risks a few years ago IS ALREADY PROVEN WRONG.
He only has 1.5 more months. If he's wrong, he needs to own it. Same for Eliezer Yudkowsky. But these people have too much riding on their brands. No one has the courage to fess up to being wrong. Given how many podcasts he and others have been on professing this belief, it will be hard to just pretend otherwise.