Comment by catmanjan

7 months ago

How can anyone still believe the AGI scam

If you think the possibility of AGI within 7-10 years is a scam then you aren't paying attention to trends.

  • I wouldn't call 7-10 years a scam, but I would call it low odds. It is pretty hard to make accurate predictions over a 10-year window. But I definitely think the 2027 and 2030 predictions are a scam. The majority of researchers think it is further away than 10 years, if you look at surveys from the AI conferences rather than predictions in the news.

    • The thing is, AI researchers have continually underestimated the pace of AI progress

      https://80000hours.org/2025/03/when-do-experts-expect-agi-to...

      >One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.

      >In 2022, they thought AI wouldn’t be able to write simple Python code until around 2027.

      >In 2023, they reduced that to 2025, but AI could maybe already meet that condition in 2023 (and definitely by 2024).

      >Most of their other estimates declined significantly between 2022 and 2023.

      >The median estimate for achieving ‘high-level machine intelligence’ shortened by 13 years.

      Basically every median timeline estimate has shrunk like clockwork every year. Back in 2021 people thought it wouldn't be until around 2040 that AI models could look at a photo and give a human-level textual description of its contents. I think it is reasonable to expect that this rate of "prediction error" won't change significantly, since it has been on a steady downward trend over the past four years, and if it continues, AGI around 2028-2030 is a median estimate.


I can't believe this is so unpopular here. Maybe it's the tone, but come on, how do people rationally extrapolate from LLMs or even large multimodal generative models to "general intelligence"? Sure, they might do a better job than the average person on a range of tasks, but they're always prone to funny failures pretty much by design (train vs test distribution mismatch). They might combine data in interesting ways you hadn't thought of; that doesn't mean you can actually rely on them in the way you do on a truly intelligent human.

  • I think it’s selection bias - a Y Combinator forum is going to have a larger percentage of techno-utopianists than general society, and many people seeking financial success by connecting with a trend at the right moment. It seems obvious to me that LLMs are interesting but not revolutionary, and equally obvious that they aren’t heading for any kind of “general intelligence”. They’re good at pretending, and only good at that to the extent that they can mine what has already been expressed.

    I suppose some are genuine materialists who think that, ultimately, that is all we humans are: a reconstitution of what has come before. I think we’re much more complicated than that.

    LLMs are like the myth of Narcissus: they hypnotically reflect our own humanity back at us.