Comment by falcor84

2 days ago

> Any talk of "AGI" is, as always, ridiculous.

How did you arrive at "ridiculous"? What we're seeing here is incredible progress over what we had a year ago. Even ARC-AGI-2 is now at over 50%. Given that this sort of process is also being applied to AI development itself, it's really not clear to me that humans would be a valuable component in knowledge work for much longer.

It requires constant feedback, critical evaluation, and checks. This is not AGI, it's cognitive augmentation. One that is collective, one that will accelerate human abilities far beyond what the academic establishment is currently capable of, but that is still fundamentally organic. I don't see a problem with this--AGI advocates treat machine intelligence like some sort of God that will smite non-believers and reward the faithful. This is what we tell children so that they won't shit their beds at night, otherwise they get a spanking. The real world is not composed of rewards and punishments.

  • It does seem that the Venn diagram of "Roko's basilisk" believers and "AGI is coming within our lifetimes" believers is nearly a circle. Would be nice if there were some less... religious... arguments for AGI's imminence.

    • I think the “Roko’s Basilisk” thing is mostly a way for readers of Nick Land to explain part of his philosophical perspective without the need for, say, an actual background in philosophy. But the simplicity reduces his nuanced thought into a call for a sheeplike herd—they don’t even need a shepherd! Or perhaps there is one, but he is always yet to come…best to stay in line anyway, he might be just around the corner.

  • > It requires constant feedback, critical evaluation, and checks. This is not AGI, it's cognitive augmentation.

    To me that doesn't sound qualitatively different from a PhD student. Are they just cognitive augmentation for their mentor?

    In any case, I wasn't trying to argue that this system as-is is AGI, just that it's no longer "ridiculous", and that this looks to me like a herald of AGI, as the portion being done by humans gets smaller and smaller.

    • People would say the same thing about a calculator, or computation in general. Just like any machine it must be constructed purposefully to be useful, and once we require something which exceeds that purpose it must be constructed once again. Only time will tell the limits of human intelligence, now that AI is integrating into society and industry.

  • >AGI advocates treat machine intelligence like some sort of God that will smite non-believers and reward the faithful.

    >The real world is not composed of rewards and punishments.

    Most "AGI advocates" say that AGI is coming, sooner rather than later, and that it will fundamentally reshape our world. On its own that's purely descriptive. In my experience, most of the alleged "smiting" comes from the skeptics simply being wrong about this. Rarely is there talk of explicit rewards and punishments.

> it's really not clear to me that humans would be a valuable component in knowledge work for much longer.

To me, this sounds like when we first went to the moon, and people were sure we'd be on Mars by the end of the '80s.

> Even ARC-AGI-2 is now at over 50%.

Any measure of "are we close to AGI" is as scientifically meaningful as "are we close to a warp drive" because all anyone has to go on at this point is pure speculation. In my opinion, we should all strive to be better scientists and think more carefully about what an observation is supposed to mean before we tout it as evidence. Despite the name, there is no evidence that ARC-AGI tests for AGI.

  • > To me, this sounds like when we first went to the moon, and people were sure we'd be on Mars by the end of the '80s.

    Unlike space colonisation, there are immediate economic rewards from producing even modest improvements in AI models. As such, we should expect much faster progress in AI than space colonisation.

    But it could still turn out the same way, for all we know. I just think that's unlikely.

    • The minerals in the asteroid belt are estimated to be worth in the $100s of quintillions. I would say that’s a decent economic incentive to develop space exploration (not necessarily colonization, but it may make it easier).

You either have a case of human-augmented AI here or AI-augmented human. Neither by itself would have made the step.

Excellent! Humans can then spend their time on other activities, rather than get bogged down in the mundane.

  • Other activities such as the sublime pursuit of truth and beauty . . . aka mathematics ;-)

  • Not going to happen as long as the society we live in has this big of a hard-on for capitalism, and working yourself to the bone is seen as a virtue. Every time there’s a productivity boost, the newly gained free time is immediately consumed by more work. It’s a sick version of Parkinson’s law where work is infinite.

    https://en.wikipedia.org/wiki/Parkinson%27s_law

“Much longer” is doing a lot of heavy lifting there.

  • Let me put it like this: I expect AI to replace much of human wage labor over the next 20 years and push many of us, and myself almost certainly included, into premature retirement. I'm personally concerned that in a few years, I'll find my software proficiency to be as useful as my chess proficiency today is useful to Stockfish. I am afraid of a massive social upheaval both for myself and my family, and for society at large.

    • There are other bounds at play here that are often not talked about.

      AI runs on computers. Consider the undecidability behind Rice's theorem: whether a non-trivial semantic property holds of a program (say, that its compiled code is error-free) cannot be decided in general. Even an AI can't guarantee its compiled code is error-free. Not because it couldn't write sufficient code to solve a problem, but because the code it writes is bounded by these externalities. Undecidability in general makes the dream of generative AI considerably more challenging than how it's being 'sold'.
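      A toy sketch (mine, not the commenter's) of the diagonalization underlying that claim: given any program that claims to perfectly decide "is this error-free?", we can construct a program built to do the opposite of whatever the checker predicts, so the checker must misjudge it. The names `paradox` and `naive_checker` are illustrative stand-ins, not real tools.

```python
# Toy diagonalization behind Rice's theorem (illustrative, not a proof
# script): from any claimed perfect "error-free?" checker we can build
# a program the checker necessarily misjudges.

def paradox(checker):
    """Build a program that does the opposite of what `checker` predicts."""
    def gadget():
        if checker(gadget):
            # Checker claims we're error-free, so we error out.
            raise RuntimeError("contradiction")
        # Checker claims we're erroneous, so we return cleanly.
    return gadget

def naive_checker(prog):
    """A stand-in 'perfect' checker that calls everything error-free."""
    return True

g = paradox(naive_checker)
try:
    g()
    raised = False
except RuntimeError:
    raised = True

# naive_checker claims g is error-free, yet g raises: the verdict is wrong.
print(naive_checker(g), raised)  # prints: True True
```

      Any concrete checker you substitute for `naive_checker` fails the same way on its own gadget, which is why no general error-freeness checker can exist.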

    • Here “much of” is doing the heavy lifting. Are you willing to commit to a percentage or a range?

      I work at an insurance company and I can’t see AI replacing even 10% of the employees here. Too much of what we do is locked up in decades-old proprietary databases that cannot be replaced for legal reasons. We still rely on paper mail for a huge amount of communication with policyholders. The decisions we make on a daily basis can’t be trusted to AI for legal reasons. If AI caused even a 1% increase in false rejections of claims it would be an enormous liability issue.


    • > massive social upheaval

      You don’t even need AGI for that though, just unbounded investor enthusiasm and a regulatory environment that favors AI providers at the expense of everyone else.

      My point is there are a number of things that can cause large-scale unemployment in the next 20 years, and it doesn't make sense to worry about AGI specifically while ignoring all of the other equally likely root causes (like a western descent into oligarchy and crony capitalism, just to name one).

  • As is "even if it was in my area of specialty". I would not be able to do this proof, I can tell you that much.