Comment by quesera

2 years ago

I agree that the definitions are slippery and evolving.

But I cannot make the leap from "super intelligent" to "has access to all the levers of social and physical systems control" without the explicit, costly, and ongoing effort of humans.

I also struggle with the conflation of "intelligent" and "has free will". Intelligent humans will argue that not even humans have free will. But assuming we do, when our free will contradicts the social structure, society reacts.

I see no reason to believe that the emergent properties of a highly complex system will include free will. Or curiosity, or a sense of humor. Or a soul. Or goals, or a concept of pleasure or pain, etc. And I think it's possible to be "intelligent" and even "sentient" (whatever that means) without those traits.

Honestly -- and I'm not making an accusation here(!) -- this fear of AI reminds me of the fear of replacement / status loss. We humans are at the top of the food chain on all scales we can measure, and we don't want to be replaced, or subjugated in the way that we presently subjugate other species.

This is a reasonable fear! Humans are often difficult to share a planet with. But I don't think it survives rational investigation.

If I'm wrong, I'll be very, very wrong. I don't think it matters, though; there is no getting off this train, and maybe there never was. There's a solid argument for being in the engine vs. the caboose.

Totally fair points.

> I cannot make the leap from "super intelligent" to "has access to all the levers of social and physical systems control" without the explicit, costly, and ongoing, effort of humans.

Yeah, this is a fair point! The super intellect may just convince humans, which seems feasible. Either way, the claim that there are 0 paths here for a super intelligence is pretty strong, so I feel like we can agree on this: it'd be tricky, but possible given sufficient cleverness.

> I see no reason to believe that the emergent properties of a highly complex system will include free will.

I really do think that in the next couple of years we will be explicitly implementing agentic architectures in the end-to-end training of frontier models. If that is the case, the result would obviously have something analogous to goals.

I don't really care about its phenomenal quality or anything; that's not relevant to my original point.

  • > Either way, the claim that there are 0 paths here for a super intelligence is pretty strong, so I feel like we can agree on this: it'd be tricky, but possible given sufficient cleverness.

    Agreed, although I'd modify it a bit:

    An SI can trick lots of people (humans have succeeded at this; surely an SI will be better), and the remaining untricked people, even if a healthy 50% of the population, will not be enough to maintain social stability.

    The lack of social stability is enough to blow up society. I don't think the SI survives that either, though.

    If we argue that the SI has a motive and a survival instinct, maybe this becomes self-moderating? Like a virus that cannot afford to kill its host too quickly?

    • Given your initial assumptions, that self-moderating end state makes sense.

      I feel like we still have a disconnect on our definition of a super intelligence.

      From my perspective this thing is insanely smart. We can hold ~4 things in our working memory (maybe Von Neumann could hold like 6-8); I'm thinking this thing can hold on the order of millions of things within its working memory for tasks requiring fluid intelligence.

      With that sort of gap, I feel like at minimum the ASI would be able to trick the cleverest human into doing anything; but more likely, humans might appear entirely closed-form to it, where getting a human to do something is more of a mechanistic exercise than a social game.

      The reason my earlier example involved concrete pillars with weird wires is that, with an intelligence gap that big, the ASI will quickly be doing things that don't make sense to us, exercising a strong command over the world around it.