Comment by quesera

2 years ago

> Either way, the claim that there are 0 paths here for a super intelligence is pretty strong so I feel like we can agree on: It'd be tricky, but possible given sufficient cleverness.

Agreed, although I'd modify it a bit:

An SI can trick lots of people (humans have succeeded at this; an SI will surely do better), and the remaining untricked people, even if a healthy 50% of the population, will not be enough to maintain social stability.

That lack of social stability is enough to blow up society. I don't think the SI survives it either, though.

If we argue that the SI has motives and a survival instinct, maybe this becomes self-moderating, like a virus that cannot kill its host too quickly?

Given your initial assumptions, that self-moderating end state makes sense.

I feel like we still have a disconnect in our definitions of a superintelligence.

From my perspective, this thing is insanely smart. We can hold ~4 things in working memory (maybe von Neumann could hold 6-8); I'm thinking this thing can hold on the order of millions of things in its working memory for tasks requiring fluid intelligence.

With that sort of gap, I feel like at minimum the ASI would be able to trick the cleverest human into doing anything; but more realistically, humans might appear entirely closed-form to it, where getting a human to do anything is a mechanistic exercise rather than a social game.

The reason my earlier example was concrete pillars with weird wires is that, with an intelligence gap that big, the ASI will quickly be doing things that don't make sense to us while exercising a strong command over the world around it.