Comment by 015a

2 years ago

Yeah, I think that 2nd point is really critical; I'm pretty plugged into this space, and it's really not obvious to me what "safety" even means in the context of a superintelligence. Is it just, e.g., "filtering the AI so it never tells users how to build nukes"? Is it more of a firewall to stop a sentient process/VM/computer escape [1]? It's an unanswerable question, because we don't even know what the threat from superintelligence is; how can we build safety systems for a threat that doesn't exist yet, a threat whose structure people can't even agree on?

The pdoomers have a generally great point: if superintelligence kills us (they'd say "when", not "if"), we won't be able to predict the mechanism of doom. It'd be like asking ants to predict that the borax bait they're carrying back to the colony for their queen to feast on will disrupt their digestion and kill them.

I'd also argue that the biggest threat from ASI is what I've heard Roman Yampolskiy call "ikigai-doom": AI could become so much better than humans at everything humans do that, even in the best case, humanity is left with no purpose, not even the pursuit of creativity, because the AI will be better at even creative acts; and in the worst case, our government and societal structures can't adapt and millions become unemployed. There's no way to build a safety mechanism against this threat, because the threat is intrinsic to all the good things ASI will give us. The only winning move is not to play.

[1] https://cyberpunk.fandom.com/wiki/Blackwall

With robots, techno-capitalism and techno-communism converge.

We literally haven't had the technology to stand up better forms of government, even though we have identified the flaws in our existing forms. (The defense of capitalism I heard growing up was "it's flawed, but it's just the best we have.") Identifying flaws isn't the same as having good solutions for them.

People are so inured to technological progress, and so lacking in perspective, that some actually pine for eras of widespread cholera, scarlet fever, and death in childbirth.

Asimov's Solaria doesn't have to be sparsely populated.

Humans exhibit two interesting traits that are at odds: 1. Most are heavily biased toward loss aversion. 2. Once new tech is proven out, fear of lost status drives adoption, to "keep up with the Joneses."

No one wants to go back to digging ditches, hand-washing dishes and clothes, hand-sewing, using paper maps, or walking and rowing boats as our sole forms of locomotion.

This is just humans' loss-aversion algorithms not yet catching up to our post-subsistence reality.

There is no there there. Even within our few living generations, Gen Z's worries and living conditions are so alien to the Lost Generation's as to be from a different planet.

> I'd also argue that the biggest threat from ASI is what I've heard Roman Yampolskiy label as "ikigai-doom"; that AI could become so much better than humans at all the things humans do, that even in the best case humanity is left with no purpose

I bet status games will stay. Robots may become sexual partners, CEOs, and therapists, but they'll never take on status roles in our society – only utility roles.

Same as with the Olympics: even though machines are much better at throwing and lifting than we are, we still compete to win the approval of others.

  • Effectively what you've just said is: you can still pursue all these things, as long as you're literally the best in the world at them. That's not really helpful; that's still ikigai-doom.

    The point is: if your passion is animation, you can probably find work doing that today. It might suck, certainly more than in times past, and it's hard, and that's an intrinsic part of passion. When AI can do that work, maybe the best animators in the world will still find jobs at some bespoke studio making "the authentic hand-made stuff", like a farmers market, but AI may make everything else. And not just because AI could produce it "better" (it may or may not), but because AI will definitely produce it cheaper. Capitalism doesn't generally care about quality.

    And the same goes for many careers. The ironic thing is that AI probably won't come for the plumbers and janitors very quickly, and there seems to be a weird correlation between the jobs people find high passion in and the jobs AI is likely to replace. In effect, we're evolving into a world where humans are relegated to manual labor while AI handles everything else, poorly; but humans can't actually make a living doing that other stuff, because AI does it so much cheaper and good enough.

    But sure; there will still be millionaire status celebs.