Comment by swyx
2 years ago
1. good for him. there was obviously infinity money to back him in achieving his goals
2. i have the same problem with this as i did the original version of anthropic - you cannot achieve "unilateral ai safety". either the danger is real and humanity is not safe until all labs are locked down, or the danger is not real and there's nothing really to discuss. Anthropic already tried the "we are openai but safe" approach.
3. the focus is admirable. "assembling a lean, cracked team" terminology feels a bit too-online for ilya. buuut i wish there was more technical detail about what exactly he wants to do differently. i guess it's all in his prior published work, but i've read along and still it's not obvious what his "one product" will look like. that's a lot of leeway to pivot around in the pursuit of the one product.
4. "Superintelligence is within reach." what is his definition?
> i have the same problem with this as i did the original version of anthropic - you cannot achieve "unilateral ai safety". either the danger is real and humanity is not safe until all labs are locked down, or the danger is not real and there's nothing really to discuss.
It’s not clear to me that this would be true. Remember that there’s a lot of vagueness about the specific AI risk model, aside from the general idea that the AI will outsmart everyone and take over the world and then do something that might not be good for us with all that power.
How do you shut down all the AI labs? You can’t; some of them are in different countries that have nuclear weapons. But maybe you can defeat an evil AI with a good AI.
I’m not sure good and evil are well enough defined to hope for this. A machine superintelligence would be unknowable to us. Gorillas vs. Homo sapiens.
From the gorilla’s perspective, humans who preserve gorilla habitats are preferable to humans who hunt gorillas for sport. But gorillas have tribes of their own[1] and if gorillas could invent humans, some of them might invent humans who hunt the enemy tribe while helping theirs. So if you were a gorilla from the rival tribe and you had nuanced ethics of your own, you might even try to invent humans that try to stop the enemy humans without necessarily exterminating the enemy tribe.
[1] I don’t know if gorillas specifically have been observed engaging in tribal warfare, but chimpanzees have; the Gombe Chimpanzee War for instance.
Yeah, I think that 2nd point is really critical; I'm pretty plugged into this space, and it's really not obvious to me what "safety" even means in the context of a superintelligence. Is it just, e.g., "filtering the AI so it never tells users how to build nukes"? Is it more of a firewall to stop sentient process/VM/computer escape [1]? It's a moot question with no answer, because we don't even know what the threat from superintelligence is; how can we build safety systems for a threat that doesn't exist, a threat whose structure people can't even agree on?
The pdoomers have a generally great point that, if superintelligence kills us (they'd say "when", not "if"), we won't be able to predict the mechanism of doom. It'd be like asking ants to predict that the borax they're carrying back to the colony for their queen to feast on will disrupt their digestion and kill them.
I'd also argue that the biggest threat from ASI is what I've heard Roman Yampolskiy label "ikigai-doom": AI could become so much better than humans at everything humans do that, even in the best case, humanity is left with no purpose, not even creative pursuits, because AI will be better at even creative acts; and in the worst case, our government and societal structures can't adapt and millions become unemployed. There's no way to build a safety mechanism against this threat, because the threat is intrinsic to all the good things ASI will give us. The only winning move is not to play.
[1] https://cyberpunk.fandom.com/wiki/Blackwall
With robots, techno-capitalism and techno-communism converge.
We literally haven't had the technology to stand up better forms of government, even though we have identified the flaws in our existing forms (the defense of capitalism I heard growing up was "it's flawed, but it's just the best we have"). Identifying flaws isn't the same as having good solutions for them.
People are so inured to technological progress, and so lacking in perspective, that some actually pine for eras when there was widespread cholera, scarlet fever, and death in childbirth.
Asimov's Solaria doesn't have to be sparsely populated.
Humans exhibit two interesting tendencies that are at odds: 1. Most are heavily biased toward loss aversion. 2. Once new tech is proven out, loss of status drives adoption ("keeping up with the Joneses").
No one wants to have to dig ditches, hand-wash dishes, sew by hand, hand-wash clothes, go back to paper maps, or rely on walking and rowing boats as our sole forms of locomotion.
This is just humans' loss-aversion algorithms not yet catching up to our post-subsistence reality.
There is no there there. Even within our few living generations, Gen Z's worries and living conditions are so alien to the Lost Generation's as to seem like life on a different planet.
> I'd also argue that the biggest threat from ASI is what I've heard Roman Yampolskiy label as "ikigai-doom"; that AI could become so much better than humans at all the things humans do, that even in the best case humanity is left with no purpose
I bet status games will stay. Robots may be sexual partners, CEOs, and therapists, but they'd never take on status roles in our society – only utility roles.
Same as we still hold the Olympics even though machines are much better at throwing and lifting than we are – we do it to win the approval of others.
Effectively what you've just said is: You can still do all these pursuits, as long as you're literally the best in the world. That's not really helpful; that's still ikigai-doom.
The point is: if your passion is animation, you can probably find work doing that today. It might suck, certainly more than in times past, and it's hard, and that's an intrinsic part of passion. When AI can do that, maybe the best animators in the world will still find work at some bespoke studio making "the authentic hand-made stuff", like a farmers' market, but AI may make everything else. And that may be not just because AI could produce it "better" (it may or may not), but definitely because AI will produce it cheaper. Capitalism doesn't generally care about quality.
And the same goes for many careers: the ironic thing is that AI probably won't come for the plumbers and janitors very quickly, and there seems to be some weird correlation between the jobs people find great passion in and the jobs AI is likely to replace. In effect, we're evolving into a world where humans are relegated to manual labor while AI handles everything else, poorly, but humans can't actually make a living doing that other stuff because AI does it so much cheaper and well enough.
But sure; there will still be millionaire status celebs.