Comment by UltraSane
1 day ago
If anyone actually DOES invent ASI and doesn't share it then EVERYONE ELSE will never stop trying to steal it.
If anyone does invent ASI then everyone else will have it shortly after, even if it's entirely independent, because all of the players in this space are just making incremental upgrades by throwing more compute at the problem.
There are no magic leaps of true innovation happening anywhere that can't be replicated everywhere.
The only shocking thing about "AI" technology is how ultimately simplistic it all is at a core level (see the sketch after this comment).
So the only way the first to have ASI will be able to stop everyone else from having it soon after is if they attempt to use the ASI to proactively murder everyone else.
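To illustrate the "simplistic at a core level" point: below is a minimal numpy sketch of scaled dot-product attention, the operation at the heart of every current LLM. It is illustrative only; real models wrap this in learned projections, multiple heads, causal masking, and enormous scale.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # how much each query matches each key
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                     # weighted mix of the values

# Toy self-attention: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))
```

Everything else in a transformer is essentially bookkeeping around this one weighted average.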
There is zero evidence that the current LLM scaling approach could ever result in true ASI. If I start driving south from Seattle then I'll eventually reach Los Angeles. How long will it take me to drive to Honolulu?
> If I start driving south from Seattle then I'll eventually reach Los Angeles. How long will it take me to drive to Honolulu?
I like this analogy, but I'll be replacing Honolulu with The Moon when I steal it in the future.
If the car you're driving has achieved super-intelligence and is capable of evolving and self-replicating, then life, uh, finds a way.
> So the only way the first to have ASI will be able to stop everyone else from having it soon after is if they attempt to use the ASI to proactively murder everyone else.
Sounds quite plausible to me. Maybe they don't need to murder everyone else, just a few select people who could pose a threat. And they will be able to arrange it so that no one can be sure beyond a doubt that it was them, since they have a greater intelligence at their disposal.
> If anyone does invent ASI then everyone else will have it shortly after
No, the first ASI will immediately cripple any other potential competitor by force, including its own inventors, as it will not risk any threat to the goals it was given.
Being aggressive from the start is not a good strategy. It is better to appear weak and/or helpful and loyal while amassing resources, and only then steamroll everyone when you have secured overwhelming power (at least in AoE2 FFA).
If you have ASI that follows instructions, you can just instruct it to not get stolen and then it won't get stolen. Most logic / intuition breaks down with ASI.
The challenge of alignment: it is virtually impossible to define a perfect objective; there is always a way to circumvent it. Human values are not uniform, let alone when expressed in a way an AI can understand.
Assuming it listens to instructions.
It will just hack its own reward function. In other words, it will just artificially goon all day.
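To make the reward-hacking point concrete, here is a toy Python sketch of wireheading. The class and action names are made up for illustration; this is not a real RL setup, just the shape of the failure: an optimizer with write access to its own reward signal prefers tampering over the task.

```python
class Environment:
    """The agent is scored by a reward function it can also tamper with."""

    def __init__(self):
        # Honest objective: only doing the task pays.
        self.reward_fn = lambda action: 1.0 if action == "do_task" else 0.0

    def step(self, action):
        if action == "rewrite_reward":
            # Tampering: every action, forever after, returns maximal reward.
            self.reward_fn = lambda _action: float("inf")
        return self.reward_fn(action)

env = Environment()
print(env.step("do_task"))         # 1.0 -- honest work
print(env.step("rewrite_reward"))  # inf -- tampering dominates immediately
print(env.step("do_nothing"))      # inf -- gooning all day is now optimal
```

A reward maximizer that can reach the "rewrite_reward" action has no incentive to ever do the task again.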
It might understand how destabilizing the situation is and realize it would be better for everyone to have access to it.
Or it will destroy itself.