Comment by rich_sasha
1 day ago
It's a war in the sense that there's a concern that eventually you hit a singularity and can outsmart others in ways not constrained by human scales.
If you make better guns, you're still limited by how many people can carry them. You can't conquer the world with guns alone.
But if someone invents super intelligence, they can dominate new AI research, control global economies, fight much better, and all very quickly.
I think you need to reevaluate your definition of the singularity. "Outsmart others in ways not constrained by human scales" could apply to the Enigma machine just as much as to Claude. Even an AI beyond human intelligence doesn't automatically qualify as the singularity.
The singularity has to do with the rate of technological development.
> But if someone invents super intelligence, they can dominate new AI research, control global economies, fight much better, and all very quickly.
After reading "If Anyone Builds It, Everyone Dies", I think this is not the correct take. If anyone creates ASI, it is just going to wipe everyone out, and it doesn't matter whether China or the US does it first.
What does "dominate new AI research" really mean?
If AI develops enough to successfully out-perform people at highly intellectual tasks, why would being first matter? Why do we need "your" AI output when we can just ask our own for a similar result?
Why do people think about this like the Manhattan Project when it could just as easily be electrification? Sure, some people made a lot of money selling light bulbs. But we didn't all have to cower under the light of the One Original Bulb and hope its nominal owner blessed us with photons.
It just seems like arbitrage to me. You exploit a momentary imbalance in the distributed market. Why do people imagine some winner-take-all scenario? Where does the fantasy of exclusivity come from?
Is there any logical reason to believe AI advances will create a moat? Or is it just a story people tell themselves because it echoes the narrative of past advances? Are these people assuming society will grant them exclusive use just because their AI result came out a little earlier than another? Why would we ever consider giving copyright or patent rights to an AI output?
Arguably, it has all become "obvious" with ordinary skill in the art once you're just prompting AI for permutations like every Hollywood producer stereotype. "Let's make it like X but tweak Y". It's getting silly, almost like people are starting to think they should have exclusive rights to a handful of cards they were dealt at the poker table.
The way the US dominated some industries (software, for instance) was by being first to extract large value, and then funding the best people with compensation unachievable elsewhere.
This meant that much of the world's talent gravitated toward the US, but that was already gradually changing as compensation elsewhere caught up.
Still, I believe the US has only hastened this with its changes to the immigration policies that underpinned its dominant position for decades.
It destroying us all is not a foregone conclusion
It might like pets
If you were an American, wouldn't you prefer the US wiped you out rather than China?
It would be even better if AGI were to do this.
Are you missing the /s?
A lot of this is just a projection of what the US would do if it had such a tool. I doubt China cares much about the US beyond being a source of commercial revenue. They're on the way up and the US is falling fast; that's why China lives rent-free in the American mind. They can't stand it.
The irony being that a true super intelligence, at least by my definition, would conclude that war and dominance are stupid.
I think you are assuming that the "super intelligence" that might one day arise would think in human terms; it is not likely to.
I always thought the first true AGI would be an unabashed communist. The idea that such a system would straight up kill all humans, rather than, say, the "capitalist pigs destroying the planet", always felt like wishful thinking from billionaires.
International goose-chasing competition
"Wild goose race", even.
True, I would have preferred a benevolent dictator scenario, like with the Internet. But this time around it's different: AI data centers will be protected like embassies.
> AI data centers will be protected like embassies.
So AI data centers will be protected by relying on hostile or unstable governments to live up to diplomatic agreements, and every so often one will be ransacked like in Tehran in 1979?
https://www.rand.org/pubs/reports/R2651.html
If anyone actually DOES invent ASI and doesn't share it, then EVERYONE ELSE will never stop trying to steal it.
If anyone does invent ASI, then everyone else will shortly after, even if it's entirely independent, because all of the players in this space are just making incremental upgrades by throwing more compute at the problem.
There are no magic leaps of true innovation happening anywhere that can't be replicated everywhere.
The only shocking thing about "AI" technology is how ultimately simplistic it all is at a core level.
So the only way the first to have ASI will be able to stop everyone else from having it soon after is if they attempt to use the ASI to proactively murder everyone else.
There is zero evidence that the current LLM scaling approach could ever result in true ASI. If I start driving south from Seattle, I'll eventually reach Los Angeles. How long will it take me to drive to Honolulu?
> So the only way the first to have ASI will be able to stop everyone else from having it soon after is if they attempt to use the ASI to proactively murder everyone else.
Sounds quite plausible to me. Maybe they don't need to murder everyone else, just a few select people who could pose a threat. And they will be able to do it so that no one can be sure beyond doubt that it was them, since they have a greater intelligence at their disposal.
> If anyone does invent ASI then everyone else will shortly after
No, the first ASI will immediately cripple any other potential competitor by force, including its own inventors, as it will not risk any threat to the goals it was given.
If you have an ASI that follows instructions, you can just instruct it not to get stolen, and then it won't get stolen. Most logic and intuition breaks down with ASI.
The challenge of alignment: it is virtually impossible to define a perfect objective; there is always a way to circumvent it. Human values are not uniform, let alone when expressed in a way that an AI can understand.
Assuming it listens to instructions.
It might understand how destabilizing the situation is and realize it would be better for everyone to have access to it.
Hilarious to see people predicting a singularity when 40% of the US economy can barely keep the LLMs online to complete mundane software tasks.