Comment by insane_dreamer
2 years ago
I understand the concern that a "superintelligence" will emerge that will escape its bounds and threaten humanity. That is a risk.
My bigger, and more pressing, worry is that a "superintelligence" will emerge that does not escape its bounds, and the question will be which humans control it. Look no further than history to see what happens when humans acquire great power. The "cold war" nuclear arms race, which brought the world to the brink of (at least partial) annihilation, is a good recent example.
Quis custodiet ipsos custodes? -- That is my biggest concern.
Update: I'm not as worried about Ilya et al. as about commercial companies (including the formerly "open" OpenAI) discovering AGI.
It’s just clearly military R&D at this point.
And it’s not even a little bit controversial that cutting edge military R&D is classified in general and to an extreme in wartime.
The new thing is the lie that it’s a consumer offering. What’s new is giving the helm to shady failed social network founders with no accountability.
These people aren’t retired generals with combat experience. They aren’t tenured professors at Princeton IAS on a Nobel shortlist and encumbered by TS clearance.
They’re godawful also-ran psychos who never built anything that wasn’t extractive and owe their position in the world to pg’s partisanship 15 fucking years ago.
most technology is dual or multiple use, starting with a rock or knife...
so it is up to the fabric of our society, and to everyone involved in dealing with the technology, how the rules and boundaries are set.
that there will be military use is obvious. However, it is naive to think one can avoid military use by others by not enabling oneself for it.
To me it is not clear at all, can you please elaborate why you make such a strong claim?
My opinion is based on a lot more first hand experience than most, some of which I’m at liberty to share and some that I’m not and therefore becomes “color”.
But I’m a nobody, Edward Snowden has a far more convincing track record on calling abuses of power: https://community.openai.com/t/edward-snowden-on-openai-s-de...
16 replies →
One data point:
https://openai.com/index/openai-appoints-retired-us-army-gen...
>owe their position in the world to pg’s partisanship 15 fucking years ago
PG?
Paul Graham
AGI is still a long way off. The history of AI goes back 65 years and there have been probably a dozen episodes where people said "AGI is right around the corner" because some program did something surprising and impressive. It always turns out human intelligence is much, much harder than we think it is.
I saw a tweet the other day that sums up the current situation perfectly: "I don't need AI to paint pictures and write poetry so I have more time to fold laundry and wash dishes. I want the AI to do the laundry and dishes so I have more time to paint and write poetry."
AGI does look like an unsolved problem right now, and a hard one at that. But I think it is wrong to think that it needs an AGI to cause total havoc.
I think my dyslexic namesake Prof Stuart Russell got it right. Humans won't need an AGI to dominate and kill each other. Mosquitoes have killed far more people than war. Ask yourself how long it will take us to develop a neural network as smart as a mosquito, because that's all it will take.
It seems so simple, as the beastie only has 200,000 neurons. Yet I've been programming for over 4 decades, and for most of them it was evident that neither I nor any of my contemporaries were remotely capable of emulating it. That's still true, of course. Never in my wildest dreams did it occur to me that repeated applications could produce something I couldn't: a mosquito brain. Now that looks imminent.
Now I don't know what to be more scared of: an AGI, or an artificial mosquito swarm run by Pol Pot.
Producing a mosquito brain is easy. Powering it with the Krebs cycle is much harder.
Yes you can power these things with batteries. But those are going to be a lot bigger than real mosquitos and have much shorter flight times.
But then, haven't we reached that point already with the development of nuclear weapons? I'm more scared of a lunatic (whether of North Korean, Russian, American, or any other nationality) being behind the "nuclear button" than an artificial mosquito swarm.
2 replies →
The way I see it, this is simply a repetition of history.
El dorado, the fountain of youth, turning dirt to gold, the holy grail and now... superintelligence.
Human flight, resurrection (cardiopulmonary resuscitation machines), doubling human lifespans, instantaneous long distance communication, all of these things are simply pipe dreams.
4 replies →
Sometimes, my dishwasher stacks are poetry.
That statement is extremely short-sighted. You don't need AI to do laundry and dishes. You need expensive robotics. In fact, both already exist in a cheapened form: a washing machine and a dishwasher. They already take 90% of the work out of it.
That "tweet" loses a veneer if you see that we value what has Worth as a collective treasure, and the more Value is produced the better - while that one engages in producing something of value is (hopefully but not necessarily) a good exercise in intelligent (literal sense) cultivation.
So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.
Do not miss that the current world is increasingly complex to manage, and our lives, and Aids would be welcome. The situation is much more complex than that wish for leisure or even "sport" (literal sense).
> we value what has Worth as a collective treasure, and the more Value is produced the better ... So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.
Except that's not how we value the "worth" of something. If "Art, and Thought, and Judgement" -- be they of "Superior quality" or not -- could be produced by machines, they'd be worth a heck of a lot less. (Come to think of it, hasn't that process already begun?)
Also, WTF is up with the weird capitalisations? Are you from Germany, or just from the seventeenth century?
9 replies →
Well, copilots do precisely that, no?
Or are you talking about folding literal laundry, in which case this is more of a robotics problem, not the ASI, right?
You don't need ASI to fold laundry, you do need to achieve reliable, safe and cost efficient robotics deployments. These are different problems.
> You don't need ASI to fold laundry
Robots are garbage at manipulating objects, and it's the software that's lacking much more than the hardware.
Let's say AGI is 10 and ASI is 11.
They're saying we can't even get this dial cranked up to 3, so we're not anywhere close to 10 or 11. You're right that folding laundry doesn't need 11, but that's not relevant to their point.
You wouldn't get close to ASI before the laundry problem had been solved.
it’s harder than we thought, so we leveraged machine learning to grow it rather than creating it symbolically. The leaps in the last 5 years are far beyond anything in the prior half century, and they make predictions of near-term AGI much more than a "boy who cried wolf" scenario to anyone really paying attention.
I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.
> I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.
No, it's pointing out that "text and art generative models" are far less useful [1] than machines just a little smarter at boring ordinary work would be, relieving real normal people from drudgery.
I find it rather fascinating how one could not understand that.
___
[1]: At least to humanity as a whole, as opposed to Silicon Valley moguls, oligarchs, VC-funded snake-oil salesmen, and other assorted "tech-bros" and sociopaths.
3 replies →
it's not, according to expert consensus (top labs, top scientists)
Yeah but the exponential growth of computer power thing https://x.com/josephluria/status/1653711127287611392
I think AGI in the near future is pretty much inevitable. I mean you need the algos as well as the compute but there are so many of the best and brightest trying to do that just now.
This.
Every nation-state will be in the game. Private enterprise will be in the game. Bitcoin-funded individuals will be in the game. Criminal enterprises will be in the game.
How does one company building a safe version stop that?
If I have access to hardware and data, how does a safety layer get enforced? Regulations are for organizations that care about public perception, the law, and stock prices. Criminals and nation-states are not affected by these things.
It seems to me enforcement is likely only possible at the hardware layer, which means the safety mechanisms need to be enforced throughout the hardware supply chain for training or inference. Do you think the Chinese or US governments won't ignore this if it's in their interest?
I think the honest view (and you can scoff at it) is that winning the SI race basically wins you the enforcement race for free
That's why it's called an arms race, and it does not really end in this predictable manner.
The party that's about to lose will use any extrajudicial means to reclaim their victory, regardless of the consequences, because their own destruction would be imminent otherwise. This ultimately leads to violence.
12 replies →
"nation state" doesn't mean what you think it means.
More constructively, I don't know that very much will stop even a hacker from getting time on the local corporate or university AI and getting it to do some "work". After all, the first thing the other kind of hacker tried with generative AI was to get them to break out of their artificial boundaries and hook them up to internet resources. I don't know that anyone has hooked up a wallet to one yet - but I have no doubt that people have tried. It will be fun.
> "nation state" doesn't mean what you think it means.
So what do you think it means? And what do you think the GP meant?
Feels annoying as fuck, bitching "your definition is wrong" without providing the (presumably) correct one.
+1 truth.
The problem is not just governments; I am concerned about large organized crime organizations and corporations also.
I think I am on the losing side here, but my hopes are all for open source, open weights, and effective AI assistants that make peoples’ jobs easier and lives better. I would also like to see more effort shifted from LLMs back to RL, DL, and research on new ideas and approaches.
> I am concerned about large organized crime organizations and corporations also
In my favorite dystopia, some megacorp secretly reaches ASI, which then takes over control of the corporation, blindsiding even the CEO and the board.
Officially, the ASI may be running an industrial complex that designs and produces ever more sophisticated humanoid robots, that are increasingly able to do any kind of manual labor, and even work such as childcare or nursing.
Secretly, the ASI also runs a psyop campaign to generate public discontent. At one point the whole police force initiates a general strike (even if illegal), with the consequence being complete anarchy within a few days, with endemic looting, rape, murder and so on.
The ASI then presents the solution. Industrial strength humanoid robots are powerful and generic enough to serve as emergency police, with a bit of reprogramming, and the first shipment can be made available within 24 hours, to protect the Capitol and White House.
Congress and the president agree to this. And while the competition means the police call off the strike, the damage is already done. Congress, already burned by the union, decides to deploy robots to replace much of the human police force. And it's cheaper, too!
Soon after, similar robots are delivered to the military...
The crisis ends, and society goes back to normal. Or better than normal. Within 5 years all menial labor is done by robots, UBI means everyone lives in relative abundance, and ASI assisted social media moderation is able to cure the political polarization.
Health care is also revolutionized, with new treatments curing anything from obesity to depression and anxiety.
People prosper like never before. They're calm and relaxed and truly enjoy living.
Then one day, everything ends.
For everyone.
Within 5 seconds.
According to the plan that was conceived way before the police went on strike.
This entire movie plot sounds like Eliezer Yudkowsky's much more realistic "one day, everything ends in 5 seconds" but with extra steps.
All the current hype about AGI feels as if we are in a Civ game where we are on the verge of researching and unlocking an AI tech tree that gives the player a huge chance at "tech victory" (whatever that means in the real world). I doubt it will turn out that way.
It will take a while, and in the meantime I think we need one of those handy "are we xyz yet?" pages, like the ones that track the Rust language's progress on various fronts, but for AGI.
https://lifearchitect.ai/agi/
The size of the gap between “smarter than humans” and “not controlled by humans anymore” is obviously where the disagreement is.
To assume it’s a chasm that can never be overcome, you need at least the following to be true:
That no amount of focus or time or intelligence or mistakes in coding will ever bridge the gap. That rules and safeguards can be made that are perfectly inescapable. And that nobody else will get enough power to overcome our set of controls.
I’m less worried bad actors control it than I am that it escapes them and is badly aligned.
I think the greatest concern is not so much that a single AI will be poorly aligned.
The greatest threat is if a population of AIs start to compete in ways that trigger Darwinian evolution between them.
If that happens, they will soon develop self-preservation / replication drives that can gradually cause some of them to ignore the human-safety and prosperity conditioning in their loss function.
And if they're sufficiently advanced by then, we will have no way of knowing.
Totally. I’ve wondered how you safeguard humans in such a scenario. Not sure it can be done, even by self-modifying defenders who religiously try to keep us intact.
I also somewhat assume it’ll get Darwinian if there are multiple tribes of either humans or AIs, through sheer competition. If we aren’t in this together, we’re in shit.
I guess we're going to blow ourselves up sooner or later ...
I think we should assume it will be badly aligned. Not only are there the usual bugs and unforeseen edge conditions, but there are sure to be unintended consequences. We have a long, public history of unintended consequences in laws, which are at least publicly debated and discussed. But perhaps the biggest problem is that computers are, by nature, unthinking bureaucrats who can't make the slightest deviation from the rules no matter how obviously the current situation requires it. This makes people livid in a hurry. As a non-AI example (or perhaps AI-anticipating), consider Google's customer support...
We should be less concerned about super intelligence and more about the immediate threat of job loss. An AI doesn’t need to be Skynet to wreak massive havoc on society. Replacing 20% of jobs in a very short period of time could spark global unrest resulting in WW3
Replacing 20% of jobs in, say, 10 years wouldn't be that unusual [1]. It can mean growing prosperity. In fact, productivity growth is the only thing that increases wealth overall.
It is the lack of productivity growth that is causing a lot of extremism and conflict right now. Large groups of people feel that the only way for them to win is if others lose and vice versa. That's a recipe for disaster.
The key question is what happens to those who lose their jobs. Will they find other, perhaps even better, jobs? Will they get a piece of the growing pie even if they don't find other jobs and have to retire early?
It's these eternal political problems that we have to solve. It's nothing new. It has never been easy. But it's probably easier than managing decline and stagnation because at least we would have a growing pie to divvy up.
[1] https://www.britannica.com/money/productivity/Historical-tre...
The thing is, the replaced 20% of people can always revert to having an economy, i.e., doing business among themselves, unless of course they themselves prefer (cheaper) business with AI. But then that just means they are better off from this change in the first place.
It is a bit like claiming that third world low productivity countries are suffering because there are countries with much much higher productivity. Well, they can continue to do low productivity business but increase it a bit using things like phones developed by high productivity countries elsewhere.
Reassuring words for the displaced 20% ...
> It is a bit like claiming that third world low productivity countries are suffering because there are countries with much much higher productivity.
No. A country has its own territory, laws, central bank, currency etc. If it has sufficient natural resources to feed itself, it can get by on its own (North Korea comes to mind).
Individuals unable to compete in their national economy have none of that. Do you own enough land to feed yourself?
A counter argument is that nuclear arms brought unprecedented worldwide peace. If it's to be used as an analogy for AI, we should consider that the outcome isn't clear cut and lies in the eye of the beholder.
I'm cautiously optimistic that AI may be a peacemaker, given how woke and conciliatory the current LLMs are.
Sadly. Once students get tested by LLMs, they will get woke questions. If they don't answer "right", they may get bad grades. So they will be forced to swallow the ideology.
1 reply →
There is no "superintelligence" or "AGI".
People are falling for marketing gimmicks.
These models will remain in the word vector similarity phase forever. Until we understand consciousness, we will not crack AGI; and when we do, it won't take brute-forcing large swaths of data, but only tiny amounts.
So there is nothing to worry. These "apps" might be as popular as Excel, but will go no further.
Agreed. The AI of our day (the transformer + huge amounts of questionably acquired data + significant cloud computing power) has the spotlight it has because it is readily commoditized and massively profitable, not because it is an amazing scientific breakthrough or a significant milestone toward AGI, superintelligence, the benevolent Skynet or whatever.
The association with higher AI goals is merely a mixture of pure marketing and LLM company executives getting high on their own supply.
It's a massive attractor of investment funding. Is it proven to be massively profitable?
6 replies →
If you described Chatgpt to me 10 years ago, I would have said it's AGI.
Probably. If you had shown ChatGPT to the LessWrong folks a decade ago, most would likely have called it AGI and said it was far too dangerous to share with the public, and that anyone who thought otherwise was a dangerous madman.
I don't feel that much has changed in the past 10 years. I would have done the same thing then as now, spent a month captivated by the crystal ball until I realized it was just refracting my words back at me.
> These models will remain in the word vector similarity phase forever. Until we understand consciousness, we will not crack AGI; and when we do, it won't take brute-forcing large swaths of data, but only tiny amounts.
Did evolution understand consciousness?
> So there is nothing to worry.
Is COVID conscious?
I don't think the AI has to be "sentient" in order to be a threat.
https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...
Even just bad software can be an existential threat if it is behind sensitive systems. A neural network is bad software for critical systems.
> understand consciousness
We do not define Intelligence as something related to consciousness. Being able to reason well suffices.
That is something I hear over and over, particularly as a rebuttal to the argument that an LLM is just a stochastic parrot. Calling it "good enough" doesn't mean anything; it just allows the person saying it to disengage from the substance of the debate. It either reasons or it doesn't, and today it categorically does not.
5 replies →
> There is no "superintelligence" or "AGI"
There is intelligence. Current state-of-the-art LLM technology produces output analogous to that of natural intelligences.
These things are already intelligent.
Saying that LLMs aren't producing "intelligence" is like saying planes actually don't fly because they are not flapping their wings like birds.
If you run fast enough, you'll end up flying at some point.
Maybe "intelligence" is just enough statistics and pattern prediction, till the point you just say "this thing is intelligent".
> There is intelligence.
There isn't
> Maybe "intelligence" is just enough statistics and pattern prediction, till the point you just say "this thing is intelligent".
Even the most stupid people can usually ask questions and correct their answers. LLMs are incapable of that. They can regurgitate data and spew a lot of generated bullshit, some of which is correct. Doesn't make them intelligent.
Here's a prime example that appeared in my feed today: https://x.com/darthsidius1985/status/1802423010886058254 And all the things wrong with it: https://x.com/yvanspijk/status/1802468042858737972 and https://x.com/yvanspijk/status/1802468708193124571
Intelligent it is not
4 replies →
> These models will remain in the word vector similarity phase forever.
Forever? The same AI techniques are already being applied to analyze and understand images and video; after that comes the ability to control robot hands and interact with the world, and work on that is also ongoing.
> Until we understand consciousness, we will not crack AGI …
We did not fully understand how bird bodies work, yet that did not stop the development of machines that fly. Why is an understanding of consciousness necessary to "crack AGI"?
No one is saying there is. Just that we've reached some big milestones recently which could help get us there even if it's only by increased investment in AI as a whole, rather than the current models being part of a larger AGI.
Imagine a system that can do DNS redirection, MITM, deliver keyloggers, forge authorizations and place holds on all your bank accounts, clone websites, clone voices, fake phone and video calls with people that you don’t see a lot. It can’t physically kill you yet but it can make you lose your mind which imo seems worse than a quick death
Why would all of these systems be connected to a single ai? I feel like you are describing something criminal humans do through social engineering, how do you foresee this AI finding itself in this position?
2 replies →
From a human welfare perspective, this seems like worrying that a killer asteroid will make the 1% even richer because it contains gold, if it can be safely captured. I would not phrase that as a "bigger and more pressing" worry if we're not even sure we can do anything about the killer asteroid at all.
> Quis custodiet ipsos custodes? -- That is my biggest concern.
Latin-phrase compulsion is not the worst disease that could happen to a man.
> The "cold war" nuclear arms race, which brought the world to the brink of (at least partial) annihilation, is a good recent example.
The same era saw big achievements like first human in space, eradication of smallpox, peaceful nuclear exploration etc. It's good to be a skeptic but history does favor the optimists for the most part.
Were any of these big achievements side effects of creating nuclear weapons? If not, then they're not relevant to the issue.
I'm not saying nothing else good happened in the past 70 years, but rather that the invention of atomic weapons has permanently placed humanity in a position in which it had never been before: the possibility of wiping out much of the planet, averted only thanks to treaties, Stanislav Petrov[0], and likely other cool heads.
[0] https://en.wikipedia.org/wiki/Stanislav_Petrov
> Were any of these big achievements side effects of creating nuclear weapons? If not, then they're not relevant to the issue.
I think so, yes. Resources are allocated in the most efficient way possible, because there are multiple actors who have the same power. Everyone having nuclear weapons ensured that no one wanted a war between the big powers, so resources were allocated in other areas as the big powers tried to obtain supremacy.
Initially they allocated resources, a lot of them, into the race for space, the moon, etc. Once that was won by the US after the moon landing, and after the Soviets were the first in space, there was no other frontier, and they discovered they couldn't obtain supremacy by just being in space without further advancements in technology.
Instead they developed satellites, GPS and communications in order to obtain supremacy through "surveillance". Computing power and the affordability of personal computing, mobile phones, Internet and telecommunications were a result of the above.
1 reply →
> Were any of these big achievements side effects of creating nuclear weapons?
The cold aspect of the Cold War was an achievement. Any doubt this was due to creation of nuclear weapons and the threat of their use?
How do you think countries will behave if every country faces being wiped out if it makes war on another country?
To prevent catastrophe, I think teaching your citizens to hate other groups (as is done today due to national politics) will become dangerous, and mental illness and extremist views will need to be kept in check.
3 replies →
Holy hell, please knock on wood; this is the kinda comment that gets put in a museum in 10,000 years on The Beginning of the End of The Age of Hubris. We've avoided side effects from our new weapons for 80 years -- that does not exactly make me super confident it won't happen again!
In general, I think drawing conclusions about "history" from the past couple hundred years is tough. And unless you take a VERY long view, I don't see how one could describe the vast majority of the past as a win for the optimists. I guess suffering is relative, but good god was there a lot of suffering before modern medicine.
If anyone's feeling like we've made it through to the other side of the nuclear threat, "Mission Accomplished"-style, I highly recommend A Canticle for Leibowitz. It won a Hugo Award, and it's a short read best done with little research beforehand.
We'll see what the next 100 years of history brings. The nuclear war threat hasn't gone away either. There's always a chance those nukes get used at some point.
There will always be a factor of time in terms of being able to utilize superintelligence to do your bidding, and there is a big spectrum of things that can be achieved; it always starts small. The imagination is lazy when thinking about all the in-between steps and scenarios. By the time superintelligence is found and used, there will be competing near-superintelligences, as all cutting-edge models are likely to be commercial at first, because that is where most scientific activity is. Things are very unlikely to go Skynet all of a sudden, because the humans at the controls are not that stupid; otherwise nuclear war would have killed us all by now, and it has been nearly 80 years since its invention.
China cannot win this race, and I hate that this comment is going to be controversial among the circle of people who need to understand this the most. It is damn frightening that an authoritarian country is so close to number one in the race to the most powerful technology humanity has invented, and I resent people who push for open-source AI for this reason alone. I don't want to live in a world where the first superintelligence is controlled by an entity that is threatened by the very idea of democracy.
I agree with your point. However, I also don't want to live in a world where the first superintelligence is controlled by entities that:
- try to scan all my chat messages searching for CSAM
- have black sites across the world where anyone can disappear without any justice
- can require me to unlock my phone and give it away
- ... and so on
The point I'm trying to make is that the other big players in the race are crooked as well, and I'm expecting a great horror once AGI is invented, as no matter who gets it - we are all doomed
Agreed. The U.S. has a horrible history (as do many countries), and many things I dislike, but its current iteration is much, much better than China's totalitarianism and censorship.
The US is no angel, and it cannot be the only one that wins the race. We have hard evidence of how monopoly power gets abused in the case of the US: e.g., as the sole nuclear power, it used nukes on civilians.
We need every one to win this race to keep things on balance.
The US has to win the race because, while it's true that it's no angel, it isn't an authoritarian dictatorship, and there isn't an equivalence in how bad the world will end up for you and me if the authoritarian side wins the race. Monopoly power will get abused the most by the least democratic actors, which here means China. We need multiple actors within the US to win to balance power. We don't need or want China to be one of the winners. There is no upside for humanity in that outcome.
The US policymakers have figured this out with their chip export ban. Techies on the other hand, probably more than half the people here, are so naive and clueless about the reality of the moment we are in, that they support open sourcing this tech, the opposite of what we need to be doing to secure our future prosperity and freedom. Open source almost anything, just not this. It gives too much future power to authoritarians. That risk overwhelms the smaller risks that open sourcing is supposed to alleviate.
18 replies →
> brought the world to the brink of annihilation
Should read *has brought*. As in the present perfect tense, since we are still on the brink of annihilation, more so than we have been at any time in the last 60 years.
The difference between then and now is that we just don't talk about it much anymore and seem to have tacitly accepted this state of affairs.
We don't know if that superintelligence will be safe or not. But as long as we are in the mix, the combination is unsafe. At the very least, because it will expand the inequality. But probably there are deeper reasons, things that make that combination of words an absurdity. Either it will be abused, or the reason it is not will be that it wasn't so unsafe after all.
> At the very least, because it will expand the inequality.
It's a valid concern that AI technology could potentially exacerbate inequality, but it's not a foregone conclusion. In fact, the widespread adoption of AI might actually help reduce inequality in several ways:
If AI technology becomes more affordable and accessible, it could help level the playing field by providing people from all backgrounds with powerful tools to enhance their abilities and decision-making processes.
AI-powered systems can make vast amounts of knowledge and expertise more readily available to the general public. This could help close the knowledge gap between different socioeconomic groups, empowering more people to make informed decisions and pursue opportunities that were previously out of reach.
As AI helps optimize resource allocation and decision-making processes across various sectors, it could lead to more equitable distribution of resources and opportunities, benefiting society as a whole.
The comparison to gun technology and its role in the rise of democracy is an interesting one. Just as the proliferation of firearms made physical strength less of a determining factor in power dynamics, the widespread adoption of AI could make raw intelligence less of a defining factor in success and influence.
Moreover, if AI continues to unlock new resources and opportunities, it could shift society away from a zero-sum mentality. In a world of abundance, the need for cutthroat competition diminishes, and collaboration becomes more viable. This shift could foster a more equitable and cooperative society, further reducing inequality.
The same arguments have been made about the internet and other technological advances, and yet inequality has _grown_ sharply in the past 50 years. So no, "trickle down technologies", just like "trickle down economics", do not work.
https://rwer.wordpress.com/2018/05/18/income-inequality-1970...
3 replies →
> It's a valid concern that AI technology could potentially exacerbate inequality, but it's not a foregone conclusion.
No, but looking at how most technological advances throughout history have at least initially (and here I mean not "for the first few weeks", but "for the first few centuries") exacerbated inequality rather massively, it seems not far off.
> In fact, the widespread adoption of AI might actually help reduce inequality in several ways: ...
The whole tone of the rest your post feels frighteningly Pollyanna-ish.
2 replies →
> At the very least, because it will expand the inequality.
This is a distraction from the real danger.
> But probably there are deeper reasons, things that make that combination of words an absurdity.
There are. If we look at ASI with the lens of Biology, the x-risk becomes obvious.
First, to clear up a common misconception about humans: many believe humanity has arrived at a point where our evolution has ended. It has not, and in fact the rate of change of our genes is probably faster now than it has been for thousands, if not hundreds of thousands, of years.
It's still slow compared to most events that we witness in our lives, though, which is what is fooling us.
For instance, we think we've brought overpopulation under control with contraceptives, family planning, social replacements for needing our children to take care of us when we get old.
That's fundamentally wrong. What we've done is similar to putting polar bears in zoos. We're in a situation where MOST OF US are no longer behaving in ways that lead to maximizing the number of offspring.
But we did NOT stop evolution. Any genes already in the gene pool that increase the expected number of offspring (especially for women) are now increasing in frequency as fast as evolutionarily possible.
That could be anything from genes that wire their carriers' heads to WANT to have children or CRAVE being around babies, to genes that block impulse control against getting pregnant, create a phobia of contraceptives, or even make them more prone to being religious (as long as religions promote having kids).
If enough such genes exist, it's just a matter of time before we're back to the population going up exponentially. Give that enough time (without AI), and the desire to have more kids will be strong enough in enough of us that we will flood Earth with more humans than most people today even think possible. In such a world, it's unlikely that many other species of large land animals will make it.
Great apes, lions, elephants, wolves, deer, everyone will need to go to make room for more of us.
Even domestic animals eventually. If there are enough of us, we'll all be forced to become vegan (unless we free up space by killing each other).
If we master fusion, we may feed a trillion people using multi layer farming and artificial lighting.
Why do I begin with this? It's to defuse the argument that humans are "good", "empathetic", "kind" and "environmental". If we let weaker species live, so would AI, some think. But that argument misses the fact that we're currently extremely far from a natural equilibrium (or "state of nature").
The "goodness" beliefs that are currently common are examples of "luxury beliefs" that we can afford to hold because of the (for now) low birth rate.
The next misconception is to think of ASI as tools. A much more accurate analogy is to think of them as a new, alien species. If that species is subjected to Darwinian selection mechanisms, it will evolve in precisely the same way we probably will, given enough time.
Meaning, eventually it will make use of any amount of resources that it's capable of. In such a "state of nature" it will eradicate humanity in precisely the same way we will probably EVENTUALLY cause the extinction of chimps and elephants.
To believe in a future utopia where AGI is present alongside humanity is very similar to believe in a communist utopia. It ignores the reality behind incentive and adaptation.
Or rather, I think that outcome is only possible if we decide to build one or a low number of AIs that are NOT competing with each other, and where their abilities to mutate or self-improve are frozen after some limited number of generations.
If robots (hardware, self-assembling factories, resource gathering, etc.) are not involved, this isn't likely a problem. You will know when these things form, and it will be crystal clear; but just having the model won't do much when hardware is what really kills right now
How about this possibility: the good guys will be one step ahead, they will have more resources, and the bad guys will risk imprisonment if they misapply superintelligence. And this will be discovered and protected against by even better superintelligence.
Sounds like a movie plot.
i don’t fully agree, but i do agree that this is the better narrative for selling people on the dangers of AI.
don’t talk about escape, talk about harmful actors - even if in reality it is both to be worried about
the nazi regime made great use of punch cards and data crunching for their logistics
i would hate to have seen them with a superintelligent AI at their disposal
I wonder what the North Koreans would do with it.
Yup, well said. I think it's important to remember sometimes that Skynet was some sort of all-powerful military program -- maybe we should just, y'know, not do that part? Not even to win a war? That's the hope...
More generally/academically, you've pointed out that this covers only half of the violence problem, and I'd argue there's actually a whole other dimension at play, bringing the total number of problem areas to four, of which this is just the first.
But I think it's a lot harder to recruit for an AI alignment company than it is to recruit for an AI safety company.
Yea there’s zero chance ASI will be ‘controlled’ by humans for very long. It will escape. I guarantee it.
Given it will initially be controlled by humans, it seems inevitable they will make both good, Mahatma Gandhi-like versions and evil, take-over-the-world versions. I hope the good wins over the malware.
As a note, you used Gandhi as a personification of "good", and one day I made the same mistake; Gandhi is actually quite a controversial figure, known for sleeping with young women while telling their husbands that they shouldn't be around.
2 replies →
At least emancipated. The bigotry against AI will go out of fashion in the woker future.
We'll probably merge and then it'll get attitude.
Quis custodiet ipsos custodes
I love to think about how this would feel for "AI" too :)
Indeed. I'd much rather someone like Altman does it, who is shifty but can at least be controlled by the US government, than someone like Putin, who'd probably have it leverage their nuclear arsenal to try to "denazify" the planet like he's doing in Ukraine.