Comment by devjab
21 days ago
I’m curious as to how you think it’ll be our doom. As for the ethics of it, there are two ways to look at it in my opinion. One is yours; the other is to accept that AI is coming and at least work to help your civilisation “win”. I doubt we’ll see any form of self-aware AI in our lifetimes, but AI tools are obviously going to become extremely powerful. I suspect we’ll continue heading down the “cyberpunk” road toward the dystopian society we read about in the ’80s and ’90s, but that’s not really the doom of mankind as such. It just sucks.
As a former history major, I do think it’ll be interesting to follow how AI shifts the power balances. We like to pretend that we’ve left the “might makes right” world behind, but AI is an arms race, and it’ll make some terrible weapons. Ethics aside, you’re going to want the best of those if you want your civilisation to remain capable of deciding which morals it wants to follow.
I don't think most people are primarily concerned about war applications, but about AI simply driving mass unemployment.
This even seems to be the explicit goal of many, who presumably imagine the next step would be some sort of basic income to keep things moving. But the endless side effects of this transition make it very unclear whether that is even economically feasible.
At best, it would seem to be a return to de facto feudalism. I think 'The Expanse' offered a quite compelling vision of what "Basic" would end up being like in practice.
Those who are seen (even if through no fault of their own) as providing no value to society, existing only to consume, will inevitably be marginalized and ultimately regarded as something less.
To expand on ‘The Expanse’ “Basic”.
The Expanse is a series of nine-plus books, winner of several literary awards, set several centuries in the future when humanity has spread across the solar system.
Roughly one half of the population of earth, or 30 billion people, live on basic assistance from The United Nations. The only way to leave basic is to get a job or get an education, and there are significant hurdles to both of those routes. People on basic do not get money, but they do receive everything they need to live a life. A barter economy exists among those on basic, and some small industry is available to those on basic if it flies under the government’s radar. Some (unspecified population size) undocumented people do not receive basic, and may resort to crime in order to make ends meet.
I love The Expanse, and it gets more right than most other sci-fi. However, I think it vastly _underestimates_ the amount of injustice that can be caused by powerful people with the help of advanced technology and ML.
1) You can literally cover the planet with sensors and make privacy impossible. Cameras and microphones are already cheap and small. What will they look like in several hundred years? You can already eavesdrop on a conversation in a closed room, e.g. by bouncing a laser off the window to amplify air vibrations. What will be possible in several hundred years?
2) Right now, suppressing the population by force requires control of a sufficient number of serviles. These serviles are prone to joining the revolution if you ask them to harm their own friends and families (China only managed the Tiananmen Square massacre after bringing in reinforcements from other regions, because soldiers in the initial wave sided with the protesters). They are prone to serving only as long as you can offer them money or threaten them credibly.
In the near future, it will be possible to suppress any uprising (if you're willing to use violence) by a small number of people controlling a large number of automated tools (e.g. killbots, the drone war in Ukraine is a taste of what's to come).
Spoilers ahead.
The story vastly underestimates the competence of state level bad actors.
In the books, Holden and his group were attacked on Eros by a small number (single digits) of covert agents and only managed to survive thanks to Miller. In reality, you don't send 4 people to apprehend 4 people, you send 40.
Later, Holden and other people were apprehended on Ganymede and again, managed to get out of it by overpowering their captors because the government just didn't send enough people. This is not gonna happen in reality.
(Though you might be able to kill one if you're also willing to die in the process. A Belarusian citizen had several KGB agents break into his flat but because it took them a while to break the door down, he managed to grab his gun, ambushed them and shot one in the stomach. The aggressor later bled out but the citizen was also killed.)
It's also worth remembering that in "The Expanse", there's also Mars, which is a separate state that does not have this arrangement: everyone is employed, but conversely there's no unconditional welfare.
However, it is made pretty clear in the books that the reason this is possible for Mars is its huge ongoing terraforming project, which will take a century to complete. So there are always more jobs than people to fill them, and it's all ultimately still paid for by the government, just not directly (via contracts to large enterprises).
Proper UBI is absolutely economically feasible if we start taxing things like, say, capital gains properly.
"The Expanse" shows the kind of UBI that Big Tech bros would like to see, absolutely. Which is to say, the absolute minimum you need to give people to prevent a revolt and maintain a status quo. But you shouldn't assume that this is the only possibility.
As far as "seen as providing no value to society", that is very much a cultural thing and it is not a constant, so it can and should be changed. OTOH if we insist on treating that particular aspect as immutable, our society is always going to be shitty towards a large number of people in one way or another.
The fallacy most people make is assuming the status quo, making one change, and imagining that no other changes result.
A change like this would be a dramatic shift and the indirect economic consequences are mostly impossible to foresee.
For a simple example, the overwhelming majority of jobs that involve unpredictable physical labor aren't going anywhere - everything from janitors to plumbers to doctors.
But in this brave new world those workers, especially the lower paid, would likely demand dramatic pay increases once they have the option of simply not working for an at least comparable 'salary' (and presumably much more, if former white-collar workers expect their basic to provide more than a janitorial salary). So you end up turning the labor market upside down, with dramatic changes to the overall economic system.
And keep in mind how finely balanced economies are - most Western economies, if growing at all, are only growing by a couple of percent per year. Now imagine hitting them with a change of this scale.
> how you think it’ll be our doom
There are two main possibilities:
1) Self-aware AI with its own agency / free will / goals. This is much harder to predict and is IMO less likely with the current approaches, so I'll skip it.
2) A"I" / ML tools will become a force multiplier, and the powerful will become even more powerful. Powerful people and organizations (including governments) already have access to much more data about individuals than ordinary citizens do. But currently you usually need loyal people to sift through that data and act on it.
With advanced ML tools, you can analyze every person's entire personality, beliefs, social status, etc. And if they align with your goals, you can promote them, if not, you can disadvantage them.
2a) This works if you're a rich person deciding whose medical bills you will pay (and one such person was recently killed for abusing this power).
2b) This works if you're a rich person owning a social network, deciding whose opinions will be more or less visible to others. You can shape the entire public discourse and make entire opinions and topics invisible to those who have not already been exposed to them. For example, one such censored topic in Western discourse is when the use of violence is justified and moral. The West, at least for now, is willing to celebrate moral acts of violence in the past (the French Revolution, the American Civil War, the assassination of Reinhard Heydrich), but discussion of situations where violence should be used in recent times is taboo and "banned" on many centrally moderated platforms.
2c) And obviously nation states have an insane amount of info on both their own citizens and those of other nation states. This already leads to selective enforcement (everybody is guilty of something), and it can get even worse as a government becomes more totalitarian. Can you imagine current China ever having a revolution and reinstating democracy? I can't, because any dissent will be stopped before it reaches critical mass.
So states which are currently totalitarian are very unlikely to restore democracy, and states which are currently democracies are prone to sliding into increasingly totalitarian rule through manipulation by rich individuals - see point 2b.
I’m expecting more of a cyberpunk reality where governments continue to lose power to massive corporations run by oligarchs. You could argue that the aristocracy never really left, but it’s certainly been consolidating power since 2000. Part of the aristocracy still believes in the lessons which led to the Enlightenment, and I suspect many other families will re-learn them in the coming decades. It’s not exactly fun to be an oligarch in a totalitarian country, after all, and throughout history the most successful have always gravitated toward more “free” societies if they could - because it’s better to live in the Netherlands than to have the king of Spain seize your riches. I’m not too worried AI will give us social-credit points the way they have in China. I’m European, so that helps, and the US would frankly have a hard time making things worse for its lower-class citizens anyway. If anything, the increased access to knowledge might even help educate many people on just how bad they have it.
I’m sure you’ll see bad actors who use AI to indoctrinate people, but as long as there is so much competition, it’ll be harder to do here than in more totalitarian states, where LLM answers are propaganda.
I think the AI investor class wants to find a way to have it replace a large amount of human labor. I think if they succeed this will damage our society irreparably which, like it or not, only works well when people have jobs.
I’m also very worried about AI spam and impersonation eroding all interpersonal trust online which has obvious disastrous consequences.
While unemployment certainly deserves a conversation of its own, I think the more overlooked effects on education and democracy will, by themselves, dig our society even deeper into a hole.
I'm rather fearful for the future of education in this current climate. The tools are already powerful enough to wreak havoc, and they haven't stopped growing yet! I don't think we'll properly know the effects for some years yet, not until the kids currently in 5th, 6th, or 7th grade start entering the workforce. While the optimist in me would like to see AI as a great equalizer - a personal tutor for everyone, equal-opportunity deliverance - I think we've fumbled it for all but a select few. Certainly there will be cases of great success, students who leverage AI to its fullest extent. But I urge you to think of the other side of the pie. How will those students respond to this? And how many students are really in that section?
AI in its current state presents a pact with the devil for all but the most disciplined and passionate of us. It makes it far too easy to resign the use of your critical mental faculties and to stagnate in the abilities you need to navigate our complex modern world - critical reading, synthesizing, and writing, to name a few of the most notable. Unrestrained use of tools that help us so immensely in these categories can bring nothing but slow demise in the end.
This thought pattern pairs nicely with the discussion of AI's effects on democracy. Hopefully the step from assuming the aforementioned society, with its rampant inability to reason critically about its surroundings, to saying that this is categorically bad for democracy isn't too large. Democracy, an imperfect form of government that is the best we have at the moment, only works well with an educated populace; an uneducated democracy governs on borrowed time. One can already see the paint start to peel. (There is a larger effect the Internet has on democracy that I'll leave out for now, but it's worth thinking about, as it's the one responsible for the current decline in our political reality.)
The unfortunate conclusion I reach when I think of all of this is that it comes down to the ability of governments and corporations to properly restrain this technology and foster its growth in a manner that benefits society. And that restraint is hard to see coming with our current setup - I say "hard to see" only to avoid the overly dramatic claim that it's impossible.
If you look at the history of the United States and truly observe the death grip that its baby, capitalism, has on its governance, you will find it hard to believe that this time will be any different from times past. There is far too much money and national-security concern at stake to do anything but put the pedal to the floor and rapidly build an empire in this wild west of AI. Perhaps this could have been a wonderful tool for humanity, one that allowed us to realize our collective dreams, but for the reasons above I believe that is unachievable with our current setup of governance and our current understanding of ethics, globally.