Comment by habosa
20 days ago
I worked in Big Tech and it changed my life financially so I can’t judge anyone else for doing it, but I will say that I had a moral reckoning while I was there and I am (right now) unwilling to go back.
At the time (2012-2022) the things about the business model that bothered me were surveillance culture, excessive advertising, and monopoly power. Internally I was also horrified at the abuse of “vendor/contractor” status to maintain a shadow workforce which did a lot of valuable work while receiving almost none of the financial benefits that the full-time workforce received.
3 years later all of those concerns remain, but for me they're a distant second behind the rise of AI. There's a non-zero chance that AI is one of the most destructive creations in human history, and I just can't allow myself to help advance it until I'm convinced that chance is much closer to zero. I know I'm in the minority, so the best-case scenario for me is that I'm wrong and everyone getting rich on AI right now has gotten rich for bringing us something good, not our doom.
I’m curious as to how you think it’ll be our doom. As for the ethics of it, there are two ways to look at it in my opinion. One is yours; the other is to accept that AI is coming and at least work to help your civilisation “win”. I doubt we’ll see any form of self-aware AI in our lifetimes, but the AI tools are obviously going to become extremely powerful. I suspect we’ll continue heading down the “cyberpunk” road leading to the dystopian society we read about in the ’80s and ’90s, but that’s not really the doom of mankind as such. It just sucks.
As a former history major I do think it’ll be interesting to follow how AI shifts the power balances. We like to pretend that we’ve left the “might makes right” world behind, but AI is an arms race, and it’ll make some terrible weapons. Ethics aside, you’re going to want to have the best of those if you want your civilisation to remain capable of deciding which morals it wants to follow.
I don't think most people are primarily concerned about war applications, but about AI simply driving mass unemployment.
This even seems to be the exact goal of many, who probably imagine the next step would be some sort of basic income to keep things moving, but the endless side effects of this transition make it very unclear whether that is even economically feasible.
At best, it would seem to be a return to de facto feudalism. I think 'The Expanse' offered a quite compelling vision of what "Basic" would end up being like in practice.
Those who are seen (even if through no fault of their own) as providing no value to society, existing only to consume, will inevitably be marginalized and ultimately regarded as something less.
To expand on ‘The Expanse’ and its “Basic”:
The Expanse is a 9+ book series, winner of several literary awards, set several centuries in the future when humanity has spread across the solar system.
Roughly one half of the population of Earth, or 30 billion people, live on basic assistance from the United Nations. The only way to leave Basic is to get a job or an education, and there are significant hurdles on both of those routes. People on Basic do not get money, but they do receive everything they need to live. A barter economy exists among those on Basic, and some small industry is available to them if it flies under the government’s radar. An unspecified number of undocumented people do not receive Basic and may resort to crime to make ends meet.
Proper UBI is absolutely economically feasible if we start taxing things like, say, capital gains properly.
"The Expanse" shows the kind of UBI that Big Tech bros would like to see, absolutely. Which is to say, the absolute minimum you need to give people to prevent a revolt and maintain a status quo. But you shouldn't assume that this is the only possibility.
As far as "seen as providing no value to society", that is very much a cultural thing and it is not a constant, so it can and should be changed. OTOH if we insist on treating that particular aspect as immutable, our society is always going to be shitty towards a large number of people in one way or another.
> how you think it’ll be our doom
There are two main possibilities:
1) Self-aware AI with its own agency / free will / goals. This is much harder to predict and is IMO less likely with the current approaches, so I'll skip it.
2) A"I" / ML tools will become a force multiplier, and the powerful will become even more powerful. Powerful people and organizations (including governments) already have access to much more data about individuals than ordinary citizens do. But currently you usually need loyal people to sift through that data and to act on it.
With advanced ML tools, you can analyze every person's entire personality, beliefs, social status, etc. And if they align with your goals, you can promote them, if not, you can disadvantage them.
2a) This works if you're a rich person deciding whose medical bills you will pay (and one such person was recently killed for abusing this power).
2b) This works if you're a rich person owning a social network, deciding whose opinions will be more or less visible to others. You can shape the entire public discourse and make entire opinions and topics invisible to those who have not already been exposed to them. For example, one such censored topic in Western discourse is when the use of violence is justified and moral. The West, at least for now, is willing to celebrate moral acts of violence in the past (the French Revolution, the American Civil War, the assassination of Reinhard Heydrich), but discussion of situations where violence should be used in recent times is taboo and "banned" on many centrally moderated platforms.
2c) And obviously nation states have an insane amount of info on both their own citizens and those of other nation states. This already leads to selective enforcement (everybody is guilty of something), and it can get even worse as a government becomes more totalitarian. Can you imagine current China ever having a revolution and reinstating democracy? I can't, because any dissent will be stopped before it reaches critical mass.
So states which are currently totalitarian are very unlikely to restore democracy, and states which are currently democracies are prone to increasingly totalitarian rule through manipulation by rich individuals (see point 2b).
I’m expecting more of a cyberpunk reality where governments continue to lose power to massive corporations run by oligarchs. You could argue that the aristocracy never really left, but it’s certainly been consolidating power since 2000. Part of the aristocracy still believes in the lessons which led to the Enlightenment, and I suspect many other families will re-learn them in the coming decades. It’s not exactly fun to be an oligarch in a totalitarian country after all, and throughout history the most successful have always gravitated toward more “free” societies if they could. Because it’s better to live in the Netherlands than to have the king of Spain seize your riches. I’m not too worried AI will give us social credit points the way they have in China. I am European, so that helps, and the US would frankly have a hard time making things worse for its lower-class citizens anyway. If anything, the increased access to knowledge might even help educate many people on just how bad they have it.
I’m sure you’ll see bad actors who use AI to indoctrinate people, but at least as long as there is so much competition, it’ll be harder to do than in more totalitarian states, where LLM answers are propaganda.
I think the AI investor class wants to find a way to have it replace a large amount of human labor. If they succeed, I think it will irreparably damage our society, which, like it or not, only works well when people have jobs.
I’m also very worried about AI spam and impersonation eroding all interpersonal trust online, which has obvious disastrous consequences.
While unemployment certainly deserves a conversation of its own, I think the more overlooked effects on education and democracy will, by themselves, dig our society even deeper into a hole.
I'm rather fearful for the future of education in this current climate. The tools are already powerful enough to wreak havoc, and they haven't stopped growing yet! I don't think we'll properly know the effect for some years yet, not until the kids currently in 5th, 6th, or 7th grade start entering the workforce. While the optimist in me would like to see AI as this great equalizer, a personal tutor for everyone, an equal-opportunity deliverance, I think we've fumbled it for all but a select few. Certainly there will be cases of great success, students who leverage AI to its fullest extent. But I urge you to think of the other side of the pie. How will that student respond to this? And how many students are really in this section?
AI in its current state presents a pact with the devil for all but the most disciplined and passionate of us. It makes it far too easy to resign all use of your critical mental faculties and to stagnate in the various abilities you need to navigate our complex modern world. Critical reading, synthesizing, and writing are just a few of the most notable examples. Unrestrained use of tools that help us so immensely in these areas can bring nothing but slow demise in the end.
This thought pattern pairs nicely with the discussion of AI's effects on democracy. Hopefully the step from assuming the aforementioned society, with its rampant inability to reason critically about its surroundings, to saying that this is categorically bad for democracy isn't too large. Democracy, an imperfect form of government but the best we have at this moment, only works well with an educated populace. An uneducated democracy governs on borrowed time. One can already see the paint start to peel. (There is a larger effect the Internet has on democracy that I'll leave out for now, but it's worth thinking about, as it's the one responsible for the current decline in our political reality.)
The unfortunate conclusion I reach when I think of all of this is that it comes down to the ability of governments and corporations to properly restrain this technology and foster its growth in a manner that benefits society. And that restraint is hard to see coming with our current setup. I say "hard to see" to avoid being overly dramatic and calling it impossible.
If you look at the history of the United States, and truly observe the death grip that its baby, capitalism, has on its governance, you find it hard to believe that this time will be any different from times past. There is far too much money and national-security concern at stake here to do anything but put the pedal to the floor and rapidly build an empire in this wild west of AI. The unfortunate conclusion is that this could perhaps have been a wonderful tool for humanity, one that allowed us to realize our collective dreams, but for the reasons stated above I believe that is unachievable with our current setup of governance and our understanding of ethics, globally.
> There’s a non-zero chance that AI is one of the most destructive creations in human history
Geoffrey Hinton was interviewed by Sajid Javid on BBC Radio 4 on Friday [1] and was considerably more pessimistic. If I heard it correctly, he reckoned there is a 10% to 30% chance that AI wipes us out within the next 30 years.
[1] https://www.bbc.co.uk/programmes/p0kbsg05
So you worked there 10 years, made your stack of cash, and then had a moral reckoning? So brave.
Not ideal I guess but better than no reckoning at all.
Have you ever been offered a Big Tech job that pays three times your current salary, and did you decline it?
It's easy to criticise others when you are not confronted with the situation yourself.
100%. However, if you are good enough to get such an offer from Big Tech, you won't really need to worry about your finances anyway…
Younger me would 100% have taken the money. Older-and-wiser me would not even apply to begin with.
I specifically said I don’t judge others for working there now and just wanted to add to the conversation by explaining how my thoughts changed over time. I do not think I was or am brave for any of my career decisions.
I was 20 when I started working in big tech and the reputation of those companies was at its absolute peak. I had a lot to learn.