Comment by roxolotl

9 months ago

This is a rare piece on AI that takes a coherent middle-of-the-road viewpoint. Saying both that AI is "normal" and that it will be transformative is a radical statement in today's discussions about AI.

Looking back at other normal but transformative technologies (steam power, electricity, nuclear physics, the transistor, etc.), you do actually see similarly stratified opinions. Most of those were surrounded by an initial burst of enthusiasm and pessimism and followed a hype cycle.

The reason this piece is compelling is that taking a nuanced middle-of-the-road viewpoint is difficult during the initial hype phase. Maybe AI really is some "next step," but it is significantly more likely that that belief is propped up by science fiction, and it's important to keep expectations in line with history.

I wouldn't call it a "middle" road so much as a "nuanced" road (or even a "grounded" road, IMO).

If it's a "middle" road, what is it in the middle of (i.e., what "scale")? And how so?

I'm not trying to be pedantic. I think our tendency to label nuanced, principled positions as "middle" encourages an inherent "hierarchy of ideas," which often leads to applying some sort of...valence to opinions and discourse. And I worry that makes it easier for people to "take sides" on topics, which leads to more superficial, myopic, and repetitive takes that are much more about those "sides" than about the pertinent evidence, facts, reality, whatever.

  • > If it's a "middle" road, what is it in the middle of (i.e., what "scale")?

    That's pretty clear. We already have two "sides": AI is the latest useless tech boondoggle that consumes vast quantities of money and energy while not actually doing anything of value vs. AI is the dawn of a new era of human existence that will fundamentally change all aspects of our world (possibly with the additional "and lead to the AGI Singularity").

    • You have captured everything that is annoying about talking about AI/ML/statistics and everything related these days. I'm happy to forget those two sides exist and try to see applications of these tools to various problem spaces.

    • The middle of that is recognizing that AI is a transformative technology that is here to stay, but one that is also hyped into a frenzy, à la the dot-com bubble.

    • No, this is not the middle that should be considered.

      Nobody really thinks that AI is useless anymore. We disagree on timelines, and we disagree on how useful it will be, but the extremes are not "useful" versus "useless" (some people believe it will not change anything, but that's pretty fringe). The real extremes here are "evil god AGI" and "benevolent god AGI." This piece takes the road of saying: it's pretty powerful and useful, but it's not something godlike, because it's not everywhere, all the time, at once. It will change the world, but over a pretty long timespan, and it will feel like normal technology. It will be used for evil and for good.

      6 replies →

AI will transform everything, and after that life will continue as normal, so except for the details, it's not a big deal.

Going to be a simultaneously wild and boring ride.

  • I think this is my take as well. Like, the web and smartphones and social media have transformed everything ... and also life goes on.

    • Right, but society changes around this stuff pretty substantially. A lot of the discourse gets spent debating whether we like the changes or not, and the solutions to the new problems are usually very pie-in-the-sky or completely unachievable. Like, we can't really put the social media genie back in the bottle; it's here, and society has reshaped around it. But banning phones from schools is pretty tangible, and I'd probably argue it's for the betterment of students.

      1 reply →

  • > AI will transform everything, and after that life will continue as normal

    100%. It just happened with the advent of the internet and then smartphones.

    • I wrote this a few days ago:

      This is a permanent part of the software industry now. It will just stop being such a large part of the conversation.

      Once upon a time, all software was local, with no connectivity. The Internet commercialised and became mainstream, and suddenly a wave of startups were "X, but online!" Adding connectivity was a foundational pillar you could base a business around. Once that permeated the industry and became the norm, connectivity was no longer a reason to start a business, but it was still hugely popular as a feature. And then further down the line, everybody just assumes all software can use the Internet for even the smallest, most mundane things. The Internet never "died down"; it just stopped being the central thing driving innovation and became omnipresent in software development.

      Mobile was the same. Smartphones took off, and suddenly a wave of startups were "X, but as an app!" Having an app was a foundational pillar you could base a business around. Once that permeated the industry and became the norm, being on mobile was no longer a reason to start a business, but it was still hugely popular as a feature. And then further down the line, everybody just assumes you're available on mobile for even the smallest, most mundane things. Mobile never "died down"; it just stopped being the central thing driving innovation and became omnipresent in software development.

      Now we've got intelligence as the next big thing. We're still in the "X, but smart!" phase. It's never going to "die down." You're just going to see intelligence become a standard part of the software development process. Using AI in a software project will become as mundane as using a database, or accessing the Internet. You aren't going to plan a project around it so much as just assume it's there.

      https://www.reddit.com/r/ExperiencedDevs/comments/1jylp6y/ai...

    • I think the problem I have is that it still doesn't feel like society has really caught up to the impact that the internet and smartphones have had, and it seems we keep getting disrupted faster and faster.

      I know that there have always been disruptive technologies, but I think the rate of "technologies that are disrupting everything," as opposed to "technologies that are disrupting just one thing," has been kind of crazy.

      3 replies →

    • And thanks to both, future trends are getting established faster and faster. It took about a decade for the internet to go mainstream, maybe five years for smartphones, and now just a year or two for LLMs. It’s pretty fascinating watching that process get more and more compressed over a lifetime.

      3 replies →

Add birth control to that list too.

After these technologies, life is certainly "normal" in the sense that "life goes on," but the social impacts are most definitely new and transformative. Fast travel, instantaneous direct and mass communication, and control over family formation have all had a massive impact on how people live and interact, and then transformed those patterns again.

  • Humanae Vitae by Pope Paul VI, July 25, 1968

    https://www.vatican.va/content/paul-vi/en/encyclicals/docume...

    Consequences of Artificial Methods

    17. Responsible men can ... first consider how easily this course of action could open wide the way for marital infidelity and a general lowering of moral standards. ... [A] man who grows accustomed to the use of contraceptive methods may forget the reverence due to a woman, and, disregarding her physical and emotional equilibrium, reduce her to being a mere instrument for the satisfaction of his own desires, no longer considering her as his partner whom he should surround with care and affection.

The surest defense against fashionable nonsense is a sound philosophical education and a temperament disinclined to hysteria. Ignorance leaves you wide open to all manner of emotional misadventure. But even when you are in possession of the relevant facts — and a passable grasp of the principles involved — it requires a certain moral maturity to resist or remain untouched by the lure of melodrama and the thrill of believing you live at the edge of transcendence.

(Naturally, the excitement surrounding artificial intelligence has less to do with reality than with commerce. It is a product to be sold, and selling, as ever, relies less on the truth than on sentiment. It’s not new. That’s how it’s always been.)

  • > sound philosophical education and a temperament disinclined to hysteria

    Good, sound common sense suffices: the ability to go "dude, that's <whatever it is>" and to prefer a clear idea of reality to intoxication... That should not be hard to ask for and obtain.

    > it requires a certain moral maturity to

    Same thing.

    • It's a fair point, broadly speaking. In which case, what we're observing in the tech sector is not merely an oversight but a pervasive absence of basic common sense.

From the article:

====

History suggests normal AI may introduce many kinds of systemic risks

While the risks discussed above have the potential to be catastrophic or existential, there is a long list of AI risks that are below this level but which are nonetheless large-scale and systemic, transcending the immediate effects of any particular AI system. These include the systemic entrenchment of bias and discrimination, massive job losses in specific occupations, worsening labor conditions, increasing inequality, concentration of power, erosion of social trust, pollution of the information ecosystem, decline of the free press, democratic backsliding, mass surveillance, and enabling authoritarianism.

If AI is normal technology, these risks become far more important than the catastrophic ones discussed above. That is because these risks arise from people and organizations using AI to advance their own interests, with AI merely serving as an amplifier of existing instabilities in our society.

There is plenty of precedent for these kinds of socio-political disruption in the history of transformative technologies. Notably, the Industrial Revolution led to rapid mass urbanization that was characterized by harsh working conditions, exploitation, and inequality, catalyzing both industrial capitalism and the rise of socialism and Marxism in response.

The shift in focus that we recommend roughly maps onto Kasirzadeh’s distinction between decisive and accumulative x-risk. Decisive x-risk involves “overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence,” whereas accumulative x-risk refers to “a gradual accumulation of critical AI-induced threats such as severe vulnerabilities and systemic erosion of econopolitical structures.” ... But there are important differences: Kasirzadeh’s account of accumulative risk still relies on threat actors such as cyberattackers to a large extent, whereas our concern is simply about the current path of capitalism. And we think that such risks are unlikely to be existential, but are still extremely serious.

====

That tangentially relates to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." Because as our technological capabilities continue to change, it becomes ever more essential to revisit our political and economic assumptions.

As I outline here: https://pdfernhout.net/recognizing-irony-is-a-key-to-transce... "There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" agencies [and economic corporations] are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ... The big problem is that all these new war machines [and economic machines] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political [and economic] mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."

A couple of Slashdot comments by me from Tuesday, linking to things I have posted on risks from AI and other advanced tech -- and ways to address those risks -- going back to 1999:

https://slashdot.org/comments.pl?sid=23665937&cid=65308877

https://slashdot.org/comments.pl?sid=23665937&cid=65308923

So, AI just cranks up an existing trend of technology-as-an-amplifier to "11". And as I've written before, if our path out of any singularity may have a lot to do with our moral path going into it, then we really need to step up our moral game right now to make a society that works better for everyone in healthy, joyful ways.

  • The idea of abundance vs. scarcity makes sense at the outset. But I have to wonder where all this alleged abundance is hiding. Sometimes the assumptions feel a bit like “drill baby drill” to me, without figures and projections behind them. One would think that if there were much untapped capacity in resources today, it would get used up. We can look at how agricultural yields improved over the 19th century and see how that led to higher populations but also less land under the plow and fewer hands working that land, versus keeping an equal amount of land under the plow and, I don't know, dumping the excess yield someplace where it isn't participating in the market?

    • I think to the parent's point it is as you say: there is already untapped capacity that isn't being used due to (geo)political forces maintaining the scarcity side of the argument. Using your agriculture example, a simple Google search will yield plenty of examples going back more than a decade of food sitting/rotting in warehouses/ports due to red tape and bureaucracy. So, we already can/do produce enough food to feed _everyone_ (abundance) but cannot get out of our own way to do so due to a number of human factors like greed or politics (scarcity).

      2 replies →