Comment by Lerc
1 day ago
One of my formative impressions of AI came from the depiction of the Colligatarch from Alan Dean Foster's The I Inside.
The AI in the book almost feels like the main message masquerading as a subplot.
Asimov knew the risks, and I had assumed until fairly recently that the lessons and explorations he had imparted into the Robot books had provided a level of cultural knowledge of what we were about to face. Perhaps the movie of I, Robot was a warning of how much the signal had decayed.
I worry that we are sociologically unprepared, and sometimes it seems wilfully so.
People discussed this potential in great detail decades ago; indeed, the Sagan reference at the start of this post points to one of the significant contributors to the conversation. But it seems that by the time it started happening, everyone had forgotten.
People are talking in terms of who to blame, what will be taken from me, and inevitability.
Any talk of a future we might want is dismissed as idealistic or hype. Any depiction of a utopian future is met with derision far too often. Even worse, the depiction can be warped into an evil caricature of "what they really meant".
How do we know what course to take if we can't talk about where we want to end up?
I think people broadly feel like all of this is happening inevitably or being done by others. The alignment people struggle to get their version of AI to market first - the techies worry about being left behind. No one ends up being in a position to steer things or have any influence over the future in the race to keep up.
So what can you and I do? I know in my gut that imagining an ideal outcome won't change what actually happens, and neither will criticizing it really.
In the large, ideas can have a massive influence on what happens. This inevitability that you're expressing is itself one of those ideas.
Shifts of dominant ideas can only come about through discussions. And sure, individuals can't control what happens. That's unrealistic in a world of billions. But each of us is invariably putting a little bit of pressure in some direction. Ironically, you are doing that with your comment even while expressing the supposed futility of it. And overall, all these little pressures do add up.
How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources? It would require all these billions to collectively understand the problem and align towards a common goal. I don't think this was a design feature, but globalising the economy created hard dependencies, and the internet's global village created a common mind share. It's now harder than ever to effect a revolution, because it needs to happen everywhere at the same time, with billions of people.
>So what can you and I do?
Engage respectfully, try to see other points of view, and try to express your own. I decided some time ago that I would attempt to continue conversations on here to at least get people to understand that other points of view can be held by rational people. It has certainly cost me karma, but I hope there has been a small amount of influence. Quite often people do not change their minds by losing arguments, but by seeing other points of view and then being given time to reflect.
>I know in my gut that imagining an ideal outcome won't change what actually happens
You might find that saying what you would like to see doesn't get heard, but you just have to remember that you can get anything you want at Alice's Restaurant (if that is not too oblique of a reference)
Talk about what you would like to see. If others would like to see that too, then they might join you.
I think most people working in AI are doing so in good faith and are doing what they think is best. There are plenty of voices telling them how not to do it, and many of those voices are contradictory. The instances of people saying what to do instead are much fewer.
If you declare that events are inevitable then you have lost. If you characterise Sam Altman as a sociopath playing the long game of hiding in research for years, just waiting to pounce on the AI technology that nobody thought was imminent, then you have created a world in your mind where you cannot win. By imagining an adversary without morality it's easy to abdicate the responsibility of changing their mind; you can simply declare it can't be done. Once again choosing inevitability.
Perhaps try and imagine the world you want and just try and push a tiny fraction towards that world. If you are stuck in a seaside cave and the ocean is coming in, instead of pushing the ocean back, look to see if there is an exit at the other end, maybe there isn't one, but at least go looking for it, because if there is, that's how you find it.
Hypothetically, however, if your adversary is indeed without morality, then failing to acknowledge that means working with invalid assumptions. Laboring under a falsehood will not help you. Truth gives you clear eyed access to all of your options.
You may prefer to assume that your opponent is fundamentally virtuous. It's valid to prefer failing under your own values rather than giving them up in the hopes of winning. Still, you can at least know that is what you are doing, rather than failing and not even knowing why.
My interpretation is that Asimov assumed that humans would require understanding at the deepest levels of artificial intelligence before it could be created. He built the robot concepts rooted in the mechanical world rather than the world of the integrated circuit.
He never imagined, I suppose, that we would have the computing power necessary to just YOLO-dump the sum of all human knowledge into a few math problems and get really smart sounding responses generated in return.
The risks can be generalized well enough. Man’s hubris is its downfall etc etc.
But the specific issues we are dealing with have little to do with us feeling safe and protected behind some immutable rules that are built into the system.
> He built the robot concepts rooted in the mechanical world
He was idealistic even at the time. The 3 Laws were written 30 years after some of the earliest robots were aiming artillery barrages at human beings.
When Asimov wrote those works there was optimism that Symbolic artificial intelligence would provide the answers.
>But the specific issues we are dealing with have little to do with us feeling safe and protected behind some immutable rules that are built into the system
If your interpretation of the Robot books was that they suggested a few immutable rules would make us safe and protected, you may have missed the primary message. The overarching theme was an exploration of what those laws could do, and how they may not necessarily correlate with what we want or even perceive as safe and protected. If anything, the rules represented a starting point, and the books presented a challenge to come up with something better.
Anthropic's work on autoencoding activations down to measurable semantic points might prove a step towards that something better. The fact that they can do manipulations based upon those semantic points does suggest something akin to the laws of robotics might be possible.
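To make that idea a little more concrete, here is a minimal sketch of what "autoencoding activations down to measurable semantic points" looks like in practice: a sparse autoencoder trained to decompose a model's internal activations into a wider set of mostly-inactive features, which can then be individually dialed up or down. The sizes, names, and the steering step below are illustrative assumptions on my part, not Anthropic's actual code.

```python
# Illustrative sketch only: a tiny sparse autoencoder over model activations.
# All dimensions, names, and coefficients here are hypothetical.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=512, d_features=4096, l1_coeff=1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # activations -> feature codes
        self.decoder = nn.Linear(d_features, d_model)   # feature codes -> reconstruction
        self.l1_coeff = l1_coeff

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))       # sparse, non-negative codes
        reconstruction = self.decoder(features)
        recon_loss = (reconstruction - activations).pow(2).mean()
        sparsity_loss = self.l1_coeff * features.abs().mean()  # push most features to zero
        return features, reconstruction, recon_loss + sparsity_loss

# "Manipulation based on a semantic point": boost one learned feature,
# then decode back into activation space before resuming the forward pass.
def steer(sae, activations, feature_idx, strength=5.0):
    features, _, _ = sae(activations)
    features[..., feature_idx] += strength
    return sae.decoder(features)
```

Whether features learned this way line up cleanly enough with human-meaningful concepts to serve as law-like constraints is, of course, exactly the open question.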
When it comes to alignment, the way many describe it, it is simply impossible, because humans themselves are not aligned. Picking a median, mean, or lowest common denominator of human alignment would be a choice that people probably cannot agree on. We are unaligned even on how we could compromise.
In reality, if you have control over what AI does, there are only two options.
1. We can make AI do what some people say,
2. We can make them do what they want (assuming we can make them want)
If we make them do what some people say, that hands the power to those who have that say.
I think there will come a time when an AI will perceive people doing something wrong that most people do not think is wrong, and the AI will be the one that is right. Do we want it to intervene or not? Or are we instead happy with a nation developing superintelligence that is subservient to the wishes of, say, Vladimir Putin?
As I alluded to earlier, to me the books were more an exploration into man’s hubris to think control could be asserted by failed attempts to distill spoken and unspoken human rules into a few “laws”.
Giskard and Daneel spend quite a lot of time discussing the impenetrable laws that govern human action. That sounds more like what is happening in the current frontier of AI than mechanical trains of thought that only have single pathways to travel, which is closer to how Asimov described it in the Robots books.
Edit: I feel like I’m failing to make my point clearly here. Sorry. Maybe an LLM can rephrase it for me. (/s lol)
We've had many decades of technology since Asimov started writing about robots, and we've seen almost all of it used to make the day-to-day experience of the average worker-bee worse. More tracking. More work after hours. More demands to do more with less. Fewer other humans to help you with those things.
We aren't working 4 hour days because we no longer have to spend half the day waiting on things that were slower pre-internet. We're just supposed to deliver more, and oh, work more hours too since now you've always got your work with you.
Any discussion of today's AI firms has to start from the position of these companies being controlled by people deeply rooted in, and invested in, those systems and the negative application of that technology towards "working for a living" to date.
How do we get from there to a utopia?
>How do we get from there to a utopia?
How to get there:
1. Define the utopia in more detail.
2. Make the case that this is a preferable state. Make people want it.
3. Make the case that it is sustainable once achieved.
4. Identify specific differences between the preferred destination and where we are now.
5. Avoiding short-term and temporary effects, work towards changing those differences to match the destination, even if that is only proclaiming that these changes are what you want.
6. Show how those changes make us closer to the destination that people want.
Some of these are hard problems, but I don't think any are intractable. I think they don't get done because they are hard, and opposing something is easier. Rather than building something you want, you can knock down something you don't like. Sure, that might get you closer to your desired state if you consider nothingness to be better than the undesired, but without building you will never get there.
If you want everyone to live in a castle, build a castle and invite everybody over. If you start by destroying huts you will just be making adversaries. The converse is true also, if you want everyone to live in huts, build more huts and invite everyone over. If they don't come it's because you haven't made the case that it is a preferable state. Knocking down the castle is not going to convince them of that.
To highlight that this isn't an exaggeration:
"U.S workers just took home their smallest share of capital since 1947"
https://fortune.com/2026/01/13/us-workers-smallest-labor-sha...
As an AI researcher who regularly attends NeurIPS, ICLR, ICML, and AAAI (where I am shitposting from), I can tell you the median AI researcher does not read science fiction, cyberpunk, etc. Most of them haven't read a proper book in over a decade.
Don't expect anyone building these systems to know what Blade Runner is, or "I Have No Mouth, and I Must Scream", or any other great literature about the exact thing they are working on!
People can't even have a conversation about any kind of societal issues these days without pointing at the other political tribe and casting aspersions about "what they really meant" instead of engaging with what's actually being said.
Forgetting that if you really can hear a dogwhistle, you're also a dog.
Where we want to end up? Normies are still talking about the upcoming AI bubble pop in terms of tech basically reverting to 2022. It's wishful thinking all the way down.
Reverting to a world without deployed AI is in fact where "normies", that is most people without capital, want to end up.
The current AI promise for them goes something like: "Oops this chittering machine will soon be able to do all you're good at and derive meaning from. But hey, at least you will end up homeless and part of a permanent underclass."
And the people building it are (rightfully) worried about it killing humanity. So why do we have to continue on this course again? An advanced society would at this point decide to pause what they are doing and reconsider.