Comment by nemomarx
1 day ago
I think people broadly feel like all of this is happening inevitably or being done by others. The alignment people struggle to get their version of AI to market first - the techies worry about being left behind. No one ends up being in a position to steer things or have any influence over the future in the race to keep up.
So what can you and I do? I know in my gut that imagining an ideal outcome won't change what actually happens, and neither will criticizing it really.
In the large, ideas can have a massive influence on what happens. This inevitability that you're expressing is itself one of those ideas.
Shifts of dominant ideas can only come about through discussions. And sure, individuals can't control what happens. That's unrealistic in a world of billions. But each of us is invariably putting a little bit of pressure in some direction. Ironically, you are doing that with your comment even while expressing the supposed futility of it. And overall, all these little pressures do add up.
How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources? It would require all these billions to collectively understand the problem and align towards a common goal. I don't think this was a design feature, but globalising the economy created hard dependencies, and the internet's global village created a common mind share. It's now harder than ever to effect a revolution, because it needs to happen everywhere at the same time, with billions of people.
> How will this pressure add up and bubble to the sociopaths which we collectively allow to control most of the world's resources?
By things like: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
and: https://www.scstatehouse.gov/sess126_2025-2026/bills/4583.ht... (I know nothing about South Carolina, this was just the first clear result from the search)
To be clear - the sociopaths and the culture of resource domination that generates and enables them is the real problem.
AI on its own is chaotic neutral.
>So what can you and I do?
Engage respectfully, try to see other points of view, and try to express your own. I decided some time ago that I would attempt to continue conversations on here to try to at least get people to understand that other points of view could be held by rational people. It has certainly cost me karma, but I hope there has been a small amount of influence. Quite often people do not change their minds by losing arguments, but by seeing other points of view and then being given time to reflect.
>I know in my gut that imagining an ideal outcome won't change what actually happens
You might find that saying what you would like to see doesn't get heard, but you just have to remember that you can get anything you want at Alice's Restaurant (if that is not too oblique a reference).
Talk about what you would like to see. If others would like to see that too, then they might join you.
I think most people working in AI are doing so in good faith and are doing what they think is best. There are plenty of voices telling them how not to do it, and many of those voices are contradictory. The instances of people saying what to do instead are much fewer.
If you declare that events are inevitable then you have lost. If you characterise Sam Altman as a sociopath playing the long game of hiding in research for years, just waiting to pounce on the AI technology that nobody thought was imminent, then you have created a world in your mind where you cannot win. By imagining an adversary without morality, it's easy to abdicate the responsibility of changing their mind; you can simply declare it can't be done. Once again, choosing inevitability.
Perhaps try and imagine the world you want and just try and push a tiny fraction towards that world. If you are stuck in a seaside cave and the ocean is coming in, instead of pushing the ocean back, look to see if there is an exit at the other end, maybe there isn't one, but at least go looking for it, because if there is, that's how you find it.
Hypothetically, however, if your adversary is indeed without morality, then failing to acknowledge that means working from invalid assumptions. Laboring under a falsehood will not help you. Truth gives you clear-eyed access to all of your options.
You may prefer to assume that your opponent is fundamentally virtuous. It's valid to prefer failing under your own values to giving them up in the hopes of winning. Still, you can at least know that is what you are doing, rather than failing and not even knowing why.
This lacks any middle ground. I mean, is the world divided between adversaries and allies? Are assumptions either valid or invalid? Are humans either "fundamentally virtuous" or "without morality"?
Such a crude model doesn't help in navigating reality at all.