Comment by flr03
4 days ago
I'm not scared of AI recommending nuclear strikes; I'm scared of the human behind the keyboard delegating reasoning and responsibility to something they think is always correct, something that can hide bias and flaws better than anything else.
Some of the most reassuring and scariest things you can read are about the incidents that have already occurred where computers said "launch all the nukes" and the humans refused. On the one hand, good news! We have prior art showing that humans don't launch all the nukes just because the computers or procedures say to. Bad news: it's been skin-of-our-teeth multiple times already.
https://www.warhistoryonline.com/cold-war/refused-to-launch-... - This isn't even the incident I was searching for to reference! This one was news to me.
https://en.wikipedia.org/wiki/Stanislav_Petrov#Incident - This is the one I was looking for.
> We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to.
Previously, no one had spent trillions of dollars trying to convince the world that those computers were "Artificial Intelligence".
Of course they did. That's literally the premise of WarGames (1983). You should actually be somewhat reassured that we aren't living in the era of Dr. Strangelove, when figures in the military-industrial complex held significantly more insane beliefs about what computer systems and nukes could do.
There was a time when people wanted to dig tunnels with nukes https://en.wikipedia.org/wiki/Project_Plowshare
7 replies →
They had to do with "state-of-the-art radars", "military-grade communication systems", etc.
1 reply →
Or "alignment" which means "let's ensure the AIs recommend launching nukes only when it makes sense to, based on our [assumed objective] values"
Yeah... the more I learn about nuclear weapon history the more I discount our society's long term viability. There are way too many frighteningly close calls already, and there are probably others that aren't widely known.
It's not just nukes that are concerning either. If we're unable to mitigate such a visceral existential risk, we aren't going to do any better with more subtle vulnerabilities. AI, of course, accelerates some risks and introduces new ones.
This doesn't mean we're doomed or anything, but if I had a magic portal to peer a few hundred years into the future and saw that humans had been obliterated by nukes, runaway AI, some engineered supervirus, runaway climate change, or some other manufactured risk, I would be completely unsurprised.
We shouldn't be the least bit surprised no human has complied so far.
If they had, then we wouldn't be having this conversation. For all we know, there may be a vast multiverse of universes, some with humans, and we would only find ourselves having this conversation in one of the universes where no human pressed the button.
By that logic, it may actually be pretty common for rabbits to swallow the sun. We just haven't seen it happen because we're in the wrong universe and would've died if it had happened in ours.
Anthropic Principle
> We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to.
This relies on processes being in place to ensure that a human will always make the final decision. What about when that gets taken away?
I find it hard to imagine that the people in a position to kill those processes could ever be that zealously in love with AI, but recent events have given me a tiny bit of doubt.
1 reply →
I briefly went down a rabbit hole of watching videos about attempts to intercept ballistic missiles and hypersonic glide weapons. Pretty interesting stuff, like decoys deployed in space, but the takeaway seemed to be not good: 100% interception can't be guaranteed.
A missile will always be cheaper than a missile interceptor, and the interceptors will never achieve a 1:1 kill ratio. Building a missile interceptor system is a good way to get your strategic opponent to build a bigger stockpile.
2 replies →
I hope humans in charge are as wise now as they were then.
Surely that’s the definition of a quixotic hope.
I am scared of two things.
First, people being rubber stamps for AI recommendations. And yes, it is not unreasonable that in a dire situation someone will outsource their judgment (day).
Second, someone at the Pentagon connecting the red button to OpenClaw. "You are right, firing nukes was my mistake. Would you like to learn more facts about nukes before you evaporate?"
If you think humans are going to delegate reasoning and responsibility to something, shouldn’t you also be concerned about the sorts of recommendations that thing is going to make?
If you found out the Pentagon was using a magic 8 ball to make important war decisions, what would you want to fix: our military leadership, or the inner workings of the toy?
One of those sounds a lot easier than the other. The magic 8 ball toy company would also probably be pretty incentivized to not die in a nuclear holocaust.
4 replies →
The speed with which my technical cow-orkers and friends have started relying on the "AI Overview" only, in lieu of following any links, in search engine results (let alone not using search engines at all over chatbots) tells me reasoning and responsibility will be outsourced as soon as possible.
Humans are fundamentally lazy. The brain is an "expensive" organ to use.
One can try it themselves, for Claude is fine at waging war [1]. Notice the thoughtful UX, including having to type "I ACCEPT FULL RESPONSIBILITY".
[1]: https://nitter.poast.org/elder_plinius/status/20264475874910...
Be not scared of humans behind keyboards. Be scared of humans with no keyboard, no desk, and no future beyond jihad, now getting nukes nearby, because the age of empires has returned.
Trump's Golden Dome is literally advertised to help the U.S. win a nuclear war by leveraging AI.
Elon's involvement in the nuclear military complex https://www.mintpressnews.com/pentagon-recruiting-elon-musk-...