Comment by Animats
2 years ago
Most of the things people are worried about AI doing are the things corporations are already allowed to do - snoop on everybody, influence governments, oppress workers, lie. AI just makes some of that cheaper.
Turning something that we're already able to do into something we're able to do very easily can be extremely significant. It's the difference between "public records" and "all public records about you being instantly viewable online." It's also one of the subjects of the excellent sci-fi novel "A Deepness in the Sky," which is still great despite making some likely bad guesses about AI.
And, just as in politics, the strategy is to redefine the thing you want to achieve (in this case, total control of a technology) as something else that's bad, so that people are distracted from your actual goal, which is exactly what you're describing as something else.
Politicians who point fingers at other politicians for being corrupt or incompetent, while being exactly that themselves, use the same strategy.
Power and manipulation. Nothing new under the sun. What's new, though, is that we can see in plain sight how corporations control politics. Literally, this can be documented with git-commit-history accuracy: thousands upon thousands of people repeating the exact same phrases defending OpenAI and its "revolutionary" product, fear mongering, political lobbying, manufactured threats, and of course a cure that only they can provide, and so on. I would not let people who use such tactics near an email account, let alone AI policy making.
If anything, LLMs can help process vast troves of customer data, communications, and metadata more effectively than ever before.
And faster than humans can police.
Seems like a legitimately good reason to get a tourniquet on that thing now.
Nukes are the same as guns, just makes it cheaper.
A snowflake really isn't harmful.
A snowball probably isn't harmful unless you do something really dumb.
A snow drift isn't harmful unless you're not cautious.
An avalanche, well that gets harmful pretty damned quick.
These things are all snow, but suddenly at some point scale starts to matter.
I love this way of explaining it. I've been calling it the programmer's fallacy -- "anything you can do, you can do in a for loop."
I think in a lot of ways we all struggle with the fact that some things change their nature depending on context and scale. Like, if you kill a Frenchman on purpose, that's murder; if you killed him because he attacked you first, it's self defense; if you killed him because he was convicted of a crime, that's an execution; if you killed him because he's French, that's a hate crime; but if you're at war with France, that's killing an enemy combatant; but if he's not in the military, that's a civilian casualty; and if you do that a lot, it becomes a war crime; and if you kill everyone who's French, it's a genocide.
Yeah, but ski runs with human-made snow are fine.
How do we know which slippery slope we are on?
Nukes are not cheap. It is cheaper to firebomb. I would love it if the reason nukes haven't been used were empathy or humanitarian concern, but it is strictly about money, optics, psychology, and practicality.
You don't want your troops to have to deal with the aftermath of a nuked area. You want to use the psychological terror to dissuade someone from invading you, while you are invading them or others. See Russia's approach.
Or you are a regime and want to stay in power. Having them keeps you in power; using them, or crossing the line of suggesting you'll use them, will bring international retaliation and your removal. (See Iraq.)
And yet guns kill far, far, far more people than nukes.
The ironic thing is that many of the individuals now clamoring for more regulation have long claimed to be free-market libertarians who think regulation is "always" bad.
Evidently they think regulation is bad only when it puts their profits at risk. As I wrote elsewhere, the tech glitterati asking for regulation of AI remind me of the very important Fortune 500 CEO Mr. Burroughs in the movie "Class:"
Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."
Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."
Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."
---
Source: https://www.youtube.com/watch?v=nM0h6QXTpHQ
Absolutely. Those folks arguing for AI regulation aren't arguing for safety – they're asking the government to build a moat around the market segment propping up their VC-funded scams.
Who is "those folks"? The ones I know of have been complaining about how the term "AI safety" has changed meaning from "don't kill everyone" to "don't embarrass the corporation".
The biggest players in AI haven’t been VC-funded for decades. Unless you mean their customers are VC-funded, but even then startups are a much smaller portion of their revenue than Fortune 500.
Their motivations may be selfish, but that doesn't mean regulation of AI is wrong. I'd prefer there be a few heavily regulated and/or publicly owned bodies, operating in the public eye, that can use and develop these technologies, rather than literally anyone with a powerful enough computer. Yeah, it's anti-competitive, but competition isn't always a good thing.