Comment by wokwokwok
2 years ago
…and the problem with that is what, exactly?
The only meaningful thing in this discussion is about people who want to make easy money, but can’t, because of the rules they don’t like.
Well, suck it up.
You don’t get to build a cheap, shitty factory that pours its waste into the local river either.
Rules exist for a reason.
You want the lifestyle and all the good things, but no rules. You can’t have your cake and eat it too.
/shrug
If China builds amazing AI tech (and they will) then the rest of the world will just use it. Some of it will be open source. It won’t be a big deal.
This “we must out compete China by being as shit and horrible as they are” meme is stupid.
If you want to live in China, go live in China. I assure you, you will not find it to be the lawless freehold of “anything goes” that you somehow imagine.
> Rules exist for a reason.
The trouble is sometimes they don't. Or they do exist for a reason but the rules are still absurd and net harmful because they're incompetently drafted. Or the real reason is bad and the rules are doing what they were intended to do but they were intended to do something bad.
> If China builds amazing AI tech (and they will) then the rest of the world will just use it.
Not if it's banned elsewhere, or they allow people to use it without publishing it, e.g. by offering it as a service.
And it matters a lot who controls something. "AI" potentially has a lot of power, even non-AGI AI -- it can create economic efficiency, or it can manipulate people. If an adversarial entity has greater economic efficiency, they can outcompete you -- the way the US won the Cold War was essentially by having a stronger economy. If an adversarial entity has a greater ability to manipulate people, that could be even worse.
> If you want to live in China, go live in China. I assure you you will not find it to be the law less free hold of “anything goes” that you somehow imagine.
But that's precisely the issue -- it's not an anarchy, it's an authoritarian competing nation state. We have to be better than them so the country that has an elected government and constitutional protections for human rights is the one with an economic advantage, because it isn't a law of nature that those things always go together, but it's a world-eating disaster if they don't.
> Or they do exist for a reason but the rules are still absurd and net harmful
Ok.
…but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?
Shouldn’t you be asking: does the law do a useful thing? Does it make the world better? Is it compatible with our moral values?
Organ harvesting.
Stem cell research.
Human cloning.
AI.
Slavery.
How can anyone stand there and go “well China will do it so we may as well?”
In an abstract sense this is a fundamentally invalid logical argument.
Truth on the basis of arbitrary assertion.
It. Is. False.
Now, certainly there is a degree of nuance with regard to AI specifically; but the assertions that we will be “left behind” and “outcompeted by China” are not relevant to the discussion of laws regarding AI and AI development.
What we do is not governed by what China may or may not do.
If you want to win the “AI race” to AGI, then investment and effort is required, not allowing an arbitrary “anything goes” policy.
China as a nation is sponsoring the development of its technology and supporting its industry.
If you want to beat that, opposing responsible AI won’t do it.
Of course you have to consider what other countries will do when you create your laws. The notion that you can ignore the rest of the world is both naive and incredibly arrogant.
There are plenty of technologies that absolutely do not "make the world better" but unfortunately must get built because humans are shitty to each other. Weapons are the obvious one, but not the only one. Often countries pass laws to encourage certain technologies or productions so as not to get outcompeted or outproduced by other countries.
The argument here about AI is exactly this sort of argument. If other countries build vastly superior AI by having fewer developmental restrictions, then your country may be at both a military disadvantage and an economic disadvantage, because you can be easily outproduced by countries using vastly more efficient technology.
You must balance all the harms and benefits when making laws, including issues external to the country.
> ...but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?
That depends on what "it" is. If it's slavery, and the US banning slavery while China doesn't causes there to be half as much slavery in the world as there would be otherwise, then yes, opposing the ban on that basis would be stupid.
But if it's research, the same worldwide demand for the results is still there and you're only limiting where the work can be done, which just means twice as much gets done in China if it isn't being done in the US. You're not significantly reducing the scope of the problem; you're just making sure that any benefits of the research are controlled by the country that can still do it.
> Now, certainly there is a degree of nuance with regard to AI specifically; but the assertions that we will be “left behind” and “outcompeted by China” are not relevant to the discussion of laws regarding AI and AI development.
Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.
This doesn't even have anything to do with the stated purpose of the law. You could pass a law requiring government code audits which cost a million dollars, and justify them based on any stated rationale -- you're auditing to prevent X bad thing, for any value of X. Meanwhile the major effect of the law is to exclude anybody who can't absorb a million dollar expense. Which is a bad thing even if X is a real problem, because that is not the only possible solution, and even if it was, it could still be that the cure is worse than the disease.
Regulators are easily and commonly captured, so regulations tend to be drafted in that way and to have that effect, regardless of their purported rationale. Some issues are so serious that you have no choice but to eat the inefficiency and try to minimize it -- you can't have companies dumping industrial waste in the river.
But when even the problem itself is a poorly defined matter of debatable severity and the proposed solutions are convoluted malarkey of indiscernible effectiveness, that is a sure sign something shady is going on.
A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.
> What we do is not governed by what China may or may not do.
Yes it is... Where the hell would you get the impression we don't change how we govern and invest based on what China does, is doing, or might do? Do you really think nations don't adjust their behavior and laws based on other countries' real or perceived actions? I can't imagine you're that ignorant.
> If you want to beat that, opposing responsible AI won’t do it.
Not opposing it guarantees you lose though.
False equivalency at its finest. This is more akin to banning factories and people rightly saying our rivals will use these factories to out produce us. This is also a much better analogy because we did in fact give China a lot of our factories and are paying a big price for it.
I think you underestimate the power foreign governments will have and will use if we are relying on foreign AI in our everyday lives.
When we ask it questions, an AI can tailor its answers to change people's opinions and how people think. Whoever controls it would have the power to influence elections, our values, our sense of right and wrong.
That's before we start allowing AI to just start making purchasing decisions for us with little or no oversight.
The only answer I see is for us all to have our own AIs that we have trained, understand, and trust. For me this means it runs on my hardware and answers only to me (and is not locked behind regulation).
> If China builds amazing AI tech (and they will) then the rest of the world will just use it. Some of it will be open source. It won’t be a big deal.
"Don't worry if our adversary develops nuclear weapons and we won't - it's OK we'll just use theirs"
> "Don't worry if our adversary develops nuclear weapons and we won't - it's OK we'll just use theirs"
Beneath this comment is hidden a truth that there is AI which can be used beneficially, AI which can be used detrimentally, AI which can be weaponized in warfare, and AI which can be used defensively in warfare. Discussions about policy and regulation should differentiate these, but also consider implications of how this technology is developed and for what purpose it could be employed.
We should definitely be developing AI to combat AI as it will most certainly be weaponized against us with greater frequency in the near future.
Yes and I think it's broader than that. For example, if a country uses AI to (say) optimize their education or their economy - they will "run away" from us. Rather than enabling us to use that technology too (why would they, even for money) they can just wait until their advantage is insurmountable.
So it's not just pure warfare systems that are risky for us but everything.
>…and the problem with that is what, exactly?
The problem is what the Powers-That-Be say and what they do are not in alignment.
We are now, after long-standing pressure from everyone outside of power arguing that being friendly with China doesn't work, waging a cold war against China, and presumably we want to win that cold war. On the other hand, we just keep handing China silver platter after silver platter.
So do we want the coming of Pax Sino or do we still want Pax Americana?
If we defer to history, we are about due for another changing of the guard as empires generally do not last more than a few hundred years if that, and the west seems poised to make that prophecy self-fulfilling.
Wish people stopped with that Cold War narrative. You're not waging anything just yet.
Here's the thing: the US didn't win the OG Cold War by being, as 'AnthonyMouse puts it upthread, "the country that has an elected government and constitutional protections for human rights" and "having a stronger economy". It won it by having a stronger economy, which it used to fuck half of the world up, in a low-touch dance with the Soviets that had both sides toppling democratic governments, funding warlords and dictatorships, and generally doing the opposite of protecting human rights. And at least through a part of that period, if an American citizen disagreed, or urged restraint and civility and democracy, they were branded a commie mutant spy traitor.
My point here isn't to pass judgement on the USA (and to be clear, I doubt things would've been better if the US let Soviets take the lead). Rather, it's that when we're painting the current situation as the next Cold War, then I think people have a kind of cognitive dissonance here. The US won the OG Cold War by becoming a monster, and not pulling any punches. It didn't have long discussions about how to safely develop new technologies - it just went full steam ahead, showered R&D groups with money, while sending more specialists to fuck up another country to keep the enemy distracted. This wasn't an era known for reasoned approach to progress - this was the era known for designing nuclear ramjets with zero shielding, meant to zip around the enemy land, irradiating villages and rivers and cities as they fly by, because fuck the enemy that's why.
I mean, if it is to happen, it'll happen. But let's not pretend you can keep Pax Americana by keeping your hands clean and being a nice democratic state. Or that being more or less serious about AI safety is relevant here. If it becomes a Cold War, both sides will just pull out all the stops and rush full steam ahead to develop and weaponize AGI.
--
EDIT - an aside:
If the history of both sides' space programs is any indication, I wouldn't be surprised to see the US building a world-threatening AGI out of GPT-4 and some duct tape.
Take for example US spy satellites - say, the 1960s CORONA program. Less than a decade after Sputnik, with no computers and engineering fields like control theory still under development, they successfully pulled off a program that involved putting analog cameras in space on weird orbits to take ridiculously high-detail photos of enemy territory, and then deorbiting the film canisters so they could be captured mid-air by a jet plane carrying a long stick. If I didn't know better, I'd say we don't have the technology today to make this work. The US did it in the 1960s, because it turns out you can do surprisingly much with surprisingly little, if you give creative people infinite budget, motivate them with a basic "it's us vs. them" story, and order them to win you the war.
As impressive as such feats were (and there were plenty more), I don't think we want to have the same level of focus and dedication applied to AI - if that's a possibility, then I fear we've crossed the X-risk threshold already with the "safe" models we have now.