Comment by RationalDino
2 years ago
I am afraid that this will just lead down the path to what https://twitter.com/ESYudkowsky/status/1718654143110512741 was mocking. We're dictating solutions to today's threats, leaving tomorrow to its own devices.
But what will tomorrow bring? As Sam Altman warns in https://twitter.com/sama/status/1716972815960961174, superhuman persuasion is likely to be next. What does that mean? We've already had the problem of social media echo chambers leading to extremism, and online influencers creating cult-like followings. https://jonathanhaidt.substack.com/p/mental-health-liberal-g... is a sober warning about the dangers to mental health from this.
These are connected humans accidentally persuading each other. Now imagine AI being able to drive that intentionally to a particular political end. Then remember that China controls TikTok.
Will Biden's order keep China from developing that capability? Will we develop tools to detect when such capabilities are being actively used against us? I doubt both.
Instead, we'll almost certainly get security theater that creates a regulatory moat. That moat will help profit margins at established AI companies, but it is unlikely to address the future problems that haven't materialized yet.
>security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies.
Yeah, I think this is my biggest worry, given that it will enable incumbents to be even more dominant in our lives than big tech already is (unless we get another AI plateau real soon).
And choosing not to regulate prevents that… how exactly?
Your question embeds a logical fallacy.
You're challenging a statement of the form, "A causes B. I don't like B, so we shouldn't do A." You are challenging it by asking, "How does not doing A prevent B?" Converting that to logic, you are replacing "A implies B" with "not-A implies not-B". But those statements are far from equivalent! This is the classic fallacy of denying the antecedent.
To answer the real question: no, choosing not to regulate does not guarantee that we stop this particular problem. It just means that we won't CAUSE it. It is good to not guarantee a bad result, even though that doesn't guarantee a good one.
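The non-equivalence of the two statements can be checked mechanically with a truth table. A minimal sketch (the helper name `implies` is illustrative, not from the thread):

```python
# Truth-table check that "A implies B" is NOT equivalent to
# "not-A implies not-B" (the fallacy of denying the antecedent).
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

for a, b in product([False, True], repeat=2):
    print(a, b, implies(a, b), implies(not a, not b))

# Any row where the last two columns disagree shows non-equivalence.
differs = any(
    implies(a, b) != implies(not a, not b)
    for a, b in product([False, True], repeat=2)
)
print(differs)  # True: e.g. A=False, B=True makes A->B true but not-A -> not-B false
```

The counterexample row (A false, B true) is exactly the scenario in the argument: we don't regulate, and the moat still fails to appear for some other reason.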
By ensuring there is competition and alternatives that don't cost a million before you can even start training.
> superhuman persuasion is likely to be next
Some people already seem to have superhuman persuasion. AI can level the playing field for those who lack it, and give everyone the ability to see through such persuasion.
I am cautiously optimistic that this is indeed possible.
But the kind of AI that can achieve this has to itself be capable of the persuasion it is helping defend us from. Which suggests that limiting the capabilities of AI in the name of AI safety is not a good idea.