Comment by pj_mukh
5 days ago
"It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze."
I keep hearing this but have yet to find a good resource to study the issues. Most of what I've read so far falls into two buckets:
"It'll hijack our minds via Social Media" - in which case Social Media is the original sin and the problem we should be dealing with, not AI.
or
"It'll make us obsolete" - I use the cutting edge AI, and it will not, not anytime soon. Even if it does, I don't want to be a lamplighter rioting, I want to have long moved on.
So what other good theories of safety can I read? Genuine question.
> Research we published earlier this year showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI-phishing messages created by human experts. Perhaps even more worryingly, our new research demonstrates that the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates
Bruce Schneier, May 2024
https://www.schneier.com/academic/archives/2024/06/ai-will-i...
I am seeing a stream of comments on Reddit that are entirely AI-driven, and even bots that engage in conversations. In the worst-case scenarios I'm looking at, it will be safer to assume everyone online is a bot.
I know of cases where people have been duped into buying stocks because of an AI-generated likeness of a publicly known VP of a financial firm.
Then there's the case where someone didn't follow email hygiene, got into a Zoom call with what appeared to be their CFO and team members, and transferred several million dollars out of the firm.
And we're only 2-3 years into this lovely process. The future looks so bleak that when I talk about it with people who aren't studying these things, they call it nihilism.
It’s so bad that talking about it is like punching hope.
At some point trust will break down to the point where you only believe things from a real human with a badge, talking to them in person.
For that matter, my email has been /dev/null for a while now, and unless I have spoken to a person over the phone and am expecting their email, I don't even check my inbox. My Facebook/Instagram accounts are largely a photo backup service plus an online directory, and Twitter is for news.
I mostly don't trust anything that arrives online unless I have already verified that the other party is somebody I'm familiar with, and even then only through the means of communication we have both agreed on.
I do believe Reddit, Quora, LeetCode, et al. will largely be reduced to /dev/null spaces very soon.
The issue is that you can say that as an individual, but as an agglomeration of individuals, society can't.
There was a direct benefit from digitization and being able to trust digital video and information that allowed nations to deliver services.
Trust was a public good. Factual information cheaply produced and disseminated was a public good.
Those are now more expensive because the genAI content easily surpasses any cheap bullshit filter.
It also ends up undermining faith in true content that happens to sound outlandish.
I saw an image of a penny hitch on Reddit, and I no longer have any idea whether it's real without having to check.
Slightly tangential: a lot of these issues are philosophical in origin, because we don't have priors to study. But just because, for example, advanced nanotechnology doesn't exist yet doesn't mean we can't imagine potential problems based on analogous things (viruses, microplastics) or educated assumptions.
That's why there's no single source that's useful to study issues related to AI. Until we see an incident, we will never know for sure what is just a possibility and what is (not) an urgent or important issue [1].
So the best we can do is reason by analogy with similar historical episodes. For example: the Industrial Revolution and the many disruptive events that followed over the centuries; the history of wars and upheavals, many of which were at least partially caused by labor-related problems [2]; labor disruptions in the 20th century, including the proliferation of unions, offshoring, immigration, anticolonialism, etc.
> "Social Media is the original sin"
In the same way that radio, television, and the Internet are the "original sin" behind large-scale propaganda-induced violence.
> "I want to have long moved on."
Only if you have somewhere to go. Others may not be that mobile or lucky. If autonomous trucks make the trucking profession obsolete, it's questionable how quickly truckers can "move on".
[1] For example, remote systems have existed for quite some time, yet we've only seen a few assassination attempts. Does that mean slaughterbots are not a real issue? It's unclear and too early to say.
[2] For example, high unemployment and low economic mobility in post-WW1 Germany; serfdom in Imperial Russia.
Try to find a date on a dating app and you will experience it firsthand.