Comment by intended
5 days ago
> Research we published earlier this year showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI-phishing messages created by human experts. Perhaps even more worryingly, our new research demonstrates that the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates
Bruce Schneier, May 2024
https://www.schneier.com/academic/archives/2024/06/ai-will-i...
I am seeing a stream of comments on Reddit that are entirely AI-driven, and even bots engaging in conversations. In the worst-case scenarios I'm looking at, it will be safer to assume everyone online is a bot.
I know of cases where people have been duped into buying stocks because of an AI generated version of a publicly known VP of a financial firm.
Then there’s the case where someone didn’t follow email hygiene and got into a zoom call with what appeared to be their CFO and team members, and transferred several million dollars out of the firm.
And it’s only 2-3 years into this lovely process. The future is so bleak that when I talk about this with people not involved in looking at these things, they call it nihilism.
It’s so bad that talking about it is like punching hope.
At some point trust will break down to the point where you will only believe things from a real human with a badge (i.e., talking to them in person).
For that matter, my email has been /dev/null for a while now; unless I have spoken to a person over the phone and expect their email, I don't even check my inbox. My Facebook/Instagram account is largely used as a photo backup service plus an online directory, and Twitter is for news.
I mostly don't trust anything that comes in online unless I have already verified that the other party is somebody I'm familiar with, and even then only through the established means of communication we have both agreed to.
I do believe reddit, quora, leetcode et al. will largely be reduced to /dev/null spaces very soon.
The issue is that you, as an individual, can say that - but society, as an agglomeration of individuals, can't.
There was a direct benefit from digitization: being able to trust digital video and information allowed nations to deliver services.
Trust was a public good. Factual information cheaply produced and disseminated was a public good.
Those are now more expensive, because genAI content easily surpasses any cheap bullshit filter.
It also ends up undermining faith in true content, especially content that seems outlandish.
I saw an image of a penny hitch on Reddit and I have no idea if it’s real or not without having to check anymore.
>>It also ends up undermining faith in true content, which may be outlandish.
In all honesty, art in one form or another has always been simulated to some extent. Heck, the whole idea of a story, even in a book, is something you know hasn't happened in real life, but you are willing to suspend disbelief for a while to be entertained. That is the essence of all entertainment: it is not real, but it makes you feel good.
Action movies have had CGI; cartoon shows, magic shows, and even actors putting on makeup can all be considered deviations from the truth.
I guess your point is that news can be manufactured and public opinion rigged toward all sorts of bad things. But once you are here, a good portion of the public already knows this is possible and is wary of it. Come to think of it, a lot of news is already so heavily edited that it doesn't represent the original story. This is just a continuation of the same.