Comment by tabbott
5 hours ago
I feel like too little attention is given in this post to the problem of automated troll armies to influence the public's perception of reality.
Peter Pomerantsev's books are eye-opening on the previous generation of this class of tactics, and it's easy to see how LLM technology + $$$ might be all you need to run a high-scale influence operation.
>I feel like too little attention is given in this post to the problem of automated troll armies to influence the public's perception of reality.
I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise, and not really a variable you can control for. The whole premise of a democracy is that people have the right to vote however they want. There is no asterisk to that, in my opinion.
I really don't see how one person, one vote can survive this idea that people are only as good as the information they receive. If that's true, and people get enough bad information, then you can reasonably conclude that people shouldn't get a vote.
> I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise and not really a variable you can control for.
Ban bots from social media and all other speech platforms. We agree that people ought to have freedom of speech. Why should robots be given that right? If you want to express an opinion, express it. If you want to deploy millions of bots to impersonate human beings and distort the public square, you shouldn’t be able to.
> Ban bots from social media and all other speech platforms.
I would agree with that, but how do you do it? The problem is that as the bots become more convincing, it becomes harder to identify them in order to ban them. I only see a couple of options.
One is to impose crushing penalties on whatever humans release their bots onto such platforms, do a full-court-press enforcement program, and make an example of some offenders.
The other is to ban the bots entirely by going after the companies that are running them. A strange thing about this AI frenzy is that although lots of small players are "using AI", the underlying tech is heavily concentrated in a few major players, both in the models and in the infrastructure that runs them. It's a lot harder for OpenAI or Google or AWS to hide than it is for some small-time politician running a bot. "Top-down" enforcement that shuts down the big players could reduce AI pollution substantially. It's all a pipe dream though because no one has the will to do it.
By all means, yes: for the clear case of propaganda bots, ban them. The problem is there will still be bots, and there is a huge gray area where many of the cases aren't clear. I think it's just an intractable problem. People are going to have to deal with it.
Easier said than done.
The voting becomes a health check for the information. We shouldn't revoke the rights of the individual based on arbitrary information they may or may not receive.
If your reality isn't being influenced, then you're creating it yourself. Both are strengths and weaknesses, depending on context.