Comment by howmayiannoyyou

2 years ago

“hidden exploitation”

It's going to be challenging to detect AI intent when it is sharded across many sessions by a malicious AI or by bad actors utilizing AI. To illustrate this, consider TikTok, where the algorithmic ranking of content in user feeds, possibly driven by AI, is shaping public opinion. We should also consider the possibility of a more subtle campaign that gradually molds AI responses and conversations over time.

This could mean gradually introducing biased information or subtly framing certain ideas in a particular way. Over time, such a campaign could influence how the AI interacts with users and shift the perception of different topics or issues.

It will take narrowly scoped, highly tuned, single-purpose AI to detect and address this threat, but I doubt the private sector has a profit motive for developing that tech. That's where legislation, particularly tort law, should step in - to create a financial incentive.