Comment by kmacdough
2 years ago
Sentient killer robots are not the risk most AI researchers are worried about. The risk is what happens as corporations give AI ever larger power over significant infrastructure and marketing decisions.
Facebook is an example of AI in its current form already doing massive societal damage. Its algorithms optimize for "success metrics" with minimal regard for consequences. What happens when these algorithms are significantly more self-modifying? What if a marketing campaign realizes a societal movement threatens its success? Are we prepared to weather a propaganda campaign that understands our impulses better than we ever could?