Comment by GolfPopper
1 day ago
Even if LLMs don't become AGI (and I don't think they will), LLMs are potentially superb disinformation generators able to operate at massive scale. Modern society was already having difficulty holding onto consensus reality. "AI" may be able to break it.
Don't think "smart human". Think about a few trillion scam artists who cannot be distinguished from a real person except by face to face conversation.
Every avenue of modern communication and information, inundated by note-perfect 419 scams, forever.