Comment by GolfPopper

1 day ago

Even if LLMs don't become AGI (and I don't think they will), LLMs are potentially superb disinformation generators able to operate at massive scale. Modern society was already having difficulty holding onto consensus reality. "AI" may be able to break it.

Don't think "smart human". Think about a few trillion scam artists who cannot be distinguished from a real person except by face to face conversation.

Every avenue of modern communication and information inundated by note-perfect 419 scams, forever.

I think we'll adapt. At any point we can start treating the internet like the trash pile of bullshit it's been turning into: stop taking anything in our inboxes as legitimate, and treat websites as nothing more than entertainment.