Comment by ordersofmag
2 days ago
Seems like the ability to distinguish LLM versus 'good human' writing depends on the size of the writing sample you have to look at (assuming you think it can be done). And HN-scale posts are unlikely to be long enough for useful discernment.
Within a few years, LLMs will be indistinguishable from human text.
Think how easy it was to tell the difference a year or two ago. By 2030 there will be no way to tell at all.
The same is true of video and all other generated content. The death of the Internet comes not from spam or Facebook nonsense, but from the fact that soon you'll never know if you're interacting with a human or not.
Why like a post? Why reply to it, or interact online at all? Why read a "news" story?
If I were X or Meta or Reddit, I would be looking at the end.
When will Teslas be self-driving again?
Teslas have the wrong sensor suite, coupled with immense real-world randomness. Pesky pedestrians. Waymo seems to be doing quite well in comparison. Regardless, a cat isn't a dog, and real-world navigation isn't posting on Facebook.
It would be better to make a direct point, such as "It will never be flawless." But that's not really a problem here; it only needs to be flawless most of the time.
See my other post.
LLMs won’t destroy social media any more than it already is.
I don’t think I have ever had a meaningful human interaction with anyone on Twitter, Meta, or Reddit without already knowing them from somewhere else. Those sites are about interacting with information, not people. It’s purely transactional. Bots, spam, and bad actors are not new.
Meta has been a dumpster fire of spam and bots for over 15 years, the overwhelming majority of its existence.
Reddit has some pockets of meaningful interaction but you have to find them and the partitioned nature means that culture doesn’t spread across the site. It’s also full of bots and shills.
Nobody tells stories about meeting people on Twitter. At best it’s a microblog platform and at worst it’s X.
Ordinary people go to such sites for updates from friends, or to follow celebrities.
Their friends will start using more and more AI, and celebrities will go fully AI.
Why read a friend's page if it's just AI drivel? The same goes for a celebrity.
It doesn't even need to be true. Burned once, people will never trust again. The humiliation of writing messages that your friend only has a bot summarize and reply to will kill it.
Imagine you speak to your friend, and they haven't even read any messages you wrote, but their AI responded, and yours in turn. Imagine you've had dozens of conversations, but with a bot instead of your friend.
Your trust will be eroded.
Spam doesn't act like your friend. A bot does.
The inability to distinguish will be the clincher. And no, you won't know the difference, not after the AI is trained on their sent-mail folder.