Comment by mikeocool
1 day ago
Kinda seems like we’re rapidly headed for the complete collapse of the internet as we know it.
Every site that is driven by user posting seems to be headed towards being overrun by AI bots chatting with each other, either for sake of promoting something or farming karma.
And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle. Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
The bot problem cannot be solved. Even if you strongly authenticate, people are letting bots act on their behalf (moltbook is a great example of this), and what's to stop people from doing that in the future? They can build identity and reputation autonomously, with the benefits that come with that.
This happens now on Onlyfans too. Content creators hire agencies which, in the best case, outsource chatting with "customers" to armies of cheap labour in Asia, and in the worst case use bots.
The dead internet theory [1] is probably not just a theory anymore. HN recently made a policy to not allow AI posting and posters, but do you honestly think that's going to work? I would place a bet that within the next year a top HN poster is outed for using AI to post on their behalf.
[1] https://en.wikipedia.org/wiki/Dead_Internet_theory
The bot problem can be solved.
Anubis is one such answer [0]. Cryptocurrency and micro transactions are another.
In the last few decades, spam was a problem because the marginal transaction costs of information exchange were orders of magnitude lower than they had been. Note that physical mail spam was, and still is, an issue. Focusing on perceptual or fuzzy computation as the limiting factor, through captchas and other 'human tests', allowed for most spam to be effectively mitigated.
Now that intelligence is becoming orders of magnitude cheaper, perceptual computation challenges no longer work, but we can still do computation challenges in the form of proof of work or proxies thereof. Spam will never wholly go away but we can at least cause more friction by charging bot networks to execute in the form of energy or money.
[0] https://github.com/TecharoHQ/anubis
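For what it's worth, the core of a hashcash-style proof-of-work challenge is tiny. This is a rough sketch of the idea, not Anubis's actual code (the challenge string and difficulty here are made up): the server hands out a challenge, the client burns CPU finding a nonce, and verification costs the server a single hash.

```python
import hashlib
import itertools

def find_nonce(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so sha256(challenge + nonce) starts with
    `difficulty` zero hex digits -- the client pays in CPU time."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server-side check is one hash: cheap to verify, costly to forge."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = find_nonce("example-challenge", difficulty=4)
assert verify("example-challenge", nonce, 4)
```

Raising the difficulty is the knob: each extra hex digit of zeros multiplies the expected client work by 16 while the server's verification cost stays constant.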
I don't see how Anubis solves anything. If a human lets the bot control a completely vanilla computer (which there is now a lot of tooling for), then how is it going to stop that?
Indeed - the future is RL meet-ups and small, intimate online communities.
Perhaps not the worst thing in the world?
This is the optimistic take I’ve held.
Bots get so good that they become indistinguishable from humans. If that's true, then it shouldn't actually matter if your community is all bots. Except it does matter, because authenticity matters to humans. They will seek authenticity where they can reliably sense it, which will be in person.
Human simulacrums will one day cause a repeat of this issue. Then we'll have a whole Blade Runner 2049 issue about what exactly authenticity is.
Counterpoint: https://reddit.com/r/MyBoyfriendIsAI/
People will prefer the bots that give them head pats and tell them they're so smart and that they love them
> Perhaps not the worst thing in the world?
Definitely not. “Terminally online” is as deleterious as it sounds.
Yeah, you're completely right. Maybe this will be the impetus a lot of people need to detach from online.
"content creators" https://fgiesen.wordpress.com/2025/07/06/content-creator/
It's the same freelance advertisers who optimistically refer to themselves as "influencers".
The word "content" is gross.
"Creator", on the other hand, is beautiful. It means you don't have to pick a lane. Anything can be creative. Documentary filmmaking, stop motion, dance, costume work, historical reenactment, indie animation, economics essays, game dev...
The problem is we don't have a nice word that holistically captures the output of creators. They're not all making films or illustrations. So what do you call it? "Art" is awkward.
"Content" works, but it sounds like slop. We need a better alternative word that elevates creative output.
> people are letting bots act on their behalf (moltbook is a great example of this) and what's to stop people doing that in the future.
Verifiable credentials; services can get persistent pseudonymous identifiers that are linked to a real-world identity. Ban them once and they stay banned. It doesn’t matter if a person lets a bot post inauthentic content using their identity if, when they are caught, that person cannot simply register a new account. This solves a bunch of problems – online abuse, spam, bots, etc. – without telling websites who you are or governments what you do.
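One way to get per-site pseudonyms with those properties is a keyed-hash construction, sketched below. This is an illustration of the concept only; real verifiable-credential systems use fancier cryptography, and `identity_secret` is a hypothetical value an identity provider would issue after a real-world check.

```python
import hmac
import hashlib

def site_pseudonym(identity_secret: bytes, site: str) -> str:
    """Derive a stable per-site identifier: the same person always maps
    to the same ID on a given site (so bans stick), but IDs on different
    sites can't be linked to each other or back to the real identity."""
    return hmac.new(identity_secret, site.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical secret issued once, after an in-person identity check.
secret = b"issued-by-identity-provider"
a = site_pseudonym(secret, "forum.example")
b = site_pseudonym(secret, "shop.example")
assert a != b                                        # unlinkable across sites
assert a == site_pseudonym(secret, "forum.example")  # stable on one site
```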
You kinda skipped the bit I wrote alongside this about strong authentication. There are numerous ways to do this. For example, in Finland you have to physically identify yourself to open a bank account and you can then use that to authenticate. It's used for all public sector services and a few others with strict accreditation.
The issue is that it solves nothing if you can't distinguish between text that is written by AI and isn't, regardless of strong authentication.
There is the other side of this too: Real people - fake posts.
So, you have other folks on here already saying that the code their bots write is better than their own, right?
How long until someone who is karma focused just uses a bot to write their comments and post their threads? I mean, it's probably already happening, right? Just like a bot doing your homework for you, but with somehow even lower stakes. I imagine that non-native speakers will take their posts to an AI to help clean them up, at the very least. At worst, I can imagine a person having a bot interact fully under their name.
So even if we have some draconian system of verification, we will still have some non-zero percentage of bot spam. My out-of-my-butt guess is somewhere near 40%.
This is exactly right. The problem is the friction that this kind of system adds.
Even so, I implemented this and I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
The ability to make a new account is an important defense against abusive bans. You don't want it to be possible for Google to unperson you.
I've talked about this on here before, but we think the solution is an auth layer built on top of credit score through an intermediary like creditkarma. The score itself doesn't really matter but it does solve big problems.
Plus, if you wanted to implement a filtering system for users, I personally would rather trust reviews / comments from credit scores over 650, they have less incentive to be astroturfing.
But yes, I think your conclusion is correct. This is the only way.
IMO this is inevitable. HN is freaking out about the end of the anonymous internet, but it's already over and we're just figuring it out. Eventually the bots will find their 90s cyberpunk cosplay IRC channel too.
I'd rather have a system where there's a small investment cost to making an account, but you could always make another.
Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."
Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
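The destructive-check property is the interesting part: the only way to test a token is to redeem it, which consumes it, so tokens can't be probed and then resold. A toy sketch (the class and token format are made up for illustration; a real scheme would also need blind signatures so the vendor can't link a sale to a redemption):

```python
import secrets

class TokenVendor:
    """Sells anonymous one-time tokens; checking a token spends it."""
    def __init__(self):
        self._unspent = set()

    def sell(self) -> str:
        token = secrets.token_urlsafe(16)
        self._unspent.add(token)
        return token

    def redeem(self, token: str) -> bool:
        """Destructive check: a token is valid exactly once, so it
        cannot be tested non-destructively and then reused."""
        if token in self._unspent:
            self._unspent.remove(token)
            return True
        return False

vendor = TokenVendor()
t = vendor.sell()
assert vendor.redeem(t)      # first use succeeds
assert not vendor.redeem(t)  # second use fails: the token is spent
```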
The bot problem can easily be solved. It’s just that no one likes the cure. Think about this for a minute: what would happen if you had a country where all its citizens could act anonymously with no consequences, no reputation, no repercussions, and no trace? Would you want to go there? Live there? No, because it would be a lawless wasteland dominated by the worst of the worst.
Yet people act like the internet is somehow different. The internet is a massive society. Social networks are very much like virtual countries, or even continents. We’ve all enjoyed the benefits of living in this society of zero consequence, but it’s now been overrun by the very worst people, just like the imaginary country above.
You claim we can’t solve this problem, but we already have solved it here in the physical world with identities, laws, and consequences. The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity. Unfortunately, there won’t be a choice for much longer. The internet will certainly be dead without a system that ties IP addresses and online identities to real people.
No, it’s not the internet we all wanted, but humanity has ruined the one we have.
I can imagine an "anonymity" or "reputation" filter attached to every interaction on the internet. Enabled by default, but you could disable safe mode and watch the bots having fun.
Also, for me the problem is not anonymity itself but the lack of reputation. If I have a signal that an entity can be trusted, I don't care much about its real identity.
I suppose reshaping the fundamental social contract with the internet and the computers we use to access them would solve the problem.
So you are missing something here. Up until recently, IRL was effectively anonymous because capturing all that data about what people are doing was expensive and difficult to process. Cameras weren't everywhere either.
>The real problem is that most people don’t want to let go of the very thing that is the problem: anonymity.
Anonymity is not the problem though. We've gone with anonymity for a long while and it has worked fine. Would a removal of anonymity suddenly fix all this? No, absolutely not. Astroturfing and PR campaigns happened before AI comments were a concern, same as bad actors.
The problem here is the "recent" development of trusting whatever you read online. Of insisting that content should be personal, trustable and real, when none of this can ever be ensured. The separate, but related problem of engagement-based economy makes it way worse.
And remember: social media sites don't actually want to get rid of bots, for the most part. That's not in their interest, as long as bots increase engagement, does anyone trust them to actively hurt their bottom line in order to promote honest, productive discourse? Please.
With AI running rampant, it seems security through obscurity is basically the best thing we have. Everyone knows reddit, facebook, xitter, etc so any clown can and does have bots running loose. HN is "obscure" in that most normies don't know about this place, and so it's relatively safe from the floods of spam. But I think it's just a matter of time until non-tech people start looking for those few bastions of human comments online, come across this place, and a great flood begins and it'll never be undone. After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.
HN may not be “mainstream” but it is certainly _very_ vulnerable to bot spam given the topics discussed and the make-up of the audience.
You can already see it happening now - at least the bots that write like vanilla Claude/ChatGPT. Presumably there is a much larger hidden cohort of bots that are instructed to talk more naturally and thus are better adept at flying under the radar…
I would say that HN has a lot of features that other platforms would see as draconian in how much they limit your interaction.
You can barely comment before you are rate limited.
You can’t upvote until you’ve been around a pretty long time.
New accounts are given a green badge of dishonor that makes users scrutinize their comments more.
I’m not saying these are bad things but they’re probably too restrictive for a social media network that’s just meant to be a good fun time.
Dang told me in 2019 that HN gets 150M page views a month, so it's not that obscure actually:
https://news.ycombinator.com/item?id=21201120
150M page views a month is peanuts and very far away from the "social" networks' numbers. I don't have those numbers, but I know how many page views we had in 2011 while running a German browser game community.
> After that, I guess it'll be a rise of invite-only forums like we had in the early 2000s all over again.
Which would be totally fine with me TBH.
Rather amusingly, invite-only torrent sites might be the only semi-public authentically human hangouts left on the internet!
I was thinking the same thing, that this wouldn't necessarily be a bad thing. I'm curious how far it will go.. if we'll get invite-only mesh networks with self-contained mini-internets and the like.
Eternal AI September.
Eternal LLMber
I've asked ChatGPT a question about something I read in a thread here and it responded with a comment from that thread, even though the thread was less than an hour old. HN is well known in the tech community and there are certain subjects, especially anything involving Israel or India, that nearly instantly result in a flood of comments from bad actors. HN isn't Reddit but it's also a shadow of what it once was, which is driving away more of the productive participation in favor of agenda-based posting.
Search engines seem to index HN in near real time. They must have custom scraping code to follow the incrementing post IDs.
Note that these topics often involve comments which you can predict very easily. Internet users are like that, agenda or no. Wasn’t it in the heyday of forums that you could recognize the most prolific/annoying members by their style and vocabulary? A model should have no problem pulling such things off.
The future is human curated content. Provide the same experience people get today but without the noise. Give them just the good stuff and don't let just anyone make a post. A book has an author, a movie has a director, maybe websites can have webmasters again who filter through the garbage for you.
The future is meeting in person and watching performers actually perform live.
You've nailed it. Social media is no longer and will never again be a substitute for real human interactions. It sort of worked when it was mostly real humans, but that era is ending and not coming back. Algorithms are now controlling what you see, and bots and agents are increasingly creating and posting most of the content.
Currently the biggest places with live performances are swamped and tickets get scalped for huge upcharges.
It’s what I’m trying to accomplish with my website(link is in my profile). Just trying to crank up the signal to noise ratio.
Nice. I like how clicking a tag also makes the word 'tag' light up.
How did Yahoo work out compared to Google?
AI is sucking up that content and denying traffic to its creators. This model is becoming obsolete.
A curator with great taste and judgement is king.
Yes, precisely.
This means that only sites which verify identity will have any value in the future. And by verified, that means against government ID and verified as real.
No amount of sign up fee works as an alternative.
Note that a site can verify identity, prevent sock puppets, ban bad actors and prevent re-registration, all while keeping that ID private.
You still get a handle and publicly facing nick if you want it.
The company which handles this correctly will have a big B after it. Digg actually has a chance at this.
It has no users, so the outrage won't exist in the same capacity. Existing platforms will be pummeled in the market if they try to convert to this type of site, as their DAU will likely drop a thousandfold, just due to the eliminated bots.
But Digg could relaunch this way. And as exhibited, this is now the only way.
The age of the anonymous internet is over, it's done. People not realizing this are living in the past.
Note, I don't like this, but acknowledging reality is vital. Issues with leaked databases, hacked users, and exposed PII are all technical and legislative issues, and not relevant to whether or not this happens.
Because it will happen, and is happening.
It should be noted that falsifying ID is a crime. Fake ID coupled with computer fraud laws will eventually result in hefty jail time. This is sensible, if people want a world where ecommerce and discourse are online... and the general public does.
And has exhibited a complete lack of care about privacy regardless.
I think people who want to stay anonymous just will not participate anymore. I've enjoyed using this site, Reddit, etc., but I wouldn't hesitate to drop them if I needed ID verification to access them. Someone will probably create a new communication method to replace this.
>No amount of sign up fee works as an alternative.
Simply put, money is worth too much; at some point someone will want access to this human audience and offer too much to be resisted.
>It should be noted that falsifying ID is a crime
Lol, no one gives a shit on the internet. People will use stolen IDs to get accounts. If the network is lucrative enough, governments will provide fake IDs to spread propaganda.
Human curated -> human moderated. I, for one, don't care if it's AI or human-written. I care if it's interesting/useful.
results are important, not the tools or process. (on this matter)
Every website needs to add the "friend or foe" system[0] so that I can mark bots to avoid their content and mark good posters so I can filter just to theirs.
[0]: https://hackersmacker.org/
This should be separate from marking bots, because what this will really do is embed people into hearing only what they want, making discussion worse.
no, I truly do not want to read IHeartHitler88's opinion on jews, or donttreadonme09's bright opinions about how the economy would be better if we listened to Ayn Rand. I'll be very happy when they're out of my sight. If I want to have a miserable day, sure, I'll turn it off.
Fact of the matter is, most posts on the internet are already dogshit. Now they're also populated by AI, but the point stands. Most of what you will say online is at best useless.
On /. I would only mark obnoxious people as friends so I could see the friend-of-a-friend indicator and be cautious of anyone aligned with them.
> And there’s really not much point in publishing good content anymore, since AI is just going to slurp it up and regurgitate it without driving you any traffic.
You just published good content knowing AI will slurp it up and not give you any traffic in return. I'm now replying to you with more content with the same expectations about AI and traffic. Why care about AI or traffic or recognition? Isn't the content the thing that matters?
It's like answering technical questions in an anonymous/pseudonymous chat or forum, which I'm sure you've done, too. We do it to help others. If an AI can take my answer and spread it around without paying me or mentioning one of my random usernames I change every month or so, I would be happy. And if the AI gives me credit like "coffeecup543 originally posted that on IRC channel X 5 years ago", I couldn't care less. It would be noise to the reader. Even if the AI uses my real name, so what?
The people who cared about traffic and money from their posts rarely made good content anyway. Listicles, affiliate marketing BS, SEO optimization, stretching a video that could be 1 minute into 10 minutes or text that could've been 5 articles into a long book - all of it existed before AI. With AI I actually get less of this crap - I either skip it or condense it.
It's two different problems. People who run review sites and blogs and such care about traffic, and not getting attribution will kill their desire to participate. People who post here and on Reddit etc. care about talking with other human beings, and feeling ignored in a sea of botspam will kill *their* desire to participate.
> feeling ignored in a sea of botspam will kill their desire to participate.
The bots are not really that bad, they're (still) pretty easy to spot and not engage with. I'm more perplexed about the negativity filled comments sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted, get so popular on the front page, and why people still debate with outdated arguments in them. People come in and fight other demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get so overwhelmed by negativity, nitpicking, and "haha, not perfect means DOA" type messages. That makes me want to participate less...
That's a little bit apples to oranges, because I'm not monetizing this content, or paying to host it, or trying to make a personal brand, etc.
Yes and no.
In the most simple sense - Yes, it is the content that matters.
In the more practical sense - cognitive and emotional resources are limited and our brains are not content agnostic.
We have different behaviors, expectations and capacities for talking to machines and talking to humans.
For example, if I am engaging with a human I can expect to potentially change their minds.
For a machine? Why bother even responding. It’s of no utility to me to respond.
Furthermore, all human communication comes with a human emotional context. There is a vast amount of information implied through tone and through what we choose not to say. Sometimes people say things in one emotional state that they would not say in another.
To move the conversation forward, addressing the emotional payload behind the words matters more than the words themselves.
There are a myriad reasons why humans are practically poorer for these tools.
Asking people for money in order to read stuff, and promoting the ones they are actually ready to part with real money to read, is an interesting first step. (See: Substack, Patreon, etc.)
I know this is going to sound horrible, but: how about asking for money to contribute, period? Maybe have a free tier of a couple of comments, etc. But if you want to build a troll factory, sure... show us the cash?
I do believe that charging for it is one way to create some friction, but it's not enough.
Twitter is full of blue checks that are just bots and automated reply guys.
I'm treating now all these bots as a stressor on our defense systems, and we will end up having to learn how to build a real Web of Trust, and really up our game on the PKI side. We also need some good Zero Knowledge proof of humanity that people can tie to their Keyoxide profile, so that we can just filter out any message that is not provably associated with a human.
This could be positive. So far things were gamed and manipulated to some extent, with some fake content, but it was never too obvious, and a bit of a cat and mouse game with filters and whatnot. Now, it's so easy to fake content that robust systems will have to evolve, or most social media sites will become worthless, and advertisers will catch up eventually when they are paying for bot-only sites. The downside of course is that these robust systems are hard to imagine without complete loss of anonymity of the users.
Web of trust weakens anonymity, but doesn’t eliminate it.
- You know who your online invitees are, but not your invitees-of-invitees-of-…
- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
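The ban-propagation idea in that parenthetical can be sketched in a few lines. This is a toy model (the names, strike limit, and class are made up for illustration): every account records its inviter, and enough banned invitees get the inviter banned too.

```python
class InviteGraph:
    """Toy web-of-trust ledger: bans in a subtree reflect on the inviter."""
    def __init__(self):
        self.inviter = {}   # account -> who invited it
        self.banned = set()
        self.strikes = {}   # inviter -> number of banned invitees

    def invite(self, inviter: str, new_account: str):
        if inviter in self.banned:
            raise PermissionError("banned accounts cannot invite")
        self.inviter[new_account] = inviter

    def ban(self, account: str, strike_limit: int = 3):
        self.banned.add(account)
        parent = self.inviter.get(account)
        if parent is not None:
            self.strikes[parent] = self.strikes.get(parent, 0) + 1
            if self.strikes[parent] >= strike_limit:
                # Too many bad invitees: the inviter goes down too,
                # which is what makes alt-based ban evasion expensive.
                self.ban(parent, strike_limit)

g = InviteGraph()
for bot in ("bot1", "bot2", "bot3"):
    g.invite("alice", bot)
for bot in ("bot1", "bot2", "bot3"):
    g.ban(bot)
assert "alice" in g.banned  # three banned invitees took alice down as well
```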
Collapse of the Internet or collapse of the visual world wide web? tbh, I am a little curious to see what comes after clicking a button on a web page.
> Though it’ll be interesting to see what happens to ChatGPT and the like once the amount of quality content for them to consume slows to a trickle.
The creative loop moves inside the agentic chat room, where we do learning, work, art, research, leisure, planning, and other activities. Already OpenAI is close to 1B users and puts multiple trillions of tokens per day into our heads, while we put our own tokens into their logs. An experience flywheel, or extended cognition wheel, of planetary size. LLMs can reflect on and detect which of their responses compound better in downstream activities and derive RLHF/RLVR signals from all our interactions. One good thing is that a chat room is less about posing than a forum, but LLMs have taken to sycophancy, so they are not immune, just easier to deal with than forums. And you can more easily find another LLM than a replacement specialty forum.
Perhaps they migrate into Discord and Instagram once they acquire better visual and voice capabilities.
Yeah, we need human verification more than age verification.
Every website that was driven by traffic is also dying. I have put nearly a decade of work into mine, and AI overviews and ChatGPT have reduced traffic by over 60%. At some point I will need to give up and find a job, and that corner of the internet will get no new original information, just rehashed slop.
As someone who came of age before “the internet as you know it”, I am looking forward to all of the cancerous Web 2.0 OG slop and narcissism factories succumbing to their own fates. Let me tell you, the internet as we know it sucks, and the internet it ate 25 years ago is a marked improvement. We should be so lucky. Now go write a personal blog in plain text, and rejoice.
> Will people still use ChatGPT to get product recommendations without Reddit posts and Wirecutter providing good content for those recommendations?
They will try and OpenAI will sell favorable placement to manufacturers.
You mean a complete collapse of social media, not the whole internet. The internet is a telecom ecosystem and has a lot more to it than just forums and link aggregators.
I honestly believe it might not even be such a bad thing. People were arguably better off without social networks and media, and it's perhaps better to let the cancerous thing just die and keep the internet as a utility powering boring things like banking and academia.
What would you say are the major applications of the internet? It's used for business and academia in ways that aren't going away, yes. M2M communication will stay. Social media is the largest user-facing segment and it might not. I don't have a sense of how big these sectors are relative to each other. If the largest sectors of the internet disappear, the internet shrinks a lot.
That and most of the news being behind a paywall, which they can scrape anyway.
The internet archive is my safe haven these days, i can go back and remember the old internet.
Ha yeah, I quite like the 2003 vintage.
Unless you're allowed to say slurs without being banned, your forum will be overrun with bots. The sanitation of the internet is the perfect breeding ground for brand-safe AI promotion bots.
4chan has bots too.
Curious how you came to that conclusion. Anecdotally, places where you can slur to your heart's content like /r/conservative seem far more inundated with bots than other areas of Reddit. I feel like that's really saying something too, because Reddit has a really bad bot problem overall.