Comment by CrzyLngPwd

16 hours ago

I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.

It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.

It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.

I fear losing the battle.

High-quality anecdata is exactly why I love HN. Thanks for posting about it.

    > fake AI accounts

First, how do you identify them? Is it strictly admins monitoring posts/server-side logs or do users report odd behaviour?

Second, what is the purpose of these accounts? Are they basically running submarine adverts, or are they just trolling (to harm the community)?

It really is time for a Butlerian Jihad

  • Way back whenever I first read Dune, this seemed like such a weird, niche ban. I don't think I had a lot of respect for it.

    Now, like all good SciFi, it seems fairly prescient ....

Unlike a lot of communities, yours at least started on the correct side. Better to ban outright than to slowly realize that you should have banned it.

  • When it comes to slow forum content, I think it's a fool's errand to try to determine whether someone is using AI for their responses. Any of the tell-tale signs of AI are easily skirted by instructing the model in the prompt to avoid them. It goes back to how you can't sanitize human language, which has been an issue with LLMs from the beginning.

    Encouraging a culture of not using AI works to an extent, but I also tire of threads claiming the parent post is AI. There isn't a sure-fire way to know one way or another.

  • Indeed. Take a soft approach, or "wait and see", and you'll just allow your community to get infested with slop enthusiast crybullies that loudly protest any pushback against "genai content". The communities that draw a firm line and hold it will be the only ones that endure.

    • It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.

      We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.

      No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.

      Now I see it as the perfect tool for impostors.


Hmm, I'm curious how niche.

Or ... how small can a community be and still be drowned in AI slop?

Is it a community inside one of the major platforms, or does it have its own custom thing?

What about charging $1 or $5 for an account? Seems like you could stem the tide pretty easily with something like that.

  • Or applying for an account could involve sending a handwritten letter by post.

  • Presumably most people running these bots are doing it for some financial gain. As long as gain > cost, the issue won't go away.

    It'll stop the ones doing it for the lols, but I imagine they're a minority anyway.

    • It would be great to have some sort of bot trap that would just drain a dollar here and there from AI slopologists and shadowban their accounts so they only interact with other AI accounts.

  • If you head to Twitter right now, the vast majority of bots are blue checks. It seems to actually encourage the opposite, where trusting that someone paid $8 for an account makes you even more likely to fall for slop.

    • I think Twitter is an odd one out here; Twitter as a whole has been heading downhill ever since the acquisition, and I wouldn't be surprised if many of those blue checks are officially sanctioned bots. Especially given the way so many of them push the same narratives that Musk does, at the same time he does.


  • This does not work, for similar reasons that captchas piss off real humans.

    You add a barrier here. You think your solution means AI is reduced, but you also reduce real humans. I noticed this elsewhere too, such as "you need to verify your identity before you can post to the ruby issue tracker". I can do so, but I need my tablet and it takes me more time than before, so I stopped using the ruby issue tracker altogether. (It's not the only reason, but adding barriers really makes me invest my time elsewhere, or at least makes that more likely.)

    You always need to consider all the trade-offs. Charging money means you will also deter real humans at the same time. And it's not solely about the cost; it is simply a hassle. For similar reasons I also rarely register at a phpBB forum: I need to store the password so I don't forget it, and so on. More hassle. Even using a password manager is more of a hassle.

    • I can't access gnu.org, because their extreme measures against AI bots block my slightly older browser.

    • Yeah, I tried to sign up for instagram, but at the fourth captcha I gave up and left. How does instagram have any users with such a hostile sign-up barrier?


    • > Charging money means you will also offset real humans at the same time.

      On completely different scales. Even if it's not perfect, it's a strong enough filter to turn a bot infestation into a mild annoyance.


    • Metafilter and Something Awful both do this.

      Both sites have survived and continue to work well for their users.

      A small cost does definitely work for some sites.


  • A lot of the "add a cost to stop bad actors" schemes end up selecting in favor of bad actors.

    Sure, it might stop 10% of the bad actors and lower the numbers, but it'll also stop 80% of the good users, who aren't experts at getting around the cost and don't have an income from the service to pay it as a cost of doing business.

> shrug off around 600 AI content creator accounts monthly.

> I fear losing the battle.

I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts by hand, word for word. Their main concern was that LLMs were too hard to use from a technical perspective.

These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.

Sometime in late 2022, a switch was flipped, and almost all of them started talking about how AI and those who used it were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting Twitter content from extremely anti-AI people. Complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not a single disagreement (and lots of encouragement).

I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were tracked versus AI contributions automatically in the editor, to satisfy someone's curiosity, clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy or positively engage with it as if it were human-written.

A while later, most of their submission rules were updated with a new clause: if AI-written content was judged to have been discovered, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and given the uselessness of AI detectors, it was clear these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out for the witch hunt.

I also used to be quite engaged with blockchain, and it went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out the power-use claims were largely a nothingburger, people gave up on the hate parade.

I don't think you will "lose the battle" (at least in terms of keeping AI users out), and it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims look like a clear London horse-manure problem that is set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of which there are a huge number, granted).

YMMV

  • > I just don't think its possible to maintain such artificial rage for more than a few years.

    What makes you think the rage is artificial?

  • Have you considered the possibility that most non-programmer people mostly experienced the negative effects?

    Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.

    AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?

    Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.

    • >Their opinion about AI or blockchain most likely has absolutely nothing to do with you.

      Which is why I left before I was banned. I no longer felt comfortable, and they probably felt likewise. They wanted a safe space to hate on people involved in AI art, and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors, or honestly any other group, in that space without being ostracised.

      It's a bit like saying "A witch might have burned down their house, so their reaction against witches is understandable". Maybe in the abstract, but that doesn't mean the subsequent actions are acceptable.

      > Have you considered the possibility that most non-programmer people mostly experienced the negative effects?

      Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:

      1. They (like many people) posted a lot of their midjourney creations for a few months. (21/22 was like that)

      2. They saw an increase in low quality submissions.

      So gripes about AI art and low quality submissions seem perfectly valid.

      > Blockchain turned out to be an absolutely awful payment method

      > AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?

      Yeah, so I am not complaining about people having negative opinions. I was talking about the overall meme, the zeitgeist switch where the entire conversation suddenly goes from pros and cons to a standard negative message that everyone absorbed in a short time, used basically as a thought-terminating cliché. I have problems with crypto, and I like things about crypto. I can have a great conversation with most people, but for 12 months or so you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and that it was going to use X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now arrived in roughly the same place: there's no longer really a discussion, just one single mode the zeitgeist constantly repeats. To the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.

      My point is, like crypto, this fixed idea will eventually erode and the hate train will move on. People with well-thought-out negative opinions will still exist past that time; they just won't have people screaming about it at fever pitch constantly.


  • Genuinely don't know how this made at least three people angry enough to downvote but not suggest why.

    • I'm not angry; you just seem to be taking a very self-centered view of the general vibe in the specific forum you mentioned, and are interpreting general anti-AI/blockchain sentiment as personal attacks.

      So I downvoted.


  • This is entirely vibes, based on reading research on similar campaigns, so I can't pull a paper with hard evidence about this specifically. But I believe Chinese/North Korean infowar campaigns are behind these seeded talking points. They seed them in far-left activist communities, and once one sticks, the real people in those communities start carrying the message out to other communities, and then the CN/NK botnets amplify the messages and suppress the responses. They don't just do this on the left; I'm just highlighting the left for this specific point.

    • Yeah, that's not it. China is heavily invested in AI and LLMs. Also, this sentiment is organic; most people I talk to about AI are anti-AI.

      The exceptions to the anti-AI sentiment are management and people with a vested interest.

The battle is lost. You never had a chance. There's nothing you can do against the constant torrent of AI content that's only getting started. The online communities that we know and love are going to change, and there's nothing we can do about it. You can't keep AI out of any platform, no matter what the community guidelines say, or even if it seems locked down with no bot access.

The only solution is in-person meetups, bringing back third places, joining a club. Maybe it's not such a bad outcome.