
Comment by ylee


Nothing has changed since Jerry Pournelle wrote this 40 years ago about online forums:

>I noticed something: most of the irritation came from a handful of people, sometimes only one or two. If I could only ignore them, the computer conferences were still valuable. Alas, it's not always easy to do.

This is what killed Usenet,[1] which 40 years ago offered many of the virtues of Reddit in decentralized form. The network's design has several flaws, most importantly that no central authority can completely delete posts (admins in moderated groups can only approve them), because back in the late 1970s Usenet's designers expected that everyone with the wherewithal to participate online would meet a minimum standard of behavior. Usenet has always had a spam problem, but as usage of the network declined while the rest of the Internet grew, spam's relative proportion of the overall traffic grew.

That said, there are server- and client-side anti-spam tools of varying effectiveness. A related but bigger problem for Usenet is people with actual mental illness; think "50-year-olds with undiagnosed autism". Usenet is such a niche network nowadays that there has to be meaningful motivation to participate, and when that motivation is not a sincere interest in the subject, in my experience it's going to be people whose online behavior reflects very troubled personal lives. Again, as overall traffic declined, their relative contribution and visibility grew. This, not spam, is what has mostly killed Usenet.

[1] I am talking about traditional non-binary (text) Usenet here.

>This, not spam, is what has mostly killed Usenet.

Usenet had a nonstop spam generator called Google Groups that shit it up for years. It wasn't just intentional spam; clueless people also came in through there and bumped threads that were 20+ years old.

The other factor in the decline was that ISPs stopped bundling Usenet service in the 2000s.

There are still a handful of active groups, but unfortunately at least a third of the remaining active users lost access when the Google spam service stopped.

  • It may have been reasonable in the '80s and even the '90s to assume away, or ignore, the possibility that people with bad motives would be able to access the internet.

    But continuing to ignore it into the 2000s was clearly nonsensical.

One of the projects on my agenda is a classifier that spots those people on social media by detecting "signs of hostility." This was hung up for a while because I thought the process of making a training set would kill me [1] (not seeing these people was a major motivation for the project), but now I'm more optimistic. I still gotta make a generic ModernBERT + LSTM + calibration classifier though.

[1] https://www.cnn.com/2024/12/22/business/facebook-content-mod...
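
For concreteness, a minimal sketch of what that ModernBERT + LSTM + calibration stack could look like, assuming PyTorch and Hugging Face transformers; the checkpoint, pooling strategy, and temperature-scaling step are illustrative assumptions, not a finished design:

```python
# ModernBERT token embeddings -> BiLSTM -> mean-pool -> linear head,
# with temperature scaling (fit on held-out data after training) as
# the calibration step. Checkpoint and sizes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL = "answerdotai/ModernBERT-base"

class HostilityClassifier(nn.Module):
    def __init__(self, hidden=256, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)
        self.temperature = nn.Parameter(torch.ones(1))  # calibration knob

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(tokens)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out * mask).sum(1) / mask.sum(1).clamp(min=1)  # skip padding
        return self.head(pooled) / self.temperature

tok = AutoTokenizer.from_pretrained(MODEL)
model = HostilityClassifier()
batch = tok(["example post"], return_tensors="pt", padding=True, truncation=True)
probs = torch.softmax(model(batch["input_ids"], batch["attention_mask"]), dim=-1)
```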

  • We had a very naive version of this at a company I worked for about 25 years ago. It was called “asshole detective”. We captured about 200 user comments, dredged through them by hand, and scored particular words and phrases. Then we summed up the scores of each post in a thread. If a user was more than a couple of standard deviations outside the mean, it’d flag them as an asshole. After reviewing this over a few weeks, we found it was surprisingly good at singling out persistent assholes. It never actioned anything itself, though; that was up to a moderator.

    I imagine it’d be good at getting rid of a lot of modern plagues on social media as they seem to have a small, predictable and shitty vocabulary.
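
    Roughly, the mechanism was something like the sketch below: a reconstruction of the idea, not the original code, with invented phrases and weights.

    ```python
    # Hand-scored phrases summed per post; users flagged when their mean
    # score sits more than two standard deviations above the population
    # mean (assumes at least two users). Action is left to a moderator.
    from statistics import mean, stdev

    PHRASE_SCORES = {"idiot": 3, "shut up": 2, "rtfm": 1}  # invented weights

    def post_score(text: str) -> int:
        t = text.lower()
        return sum(w for phrase, w in PHRASE_SCORES.items() if phrase in t)

    def flag_assholes(posts_by_user: dict[str, list[str]], z: float = 2.0) -> list[str]:
        scores = {u: mean(post_score(p) for p in posts)
                  for u, posts in posts_by_user.items()}
        mu, sigma = mean(scores.values()), stdev(scores.values())
        return [u for u, s in scores.items() if sigma and (s - mu) / sigma > z]
    ```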

    • There are a lot of people who are condescending to others but wouldn't see themselves as assholes. I see this often in ham radio and electronics.

      Their responses are curt, sure, but to them they are not outside the norm of the field.


    • That's roughly what I'm planning. There are certain keywords and other signs (last time I looked, 40,000 Bluesky users had reposted and pinned a certain 'skeet') that I would say are "hostile", and with those I can seed a list of candidate hostile/non-hostile people, then use active learning methods to expand and clean up the list.
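
      The seed-and-expand loop would look roughly like the sketch below; uncertainty sampling stands in for whichever active learning method actually gets used, and TF-IDF plus logistic regression are cheap placeholders.

      ```python
      # Train a cheap model on the keyword-seeded labels, then surface
      # the unlabeled accounts it is least sure about for manual review;
      # label those, fold them into the seeds, and repeat.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      def next_batch_to_label(seed_texts, seed_labels, pool_texts, k=20):
          vec = TfidfVectorizer(min_df=2)
          X = vec.fit_transform(seed_texts)
          clf = LogisticRegression(max_iter=1000).fit(X, seed_labels)
          probs = clf.predict_proba(vec.transform(pool_texts))[:, 1]
          order = np.argsort(np.abs(probs - 0.5))  # closest to 0.5 first
          return [pool_texts[i] for i in order[:k]]
      ```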

      ... what I really need is something that detects text in images. I mean, I don't mind if you took a photo of a sign in the real world, but posting screenshots is a bad smell; only a tiny fraction are wholesome like this:

      https://bsky.app/profile/up-8.bsky.social/post/3lseycg7nl22p
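
      The crude version of that text-in-images detector is just OCR word counting; a sketch assuming Tesseract via pytesseract, with an arbitrary threshold:

      ```python
      # Flag an image as a probable screenshot when OCR finds many
      # reasonably confident words; a photo of a sign yields only a few.
      from PIL import Image
      import pytesseract

      def looks_like_screenshot(path: str, min_words: int = 10) -> bool:
          data = pytesseract.image_to_data(Image.open(path),
                                           output_type=pytesseract.Output.DICT)
          words = [w for w, c in zip(data["text"], data["conf"])
                   if w.strip() and float(c) > 60]
          return len(words) >= min_words
      ```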

  • I wish you the best of luck, but these days the main problems you're going to be facing are political, not technical. What makes people start to display "signs of hostility" these days is almost always tribal politics, and when you ban that, you are (at least from their POV), engaging in politically-motivated censorship. If it gets any kind of traction or visibility, your tool will be pinpointed as a weapon of The Enemy for suppressing truth and entrenching the powers that be, and you'll start getting threats to match.

    Not to say you shouldn't do it, but you should be aware of what you're signing up for.

Usenet killfiles work better than any tools that I see available for web forums.
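
For those who never used one: a killfile is a local, per-group list of header patterns that the newsreader applies before showing you anything, with no central authority involved. In rough Python terms (patterns invented for illustration):

```python
# Hide any article whose headers match a per-group pattern list.
import re

KILLFILE = {
    "comp.misc": [("From", r"troll@example\.invalid"),
                  ("Subject", r"(?i)make money fast")],
}

def killed(group: str, headers: dict[str, str]) -> bool:
    return any(re.search(pattern, headers.get(field, ""))
               for field, pattern in KILLFILE.get(group, []))
```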

Indeed. I found it strange that the paper (https://osf.io/preprints/psyarxiv/acbwg_v1) doesn't even mention the experiences of other social media and discussion fora, nor alternative tactics such as blocking users. The experiment ran only until March 2024, so it's already outdated; nowadays, even if you unfollow Elon Musk's preferred accounts, you will be exposed to them anyway.

Hopefully there will be follow-up studies.