Comment by chemotaxis

2 days ago

> It's not just LLMs, it's how the algorithms promote engagement. i.e. rage bait, videos with obvious inaccuracies etc.

I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there are often a fair number of patently incorrect takes and plenty of vitriol.

On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.

Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.

Edit, just to be clear: I'm not saying HN should be unmoderated.

  • Yeah, this is a critical difference: most of the issues are sidestepped because everyone knows nobody can force a custom frontpage tailored to a specific reader.

    So there’s no reason to try a lot of the tricks and schemes that scoundrels might have elsewhere, even if those same scoundrels also have HN accounts.

  • Only when certain people don't band together and hide posts from everyone's feed by abusing the "flag" function. Coincidentally, those posts often fit neatly into the categories you outlined.

    • Abuse of the flagging system is probably one of the worst problems currently facing HN. It looks like mods might be trying to do something about it, as I've occasionally seen improperly-flagged posts get resuscitated, but it appears to take manual action by moderators, and by the time they get to it, the damage is done: The article was censored off the front page.

    • I don’t know why this is being downvoted, I’ve witnessed it many times myself.

      It’s true that HN has a good level of discussion, but one of the methods used to achieve it is removing conversation on controversial topics. So I’m skeptical this is a model that could fit all of society’s needs, to say the least.

I want to agree with this. Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.

Since they are relatively open, at some point someone comes in who doesn't care about anything, or who is extremely vocal about something, and... there goes the nice forum.

  • >Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.

    I was too young for IRC/Usenet and started using the net/web in the late 90s, frequenting some forums. Agreed that anyone can come in and upset the balance.

    I'd say the difference is that on the open web, you're free to discover and participate in those social settings for the most part. With everything being so centralised and behind an algorithm, the things you're presented with are more 'push' than 'pull'.

  • MySpace was quite literally my space. You could basically make a custom website with a framework that included socialisation. But mostly it was just GeoCities for those who might only want to learn HTML. So it was a creative canvas with a palette.

  • Right, but that’s slightly different.

    I think the nuance here is that with algorithm-based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use them against you in the name of engagement.

    Compare that to a typical flame war on HN (before the mods step in) or IRC.

    On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.

    On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.

    There's a big difference between consuming controversial content from people you believe are a loud minority vs. controversial content from what you believe is a majority of people.

  • Or if the moderation was good someone would go “nope, take that bullshit elsewhere” and kick them out, followed by everyone getting on with their lives. It wasn’t obligatory for communities to be cesspits.

  • > Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually

    I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.

    A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.

When video games first started taking advantage of behavioral reward schedules (e.g. Skinner box stuff such as loot crates and random drops), I noticed it and would discuss it among friends. We had a colloquial name for the joke: we called them "crack points" (i.e., like the drug). For instance, the random drops that happen in a game like Diablo 2 are rewarding in very much the same way that a slot machine is rewarding. There's a variable ratio of reward, and the bit that's addicting is that you don't know when the next "hit" will be, so you just keep pulling the lever (in the case of a slot machine) or doing boss runs (in the case of Diablo 2).

We were three friends: a psychology major, a recovering addict, and a third friend with no background in how these sorts of behavioral addictions work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules have a very large effect on behavior, and that not everything with some identifiable reward is actually capable of producing behavioral addiction.
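
The variable-ratio schedule mentioned above can be illustrated with a toy simulation. This is just a probability sketch, not a behavioral model, and the 10% per-pull chance is a made-up number:

```python
import random

def pulls_until_reward(p, rng):
    """Count lever pulls until a reward lands, with per-pull probability p.

    This is a variable-ratio schedule: each pull is an independent
    chance, so the next "hit" is never predictable.
    """
    pulls = 1
    while rng.random() >= p:
        pulls += 1
    return pulls

rng = random.Random(42)
p = 0.1  # e.g. a hypothetical 10% drop chance per boss run
runs = [pulls_until_reward(p, rng) for _ in range(10_000)]

# On average a reward lands every 1/p pulls, but the spread is huge:
# some land immediately, others take many times the average. That
# unpredictability is the hook.
print(min(runs), sum(runs) / len(runs), max(runs))
```

The point the simulation makes is that even though the long-run average is fixed, any individual wait is unpredictable, which is exactly what distinguishes this schedule from a plain scoreboard.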

I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.

  • I think you touch on the crux of the issue here: that education is one of the most potent defenses against this kind of psychological manipulation.

    But not just any education. The humanities side of things, which are focused on the foundations of thought, morality and human psychology.

    These things are sadly lacking in technical degrees and it shows.

    It's also, IMO, why we see the destruction of our education systems as a whole as an element of control over society.

  • Thanks for this. I didn't realize until you said it why this issue might not be observable to a certain group of people. I think this is a cognitive awareness issue: you can't really see it until you have an awareness of it through experience. I came from a drug abuse background, and my wife, who was never involved in the level of addiction I was, has a hard time seeing how algorithms like this affect behavior.

  • >If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!"

    I don't think it's exactly wrong; you just have to look at it on a spectrum from minimal addictiveness to meth-level addiction. For example, in quarter-fed arcade games, getting a high score displayed to others was quite the addictive behavior.

I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator, because that's what people want to hear, but it would most likely end up as a moderator's assistant.
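
To make the moderator's-assistant idea concrete, here's a minimal sketch. Everything here is hypothetical: `ModerationQueue`, `toy_classify`, and the "ok"/"hold" verdicts are stand-ins, and a real system would prompt an actual LLM with the site's guidelines rather than use a keyword stub:

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    """The model only triages; a human makes the final call.

    `classify` is a stand-in for a real LLM request.
    """
    classify: callable
    held: list = field(default_factory=list)

    def submit(self, comment: str) -> bool:
        """Return True if the comment is published immediately."""
        verdict = self.classify(comment)  # "ok" or "hold"
        if verdict == "hold":
            self.held.append(comment)  # parked for a human moderator
            return False
        return True

# Stub classifier; a real one would parse an LLM's answer.
def toy_classify(comment: str) -> str:
    return "hold" if "outrage" in comment.lower() else "ok"

queue = ModerationQueue(classify=toy_classify)
print(queue.submit("Interesting paper on reward schedules."))  # True: published
print(queue.submit("Pure OUTRAGE bait, click now!"))           # False: held
```

The design point is in the return path: nothing is deleted automatically, so the LLM's false positives cost latency rather than silently censoring a comment.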

  • I think the curation of all media content using your own LLM that has been tuned using your own custom criteria _must_ become the future of media.

    We've long done this personally at the level of a TV news network, magazine, newspaper, or website -- choosing info sources that were curated and shaped by gatekeeper editors. But with the demise of curated news, it's becoming necessary for each of us to somehow filter the myriad individual info sources ourselves. Ideally this will be done using a method smart enough to take our instructions and route only approved content to us, while explaining what was approved/denied and being capable of being corrected and updated. Ergo, the LLM-based custom configured personal news gateway is born.

    Of course the criteria driving your 'smart' info filter could be much more clever than allowing all content from specific writers. It could review each piece for myriad strengths/weaknesses (originality, creativity, novel info, surprise factor, counter intuitiveness, trustworthiness, how well referenced, etc) so that this LLM News Curator could reliably deliver a mix of INTERESTING content rather than the repetitively predictable pablum that editor-curated media prefers to serve up.

    • That's the government regulation I want but it's probably not the government regulation we will get because both major constituencies have a vested interest in forcing their viewpoints on people. Then there's the endless pablum hitting both sides, giving us important vital cutting edge updates about influencers and reality TV stars whether we want to hear about them or not...

      We say we want to win the AI arms race with China, but instead of educating our people about the pros and cons of AI as well as STEM, we know more than we want to know about Kim Kardashian's law degree misadventures and her belief that we faked the moon landing.

  • It would just become part of the shitshow, cf. Grok.

    • Which is why you should cancel your Twitter account unless you're on the same page with the guy who owns it, but I digress.

      If a site wants to cancel any ideology's viewpoint, that site is the one paying the bills and should have the right to do it. You as a customer have the right not to use that site. The problem is that most of the activity is currently concentrated in a couple of social media sites, and the great Mastodon diaspora never really happened.

      Edit: why do some people think it is their god-given right, one that should be enforced with government regulation, to push their viewpoints into my feed? If I want to hear what you have your knickers in a bunch about today, I will seek it out. This is the classic difference between push and pull, and push is rarely a good idea.

      My social media feeds had been reduced to about 30% political crap, 20% things I wanted to hear about, and about 50% ads for things I had either bought in the deep dark past or had once Google searched, plus the occasional extremely messed-up Temu ad. That is why I left.

I suspect it got worse with the advent of algorithm-driven social networks. When rage inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithms-free platforms.

  • Algorithm-driven social media is a kind of pollution. As the density of the pollution on those sites increases, it spills out and causes problems for the neighbors. Think of 4chan-style raids: it wasn't enough for them to snipe each other on their own site, so they spread the joy elsewhere.

    And that's just one type of issue. You also have numerous kinds of paid actors who want to sell something, cause trouble, or just push general propaganda.

The thing is, the people on those "algorithm-free" forums still get manipulated by the algorithm in the rest of their life. So it seeps into everything.

It is of course human nature. The problem is what happens when algorithms reinforce, exaggerate, and amplify the effects of that nature to promote engagement and ad clicks. It's a cancer that will at the very least erode the agency of the average individual, and at worst create a hive mind that we have no control over. We are living in the preview of it all, I think.