Comment by Lapel2742

2 days ago

> it's how the algorithms promote engagement.

They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.

> should be heavily regulated.

By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.

But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.

  • I’m surprised at how much regulation has become viewed as a silver bullet in HN comments.

    Like you said, the implicit assumption in every call for regulation is that the regulation will hurt companies they dislike but leave the sites they enjoy untouched.

    Whenever I ask what regulations would actually help, the only responses are extremes like “banning algorithms” or something similar. Most commenters haven’t stopped to realize that Hacker News is itself an algorithmic social media site (are we not here socializing, with the order of posts and comments determined by a black-box algorithm?).

    • HN lets you choose the order (active, new, top [the actual algorithm]).

      That's not true of Facebook, whose "new" feed does not actually show posts in order of recency.

      Reddit still does, but it also injects ads that look like recent posts and aren't, which is misleading.

  • > But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape.

    For example, we could forbid corporations from using any algorithm beyond sorting by date of post. Regulation could forbid gathering data about users: no gender, no age, none of the rest.

    > Calling it “regulation” is just a polite veneer over wanting control.

    It is you who may have misinterpreted what regulations are.

    • > For example, we could forbid corporations from using any algorithm beyond sorting by date of post

      Hacker News sorted by "new" is far less valuable to me than the default homepage which has a sorting algorithm that has a good balance between freshness and impact. Please don't break it.

      > It is you who may have misinterpreted what regulations are.

      The definition of regulation is literally: "a rule or directive made and maintained by an authority." I am just scared about who the authority is going to be.
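      (As an aside: the "freshness vs impact" balance people attribute to HN's front page is often approximated by the widely cited "gravity" formula. This is a sketch of that folklore formula for illustration, not HN's actual production code; the exponent and offsets are assumptions.)

```python
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Approximate HN-style ranking: votes push a story up,
    but the score decays polynomially as the story ages."""
    return (points - 1) / (age_hours + 2) ** gravity

# A fresh story with few votes can outrank an older, higher-voted one:
fresh = rank_score(points=10, age_hours=1)    # high freshness, modest impact
old = rank_score(points=100, age_hours=24)    # high impact, low freshness
```

      The point of the gravity exponent is exactly the trade-off described above: sorting purely by "new" is the special case where impact counts for nothing.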

  • Control is the whole point. One person being in charge, enacting their little whims, is what you get in an uncontrolled situation and what we have now. The assumption is that you live in a democratic society and "the regulator" is effectively the populace. (We have to keep believing democracy is possible or we're cooked.)

  • By a not-for-profit community organization that has zero connection to, or interest in, any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.

    Just like the community organizations we had that watched over government agencies that we allowed to be destroyed because of profit. It's not rocket science.

    • > By a not-for-profit community organization that has zero connection to, or interest in, any for-profit enterprise, that represents the stable wellbeing of society, and that has a specific mandate to do so.

      Then you get situations like the school board stacked with creationists who believe removing the science textbooks is important for the stable wellbeing of society.

      Or organizations like MADD that are hell bent on stamping out alcohol one incremental step at a time because “stable wellbeing of society” is their mandate.

      Or the conservative action groups in my area that protest everything they find indecent, including plays and movies, because they believe they’re pushing for the stable wellbeing of society.

      There is no such thing as a neutral group pushing for a platonic ideal stable wellbeing of society. If you give a group of people power to control what others see, it will be immediately co-opted by special interests and politics.

      Singling out non-profits as virtuous and good is a utopian fallacy. If you give any group power over what others are allowed to show, it will become extremely political and will be abused by every group with an agenda to push.

  • It's really not that complicated:

    - Ban algorithmic optimization that feeds on and proliferates polarisation.

    - To heal society: implement discussion (commenting) features that allow (atomic) structured discussions, to build bridges across cohorts and help find consensus (vs. thousands of comments screaming the same nonsense).

    - Force the SM companies to make their analytics truly transparent and open to the public and researchers for verification.

    All of this could be done tomorrow, no new tech required. But it would lose the SM platforms billions of dollars.

    Why? Because billions of people posting emotionally and commenting with rage, yelling at each other, repeating the same superficial arguments/comments/content over and over without ever finding common ground, trap far more users in the engagement loop of the SM companies than people having civilised discussions, finding common ground, and moving on from a topic ever would.

    One system of social media would unlock a great consensus-based society for the many; the other delivers endless dystopic screaming battles and riches for a few, while spiralling the world further into a global theatre of cultural and actual (civil) war, thanks to the Zuckerbergs & Thiels.

    • That only treats the symptoms, not the cause. The purpose of algorithmic optimization farming engagement is to increase ad impressions for money. It is advertising that has to be regulated in such a way that maximizing ad impressions is not profitable; otherwise you will find that social media companies still have every incentive to find other ways to do it, ways that will probably be just as harmful.

    • > it's really not that complicated...

      Then you list at least four priorities, which would require one multi-page bill (or more likely several) to make its way through the House, the Senate, and the President's desk, all while under fire from every lobbyist in Washington?

  • I’d favour regulation towards transparency if nothing else. Show what factors influence appearance in a feed.

  • Recasting regulation as a desire for control is too reductive. The other point of regulation is compromise. No compromise at all is just a wasted opportunity.

My view is that they are just exposing issues with the people in those societies, issues that are now harder to ignore. Much of the hate, fear, and envy that I see on social networks has other causes, but people have difficulty addressing those.

With or without social networks this anger will go somewhere; I don't think regulation alone can fix that. Let's hope it becomes something transformative, not in the world-ending direction but in the constructive one.

  • They seem to artificially create filter bubbles, echo chambers and rage. They do that just for the money. They divide societies.

    For example:

    (Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth)

    > First, there is a consistent observation across computational audits and simulation studies that platform curation systems amplify ideologically homogeneous content, reinforcing confirmation bias and limiting incidental exposure to diverse viewpoints [1,4,37]. These structural dynamics provide the “default” informational environment in which youth engagement unfolds. Simulation models highlight how small initial biases are magnified by recommender systems, producing polarization cascades at the network level [2,10,38]. Evidence from YouTube demonstrates how personalization drifts toward sensationalist and radical material [14,41,49]. Such findings underscore that algorithmic bias is not a marginal technical quirk but a structural driver shaping everyday media diets. For youth, this environment is especially influential: platforms such as TikTok, Instagram, and YouTube are central not only for entertainment but also for identity work and civic socialization [17]. The narrowing of exposure may thus have longer-term consequences for political learning and civic participation.

    https://www.mdpi.com/2075-4698/15/11/301

  • > Much of the hate and the fear and the envy that I see on social networks have other reasons

    Maybe so, but do you really think actively amplifying or even rewarding them has no effect on people whatsoever?

      Throughout history, people did plenty of horrible things and/or felt miserable without social networks. Yes, amplifying or rewarding these feelings does not have a positive effect, but I would like to see further analysis of the magnitude.

      Think of slavery, the burning of witches, or genocides: those were considered perfectly normal not that long ago (on a historical scale). I feel that focusing on social networks prevents some people from asking "is that the root cause?". I personally think there are other reasons for this generic "anger" that have a larger impact and different solutions than "less AI / fewer social networks", but that would be too off-topic.

  • Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?

    What if social media, and the internet at large, is now exposing people to things which had previously been kept hidden from them, or distorted? Are people wrong to feel hate?

    I remember the time before the internet, when a select few decided what the public should and should not know, what they should feel, what they should do, and how they should behave. The internet is not the first mass communications medium, and neither are social media or LLMs. The public has been manipulated and primed by mass media for over a century now.

    The largest bloodshed events, World Wars I and II, were orchestrated by lunatics screaming on the radio or from behind a pulpit, with the public eagerly herded by them into the bloodshed.

    This comment isn't in opposition to yours, it's just riffing on what you said.

    • > Is hate, fear, or envy by themselves wrong, or only wrong when misdirected?

      I think they are natural feelings that appear for various reasons. People have struggled for centuries to control their impulses, and for millennia this struggle has been used to the advantage of whoever could manipulate them.

      The Second World War did not appear in a "happy world"; it may even have started because of the Great Depression. For other conflicts, similarly: I don't think the situation was great for most people before them either.

      I am afraid that social networks merely expose what happens in people's heads (which would be worrying, as it could predict larger-scale conflicts) rather than making normal people angry (which could be solved by simply reducing social media). Things are never black and white, so it is probably something in between. Time will tell whether it is closer to the first or the second.

I agree, but focusing on "the algorithm" makes it seem to the outsider like it must be a complicated thing. Really it just comes down to whether we tolerate platforms that let somebody pay to have a louder voice than anyone else (i.e. ad-supported ones). Without that, the incentive to abuse people's attention goes away.

We've seen what happens when we pretend the market will somehow regulate itself.

  • Just because the free market isn't producing results you like doesn't mean that more regulation would make it better.