
Comment by onlyrealcuzzo

1 month ago

How is any app/website that 1) appeals to kids, 2) sells attention, 3) does A/B testing and/or has a self-learning distribution algorithm NOT guilty of this?

It probably helps when you suppress research that shows you’re harming children and allow human traffickers to fester on your platform with 17 warnings or whatever.

  • The argument that research was suppressed and this is somehow damning is absurd on its face. The most obvious reason being that they obviously didn't do a very good job of suppressing it, given that we hear this claim every day. The second being that they could have just not done this research at all, and then there would have been nothing to "suppress" (this terminology is also very odd... if 3M analyzes different sticky notes and concludes that its competitors' sticky notes are better than its own but does not release the results, is that suppression?). The third is that studies with the same results have come out probably every year since 2010 and have been routinely cited in the mainstream press. Lastly, it ignores that many platforms have actually responded to research about potential harms of social media by implementing safeguards on teen accounts.

    Look at the plaintiff in this case: it's a mentally unstable person who blames her life problems on social media. Never mind the fact that she had been diagnosed with mental illnesses as an early teen, or that an overwhelming majority of people who use social media don't develop eating disorders or other mental illnesses as a result of it (and in fact the incidence of, say, bulimia peaked 30 years ago in spite of almost universal social media adoption among young people). This is not at all like smoking, where 15% of smokers will get lung cancer.

    And due to some absurd legal reasoning the plaintiff was allowed to pseudonymously extort $3 million out of tech companies. Worst of all I see people on a technology forum applauding this out of some sort of resentment towards large companies!

    • Nobody ever accused these companies of being competent at suppressing the research (which includes third parties btw, not just internal).

      Companies do this research for all sorts of reasons (including legal compliance, demonstrating due diligence to regulators, to understand users and improve products, etc etc etc). For example, it's not like Zuck commissioned an internal study to show how they're harming children; more like some internal team was seeking to understand why kids love a certain feature, which led them to conclusions that make the company look bad.

      To your third point: that research usually comes out via whistleblower leaks or third-party studies, not because of the altruism of these companies.

      Finally, the platforms aren't doing enough, and this court case suggests they've persisted in finding ways to hook children because of the financial incentives.

      The sources cited in this article are a good primer for understanding what these companies are doing: https://www.transparencycoalition.ai/news/meta-suppressed-re...

    • >This is not at all like smoking where 15% of smokers will get lung cancer.

      Unfortunately for you and the social media sites, the legal standard for defective products has no "percentage" of people harmed required to incur liability. Product liability means showing the product was defectively designed and caused foreseeable harm to a specific plaintiff.

      > absurd legal reasoning

      It's certainly not surprising you think protecting minors in legal cases (she was a minor when the case was filed) is "absurd legal reasoning".

      Addressing the actual legal questions in the case might be more fruitful than hurling shit against a wall.

      1 reply →

    • > The argument that research was suppressed and this is somehow damning is absurd on its face.

      The argument is not that it is vaguely "somehow damning".

      The argument is that the existence of the research and its findings, the fact that it was in the hands of the firms, and the fact that they actively chose to suppress it are together evidence of one specific fact relevant to liability: that, at the time they made relevant business decisions around or after the review and the decision to suppress the reports, they had knowledge of the facts contained in the report.

      > The most obvious reason being that they obviously didn't do a very good job of suppressing it given that we hear this claim every day.

      The success of suppression is not relevant to what the decision to suppress is used to prove.

      > The second being that they could have just not done this research at all and then there would have been nothing to "suppress"

      The fact that, had they made different decisions previously, they would not have had knowledge of the facts that they actually had when they made later business decisions is also not relevant to what the existence and suppression of the research is used to prove.

      > (this terminology is also very odd... if 3M analyzes different sticky notes and concludes that their competitors sticky notes are better than theirs but does not release the results, is that suppression?).

      It would obviously be suppression of the report (which isn't a legal term of art but a plain-language descriptive term), but unless they later made fact claims about their product that were contrary to what was in the suppressed report and were being sued for fraud or false advertising, that suppression probably wouldn't be useful as evidence of anything that would produce legal liability.

      > The third is that studies with the same results have come out probably every year since 2010 and have been routinely cited in the mainstream press.

      Which is additional, though weaker, evidence of the firms' knowledge of the same conclusions (weaker because it's pretty hard to prove that a firm had particular knowledge of any of those studies, but it is pretty easy to prove knowledge of the studies for which there is documentation of the commissioning, the internal review and discussion, and the decision to suppress).

      But it doesn't in any way counter the weight of the evidence of the suppressed reports; it weighs in the same direction, just in much smaller measure.

    • The "overwhelming majority" standard for harm seems odd when you use 15% of smokers getting harmed as an example. 15% is not an overwhelming majority.

      2 replies →

I think there is a fourth criterion that is probably more important:

Actively ignoring harm caused by your product. TV and radio have sold attention too, but there were pretty strict rules on what you can and can't broadcast, and to whom (ignoring cable for the moment). It's the same for services: businesses that knowingly encourage damaging behaviours are liable to prosecution.

  • Except cable is the more apt comparison here - broadcast rules exist because airwaves are an extremely finite resource and so we can argue that the government has a vested interest in what kind of speech can happen on them. No such scarcity exists with web services.

I think there's a little more nuance than that, but it seems roughly correct.

Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?

  • I think addiction is a red herring.

    Pokemon is addictive; computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.

    • Addictive patterns in games and other online activity are a bit less innocent than you portray: knowingly causing harm is too low a standard. A lot of the profitability of online games, prediction markets, etc. comes from the whales. The whales are probably addicted. If your business is a whale hunt, you are plausibly causing harm, at least to the extent that addiction is dangerous.

  • They'd find another method. Why are we allowing this in the first place?

    I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not (drug = bad); it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with everything else. This is a tale as old as time...

    • > Why are we allowing this in the first place?

      Exactly what I keep coming back to.

      For me, it feels like you could cut this problem down substantially by eliminating section 230 protection on any algorithmically elevated content. Everywhere. Full stop.

      If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement. You want that content to be seen, for whatever odd reason, and if it's harmful to your users, you should be held responsible for it. It's one thing if some random asshole messages me on Telegram trying to scam me; there's little Telegram can do (though a fucking "do not permit messages from people not in my contacts" setting would be nice) but there is nothing at all that "makes" Facebook shovel AI bullshit at people, apart from it juices engagement, either by genuine engagement or ironic/ragebaiting.

      And the AI bullshit is just the annoying part. I've seen "Facebook help" groups that are clearly just trawling for people's account info, I've seen scam pages and products, all kinds of shit, and either it pisses people off so Facebook passes it around, or they give Facebook money and Facebook shoves it into the feeds of everyone they can.

      It's fucking disgusting and there's no reason to permit it.

      10 replies →

    • Relative to how long it takes the law to catch up with what's going on, YouTube and Facebook have been around for a tiny amount of time.

      2 replies →

    • "Free market" and "entrepreneur spirit" fetishism and fear of collective social action against individual drives.

  • For context, Facebook is so dystopian when I log in once every few years that I'm not sure I'll ever use it again. And I hate wading through the YouTube cesspool to find some educational content I like. But I don't think it makes sense to ban A/B testing or optimization in general. Some company could use it, for example, to figure out how to teach math to kids in a way that's as engaging as possible. This would be "more addictive," technically.

    • That's a good point, I'm not 100% sure it's worth throwing away the potentially beneficial uses. There might not be a solution that's both feasible to implement and avoids banning useful things. In the end I usually come back to it being the parent's responsibility to monitor usage, limit screen time, etc., but it hasn't been working so well in practice.

  • > more nuance

    Not enough to defuse liability. Fifteen years ago, when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because

    > It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair

How’s this different than tv that a kid might see that has ads and programming targeting kids?

I watched 80s horror movies when I was in elementary school and had nightmares for years. Should I sue now?

How about parents be held responsible for how they care for their kids or not? Maybe a culture that judged parents more strongly for how they let their kids spend their time would be an improvement.

  • Being able to find some basis for comparison between two things does not render them equivalent, and this is an extremely frequent fallacy I see with regard to technology discussion on HN.

    • When it comes down to it, I’m not sure how you differentiate an “addictive” product from a well-made product that I choose to keep using.

      When people say that Tetris and Civilization are “addictive” they aren’t implying anything malicious about the development, it’s more of a compliment about the game (and maybe a little lament about staying up too late).

      But the addictive nature of social media feels different and I can’t figure out what that distinction is.

      8 replies →

    • I understand what you’re saying, I personally don’t like or use social media, but I don’t agree that these companies are at fault after reading this article and others. I’d rather be wrong and learn something than think I’m right, so I welcome further criticism.

      3 replies →

  • Both things can be true. Parents can share responsibility. But it is also the case that Facebook actively suppressed research showing that children using their platforms experience emotional harms. It is also the case that, around the time you were in elementary school, discussions about children's programming had been ongoing for years, and eventually regulations were put in place[0].

    0: https://en.wikipedia.org/wiki/Regulations_on_children's_tele...

    • I agree that they knowingly acted to harm society. I used to think regulation could help, and maybe it can, but if there were some way to shape the culture to value, for example, educational TV programming, I think that would be the most powerful influence on tech/media companies. Regulation could serve to inform parents that "this programming/platform is known to rot your kid's mind," like a nutrition label, and some day, hopefully, parents will be more likely to disallow it, as some do knowing how much sugar is in sodas.

  • > How’s this different than tv that a kid might see that has ads and programming targeting kids?

    Those ads didn't adjust themselves on a per-child basis to their exact interests.

  • The difference is largely in the way that the legal caste perceives themselves to be aligned with media but opposed to tech.

  • Parents ought to be held responsible for how they care for their kids. This isn't just true of their use of social media and devices, but also when it comes to teaching them to look both ways when crossing the street; making sure they understand the concepts of private parts, consent, and personal space; making them understand the dangers of alcohol; and many other things.

    Does any of that obviate the need for safe urban design, anti-CSAM and anti-molestation laws, or laws prohibiting the local dive from serving a cold one to my 11 year old? Will simple appeals for "parental responsibility" suffice as an argument for undoing those child safety systems we put in place, or will they be met with derisive dismissal? Why should your "solution" be treated any differently? In fact you offer none. Yours is the non-solution solution, the not-my-problem solution, the go-away solution. Not good enough on its own, sorry.

    • For 30 years (the '60s to the '90s) we told parents "It's 10pm, do you know where your kids are" in an ad, on TV. We came home to empty houses and let ourselves in with a key around our necks.

      Now, we call the police, and arrest parents, if kids are outside, unsupervised. https://www.cnn.com/2024/12/22/us/mother-arrested-missing-so...

      When I was a child in the 80s and 90s, we had "jobs" as kids... mowing lawns, paper routes, and so on. Now if you go offer to mow your neighbor's lawn, the cops get called: https://www.fox8live.com/2023/07/26/officer-surprises-young-...

      Parents are afraid to let their kids out of their sight, and those of us who stay pragmatic because we understand the data (and not the fear) tend to get looked down on.

      Talk to anyone who is Gen X and they will tell you that we basically got thrown outside all day (and had fun). Parents can't say "go outside and play" anymore, so kids end up getting handed devices... and they are going to play and explore and do the dumb things that get them in trouble.

      > those child safety systems we put in place

      Except we have denormalized things that SHOULD be perfectly fine. And as fewer kids get to go outside unattended with friends, it pushes their peers to go "online" to socialize.

      Maybe the government needs to run commercials: "It's 10am, why isn't your child outside playing with the neighbor kids unsupervised?"

    • As sibling comments point out, parents are already held responsible for how they care for their kids, to an absurd degree.

      I have had CPS called on me by an overbearing school administrator. Have you had that happen to you? Let me tell you, it's not a fun experience.

      Enough of this "blame the parents" mentality! Ironic given that the goal for all these platforms is growth at all costs. Where do you think "growth" comes from, after all? If you make being a parent so goddamn difficult that it's more rational to just not do it, guess what, poof goes your sweet, sweet growth.

      So tired of this line of thinking. The parents are put into an impossible situation. Stuck between kids who by definition and by design will test the boundaries that they're given, and tech platforms that are propped up with not just trillions of dollars of valuation, but the societal expectation that you engage with them. Want your kids to compete in sports? Well, they need to have WhatsApp and Instagram to keep track of team events!

      Give me a break. Equating controlling social media and devices to "look both ways when crossing the street" is disingenuous at best. There are no companies that make billions of dollars in advertising revenue telling your kids to jaywalk. But Facebook gladly weaponizes their algorithm to drive "engagement" - and, surprise, children with still-forming prefrontal cortices are drawn to content that reinforces their natural self-criticisms and doubts. So now my child, who has to be on Instagram to keep track of sports schedules, is also force-fed toxic content because that's what a mechanical algorithm thinks is most "engaging" based on my derived psychological and demographic profile.

      You want to talk about CSAM? X proudly proclaims that they have every right to produce deep-fake pornography with the faces of underage children. What action shall I, as an individual parent, take if my 15-year-old girl's face is suddenly pasted onto a sexually explicit video and widely shared thanks to xAI's actions? Shall I be held responsible for how I "let this happen" to my child?

      1 reply →

    • > Parents ought to be held held responsible for how they care for their kids.

      If YouTube detects that a child is watching 5 hours of video a day, should Google alert child protective services?

      2 replies →

  • > How’s this different than tv that a kid might see that has ads and programming targeting kids?

    It's not; that's illegal as well. You cannot target kids with TV advertising.

  • We're a two-parent household, and my spouse had cancer and never really got all of their energy back, and works full time, so the entirety of home, land, and car maintenance falls to me.

    I homeschool our youngest because the school system here sucks, based on the experiences of our older two. I'm always exhausted. I solved this (the "parents must be more involved") by watching my kid play Roblox and arguing with them about spending their money on gift cards instead of Lego, posters, or whatever else that isn't so fleeting; I also don't let them have a cellphone. They turn 10 in June. We don't have TV or CATV; I have downloaded most of the old TV programs that kids liked, and grandma doesn't watch kids' shows, so he really doesn't have a perspective on what everyone else's viewing habits are. He watches YT on his Switch about fireworks, cars, and then also some of the idiots with too much money acting goofy, plus what I would call "vines compilations" of just noises and moving pictures; I don't get it, but it seems harmless. For the record, pihole no longer blocks YouTube ads, so I was just told there are ads on the Switch now.

    But anything beyond that, I can't watch, nor do I want to watch their every interaction on a computer. I gotta cook, and the weather isn't always conducive to sending them outside to play. When I was growing up and was bored, there wasn't too much I could do about it. Today, my youngest has virtually anything on the planet just peeking around the corner. America's Funniest Home Videos and a blue square shooting red squares at orange squares? Yeah, ok.

    ===========

    It's getting to the point where I think people who have really strong opinions on topics like this need to disclose any positions they might have that influence their opinion. My disclosure is that I have no positions in any company or entity.

    Everyone in the US has been fed a lie that if we just work hard and don't interfere with the billionaire class, then someday we, too, can be rich like them. It's a bum steer, folks. For every 1 billionaire who "came up from the slums" or whatever, there are 100 who are billionaires because their families did some messed-up stuff, probably globally, sometime in the last 200 years. And offhand, knowing the stories of a bunch of billionaires, even 10 in the US who were honestly self-made and didn't defraud, cheat, or skirt regulations to get there seems almost a magnitude too high.

    I bring the above two paragraphs up because if one has a position in Facebook, of course they're going to rail against Facebook losing 230 protection for any part of their operation: Instagram, the FB feed, whatever. Likewise if a person has a position in GOOG, or Apple, or Tesla. What's that Upton Sinclair quote that's been mentioned twice? If someone believes that, given luck and grit, they too could make a "facebook"-sized corp, but not if the government says "you can't addict children to sell ads," then I consider them a creep.

    For the record: my oldest two are in their early 20s now.

A/B testing is one way to make things “addictive” but you can also make addictive products without it.

A really good designer could make a highly engaging app, or an editor could write clickbait headlines, all without testing.
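
For anyone unfamiliar with the mechanism under discussion, the engagement A/B test loop is roughly the following. This is a toy sketch; the variant names and the click-through metric are made up for illustration:

    import random

    views = {"A": 0, "B": 0}
    clicks = {"A": 0, "B": 0}

    def serve_variant() -> str:
        # Randomly assign each page view one of two headline variants.
        variant = random.choice(["A", "B"])
        views[variant] += 1
        return variant

    def record_click(variant: str) -> None:
        clicks[variant] += 1

    def winner() -> str:
        # Ship whichever variant has the higher click-through rate,
        # then repeat against a new challenger.
        return max(clicks, key=lambda v: clicks[v] / max(views[v], 1))

Run that loop continuously against millions of users and the product drifts toward whatever maximizes clicks, designer or no designer.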

  • These products maximize revenue through engagement with advertisements. The outcome is built into their business model.

Because most are just nowhere near as good and effective at ruining a kid's mind as Meta. If others were as good as Meta at destroying the cognitive development of whole generations, they'd probably also be liable.

Correct, selling attention inevitably leads to harm.

  • As a parent, the only solution is sticking to ad-free subscription services. PBS is a godsend here, but there are other good options out there too. Tragic that public broadcasting funding was cut when there are clear harms in the free* commercial options.

    *Except for your time and mental health of course

    • Agreed. Libraries have books and DVDs, and you have things like the classical stations. You also have playgrounds and walks in the park, etc. (I'm also a parent of two young children.)

      Always doing wholesome stuff with your kids is certainly not easy or trivial, but there is a cascading effect here. If your child does not expect to be able to just watch TV all the time, it's easier to keep them interested in other things. Once that expectation is burned in, you'll be fighting it for a while, and a small child will _never_ say "I've had enough YouTube, I don't need any more."

      So I really don't want to be self-righteous about always doing wholesome stuff with your kids (we definitely do not succeed 100% of the time) -- but rather point out that letting them use addictive media has negative, cascading consequences that actually do make it harder for you as a parent. It's analogous to drinking to relax. You get relief now, and pay for it later. Not actually a good tradeoff much of the time.

      1 reply →

    • PBS is great if you are looking for a workable harm reduction strategy. Eliminating that type of entertainment is probably an even better goal.

I would argue that no app/website should be selling itself to kids. No corporation should be trying to tether its ARR to children's attention.

  • When my kids were young, we canceled our Disney Channel / etc cable subscription and showed them more PBS and similar.

    It was really annoying turning on a show for 30 minutes then for the next week hearing about that new toy they just have to get. It was exhausting.

I guess ultimately it depends on whether the app/website authors do so "negligently" or not.

> Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.

So if you do so while providing warnings and controls for people, that might make it OK in the eyes of the law?

Probably not much other than scale. Facebook is large enough that they can hire behavioral researchers to make this stuff more addictive while looking the other way and raking in the money. I think Roblox is just as bad (maybe worse) regarding addiction for kids. I've played hundreds of hours with my sister's kids, and the way all these low-quality slop games handle grinding, progression, and pay gating is honestly disgusting.
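
To make that concrete, the grind/pay-gate pattern boils down to something like the following. A toy sketch with invented numbers, not measurements from any real game:

    # Each level requires exponentially more grinding; paying skips it.
    GRIND_MINUTES_BASE = 10
    GRIND_GROWTH = 1.5  # each level takes 1.5x longer than the last

    def minutes_to_level(level: int) -> float:
        return GRIND_MINUTES_BASE * GRIND_GROWTH ** level

    def offer(level: int, skip_price_cents: int = 99) -> str:
        minutes = minutes_to_level(level)
        return f"level {level}: grind {minutes:.0f} min, or ${skip_price_cents / 100:.2f} to skip"

    # By level 10 the grind is ~9.6 hours, and the 99-cent skip starts
    # to look cheap -- which is the point.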

But then again, I manage to get myself addicted to a video game usually once a winter for a few weeks, and don’t play games for the rest of the year. There’s really no solution to this, but I don’t want to live in a world where everyone is hopelessly addicted to shallow digital experiences.

It sounds like an adult was awarded $6 million because she watched a lot of YouTube/Instagram as a kid. Literally any social media site would be guilty of this; I hate to say it, but we need better corporate protections if cases like this are allowed to enter court.

At least legal experts are critical of the decision: '“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law'

A/B testing is very, very different to handing over control of your content to a reward function that optimizes for time spent over any other criteria.

We had 10+ years of products like Facebook, Twitter, YouTube, hell even LinkedIn, with a basic content model of "you build your own graph of people who you pull content from," and their job was to show it to you and put ads in there to fund the whole enterprise. If I decided to follow harmful content? That was a pact between me and the content creator, and YouTube was nothing more than a pipe the content flowed through. They were able to build multi-billion dollar businesses off of this. That's really important: this was enormously profitable. But then the problem happened that people's graphs weren't interesting enough, and sometimes they'd go on the thing and there were no new posts from people they followed, and this was leaving money on the table. So they took care of that problem by handing over control of the feed to the reward function.
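
The difference is easy to state in code. A minimal sketch, with hypothetical field names; real ranking systems are vastly more complex:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created_at: datetime
        predicted_watch_seconds: float  # output of a learned engagement model

    def chronological_feed(posts, following):
        # The old model: only accounts you chose to follow, newest first.
        return sorted(
            (p for p in posts if p.author in following),
            key=lambda p: p.created_at,
            reverse=True,
        )

    def engagement_feed(posts):
        # The new model: anything on the platform, ranked by a reward
        # function that optimizes predicted time spent over everything else.
        return sorted(posts, key=lambda p: p.predicted_watch_seconds, reverse=True)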

More accurately, especially for Meta products: they completely took control away from you. You didn't even have the option to retain the old, chronological social graph feed anymore. And it was ludicrously profitable. So now the laws of capitalism dictate that everyone else has to follow suit. I now have extensions on my browser for Instagram and YouTube to disable content from anything I don't follow - because I still find these apps useful for that one original purpose they had when they blew up and became mainstream. Why are these browser extensions? Why can't I choose to not see this stuff in their apps? That's the major regulation hole that led to this lawsuit, imo.

It's the same thing you see with people blaming smartphones for brainrot. We've had 15 to 20 years of smartphones with more or less the same capabilities as they have today, and for the vast majority of that time my phone didn't make books less interesting or make me struggle to do chores or manage my time. For a full decade or more I saw my phone as a net positive in my life, was proud to work for Twitter, and generally saw technology like the Louis CK bit about the miracle of using a smartphone connected to WiFi on an airplane. But in the last five years or so, things have noticeably and increasingly gone to shit. Brainrot is a thing. All my real-life friends who are the opposite of terminally online or technical are talking about it. I don't use TikTok, but it seems like that is absolutely annihilating attention spans. The topic of conversation over drinks is how we've collectively self-diagnosed with ADHD and struggle with all kinds of executive function... but also are old enough to remember a time when none of this existed. Complete normies are reading Dopamine Nation and listening to Andrew Huberman trying to free themselves.

I don't know what the exact solution is, but there's at least a simpler time we can point to, when we all had smartphones and we were all connected via platforms and we all posted and consumed stupid pictures of each other, and it wasn't... _this_.

  • This is the clearest articulation of the problem I've seen in this thread. The chronological social graph feed era was fine. The handoff to engagement-optimizing algorithms is where things broke.

    I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.

    Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.

  • Great point RE the self-learning algorithms. That's what I intended originally, but didn't communicate clearly.

  • Regarding brain rot, short-form content is absolutely going to be the root physical cause - people could tolerate smartphones prior to the inception of short-form content. On a cultural level, this degree of destruction could be compared to the effects of a coordinated and targeted attack from enemy nation states - if not for the fact that we did this to ourselves in the name of profit. One can only hope that the old guard wakes up to systematically handle this issue that we have no familiarity with; otherwise our system will buckle under the pressure of 10-20 years' worth of nonfunctional humans. I do find a technocratic dystopia far more likely, considering the aforementioned mentally castrated opposition... how's a generation of kids going to win against trillions of dollars of Zuckerberg 'engineering' steering them since birth? Shame on the 'engineers' who engendered this mess, shame on their shepherd 'managers', and shame on the sociopaths at the top.