Meta and YouTube found negligent in landmark social media addiction case

1 month ago (nytimes.com)

There are fairly few details about the case in the article. This NPR article [0] has a bit more, but it's still fairly sparse. Though it's interesting how Zuckerberg thought it was a good idea to say: "If people feel like they're not having a good experience, why would they keep using the product?".

Given that this is a case about addiction, that feels like a shockingly bad thing to say in defense of your product. Can you imagine saying the same thing about oxycodone or cigarettes?

[0] https://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-so...

  • As someone who values a liberal society, I hope we’d be exceedingly careful in what we label “addictive” in the same bucket as oxy or nicotine.

    I also hope the reasons are obvious.

    • Keep in mind that this case is about a minor, not an adult. I don't think it's fair to ask children to resist social media through sheer willpower when there are legions of highly educated adults on the other side trying to increase engagement.

      It should be no surprise that children can be manipulated by highly intelligent adults.

      24 replies →

    • > I also hope the reasons are obvious.

      Based on the fact that many people here disagree about fundamental things, as well as the fact that “liberal” is a highly overloaded term, I think it should be obvious that it’s not obvious what you mean.

    • Dark patterns are real. Deceptive advertising is real. So-called prediction markets amount to unregulated gambling on any proposition. Many online businesses are whale hunts and the whales are often addicts.

    • Specifically when it comes to children, we need to show more restraint in giving them the liberty to partake in potentially addictive substances.

      It's one thing if an adult smokes and gambles, it's another thing if a child does. It seems to me that stuff you do in youth tends to stick around for life.

      1 reply →

    • I feel like people use the word “addiction” to refer to both chemical addiction and behavioral addiction, and that people understand that the latter is (usually) far less serious than the former.

      1 reply →

    • > As someone who values a liberal society, I hope we’d be exceedingly careful in what we label “addictive” in the same bucket as oxy or nicotine.

      The problem is that this runs directly into the evidence that is mounting from GLP-1 agonists.

      A lot more things are tied to the pathways we associate with "addiction" than we thought.

    • > I hope we’d be exceedingly careful in what we label “addictive” in the same bucket as oxy or nicotine.

      Not careful enough, apparently: nicotine isn't that addictive on its own; tobacco is.

      11 replies →

    • What's obvious to me likely isn't obvious to you or anyone else, therefore nothing is obvious.

      I wish we'd delete that word from the English language.

    • "I hope we’d be exceedingly careful in what we label “addictive”…"

      To be sure. But still an obviously dumb thing for a CEO to say though.

    • As someone who values a conservative society, I hope we'd be exceedingly careful before releasing products to consumers before knowing whether they're addictive or not.

    • Social media is addictive the same way anorexia is. If you think anorexia isn't a form of addiction, then sure, you got your 'safety'.

    • What wording would you use then if the definition fits? You can call it minor addiction or severe addiction, but it's still one.

    • Why is it that these philosophical ideas about supposed personal freedom again and again make an appearance when it’s about the freedom of corporations? It’s always that. Either that or with the Free User pushed in front of them like a shield.

    • There’s a big distance between libertarian and liberal societies. The libertarian tendencies of corporations are what tend to cause more harm.

    • Mmhmm those are words. Words that are hand wavy pretexts for conservatism rather than liberalism; as a lover of liberal society you hope it acts conservatively!

      This just comes off as poorly obfuscated self selection. You own a bunch of Meta, Alphabet and other media stocks?

  • > Can you imagine saying the same thing about oxycodone or cigarettes?

    No, but unfortunately I can very easily imagine people saying it, just like the people who made loads of money from pushing those products did. Also just like the people who are profiting from the spread of gambling are saying now.

    Why would someone choose to do a thing if it harms them? There are good arguments against laws that restrict personal freedoms, but this isn't one of them.

    • But what if we're talking about a product that you're giving away to children? I agree that for adults, cigarettes are fine. But in this case, you're actively designing to maximize tweens' and teens' engagement, and the end result is them saying that they want to stop but can't.

      Though to be fair, I was mostly pointing out the fact that this was a pretty dumb thing to say for a case like this, especially in a jury trial.

      1 reply →

  • > "If people feel like they're not having a good experience, why would they keep using the product?"

    A statement that's been brought up even by HN commenters.

    Facebook is not a free market where you can choose. You're compelled to use it for several different reasons (and before some wiseass comments "you're not forced to. you can delete it" yes I know)

    - They captured the early market. There was a small window of time in which to get users

    - They ruthlessly bought up the competition

    - They've deleted links to competitors

    - They outright hijacked people's email addresses. It makes it hard to transfer users to another service or to email them outside the walled garden

    - Even while they change privacy settings for users to make things more public, they wall off public pages. Your local neighborhood has a place where they post information? Even if everyone selects "Public" in the audience you can't see it without an account

    Edit: Oh, and shadow profiles. And making it nigh-impossible to delete an account permanently

  • From what I understand, the argument, to misquote Marshall McLuhan, is that “the medium, and not the content, is the addiction”.

    In other words, it's not the posts by the influencers, but techniques such as infinite scrolling, and so on.

    This is why Meta and Google could not rely on the user-generated-content safe harbor (Section 230) part of the law.

  • Yeah, Zuck is really being a bit of d** there. You can't spend decades hiring the best engineers in the world and give them millions of dollars worth of resources with the sole aim of creating products specifically designed to retain attention and then simply shrug and say "if you don't like it, leave it". That's just not a fair fight.

    • Is designing for retention bad then? God forbid you write a story that captivates readers; if they don't stop reading after a few chapters, why, that's illegal mate

      1 reply →

  • If people feel that smoking causes lung cancer why do they keep smoking?

    • Because they use smoking to fill a hole in their lives. If they are somehow forced to stop, they will just switch to another vice as long as the actual problem isn't solved.

  • A lot of smokers don't feel they are having a good time and want to quit but can't. I'm not sure the same applies to youtube.

    • I know lots of adults who talk about "curing their phone addiction". I don't think someone would find it necessary to write a book "How To Break Up With Your Phone" (using what's referred to as a "digital detox program") if there weren't a substantial number of people who wanted to stop infinite scrolling behavior on their phone but found it difficult.

      https://www.reddit.com/r/nosurf/comments/k3vzaa/how_to_break... - used the reddit link because the existence of r/nosurf is another example of people who want to stop but find it difficult.

    • I knew someone who had exactly that feeling about YouTube. It was a genuine struggle for them to stop even though the amount of time they spent on it was negatively impacting their life and the content was making them more anxious.

      3 replies →

  • It's especially galling because he (or at least his wife) also funds neuroscience research at Stanford and elsewhere, and should have been well informed of the science behind addiction, dopamine, and the reward pathways in the brain

  • "If people didn't like destroying the environment, why would they let lobbyists run their government"

    -- Billionaires

  • Why not make personal responsibility illegal whilst we are at it. It is egregious that an individual can be held accountable for their own behaviours.

    • How much personal responsibility should we expect children to have? Genuine question. Because there was a time where some people believed that it was ok for kids to drink alcohol or smoke cigarettes.

      6 replies →

  • The fact that you're comparing nicotine to Facebook really throws into sharp relief just how far from reality this whole "social media made me depressed" stuff has strayed.

At least even money that an appellate court throws this verdict out entirely. Reminder that the US is the only developed country that uses juries for civil trials- everywhere else, complex issues of business litigation are generally left to a panel of judges. It's not that hard to rile up a bunch of randomly impaneled jurors against Big Bad Corporation. The US is kind of infamous for its very large, very unpredictable civil verdicts. There's an incredibly long history of juries racking up shockingly large verdicts against companies, only for an appellate court to throw the whole case out as unreasonable. Not even close to the final word in the American judicial system.

Edit to include: I mean this is coming the same day as the Supreme Court throwing out the piracy case against Cox Communications 9-0. Remember that this case originated with a $1 billion jury verdict against them! It was reversed by an appeals court 5 years later and completely invalidated today. Juries should not handle complex civil litigation, I'm sorry

  • Thanks for this take. Also explains why this did not result in much stock price movement today

    • Also at least partially explained by being priced in. The trial was known about and given the conditions described in GP it's not surprising that the verdict went this way.

  • Yeah, there are so many reasons this could be reversed on appeal. Whether the judge correctly decided the Section 230 and First Amendment questions is not obvious.

  • The shotgun approach (suing FB, TikTok, Snapchat, and Google simultaneously) makes this sound as ridiculous as the punchline "woman sues McDonalds for coffee being too hot" (distinct from that actual case, which was less ridiculous than the headline).

    Suing Facebook for systematically behaving badly is one thing, if you can prove it and prove it harmed you.

    Suing _everybody_ is one random person getting rich for… being mad at the world she was born into?

    • > the punchline "woman sues McDonalds for coffee being too hot" (distinct from that actual case, which was less ridiculous than the headline).

      Whenever the McDonald's coffee case comes up, I always see caveats about how the actual case was a lot less sensational than the "woman sues McDonald's for coffee being too hot" headline implies.

      I strongly disagree. I'm very familiar with the details of the actual case, and the Wikipedia article gives a good overview: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau... . Yes, the plaintiff received horrific third degree burns when she spilled the coffee on herself, but lots of products can cause horrible harm if used incorrectly - people cut fingers off all the time with kitchen knives, for example.

      I find the headline "Woman sues McDonald's for their coffee being too hot" a completely accurate description of what happened, with no hyperbole and no "ridiculousness" at all.

      5 replies →

    • > Suing _everybody_ is one random person getting rich for… being mad at the world she was born into?

      Nothing wrong with getting mad at the world when the world is complete and utter garbage to you.

  • Maybe if most people agree the corporation is big and bad and should face penalties, it’s more democratic to go with that decision than with the one nine unelected philosopher kings come up with.

    • Democracy is flawed, which is why our system has checks and balances, both democratic and non-democratic ones. Mob rule is not preferred, thanks.

  • >There's an incredibly long history of juries racking up shockingly large verdicts against companies, only for an appellate court to throw the whole case out as unreasonable.

    You might be blaming the wrong people. Looking at a lot of those "shockingly large verdicts", in that they would have bankrupted the company and forced it to be dissolved and reformed as perhaps a less objectionable version of itself: cool, shoulda done that. Sad we didn't.

    Are we conflating matters of merit with matters of judgment, here?

The solution to this would be a law forcing these sites to allow third-party suggestion algorithms, so that you can choose who and how content is being suggested to you.

It could perhaps be as simple as allowing third-party websites and apps for watching YouTube on your phone. And it's okay if this were a premium paid feature, so there's no counterargument that "it costs them money to host videos".

This is not an entirely new idea either. Before Spotify became popular, people would integrate Last.FM into their media players to get music recommendations based on their listening history, and you could listen to music via YouTube directly on the last.fm website.
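To make the proposal concrete, here's a minimal sketch of what a pluggable suggestion algorithm could look like. All names and signals here are hypothetical: the idea is just that the platform exposes candidate videos plus whatever signals it's willing to share, and the user chooses which ranking function builds their feed.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Video:
    id: str
    channel: str
    published: int    # unix timestamp (hypothetical platform-provided field)
    relevance: float  # hypothetical platform-provided engagement signal

# A third-party "suggestion algorithm" is just a function that
# orders candidate videos.
Ranker = Callable[[list[Video]], list[Video]]

def chronological(candidates: list[Video]) -> list[Video]:
    # One possible plug-in: newest first, no engagement optimization.
    return sorted(candidates, key=lambda v: v.published, reverse=True)

def most_relevant(candidates: list[Video]) -> list[Video]:
    # Another plug-in: rank by the platform's relevance signal.
    return sorted(candidates, key=lambda v: v.relevance, reverse=True)

def build_feed(candidates: list[Video], ranker: Ranker, limit: int = 20) -> list[str]:
    # The platform hosts the videos and enforces the limit;
    # the user picks the ranker.
    return [v.id for v in ranker(candidates)[:limit]]
```

Under this kind of interface, "choose your algorithm" is just choosing which `Ranker` the platform calls, whether it ships with the app or comes from a third party.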

  • The solution to all of Big Tech's monopolies is actually pretty simple: Interoperability must become a law - this includes using custom algorithms or allowing other platforms (like your own app) to access YOUR data on whatever platform 'hosts' it.

    Cory Doctorow wrote a great article on it:

    "Interoperability Can Save the Open Web" https://spectrum.ieee.org/doctorow-interoperability

    > While the dominance of Internet platforms like Twitter, Facebook, Instagram, or Amazon is often taken for granted, Doctorow argues that these walled gardens are fenced in by legal structures, not feats of engineering. Doctorow proposes forcing interoperability—any given platform’s ability to interact with another—as a way to break down those walls and to make the Internet freer and more democratic.

    Most notably, he retells how early Facebook used to siphon data from its competitor MySpace and act on the user's behalf on it (e.g. reply to MySpace messages via Facebook) - and then, when the Zuck(er) was top dog, moved to make these basic interoperability actions illegal to prevent anyone doing to him what he did to others.

    • We can’t depend on these platforms to offer interoperability or even laws to force them to do so. The DMA forced Apple to allow 3rd party app stores in Europe and they still hampered it so rarely anyone uses it.

      We need platforms to offer that interoperability and simply connect to these “marketplaces.” Take Shopify for example, sellers use that platform to list on Amazon, Google Shopping, TikTok shop, etc. We need open source alternatives to those where the sellers own the platform and these marketplaces are forced to be interoperable or left behind by those that are.

      For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.

      It’s a tall task, but achievable and it will happen given enough time.

      4 replies →

    • The foundational problem with interoperability is that it can and will immediately be abused by bad actors as long as there is no price tag attached to every piece of communication.

      Among social media, Mastodon (and anything Fediverse) has it the worst, obviously, but Telegram and WhatsApp are rife with spam and scams, and Twitter, back when it still had third-party apps, was rife with credential and token compromises (mostly used to shill cryptocurrencies).

      As for the price tag reference - we've seen that with SMS. It used to be the case that sending SMS cost real money, something like 20 ct/message. It was prohibitively expensive to run SMS campaigns. But nowadays? It's effectively free at scale if you go the legit route and practically free if you manage to get someone's account at one of the tons of bulk SMS providers compromised. Apple's iMessage similarly makes bad actors pay a lot, because access to it is tied to a legitimate or stolen Apple product serial.

      9 replies →

    • Breaking up these monopolies would be a good start. We aren't supposed to have those. There used to be something we called "regulations" but they got rid of that part I think. Elections have consequences.

      5 replies →

    • Be careful what you wish for. Making it easier to access your data in a standard way just means more companies and governments will ask for it.

  • It seems likely that'd result in even worse suggestions becoming the norm as people adopt the third-party that gives the quick dopamine rush. It's like suggesting tastier heroin to fix drug addiction.

    • There's a difference between addictiveness and enjoyment, and definitely between addictiveness and satisfaction.

      While the thing that gives you quick dopamine might win in the very short term, you can still step back and recognize when it's not satisfying in the long term and you're not even enjoying it that much.

      And people aren't stupid. Junk food exists, yet lots of people choose to eat more wholesome food as the majority of their diet.

      The problem with instagram or youtube is that you can't separate the good from the bad.

      It's like if every time you went to store Y to buy milk, you would be exposed to highly manipulative marketing trying to get you to buy junk food. You would probably want to go to a different store instead.

      What I'm suggesting is the possibilities of different stores, with different philosophies and standards, so that people can choose where they go. Corner stores (where almost everything is junk food) exist, yet people still choose to go to real supermarkets.

      1 reply →

    • Parent poster has some… interesting and popular but entirely false views on neuroscience. Specifically, an extremely outdated view on concepts like the role of dopamine and dopaminergic neuronal populations in human cognition. Rather than an understanding based on science and the idea that incentive salience and valence are modulated by such populations, he is attributing pleasure and enjoyment to them because of a meme.

    • Certainly not. People don’t want the slop they push, the anxiety provoking, salacious, clickbaity spam that it has devolved into. Anybody that used YouTube before the last few years can tell you the difference is pretty major. This is not content people want, it’s content that maximizes clicks and ad sales.

      10 replies →

  • Anything that’s a premium paid feature will be irrelevant. Most people don’t subscribe to YouTube premium, even though they know their kids are watching a ton of ads. Adoption has also been incredibly brisk on the ad tiers of the formerly ad-free TV services like Netflix and Hulu.

    I realize “less addictive algo” is a different thing to pay for than removing ads - but it’s, if anything, an even harder sell - I think the layperson wouldn’t even acknowledge that they are vulnerable to being psychologically manipulated. They think they spend so much time on these apps because it’s so enjoyable.

    From most parents’ point of view, paying a monthly bill for their children to have a less toxic experience on TikTok, or YouTube will be considered an extravagance instead of a responsible safety expense.

  • Third-party recommendation algorithms would be interesting, but I think they'd only address one layer of the addictive design the verdict is actually about. Autoplay, infinite scroll, notification timing, the variable reward patterns from likes and comments -- those are all independent of which algorithm picks the next video. You could swap in the most wholesome recommendation engine imaginable and a kid is still gonna sit there for hours if the UI is designed around endless content with no natural stopping points.

  • The real solution is going back to a chronological feed of people you actively choose to follow.

    • At the very least, that should certainly be an option that users can select. And when the user selects a feed algo, it should stay fucking set until that same user actively chooses to change it.

  • Bluesky does this. In fact, the For You algorithm is a community built algorithm and way more popular than the native Discover algo.

  • > Before Spotify became popular, people would integrate Last.FM into their media players

    I still scrobble to Last.fm from Spotify (and other media players). I rarely use it for discovery anymore, but it's occasionally interesting to look at my historical listening trends.

  • This seems like a clever (but perhaps overly clever) amendment to Section 230 protections for social media.

    However, I've always thought that it's pretty bizarre for Section 230 protections to apply when the social media company has extremely sophisticated algorithms that determine how much reach every user-generated piece of content gets. To me there's really no distinction between the "opinion" or "editorial" section of a traditional media publication and the algorithms which determine the reach of a piece of user-generated content on Twitter, YouTube, etc.

  • Or just stop suggesting content. The landing page is just a matrix of already followed accounts with the text "Start by following some accounts you like..." as a placeholder if it's a new account.

  • I’m quite bullish on disintermediating the algorithms. AI makes it very easy to plug in your own. We just haven’t figured out the plumbing yet.

    I’d be strongly in favor of interoperability laws to pry open the monopolies.

    (One dynamic you do need to be careful about especially at first - interoperability also means IG can pull your friend graph from Snapchat, so it can also make it easier for big companies to smother smaller ones that are getting momentum based on their own social graph growth due to their USP. I don’t think this is insurmountable, just something to be careful of when implementing.)

  • If the default algo/behavior is allowed to persist, it's going to be effectively no real change.

    Drop the algorithm altogether? I subscribe to channels for a reason.

  • How do you prevent a Cambridge Analytica exfiltration situation with third party algorithms?

    And how does this prevent addictive algorithms which will win through social selection?

    • The Cambridge Analytica stuff never got fixed, it just got hidden out of sight. The situation is worse than ever now.

  • That’s like saying the solution to cigarettes is that tobacco shops must be forced to sell clove cigarettes as a not-addictive alternative.

  • Yes please. Algorithms should be plug-and-play and not endemic to the app. You should be able to take popular algorithms and plug them into any app.

    • That's just laundering the bad actions through a third party, though.

      The winning third party algorithm will be the one that gives people the same rush the first party algorithms currently do, because people will use it for the same reasons; they get to see cute AI animals do crazy things forever.

      1 reply →

  • Virtually nobody would choose to pay a subscription for the non-addictive app version, and I'd even say this suggestion is a bit insulting to anyone who isn't high-income.

    • I will never pay a subscription for the current clickbaity slop. I might if the algorithm were better, closer to YouTube of 10 years ago, when it would suggest lectures, artfully done film shorts, and overall more interesting, high quality content.

      2 replies →

  • Seriously? You think they should allow random third parties to inject code into their platforms, with all the possible security risks? Regardless of the intent, that is a terrible idea.

  • Or algorithms have to be submitted and approved by a government body before being allowed to be implemented and are frequently audited

    • I guess this is the only way. I don't think we need a novel approach, and I don't consider this a novel one, since we already have government agencies verifying approved processes in other areas, so why not content distribution.

  • The only solution is to outlaw all recommendation algorithms. Accounts should only have access to a chronological feed they choose to follow. The host can promote whatever they want, but it has to be the same promotions for everybody.

    • I like recommendation algorithms. If someone on my friends list posted about a major life event a few days ago and I haven't seen it yet then I want that prioritized first, before more recent posts. Chronological feeds should be an option for those who want them but they shouldn't be forced on anyone.

  • I think a better solution would be to repeal section 230 protection for any kind of personalized or algorithmic feed. The algorithm makes you a publisher, and you should be liable for what you publish.

    That would make it very hard, nigh impossible, for a platform like YouTube or TikTok to exist as it does today, and would instead favor people self-curating mechanisms like RSS readers etc.

    • >and would instead favor people self-curating mechanisms like RSS readers etc.

      That isn't what would happen.

      What would happen is that only the platforms which can afford legal teams - in other words, the big platforms - would host user posted content under strict arbitration only terms, and every other platform (including Hacker News, which uses an algorithmic feed) would simply not. Removing one of the cornerstones of free speech on the web in favor of regulation will only centralize the web more.

      And you wouldn't see mass adoption of "self curating mechanisms" because most people aren't like Hacker News people and would find the premise of having to manually curate data feeds from every site they visit to be a tedious waste of their time.

      I also think that platforms like Youtube and Tiktok shouldn't be illegal. I don't even think that personalized algorithms should be illegal - it's surprising that one has to point this out on a forum of programmers - but algorithms have no inherent moral dimension and the ability to use an algorithm to find and classify relevant content can be useful. The same algorithm that surfaces extremist content surfaces non-extremist content. The algorithm isn't the problem, rather the content and the policies of these platforms are the problem. And I don't think the solution to either is de facto making math illegal and free speech more difficult.

    • How is RSS self curating? It's just a way to get a feed from somewhere. And under the maximally external-locus-of-control culture this jury is using, those feeds would themselves be deemed evilly addictive.

      There is no solution for this kind of verdict beyond appeal, or changes to the law to rule such suits out, because it's not rooted in any logical or legal principle beyond the idea that people should not be responsible for their own actions (or their children's actions). But there's no limiting factor to that belief. You can't fix it with RSS or federation or making people select who they follow or chronological feeds. Those would just get blamed for "addiction" instead.

      4 replies →

I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species.

Anecdote, but it does seem like a lot of younger folks I speak with are exhausted by the dark patterns and dopamine extraction that top-k social media platforms create.

If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.

  • > I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species.

    This sounds like the original internet.

    Before adtech took over.

    • The original internet wasn't about that at all, it was just in limbo while people were figuring out what it was going to be. It wasn't developed or optimized enough to be _anything_.

  • It will come. The problem is, so will the addictive stuff. The key is going to be real, meaningful connection. Social media wasn't about community; Web 2.0 was. In 2005 we were connecting with real people we knew, and probably up until 2011-2012 we still were: friends of friends, colleagues, people in our network. Then it got really bad.

    Getting back to community is key.

  • > I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species.

    To me this statement reads as both inaccurate and ignorant of human nature. Social media was actually better when it was about individual ego (Myspace/LiveJournal); as obnoxious as that can be, today everything is worse because of petty tribalism. Most conflicts on social media are inter-tribal, whether it’s racial, political, national, or feuding “stan” culture groups. The worst problems come from groups who organize on platforms like Discord or Kiwi Farms to direct harassment campaigns against perceived enemies (or random “lolcow” victims).

    Simple observation of the present world and history will tell you that a platform focused on “collective improvement” will only appeal to a small subset of potential users. Of course such a platform would not be a bad thing. Places like this (such as The WELL) used to be common when the internet was dominated by academics, futurists, and tech enthusiasts. But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.

    • > To me this statement reads as both inaccurate and ignorant of human nature

      > But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.

      I'm not ignorant of human nature and tribalistic tendencies. The undercurrent of my comment is of an optimistic hope (or cope) that we can move past competitive individual validation programming. I'm aware that it's due to our nature, but also aware that it's exploited by dark patterns and extraction at scale through software.

      3 replies →

  • > I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species

    Do you have a mechanism for this in mind, incentives-wise? I can't see this making money.

    • I guess the real question is whether a website where you communicate with friends and close ones needs to be a multi-trillion dollar company in the first place... historically most of them have not been worth very much at all.

      13 replies →

    • A $4.99/mo subscription would yield more revenue than Facebook makes in ARPU from all that fancy, creepy, and intrusive ad tech. Paying YouTube to not advertise to you makes it a 10X better experience.

      1 reply →

    • Well, another example comes to mind. Coordinated efforts to preserve the biosphere for all mankind are probably not going to be great for GDP.

      We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?

      The "the incentive structure says we should all destroy our brains" thing is just a small aspect of that.

      2 replies →

    • It doesn't need to make money directly (and probably shouldn't).

      The incentives would be those which have motivated people throughout history: to create something which benefits humanity.

      3 replies →

  • I hear word that in some countries, the government makes it so that screen time is limited, and algorithms promote educational content. Fortunately we civilized peoples are free of such a brutal oppression ;)

  • > If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.

    They are going to be (and AI slop already is) so much worse. Once they get ads to work well / seem natural, the dark patterns will pop right back up and the money spigot will keep flowing upward.

How is any app/website that 1) appeals to kids, 2) sells attention, 3) does A/B testing and/or has a self-learning distribution algorithm NOT guilty of this?
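To make point 3 concrete: a "self-learning distribution algorithm" does not need to be sophisticated to drift toward whatever holds attention. The sketch below is purely hypothetical (not any platform's actual code); it uses an epsilon-greedy bandit that is fed nothing but watch-time numbers and teaches itself which design to serve:

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the variant with the highest
    observed mean watch time, occasionally explore at random."""
    if random.random() < epsilon or not any(n for n, _ in stats.values()):
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][1] / max(stats[v][0], 1))

def record(stats, variant, watch_seconds):
    """Update impression count and cumulative watch time for a variant."""
    n, total = stats[variant]
    stats[variant] = (n + 1, total + watch_seconds)

# Two hypothetical feed designs; in this simulation "autoplay" happens
# to keep users watching longer on average.
stats = {"autoplay": (0, 0.0), "static": (0, 0.0)}
random.seed(0)
for _ in range(5000):
    v = pick_variant(stats)
    # Simulated user behavior: mean 40s of watch time vs 25s.
    record(stats, v, random.gauss(40 if v == "autoplay" else 25, 5))

# After a few thousand impressions the system serves "autoplay" the vast
# majority of the time, having learned this purely from the reward signal.
```

No human in this loop ever decides that autoplay is good for the viewer; the algorithm converges on it because "more watch time" is the only thing it can see.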

  • It probably helps when you suppress research that shows you’re harming children and allow human traffickers to fester on your platform with 17 warnings or whatever.

    • The argument that research was suppressed and this is somehow damning is absurd on its face. The most obvious reason being that they obviously didn't do a very good job of suppressing it given that we hear this claim every day. The second being that they could have just not done this research at all and then there would have been nothing to "suppress" (this terminology is also very odd... if 3M analyzes different sticky notes and concludes that their competitors sticky notes are better than theirs but does not release the results, is that suppression?). The third is that studies with the same results have come out probably every year since 2010 and have been routinely cited in the mainstream press. Lastly, it ignores that many platforms have actually responded to research about potential harms of social media by implementing safeguards on teen accounts.

      Look at the plaintiff in this case: it's a mentally unstable person who blames her life problems on social media. Never mind the fact that she had been diagnosed with mental illnesses as an early teen, or that an overwhelming majority of people who use social media don't develop eating disorders or other mental illnesses as a result of it (and in fact the incidence of say bulimia peaked 30 years ago in spite of almost universal social media adoption among young people). This is not at all like smoking where 15% of smokers will get lung cancer.

      And due to some absurd legal reasoning the plaintiff was allowed to pseudonymously extort $3 million out of tech companies. Worst of all I see people on a technology forum applauding this out of some sort of resentment towards large companies!

      8 replies →

  • I think there is a fourth portion that is probably more important:

    Actively ignoring harm caused by your product. TV/radio sold attention too, but there were pretty strict rules on what you can and can't broadcast, and to whom (ignoring cable for the moment). It's the same for services: things that knowingly encourage damaging behaviours are liable for prosecution.

    • Except cable is the more apt comparison here - broadcast rules exist because airwaves are an extremely finite resource and so we can argue that the government has a vested interest in what kind of speech can happen on them. No such scarcity exists with web services.

  • I think there's a little more nuance than that, but it seems roughly correct.

    Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?

    • I think addiction is a red herring.

      Pokemon is addictive; computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.

      1 reply →

    • They'd find another method. Why are we allowing this in the first place?

      I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not (drug = bad), it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with anything else. This is a tale as old as time...

      15 replies →

    • For context, facebook is so dystopian when I login once every few years that I’m not sure I’ll ever use it again. And, I hate wading through the YouTube cesspool to find some educational content I like. But, I don’t think it makes sense to ban a/b testing or optimization in general. Some company could use it, for example, to figure out how to teach math to kids in a way that’s as engaging as possible. This would be “more addictive” technically.

      1 reply →

    • > more nuance

      Not enough to diffuse liability. 15 years ago when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because

      > It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair

  • How’s this different than tv that a kid might see that has ads and programming targeting kids?

    I watched 80s horror movies when I was in elementary school and had nightmares for years. Should I sue now?

    How about parents be held responsible for how they care for their kids or not? Maybe a culture that judged parents more strongly for how they let their kids spend their time would be an improvement.

    • Being able to find some basis for comparison between two things does not render them equivalent, and this is an extremely frequent fallacy I see with regard to technology discussion on HN.

      14 replies →

    • Both things can be true. Parents can share responsibility. But it is also the case that Facebook actively suppressed research that showed that children using their platforms experience emotional harms. It is also the case that around the time you were in elementary school discussions about children’s programming had been ongoing for years and eventually regulations were put in place[0].

      0: https://en.wikipedia.org/wiki/Regulations_on_children's_tele...

      1 reply →

    • > How’s this different than tv that a kid might see that has ads and programming targeting kids?

      Those ads didn't adjust themselves on a per-child basis to their exact interests.

    • The difference is largely in the way that the legal caste perceives themselves to be aligned with media but opposed to tech.

    • Parents ought to be held responsible for how they care for their kids. This isn't just true of their use of social media and devices, but also when it comes to teaching them to look both ways when crossing the street; making sure they understand the concept of private parts, consent and personal space; making them understand the dangers of alcohol; and many other things.

      Does any of that obviate the need for safe urban design, anti-CSAM and anti-molestation laws, or laws prohibiting the local dive from serving a cold one to my 11 year old? Will simple appeals for "parental responsibility" suffice as an argument for undoing those child safety systems we put in place, or will they be met with derisive dismissal? Why should your "solution" be treated any differently? In fact you offer none. Yours is the non-solution solution, the not-my-problem solution, the go-away solution. Not good enough on its own, sorry.

      6 replies →

    • > How’s this different than tv that a kid might see that has ads and programming targeting kids?

      It's not; that's illegal as well. You cannot target kids with TV advertising.

    • We're a two-parent household; my spouse had cancer, never really got all of their energy back, and works full time, so the entirety of home, land, and car maintenance comes to me.

      I homeschool our youngest because the school system here sucks, based on the experiences of our older two. I'm always exhausted. I solved this (the "parents must be more involved") by watching my kid play Roblox, arguing with them about spending their money on gift cards instead of Lego, posters, or whatever that isn't so fleeting; I also don't let them have a cellphone. They turn 10 in June. We don't have TV or CATV; I have downloaded most of the old TV programs that kids liked, and grandma doesn't watch kids' shows, so he really doesn't have a perspective on what everyone else's viewing habits are. He watches YT on his Switch about fireworks, cars, and then also some of the idiots with too much money acting goofy, plus what I would call "vines compilations" of just noises and moving pictures; I don't get it, but it seems harmless. For the record, Pi-hole no longer blocks YouTube ads, so I was just told there are ads on the Switch now.

      But anything beyond that, I can't watch, nor do I want to watch, their every interaction on a computer. I gotta cook, and the weather isn't always conducive to sending them outside to play. When I was growing up and was bored, there wasn't too much I could do about it. Today, my youngest has virtually anything on the planet just peeking around the corner. America's Funniest Home Videos versus a blue square shooting red squares at orange squares? Yeah, ok.

      ===========

      It's getting to the point where I think people who have really strong opinions on topics like this need to disclose any positions they might have that influence their opinion. My disclosure is that I have no positions in any company or entity.

      Everyone in the US has been fed a lie that if we just work hard and don't interfere with the billionaire class, then someday we, too, can be rich like them. It's a bum steer, folks. For each 1 billionaire that "came up from the slums" or whatever, there's 100 that are billionaires because their families did some messed-up stuff, probably globally, sometime in the last 200 years. And offhand, knowing the stories of a bunch of billionaires: 10 in the US that were honestly self-made and didn't defraud, cheat, or skirt regulations to become that way seems almost a magnitude too high.

      I bring the above two paragraphs up because if one has a position in Facebook, of course they're going to rail against Facebook losing 230 protection for any part of their operation: Instagram, the FB feed, whatever. Same if a person has a position in GOOG, or Apple, or Tesla. What's that Upton Sinclair quote that's been mentioned twice? If someone believes that, given luck and grit, they too could make a "facebook"-sized corp, but not if the government says "you can't addict children to sell ads", then I consider them a creep.

      For the record: my oldest two are in their early 20s now.

  • A/B testing is one way to make things “addictive” but you can also make addictive products without it.

    A really good designer could make a highly engaging app, or an editor could write clickbait headlines, all without testing.

    • These products maximize revenue through engagement with advertisements. The outcome is built into their business model.

  • Because most are just nowhere near as good and effective at ruining a kid's mind as Meta. If others were as good as Meta at destroying whole generations of cognitive development, they'd probably also be liable.

  • Correct, selling attention inevitably leads to harm.

    • As a parent, the only solution is sticking to ad-free subscription services. PBS is a godsend here, but there's other good options out there too. Tragic that the public broadcasting funding was cut when there's clear harms in the free* commercial options.

      *Except for your time and mental health of course

      3 replies →

  • I would argue that no app/website should be selling itself to kids. No corporation should be trying to tether its ARR to children's attention.

    • When my kids were young, we canceled our Disney Channel / etc cable subscription and showed them more PBS and similar.

      It was really annoying turning on a show for 30 minutes then for the next week hearing about that new toy they just have to get. It was exhausting.

  • I guess ultimately it depends on if the app/website authors do so "negligently" or not.

    > Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.

    So if you do so while providing warnings and controls for people, that might make it OK in the eyes of the law?

  • Probably not much other than scale. Facebook is large enough that they can hire behavioral researchers to make this stuff more addicting while looking the other way and raking in the money. I think Roblox is just as bad (maybe worse) regarding addiction for kids. I’ve played hundreds of hours with my sister’s kids and the way all these low quality slop games handle grinding, progression, and pay gating is honestly disgusting.

    But then again, I manage to get myself addicted to a video game usually once a winter for a few weeks, and don’t play games for the rest of the year. There’s really no solution to this, but I don’t want to live in a world where everyone is hopelessly addicted to shallow digital experiences.

  • It sounds like an adult was awarded $6 million because she watched a lot of youtube/instagram as a kid. Literally any social media site would be guilty of this; I hate to say it but we need better corporate protections if cases like this are allowed to enter court.

    At least legal experts are critical of the decision: '“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law'

  • A/B testing is very, very different to handing over control of your content to a reward function that optimizes for time spent over any other criteria.

    We had 10+ years of products like Facebook, Twitter, YouTube, hell even LinkedIn, with a basic content model of "you build your own graph of people you pull content from"; their job was to show it to you and put ads in there to fund the whole enterprise. If I decided to follow harmful content? That was a pact between me and the content creator, and YouTube was nothing more than a pipe the content flowed through. They were able to build multi-billion dollar businesses off of this. That's really important: this was enormously profitable. But then the problem happened that people's graphs weren't interesting enough, and sometimes they'd go on the thing and there were no new posts from people they followed, and this was leaving money on the table. So they took care of that problem by handing over control of the feed to the reward function.

    More accurately, especially for Meta products: they completely took control away from you. You didn't even have the option to retain the old, chronological social graph feed anymore. And it was ludicrously profitable. So now the laws of capitalism dictate that everyone else has to follow suit. I now have extensions on my browser for Instagram and YouTube to disable content from anything I don't follow - because I still find these apps useful for that one original purpose they had when they blew up and became mainstream. Why are these browser extensions? Why can't I choose to not see this stuff in their apps? That's the major regulation hole that led to this lawsuit, imo.

    It's the same thing you see with people blaming smartphones for brainrot. We've had 15 to 20 years of smartphones with more or less the same capabilities as they have today, and for the vast majority of that time my phone didn't make books less interesting or make me struggle to do chores or manage my time. For a full decade or more I saw my phone as a net positive in my life, was proud to work for Twitter, and generally saw technology the way the Louis CK bit does: the miracle of using a smartphone connected to WiFi on an airplane. But in the last five years or so, things have noticeably and increasingly gone to shit. Brainrot is a thing. All my real-life friends who are the opposite of terminally online or technical are talking about it. I don't use TikTok, but it seems like that is absolutely annihilating attention spans. The topic of conversation over drinks is how we've collectively self-diagnosed with ADHD and struggle with all kinds of executive function... but we're also old enough to remember a time when none of this existed. Complete normies are reading Dopamine Nation and listening to Andrew Huberman trying to free themselves.

    I don't know what the exact solution is, but there's at least a simpler time we can point to when we all had smartphones and we were all connected via platforms and we all posted and consumed stupid pictures of each other and it wasn't.... _this_.
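    The handoff described above, from the follow graph to the reward function, can be reduced to a toy sketch (field names and numbers are invented for illustration; this is nobody's actual ranking code):

```python
from datetime import datetime, timezone

# Hypothetical post records; "pred_watch" stands in for a model's
# predicted watch time in minutes.
posts = [
    {"author_followed": True,  "pred_watch": 12.0, "ts": datetime(2026, 3, 1, tzinfo=timezone.utc)},
    {"author_followed": True,  "pred_watch": 30.0, "ts": datetime(2026, 3, 3, tzinfo=timezone.utc)},
    {"author_followed": False, "pred_watch": 95.0, "ts": datetime(2026, 2, 1, tzinfo=timezone.utc)},
]

# The "old" model: only people you chose to follow, newest first.
chronological = sorted(
    (p for p in posts if p["author_followed"]),
    key=lambda p: p["ts"],
    reverse=True,
)

# The "reward function" model: anything in the corpus, ranked purely by
# predicted time-on-site; your follow graph is no longer the gatekeeper.
engagement_ranked = sorted(posts, key=lambda p: p["pred_watch"], reverse=True)
```

    The first sort only ever consults relationships the user chose; the second consults a prediction of time-on-site, so a stranger's post outranks a friend's whenever the model thinks it will hold you longer.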

    • This is the clearest articulation of the problem I've seen in this thread. The chronological social graph feed era was fine. The handoff to engagement-optimizing algorithms is where things broke.

      I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.

      Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.

    • Great point RE the self-learning algorithms. That's what I intended originally, but didn't communicate clearly.

    • Regarding brain rot: short-form content is absolutely going to be the root physical cause; people could tolerate smartphones prior to the inception of short-form content. On a cultural level, this degree of destruction could be compared to the effects of a coordinated and targeted attack from enemy nation states, if not for the fact that we did this to ourselves in the name of profit. One can only hope that the old guard wakes up to systematically handle this issue we have no familiarity with; otherwise our system will buckle under the pressure of 10-20 years' worth of nonfunctional humans. I find a technocratic dystopia far more likely, considering the aforementioned mentally castrated opposition... how's a generation of kids going to win against trillions of dollars of Zuckerberg 'engineering' steering them since birth? Shame on the 'engineers' who engendered this mess, shame on their shepherd 'managers', and shame on the sociopaths at the top.

Apps like instagram and YouTube should be required at least to give an option to disable reels and shorts

  • There should be a law to require the ability to disable algorithmic customization of content. If these apps are so compelling it shouldn't take a Spark cluster riffing on my private viewing habits to come up with content for me.

    I don't recall a lot of complaints about Facebook or Instagram when it was actually your friends' content. But now it's force-feeding everybody their own "guilty pleasure" viewing material 24 hours a day. It's fucking sick.

  • One of the benefits of being on Android is being able to sideload apps. Look up "ReVanced YouTube" and you'll be able to turn off Shorts.

    uBlock Origin works for blocking them on desktop. If you're on an iPhone... uninstall YouTube?

    My quality of life has increased substantially... although sometimes the app bugs out and Shorts still make it onto my home page. I spend like 10 minutes scrolling through Shorts, get a weird shock of "how the fuck did I end up here?", restart the app, and boom, Shorts gone again.

  • Don’t forget WhatsApp. Kids are allowed to have WhatsApp for messaging, but they get fed videos there too. There is no way to really disable them. Also, disabling should be a parental-supervision control, not something that kids can override.

  • Perhaps we need more social activism (remember that?) to stop people falling into this kind of addiction. I remember anti-drug campaigns; they were everywhere. Phone addiction is not taken nearly as seriously.

  • YouTube Shorts will come back, but you can click the row each time to show fewer. Otherwise, if you really don't want to see them, on desktop at least a browser extension works well.

In before someone says ‘blame the parents’ and not the multi-billion dollar companies who’ve spent decades targeting children for lifelong addiction, ignoring the negative effects on their mental health.

  • It need not be either-or.

    The guy who made the drugs is guilty. The guy who sold the drugs to kids is guilty. But parents who failed to warn kids about drugs and to oversee them properly are also guilty...

    • Generally in an article about arresting or sentencing a drug dealer, people don't bring up that the drug users are actually to blame.

      Now if we're in a discussion around the cartels, plenty of people do bring up (and there's also those that get annoyed by it) that the drug users are actually the ones funding the cartels via their drug use.

      Along these lines, I think another fun comparison might be opioid use and Purdue.

      3 replies →

  • The thing is, it should be both. Parents often give too few fucks about the long-term welfare of their children, and are often guilty of the same vices. The issue is, these addictions are way more destructive to a young, forming mind than to an adult's. Nobody with small kids now had FB or Instagram access when they were 5, did they?

    Maybe you don't do this. Certainly I don't. But looking around, it's much less rosy, and... let's say in blue-collar families it's too common to drug kids with screens so the parents have off time. Heck, some are even proud of how modern they are as parents. Any good advice is successfully ignored, and ideas of spending some proper time with the kids instead are skillfully avoided. People got lazy and generally expect miracles from life without putting in any miracle-worth efforts.

    Companies just maximize their profits as far as the law allows (and then some), and expecting nice moral behavior by default is dangerously naive and never true.

    • Consider that the insane growth in the cost of living - especially childcare - combined with wage stagnation means that the vast majority of families now have two parents with full-time jobs, keeping them away from their families for much longer than before. Consider that childcare is much, much harder to even get into now than in decades past. Consider also that "EdTech" means that nearly every child needs to be on an internet-equipped device at all times.

      But sure, "Parents often give too little fucks for long term welfare of their children", that's definitely it. Parents just hate their kids! What a useful perspective you've brought to the discussion.

      1 reply →

Read the book “Careless People” if you have a chance - according to the book, social media companies figured out they have real leverage with politicians since they can influence elections. As a result they are actively pushing for far right candidates to reduce their own taxation and regulation.

  • I don't think this accelerationism/fascism hobby of many tech bros is going to age well.

  • That book was so lame, and the author leaves out how she profited millions and then only complained after she was fired.

    It's also funny how they “discovered” they were influencing elections after they influenced the 2008 and 2012 elections.

    How did the author not know this when she sought out and joined the company in like 2013?

    The parts about playing Settlers of Catan with Zuckerberg was funny. I wonder what his side of the story was and if people were really letting him win.

    • I'm not going to attack you but I do just want to highlight analogous comments of "she could have left"

      - She was trying to work to change things

      - She was pregnant and otherwise had young children and needed the money

      1 reply →

This just seems ripe for selective enforcement if not codified in law. I agree the algorithm they use can be addictive, but that's because it's simply good at providing content the user wants to consume.

Besides a general 'don't be too good' I'm really not sure what companies should do about it. It just seems like it'll lead to some judges allowing rulings against companies they don't like.

Television's goal was always viewer retention as well, they were just never able to target as well as you can on the internet.

  • I see it as similar to the public health crisis created when protonated nicotine salts made their way into vapes along with flavors allowing 2-10x more nicotine to be delivered and the innovation that made Juul so popular with children.

    The subsequent effects - namely being easier to consume and more addictive - eventually resulted in legislation catching up, and restrictions on what Juul could do. It being "too good" of a product parallels what we're seeing in social media seven years later.

    Like most (if not all) public health problems, we see individualization of responsibility touted as the solution. If individualization worked, it would have already succeeded; nothing prevents it except its failure of efficacy.

    What does work is systems-level thinking and considering it an epidemiological problem rather than a problem of responsibility. Responsibility didn't work with the AIDS crisis, it didn't work on Juul, and it's not going to work on social media.

    It is ripe for public health strategies. The biggest impediment to this is people who mistakenly believe that negative effects represent a personal moral failure.

  • > it's because it's simply good at providing content the user wants to consume.

    Well, a drug addict wants to consume his drug, because the drug is good at keeping withdrawal at bay and the tolerance probably hasn't built up to the level where the addict can't feel its "positive" effects anymore.

    The user feels an impulse to consume the content, but whether they actually want it we can only know by asking them. They can lie, consciously or unconsciously, but there is no better way to measure the desire to consume. When it comes to doomscrolling, I have never met a person who said they want to do it, yet there are people who do it nevertheless.

    > This just seems ripe for selective enforcement if not codified in law.

    I agree. I'm not sure how they define "addiction" and how they measure "addictiveness". It is the most important detail in this story.

  • Companies that sell products to the public have managed this for a hundred years. Some are good at it, some are not, some completely disregarded their obligations. This is not all that new.

  • Let's just be honest: if you make enough money, it's legal in America.

    Unless you hurt children; then it's mostly legal, with a slap on the wrist.

  • > I'm really not sure what companies should do about it

    disassemble the intentionally addictive properties they built into their platforms to maximise engagement and revenue at the cost of the mental health of their users.

Coming from someone who hates social media (and has kids): this might seem like a good thing on the surface, but I worry it will be another case used to allow the government to limit speech on the internet for adults.

  • This is a civil trial between a regular person and corporations about product liability. It has nothing to do with the government.

I believe social media is on a collision course with an iceberg called Section 230.

Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day), where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today: you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.

A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.

I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.

Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.

How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.

I believe that all these platforms will end up being treated like publishers for this reason.

So, with today's ruling about platforms creating addiction, (IMHO) it's no different to surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different to changing their views on something.

I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.

  • Please read:

    https://www.techdirt.com/2020/06/23/hello-youve-been-referre...

    • This is an opinion and I believe it's wrong. And you just have to look at the statute to see why [1]:

      > (c) Protection for “Good Samaritan” blocking and screening of offensive material

      > (2) Civil liability

      > (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

      "in good faith" is key here. Here's another opinion [2]:

      > One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

      So far the Supreme Court has sidestepped this issue despite cases making it to the Appeals Court. Until the Supreme Court addresses, none of us can say with any certainty what is and isn't protected.

      [1]: https://www.law.cornell.edu/uscode/text/47/230

      [2]: https://www.naag.org/attorney-general-journal/the-future-of-...

      2 replies →

  • Why do you believe that "Section 230 differentiates between publishers and platforms"?

    • Section 230(c)(1) [1]:

      > (c) Protection for “Good Samaritan” blocking and screening of offensive material

      > (1) Treatment of publisher or speaker

      > No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

      This is a protection for being a platform for third-party (including user-generated) content.

      Some more discussion on this distinction [2]:

      > Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.

      It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".

      By the way, my view here isn't a fringe view [3]:

      > One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

      This is exactly my view.

      [1]: https://www.law.cornell.edu/uscode/text/47/230

      [2]: https://bipartisanpolicy.org/article/section-230-online-plat...

      [3]: https://www.naag.org/attorney-general-journal/the-future-of-...

      4 replies →

Oh man if they think YouTube and Instagram are addicting they should see what Roblox does lol

  • As someone who maybe heard about Roblox once, like three years ago: what does Roblox do that is way more addicting than YouTube and Instagram? And I guess they're also ignoring reports showing the harm, even more than YouTube and Instagram do, if I understand you correctly?

    • It's an interactive world where games can be built by anyone (I personally know/have met some of the devs), and all the games have some randomization/gambling mechanics involved. Loot boxes are just one tiny example. Infinite novelty: there's literally an infinite number of games one can play.

      I don't have time right now to provide a full/quality answer with more examples; you can do a bit of searching online to learn more.

      Also, from personal experience (with family and friends): when their kids come over, they have TikTok on their phones and Roblox on their laptops.

      2 replies →

  • There's also Prodigy, which schools push on kids to practice math; it has the same thing, including pay-to-win mechanics.

Notably a different case from the other one in New Mexico:

Jury finds Meta liable in case over child sexual exploitation on its platforms

https://news.ycombinator.com/item?id=47509984

  • And one with much deeper implications on how they operate. It's easy for Meta to just hire more moderators or treat reports of exploitation with higher priority; if this verdict stands, I think they have no realistic choice but to abandon usage targets.

    • Realistically they will hire expensive lawyers, pay out hundreds of millions to billions in settlements, fire lots of people (workforce is predominantly American), etc.

      Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no future work can swat those away.

I've heard about "landmark" cases against these companies over and over again for the last decade. There seems to be at least one every couple of years. And yet literally nothing has ever happened or changed.

  • Since these are civil lawsuits, it just takes more people coming forward to sue. There are plenty of cases where a jury found a defendant liable for damages only for the defendant to continue the bad behavior and subsequent juries awarding ever-increasing and compounding punitive damages. Big Tobacco and Purdue Pharma (went bankrupt) are examples of this pattern. Monsanto was famously hit hard with massive "repeater" damages after they continued selling and marketing Roundup despite prior judgements.

    The exact same can happen to Big Tech. The goal is to get them to stop the bad behavior now.

  • I feel the same way. They're just going to appeal the case until they find a layer of the legal system where they have leverage.

Are there any takeaways here for builders of social media applications who are not Facebook or Google? Is this a warning to not make your newsfeed algorithm "too engaging" or is it only really relevant for big companies?

  • I'm not an authority on this matter. But if you say "I can stop any time", and it is not true, then you have a problem.

Short form video is a different beast altogether, and much more concerning. The fact that these platforms don't offer a way to avoid short form altogether is a big issue.

YouTube allows you to "show fewer shorts" but what if you don't want them popping up at all?

AI Slop is the best thing to happen to these platforms - because it will lower trust and engagement as people (hopefully) become tired of inauthenticity. Rage bait is potent when the event in the video _actually_ happened, but when you realize it was AI generated, the manipulation feels even more obvious (though it was always there).

These platforms should also allow users to understand how the algorithm has categorized them, and be able to configure it. YouTube, Instagram, et al. would be safer places for viewers if they allowed users to tell them what they want to be exposed to, and what they don't. Big tech is dodgy about this currently, because the more control the user has the lower the engagement (good for the user, bad for profit).

  • That "show fewer shorts" button doesn't do a damn thing. I click it, refresh the page and voilà, shorts.

    • Previously I made a Chrome extension that removes them from the web, but I haven't updated it in a while. Basically it just inspects the HTML/CSS patterns of the Shorts components and removes them from the page. You could probably code/vibe-code a similar extension in 10 minutes.
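A minimal sketch of that approach as a content script. The element names and selectors here are illustrative assumptions about YouTube's markup (which changes often), not the extension's actual patterns:

```typescript
// Illustrative content-script logic for hiding Shorts. The element and
// attribute patterns below are assumptions about YouTube's markup.

// Pure helper so the matching logic is testable outside a browser.
function isShortsHref(href: string): boolean {
  return /^(https?:\/\/(www\.)?youtube\.com)?\/shorts(\/|$)/.test(href);
}

function removeShorts(root: ParentNode): void {
  // Assumed shelf element that wraps a whole row of Shorts.
  root.querySelectorAll("ytd-reel-shelf-renderer").forEach((el) => el.remove());
  // Any remaining links into the Shorts player.
  root.querySelectorAll("a[href]").forEach((a) => {
    if (isShortsHref(a.getAttribute("href") ?? "")) a.remove();
  });
}

// YouTube is a single-page app that lazy-loads as you scroll, so the
// sweep has to re-run on every DOM mutation, not just once at load.
if (typeof document !== "undefined") {
  removeShorts(document);
  new MutationObserver(() => removeShorts(document)).observe(
    document.documentElement,
    { childList: true, subtree: true },
  );
}
```

The extension would also need a manifest with a `content_scripts` entry matched to `*://www.youtube.com/*`.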

I have a somewhat unusual vantage point on this.

I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.

So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that the NYT is pushing actually holds up.

Litigation is negative reinforcement, and if you've ever tried telling a toddler "no" you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish; they can't redesign a recommendation algorithm.

The change has to come from people who understand these systems building better ones.

Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.

I will give you a simple experiment: try blocking Blippi from YouTube Kids. Man, it's crazy. Even if you block the main Blippi and Moonbug channels, hundreds of channels have Blippi content cross-posted, and it keeps popping up. I know it would be easy to build, with AI, a Blippi-block feature that blocks across channels.

That's the kind of solution we need. I know we have the tools; we just need intent and purpose.

[1] https://www.emorahealth.com/clinical-insights/social-media-v...

  • > if you've ever tried telling a toddler "no" you know how well that works long-term

    Parent here. Acting like it’s impossible and you have no choice but to let them have their way is a cop-out. Telling kids “no” and enforcing boundaries is part of the job.

    > my toddler is already on YouTube Kids.

    > I will give you a simple experiment. Try blocking Blippi from YouTube Kids, man, it's crazy, even if you block the main Blippi and Moonbug channels. 100s of channels have Blippi content cross-posted

    I have a better solution that I use: If I can’t stay involved enough to monitor what the kids are choosing to watch, I don’t let them loose watching YouTube. They get to go play outside or with LEGOs or do puzzles or any of the other countless activities that are fun for kids.

    Creating advanced filtering that lets you block anything related to Blippi (whoever that is) isn’t going to solve the problems of letting your kids loose on YouTube. They’re going to find another cartoon you dislike. The solution is to parent, set boundaries, enforce them, and find other activities for them.

    • You're right that enforcing boundaries is the job. I'm not arguing otherwise. And yes, we do plenty of LEGOs and outside time.

      I believe you're conflating two things: parenting discipline and product design. The question isn't whether I can physically take the TV away. I do.

      When I say "block Blippi," I don't mean I dislike the content. I mean I'm done with screen time and the UX makes that transition harder than it needs to be. Autoplay is off, but the end-of-episode screen still shows a grid of next videos. Of course he wants the next one.

      So I block Blippi. Except Blippi's main channel cross-posts through Moonbug into hundreds of other channels. It's a hydra.

      YouTube already does content fingerprinting for music industry DRM. The technology to let a parent say "block this creator everywhere, and let me turn it back on when I choose" exists today. They just haven't built it for parents. Because the system isn't designed for children. It's designed for engagement.

      So yes, parental responsibility matters. But "just don't use it" isn't a scalable answer when the product is specifically engineered to undermine your choices. That's the design problem I'm talking about.

  • Just a tangent, interesting that you brought up Blippi. Any issues that you have with Blippi if you don't mind me asking? :D

    • Ha — the guy is hyper. But I'll give him this: he introduces my kid to garbage trucks, excavators, fire trucks. I'm not physically taking my toddler to see all of those all the time

      My issue is with YouTube's UX. I watch an episode with my son, we're singing along, he's excited about putting out the fire. Episode ends. Even with autoplay off, the next recommended videos show up — and of course he wants to watch the next one.

      So I block Blippi. Except Blippi's main channel cross-posts into Moonbug, which cross-posts into hundreds of other channels. It's like trying to kill a hydra. Here's what gets me: YouTube already does content fingerprinting for DRM enforcement in the music industry.

      The technology to let me block Blippi across every channel, and turn it back on when I want, exists. They just haven't built it for parents. My point is that we could build systems designed for children if we had the intent.
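As a sketch of what such a feature could look like: assume, hypothetically, that fingerprinting assigns each video a creator id independent of the uploading channel, the way Content ID matches music across uploads. None of this is a real YouTube API.

```typescript
// Hypothetical model: a video carries both the uploading channel's id
// and a fingerprint-derived creator id.
interface Video {
  id: string;
  channelId: string; // who uploaded it
  creatorId: string; // whose content it actually is
}

class CreatorBlocklist {
  private blocked = new Set<string>();

  block(creatorId: string): void {
    this.blocked.add(creatorId);
  }

  // "Turn it back on when I choose."
  unblock(creatorId: string): void {
    this.blocked.delete(creatorId);
  }

  // Filters on creator rather than channel, so cross-posted copies
  // on other channels are caught too.
  filter(feed: Video[]): Video[] {
    return feed.filter((v) => !this.blocked.has(v.creatorId));
  }
}
```

The point of the design is that one `block("blippi")` call removes the content everywhere, instead of forcing the parent to chase hundreds of re-uploading channels.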

Great news, but this will probably be the catalyst for more "age verification" nonsense. These algorithms are bad for everyone, not just kids.

I think all important public policy decisions should be left to random personal injury courtrooms, as long as the PI lawyers collect their customary fee. It's silly to let a regulator or legislative body butt in and so prevent the PI lawyers from collecting their cash. So what if you have no say in the result?

I think this is going to end up with a huge chunk paid out to state health departments for the foreseeable future.

Kind of like how tobacco companies now pay out billions every year and it's a major source of funding for states.

Hopefully this means more health services become available. But it may just serve as an ongoing tax.

  • AIUI, this particular case is a pilot for about 2,000 similar (but not similar enough to be combined into a class action) cases. They are not actions by state or local governments for damages they have received, as was the case with tobacco.

    • I expect state governments to follow up.

      The similar case about child predators was brought by NM’s attorney general.

Two verdicts in two days: $375M in New Mexico and $6M in LA. Meta's insurance company has already been cleared of covering these claims. If even ten more states follow, Meta is paying out of pocket at a scale that actually shows up on the balance sheet.

This is the kind of stuff that is causing them to push for mandatory identity verification laws. If they are being held liable for the desires of their users, they're being forced to micromanage the affairs of their customers, which precludes anonymous usage.

  • Meta is not pushing for mandatory age verification laws. They are pushing for age verification burdens to be pushed to the OS / App Store layer.

  • Not only that, in my opinion the many positive reactions to this decision are a sign of a decline of personal responsibility and a desire of people to be managed by the government and treated like cattle. Blaming everyone else but themselves for personal problems and failures has become the default for many people.

    • Why is it bad to want the system to push people towards healthy behaviour but it's totally okay to want the system to push people towards unhealthy behaviour?

      2 replies →

This is real. No matter how much I configure content controls on YouTube for my daughter, she scrolls past everything and ends up on brainrot videos — and then she can't stop. I've felt for a long time that this is by design.

When I was a kid, tv commercials were heavily censored and the tv channel could and would be fined immediately if something inappropriate was shown.

How is it that these days social media can circumvent all these safeguards and then somehow blame the parents if a kid is watching something inappropriate on an app designed for kids (like YouTube kids)?

The issue is that politicians are beholden to social media companies because they can literally get them or their opponent elected. After reading Careless People, I was amazed at how leaders of so many countries wanted to meet Zuck because he wields so much power.

I really hope this ruling is the beginning of the end of the free rein they've had.

  • Don't get me started. So many existing laws just seem to be conveniently ignored because... it's 'digital'?

    In a lot of countries there are specific laws banning the deliberate targeting of advertising to children (and in contexts where you would reach children, heavily regulated), but for over a decade Meta would allow you to target within the ranges of 13 to 18 years old.

    That's to say nothing of the scams and deepfake celebrity ads they let run. Imagine if a deepfake ad of Warren Buffet promoting an investment opportunity ran on TV, the network would get sued into oblivion. On Meta though, there's no repercussions.

> During his first-ever appearance before a jury in February, Meta's chairman and chief executive, Mark Zuckerberg, relied on his company's longstanding policy of not allowing users under the age of 13 on any of its platforms.

> When presented with internal research and documents showing that Meta knew young children were in fact using its platforms, Zuckerberg said he "always wished" for faster progress to identify users under 13. He insisted the company had reached the "right place over time".

Soon there will be government IDs required to use social media sites because parents can't take phones away from their kids.

I actually quit Instagram because I found it so addictive. Wild that there's a case. Parents need to just take away phones from children. Simple as that.

I can't help but feel these are "revenge" verdicts. Public perception of these companies is dirt low, and there are so few levers the average person has to change what they feel is an increase in atomization, loneliness, breakdown of civic discourse, Cambridge Analytica level political targeting, misinformation, etc.

Maybe the social media companies could do more to combat all these. They certainly have a level of profit compared to what they provide to the average person that makes people squirm.

But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.

It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback.

  • >But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction?

    Yes? Is there an algorithm or not?

    • By this logic, your grocery store could be sued for your weight gain because it uses an algorithm to time the ad notifications it sends to your phone if you install its app.

      2 replies →

  • Do more? They have not done anything. These trials have shown they have long had extremely detailed understanding of what is going on with their product, and instead of trying to mitigate the problems, they have intentionally made the problems worse in order to profit more.

So... should we all sue Youtube and Meta now? This is a semi-serious, follow this precedent to its logical conclusion, question.

Wow, so does this pave the way for massive class action lawsuits? Not familiar with how precedents like this play out long term.

Why doesn't this site let me enter? Why am I temporarily restricted?

This stop-bot thing can be annoying at times.

  • It's gone too far; half of browsing now gets blocked for a normal user.

    • I found myself trying to fill out a captcha the other day whose letters were so skewed and crazy I really had no idea what they were. It took me four tries!

How about optimizing for engagement with people you know IRL and not influencers and media?

> which could expose the internet giants to further financial damages and force changes to their products

This is so wildly untrue that it's either downright deception on the part of the NYT or they simply don't know how to do math. In this case the judgement was for $3M. To keep the numbers simple for the sake of this comparison, let's not get into the 70/30 split of this amount and just imagine that YouTube (Google) ((Alphabet)) had to pay the entire amount. Their revenue (again, keeping it simple, don't @ me) for 2025 was $350B.

Humans don't typically conceptualize the difference between millions and billions very easily, so let's knock everything down by a factor of a million. At that rate they take in around a billion dollars a day, so imagine this as a person who takes in about a thousand dollars a day and has a yearly salary of $350,000 (quite comfortable to live on, while not being obscene). Now apply the same scaling to the amount they have to pay, and what do you get? A grand total of $3.

I make considerably less than $350K/yr, and I can still confidently say that if I had to pay a fine of $3, that wouldn't just make me not take the fine and the court that issued it seriously; it would have the opposite of its intended effect. I would now see that it costs me next to nothing to keep doing things the way I always have, whereas changing things like the very nature of my business to avoid the fine could have a seriously negative effect. What the court has done is demand that when Google and Meta spit on their customers, they also throw a few pennies in their change cup to show how sorry they are, while changing nothing and going full speed ahead.
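Worked through explicitly, the scaling looks like this (the figures are the comment's own round numbers, not audited financials):

```typescript
// Scale Alphabet's yearly revenue down to a comfortable personal salary
// and apply the same factor to the judgement.
const revenue = 350e9;        // ~$350B/year, the comment's rounding
const fine = 3e6;             // the $3M judgement
const scaledSalary = 350_000; // the hypothetical person's yearly income

const scale = scaledSalary / revenue; // one millionth
const scaledFine = fine * scale;

console.log(scaledFine.toFixed(2)); // "3.00": a three-dollar fine
```

Either way you slice the rounding, the fine lands around the price of a cup of coffee for the scaled-down person.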

IMO, parents share just as much blame here, if not more. Giving your kids independence doesn't mean being oblivious to what they're doing online. Too many parents confuse hands-off parenting with not parenting at all.

  • Have you met kids? They’re devious, tech knowledgeable, and scheming and can find ways around any rule. Plus, no matter how good of a parent you are, you’re somewhat at the mercy of their friends’ parents as well. I can block TikTok from my daughter’s phone, but can’t block her from watching her friend’s phone while she’s out of the house.

  • I don't think parents going up against psychologists, data scientists, product managers, and software engineers with the best pay in the world is any kind of fair fight.

I’ve been thinking about this a lot while building Murmel (https://murmel.social). One thing we wanted to avoid from day one was the “infinite engagement machine” model, so instead of pushing algorithmic slop, we just surface links that are already being shared by people you follow on Bluesky and Mastodon.

It ends up feeling much closer to “what’s interesting in my corner of the web right now?” and much less like a system trying to keep you trapped inside it.

Small scope, obviously, but I think more social tools should feel like utilities, not casinos.

This has to be the first of many, right? Fingers crossed this leads to some meaningful change.

  • You mean it's the first of many appeals, I assume.

    Trial courts will decide pretty much anything. Then the case gets appealed over whether the trial court correctly interpreted things you probably perceive as uncomplicated, like the 1st Amendment.

  • It's a huge deal because it was the bellwether case for over 1,000 other similar cases.

    • ah yup:

      > It comes on the heels of a Delaware court decision clearing Meta’s insurers of responsibility for damages incurred from “several thousand lawsuits regarding the harm its platforms allegedly cause children” — a ruling that could leave it and other tech titans on the hook for untold future millions.

      6 replies →

I don’t feel good about this case- on the one hand, I’m all for sticking it to big corporations. On the other hand, nobody has claimed that Meta and YouTube were doing anything illegal, so this case is different from civil suits brought after a criminal case finds someone guilty. This is a case where the jury decided they don’t like how two corporations acted, and are just giving money to one person. Why does this plaintiff in particular deserve this money?

I’ve argued in the past that the right way to create the change in corporations we want is to change the laws, and people have made valid points that Congress has basically given up on doing that.

But even so, civil cases with fines don’t seem like the way to make lasting change. In the analogues to the tobacco fights, there are LAWS that regulate tobacco company behaviors as a result. The civil case here isn’t going to result in any law. So what are companies supposed to do? Tiptoe around some ill-defined social boundary and hope they don’t get sued? Because apparently the defense of “no, I didn’t target that person and I didn’t break any laws” is still going to get you fined.

What happens when a company from a conservative location gets sued in a liberal location for causing a social ill? Oh, we’re cool with that. But what if a company from a liberal location gets sued in a conservative location for the same thing? Oh, maybe we don’t like that as much.

I’m taking the libertarian side here. I know plenty of people who don’t watch TV, don’t use Facebook, and I know plenty of people who recognized that they were spending too much time on digital platforms and decided to quit or cut back. So a healthy person can self-regulate on these apps; I’ve seen it and done it.

I’m just not sure how much responsibility Meta and YouTube bear in my mind. If they’re getting fined $3M plus some TBD punitive amount, are we saying that this 20-year-old person lost out on earning that much money in their life, or would need to spend $3M on therapy because of Meta or YouTube? It feels a little steep of a fine for one person.

If Meta and YouTube really were/are making addictive products, wouldn’t a lot more people be harmed? Shouldn’t this be a class action suit where anyone with mental trauma or depression be included?

I don’t know the details of the case, but I highly doubt that this one plaintiff was targeted specifically, and I doubt their case is that unique. I read tons of news articles about cyber bullying, depression, suicide attempts, and tech addiction. Does every one get to sue Meta and YouTube for $3M now?

  • The case was brought under product liability law.

    If I sell you gizmo, and I know, or should know, that using the gizmo could seriously harm you, and I don't tell you or do anything about it, I am liable for damages you incur.

    • I’m not sure the plaintiff’s mental harm was caused by Meta and YouTube. You can be just as depressed without social media and online videos. And even if they were, other cases that are kind of similar to this have not found the corporation responsible. The parents of Sandy Hook didn’t get any money out of Remington, and their product is much more directly linked to harming people than an app. McDonald’s was not held liable in 2002 for making people fat. I am pretty sure the food at McDonald’s is more easily linked to our health outcomes than the link in this case.

      Should Apple or Samsung be held liable for making the phone that the plaintiff probably used to use these apps? How much responsibility do they bear?

      Further, Facebook/Instagram and YouTube are free products from the perspective of the plaintiff. These corporations didn’t sell anything to the plaintiff, so can they even be held liable? They did sell the plaintiff’s data to advertisers, which I think you might be able to hold them responsible if they misused that data, but this isn’t what the case was about.

      I’m not rooting for depression or suicidal thoughts or anything, but this doesn’t feel like the right direction we need to be moving in as society. We can’t simultaneously argue for free speech and freedom of choice and also claim that we aren’t capable of making our own choices to live our lives responsibly.

      2 replies →

There is no personal responsibility left in America. I have a child. It's my job to teach him and watch what he watches and does. I guess I am the only one who thinks this way. Good luck having the parental government raise your child. Parody: I let my child have cocaine and now they're addicted!!!!! Hilarious.

  • How old is your child? Younger than 6-8 it's easy to monitor what they're watching and enforce limits. By age 9-10 it isn't just about what they access in the home. Many schools in America are giving kids computer and tablet access, and kids are smart or curious enough to access social media there.

    I agree that a big part of this is educating children about these hazards, but that also doesn't mean we should allow these companies to data science the shit out of our attention and will power. Many adults have concerning relationships with social media too -- exposure, pressure, and manipulation are key ingredients that are difficult for anyone to deal with.

    • Yeah, it's too bad there aren't any tools you can use to block any content at your home YOU personally deem irresponsible /s. I'm not sure what your argument is here. If it's for regulation, then please do some reading on regulatory capture before you hand over your ID card while logging in to respond to my comment.

  • > Parody: I let my child have cocaine and now they're addicted!!!!! Hilarious.

    Cocaine is illegal because it is addictive.

    • LSD and hallucinogenic mushrooms aren't addictive and aren't legal. Cigarettes and alcohol are addictive and are legal.

    • Yet, I know many people who've done cocaine who are in other respects law-abiding citizens. Making unjust laws makes us all criminals. The government cannot protect people from themselves; no one can. The best we can do is try to educate, and we can't even seem to do that. Good luck out there, buddy.

They were also designed to addict adults, just saying.

  • Right, but adults are assumed to be somewhat more responsible for themselves. This is why we don't let kids (legally) smoke or drink, but we do let adults do so. We expect that adults can, in general, say no, and that children are less able to do so.

    But it's not absolute. Some drugs are illegal for adults as well, for example. Why? Because they're too addicting.

    So are Instagram and Youtube just nicotine, or are they heroin?

Everyone is now posting on social media about how the sentence "Social Media is Addictive" is going viral.

Doritos now liable for creating a good tasting chip? This is madness.

  • Yeah, people keep making the comparison to cigarettes but to me this is wildly different.

    Cigarettes directly cause physical harm and even death. Social media can sometimes, under certain circumstances, depending on who exactly you're interacting with on social media, indirectly contribute to emotional harm.

    Cigarettes are also physically addictive. Your body actually becomes dependent on them and will throw a fit if you try to stop using them. Social media is only "addictive" in the loose sense that all fun, mentally engaging activities are.

    I'm not saying social media is fine for kids and we shouldn't do anything to reduce their use of it (TV and video games can be equally unhealthy IMO). I'm not even necessarily against legislation on the subject. But there's a huge difference between fining a company for breaking a law, and fining them for making a perfectly legal product "too fun" because you let your kids spend all their time on it and that turned out to be unhealthy.

    This type of civil litigation where the courts effectively create and enforce ex post facto laws based on their opinion about whether perfectly reasonable, 100% legal actions indirectly contribute to bad outcomes is not a great aspect of our legal system IMO.

    • There are different kinds of addiction. The difference is physical vs. mental.

      The best example of this is heroin, which has both a severe physical and mental addiction component, and it's the mental addiction that makes relapse so common.

      Mental addictions rewire the brain's chemistry, causing the user to seek and only find joy in the substance. This is a better comparison for social media (albeit not as destructive and instantaneously harmful as narcotics)

      1 reply →

  • One could argue that the ultra processed food industry is doing exactly what the tobacco industry did wrt to making their food addictive.

    There is a difference between creating a food that tastes good and creating a food that tastes good but instantly makes you want to eat the whole bag.

  • addictiveness != enjoyment

    Although to some extent they're correlated, sometimes the things that are most enjoyable you wouldn't describe as "addicting" and vice-versa.

    Eating a nice full meal is more enjoyable than eating doritos on your couch, but you wouldn't describe it as addicting.

    If anything, I find my experience of youtube today to be less enjoyable than in the past

[flagged]

  • "Libertarian demands companies have unlimited freedom until a corporation with unlimited freedom repeatedly eats their face with no consequences, wonders why the face eating leopards they voted for are actually allowed"

[flagged]

  • Is that completely based on their expressions and reactions? I mean, you might be right, but I feel like an expression of reaction is too little to base such a damning statement on.

  • I thought the same thing. I took solace in the fact that it may be appealed, and that I suspect lawyers and taxes will take a large chunk out of the settlement

  • Body language analysis of strangers is bunk pseudoscience and a great way to reinforce your prejudices.

"YouTube argued that it was not a social media company and that its features were not designed to be addictive."

Well, that's laughable.

This is ultimately about the inherently pernicious nature of unregulated capitalism. Businesses want money. They get that by manipulating you, the consumer, to consume their services. They are "ethically" bound by (given an excuse by) fiduciary duty to pursue profit callously.

The result, in these corner cases where eating people is profitable? Shelob.

I strongly doubt that "negligent" is the proper word for "carefully designed to induce as much addictive behavior as possible".

When you put something out there, there's a question of ownership for how people end up using it.

- Some think that "if you use it incorrectly, it's your fault," and probably agree with the statement that Palantir is not evil software and that one must "change the administration."

- Some think that "if you use it incorrectly, it's the creator's fault," and then you have safety labels on everything (see Prop 65).

It's a spectrum of risk between the user and the creator. My opinion is that there's enough scientific evidence to show that social media has a negative impact on kids and teenagers, since their brains are still developing. I think a social media ban for kids is a good thing (similar to a driver's license or a drinking age).

  • If you deliberately design your platform to be addicting then you can't say people who become addicted are "using it wrong" though.

As long as we continue to value making money for shareholders above all else, such perversions, and possibly worse, will continue to happen. Capital has found all sorts of ways to make all sorts of questionable things addictive in order to sell them.

I feel, and it's obvious to most, that the only way a society can truly reform is through a shared consensus over its value system. This verdict could be thrown out by the appellate court (I suspect it will be), so this is not the culmination of values that many hoped for.

It does not seem to me that this is a country where consensus on what, if anything, to put above capital will come about any time soon and with capital it's always been ask for forgiveness rather than permission.

The only time true justice happens is when the harm becomes obvious beyond the shadow of a doubt (e.g. smoking), so that even a monkey can tell the game is up.

Perhaps only when we can one day look into people's brains with the clarity of glass and the precision of electrons will we all recognize how bad an idea social media was.

I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.

So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that the NYT is pushing actually holds up.

Litigation is negative reinforcement, and if you've ever tried telling a toddler "no," you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish — they can't redesign a recommendation algorithm.

The change has to come from people who understand these systems building better ones.

Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.

I'll give you a simple experiment: try blocking Blippi from YouTube Kids. Man, it's crazy. Even if you block the main Blippi and Moonbug channels, hundreds of channels have Blippi content cross-posted, and it keeps popping up. I know it would be easy to build, using AI, a Blippi block feature that works across channels.

That's the kind of solution we need. We have the tools; we just need intent and purpose.

[1] https://www.emorahealth.com/clinical-insights/social-media-v...
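For what it's worth, the cross-channel block described above is simple in principle. Here's a minimal sketch, assuming you can see each video's title and channel metadata; the dict shape and function names are hypothetical, not any real YouTube API:

```python
def is_blocked(video, blocked_terms):
    # Match case-insensitively against the video's title AND channel name,
    # so cross-posted content is caught even on unblocked channels.
    haystack = f"{video['title']} {video['channel']}".lower()
    return any(term.lower() in haystack for term in blocked_terms)

def filter_feed(videos, blocked_terms):
    # Keep only videos that mention none of the blocked terms.
    return [v for v in videos if not is_blocked(v, blocked_terms)]

feed = [
    {"title": "Blippi Visits the Zoo", "channel": "Moonbug Kids"},
    {"title": "Counting Song", "channel": "Super Simple Songs"},
]
print(filter_feed(feed, ["blippi"]))  # only the Counting Song entry survives
```

The hard part isn't the filter — it's that the platform, not the parent, controls the feed, which is the commenter's point about intent.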

  • > if you've ever tried telling a toddler "no"

    Parenting is rough! Good for you, for sticking to your guns.

    > The plaintiff, Kaley, started using YouTube at age 6 and Instagram at 11.

    Who was at the wheel here? If we called up all of Kaley's teachers from this time frame and asked them "were Kaley's parents checked out?", what do you think the answer would be? For as bad as education has gotten, I sympathize with teachers, because parents have gotten far worse.

    It's not like we don't know these things about people's behavior on devices... maybe it's something that should be talked about in school, along with how credit works and how to file taxes.

    Do we need to tell parents "it's 10am, have your kids touched grass yet?"... "It's 10pm, did you take the tablet and phone away so they go the fuck to sleep?"

    "Touch grass" as a meme/slang is literally people poking fun at those who are constantly online. It's "hazing" and "bullying" to drive social correction.

Is the addictiveness of social media great? No. But the blame shouldn't be placed squarely on the companies either. What happened to personal responsibility? I was addicted to Facebook, I realized it, and I disconnected from it. I had withdrawals for a while (pulling out my phone and trying to open the app I had deleted without really thinking about what I was doing) but I quit. I know I am addicted to YouTube shorts, so I stay away from them. Occasionally I'll go on a bender and a few hours will slip by without me realizing, but while I know YouTube is designing them to be addictive, I blame myself for falling for it.

There are plenty of things in life that can be addicting; drugs, sex, money, power, adrenaline, entertainment, technology... The list goes on. If we remove everything addicting from life, you better believe something else will rise up to take its place.

The solution therefore isn't to remove everything addicting from life, but rather to raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop.

  • Personal responsibility is important. But at the same time, we don't let people open up a heroin shop and then claim it's your personal responsibility to not buy it and use it. We don't put slot machines in schools but tell kids that they need self-control to not get addicted to gambling.

    I don't know what the answer is, but it feels wrong to lean _entirely_ on personal responsibility. We live in a world in which we were simply not evolved to live in. People literally make a good living by engineering and exploiting our weaknesses for profit.

    > raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop

    If only it were that easy. If you've ever known somebody who struggles with a serious addiction you'll know that even when they know it's destroying their life they still can't stop.

  • Maybe this applies more towards adults, but I don't think the correct answer for kids is only "just have self-control," something kids are notorious for not having. Certainly there's a lot of parental responsibility here but we can simultaneously hold companies responsible for their part too.

  • The problem is that internal communications inside these companies raised concerns about the manipulativeness, and even deceptiveness of the algorithms and tactics they were using.

    They weren't just consciously creating an attractive platform, they were consciously creating a manipulative platform.

  • Yes, personal responsibility is important. That doesn't mean we need to allow companies to attempt to addict as many people as they can.

    The question we should be asking: are these technologies a net-positive to society?

  • I’m glad you went through that and came out ok.

    It seems though, increasingly, that the ability to avoid addiction is less about pulling one up by one’s own bootstraps, and in many ways determined more by genetics. That is to say, what might have been possible for you is much harder for others.

    Look no further than GLP-1. People who have struggled for years - decades - with overeating are almost immediately able to cut back on addictive eating. It’s not that they suddenly discovered willpower. It’s a biochemical effect.

    It’s no wonder, then, that kids are more susceptible to building addictive behaviors. Their minds are pliable and teachable.

    Why would we not legislate things that take advantage of that?

  • If they are liable for making the thing addictive, it does mean it is their fault. In this case, the verdict specifically says it was designed to be addictive to children, from whom personal responsibility is probably not expected.

  • Don't blame yourself! You had an encounter with the world and were greatly affected. Anyone with the same predisposition and the same exposure as you would have fallen into the same situation, just as they would have pulled themselves out of it the same way.

    It is not, like, a moral thing to become addicted to something. And the ability to pull yourself out of it is determined, whether you are conscious of it or not, by your broader circumstances and by the same predispositions that brought you there in the first place. At the end of the day we are all fucked-up animals reeling from the ongoing consequences of prematurational helplessness.

    We should feel together in our problems like this, not distinguish ourselves by how we might individually overcome them. You are not "better" when you find yourself standing over a beggar addict; you are lucky. Never forget that, if for no other reason than that it's not a sustainable worldview otherwise: it leads to insecurity, anger, and relapse.

    The dark truth of the world is that everyone is doing the best they can. How could they not? Why would they not? What is this thing that separates you from the addict or the murderer? Unless you have maybe some spiritual convictions, I can't imagine what it is.

    Just really, I know you had a powerful personal journey, but don't let it convince you that we are all fundamentally alone, because we are not, and it's good to help people who may need more help.

  • On one hand: sure.

    On the other, it's very different when companies explicitly design their products to be as addictive as possible.

    We've been through this with Big Tobacco already. Nicotine and other tobacco substances are addictive on their own, but tobacco companies were prosecuted for deliberately making cigarettes as addictive as possible, besides also marketing to children. The parallels with Big Tech and social media are undeniable.

  • We can't raise other people. We can prohibit the addictive things, like Facebook's news feed.