Many will cheer for any case that hurts Meta without reading the details, but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption:
> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the moderated account (including delegating to other moderation tools of their choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who let their kids online without supervision, or without some setting indicating that the user agent is operated by a child, are implicitly allowing their children to interact with strangers. This tends to work out better in more controlled and limited circumstances where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
I think this is the way. Not control, just make it simpler for parents to handle their children's devices.
You don't have to make everyone share their age; you just make it simpler for parents to choose what their children should be able to access.
Make it easy to do right, don't add more control.
It's kind of like the old anti-piracy copy protection. The pirates always cracked it, and in the end, the one who got to sit there figuring out what the word in the manual was, was the user who actually paid for the game. It made things worse for the ones who paid and better for the cracked version.
So, make it simple.
I think getting the age thing correct is key to getting parental classification to work properly (right now platforms just ask for a birth date, which is lame), e.g.
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct — and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way* — the app stores can easily tell the app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass it and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but there are ways to keep those to a very small number with existing tech.
It’s very hard to control kids’ internet access. Impossible, really. Even if you do it fine at home, once they go to school it’s whatever policies the school has. Most require laptops and provide internet access.
> Classifying accounts as child accounts (moderated by a parent)
Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.
This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.
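The "parent can decrypt, service can't" idea above can be sketched as multi-recipient envelope encryption: the message is encrypted once with a per-message symmetric key, and that key is wrapped separately for each authorized reader (the chat partner and the parent), so the server stores only ciphertext. This is a hypothetical illustration using RSA-OAEP key wrapping from the `cryptography` package; real messengers use X25519/Double Ratchet, but the "extra authorized recipient" mechanism is the same idea.

```python
# Sketch: one ciphertext, per-recipient wrapped message keys.
# The service only ever holds `ciphertext` and `wrapped` -- no private keys.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def make_keypair():
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return priv, priv.public_key()

def encrypt_for(recipient_pubkeys, plaintext: bytes):
    msg_key = Fernet.generate_key()                  # fresh per-message key
    ciphertext = Fernet(msg_key).encrypt(plaintext)
    # Wrap the message key once per authorized reader (friend AND parent).
    wrapped = {name: pub.encrypt(msg_key, OAEP)
               for name, pub in recipient_pubkeys.items()}
    return ciphertext, wrapped

def decrypt_as(name, privkey, ciphertext, wrapped):
    msg_key = privkey.decrypt(wrapped[name], OAEP)
    return Fernet(msg_key).decrypt(ciphertext)

friend_priv, friend_pub = make_keypair()
parent_priv, parent_pub = make_keypair()

ct, keys = encrypt_for({"friend": friend_pub, "parent": parent_pub}, b"hi!")
assert decrypt_as("friend", friend_priv, ct, keys) == b"hi!"
assert decrypt_as("parent", parent_priv, ct, keys) == b"hi!"
```

The same structure covers the business case mentioned above: wrap the key for the client company's administrator instead of (or in addition to) a parent, and the messaging company still never gains access.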
That doesn't work, unless the system knows everyone's family relationships.
Not guesses. Not is told about and takes on trust. Knows.
There's nothing to stop a kid creating a fake adult account and using it as an adult, perhaps creating their own kid account for "official" use.
Ultimately this is an unsolvable problem without a single source of truth for verified ID and user age.
The only responsible way to do that is to create a global "ID escrow" agency, where ID details are private and aren't available to governments or corporations without a court order, but the agency can provide basic age checks and other privacy services of a limited nature.
Good luck with that idea in this culture.
Meanwhile we have the opposite - real ID is known to governments and corporations, personal habits and beliefs of all kinds can be tracked, there is zero expectation of privacy, and kids still aren't protected.
I understand the concern but then to make this available for adults you now have to provide proof of age to companies, which opens up another can of privacy worms.
In a way, this is like saying that one trusts total strangers in some random large tech company and total strangers in government agencies to read and/or manipulate conversations that kids have. This also paves the way to disallow E2EE for other classes of people based on arbitrary criteria. I don’t believe this is good for society overall.
You just need to provide the government with your name and address and the name and address of the counter party every time you send an encrypted message.
If you don't support this you're obviously a pedo nazi terrorist.
There is no reason kids should use so-called smart devices, except to make certain companies richer. Kids had healthy development without such crap for thousands of years. We don't discuss what percentage of alcohol should be allowed in beer and wine for kids.
Centralized organizations with proprietary software can never offer meaningful end to end encryption because they can just ship an app update to disable or backdoor it at any time.
It is better for them to be forced to turn off the security theater so people that need actual privacy can research alternatives.
We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.
Why is it so important that kids have access to the internet anyway that we're willing to sacrifice both our privacy and freedom of speech rights for it when we already know it's damaging their mental health?
We don't need all this privacy invasion if we just didn't give kids a smartphone with a data plan.
This is a good thing for “social” media. If you use any social media app (especially those owned by Meta) you should assume that absolutely everything you do is for full public consumption. Maybe these changes will make everyone stop thinking that anything is private when using “social” media apps.
It's illegal to hand a minor harmful material. Meta did exactly that.
I support people's rights to make and buy sports cars, But it is illegal to hand the keys to a minor and leave them unsupervised.
As a platform operator, I think end-to-end encryption does no good in free products. It just exposes you to liability that you couldn't have foreseen or mitigated.
No. Meta is backtracking because the business case for end-to-end encryption is gone. They will willingly give the Trump administration whatever it wants, because they are not in the business of fighting authoritarian governments; they are in the virtue-signalling business, which only works while governments are constrained by the rule of law.
The business case was to be able to say “we don’t know”. That case is gone.
Is it illegal or is it just illegal on general purpose platforms whose focus isn't extreme security?
We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.
The first two E's in E2EE stand for end. From one end to the other. So no, Meta can't. Or put another way... if they can read those messages, then it's not E2EE.
Maybe I'm just getting old and cynical, but while I think current social media is bad for children, I'm very suspicious of the current international agreement that it's time to take action, especially with all the ID verification coming from multiple avenues.
Two things can be true, and I am in the same boat. Should the next generation have their brains fried by ad-tech corporations and their algorithms? Absolutely not. Should the overdue off-ramp from this trend be the on-ramp to mass-surveillance and government overreach? Also a firm no.
I really wish this take was more prominent. I really don't buy that mass-surveillance should be required for age verification. There are plenty of very smart people who have created much more complicated things than a digital age verification that doesn't track every time you use it.
This also isn't helpful, but I think the sudden push of urgency isn't helping either. The internet has existed without any kind of age verification or safety measures for about 30 years. We could have used that time to have a sensible conversation about policy trade-offs, but instead we've waited till now to decide that everything has to be rushed through with minimal consideration.
A Kindernet would solve many problems. Hardware-gated access, local moderation and control, zero commerce or copyright, whatever you want to do to make the environment uninteresting to bad actors. Frame opposition to the concept as demand for access to your children.
Exactly. There's a clear alternative in my mind, one I'm sure is objectionable in its own way but I think is the least evil of the three: require providers to label their content and make them liable for it. This allows parents to do the censoring, which is functionally impossible now because no parent can fight the slippery power of multibillion dollar software investments designed to prevent them from having control over what their kids see.
> I'm very suspicious of the current international agreement that it's time to take action
Especially since, when you look at the behavior of younger people, they're way more careful about social media than millennials were. My teenage child and their friends keep all of their conversations in a massive but private group chat. Any social media they consume is basically read-only. They don't post online; none of them have social media accounts where they post pictures of themselves, etc.
Same with all of my younger Gen Z coworkers. If they have socials, they post very selectively, and all content is work-friendly.
The people I see who need "protection" are aging millennials who don't really understand how wildly they're exposing themselves and their families. I cringe at the amount of personal photos and information shared by the few millennials I know who still need their ego boost from these platforms (and that number itself is much smaller).
Younger people don't share their opinion and anything resembling private photos online any more.
I definitely would not agree with this, and the user metrics of platforms like TikTok and Instagram argue otherwise to your anecdote. Many are showing far more of an alleged window into their lives than ever before, key word being alleged, as it's always heavily curated in a way that often tries to make everything look perfect and effortless.
There absolutely are a lot of Gen Z who avoid social media, but to pretend most are privately hunkered away is completely ignorant of today's social media usage.
Given that it's happening simultaneously with the war on E2EE and general-purpose computing, their goals are as transparent as it gets. The West is at this point only a decade behind China.
Governments always want censorship and speech control. That never changes. The only difference is that now the general populace has accumulated enough disgruntlement toward social media to be used against themselves.
No, the difference is that while governments are still constrained by the rule of law, it's cheap PR to fight them on data-access claims; once they turn authoritarian and fascist, industrialists fall over themselves to feed everything into Palantir.
I’m deeply worried by how uncritical these responses are. Meta is removing end-to-end encryption specifically because these lawsuits are trying to claim end-to-end encryption is a tool for child abuse.
The “think of the children” angle is the perfect angle to pressure companies to make communications readable by the government. And here tech audiences are welcoming it and applauding because they couldn’t read past the headline and they think anything that hurts Zuck is good.
How anyone can see this happening and not draw the connections to Discord and other services also pushing ID checks is beyond me. Believing that this will only apply to services that don't affect you is shortsighted.
There's no agreement other than maybe that social media is bad for children. To get kids off of there you need to identify who's a kid and who isn't. Same with alcohol and tobacco. Obviously people shouldn't give their ID to Meta and hopefully many will not but those that do, for me, as someone who doesn't use social media, that's a small price to pay to keep kids off. Again, Meta is completely optional, it's a platform to share stupid videos, no one NEEDS to be there.
Do you think Meta wouldn't want to be legally mandated to ask for your ID? The improvement to ad targeting alone would pay for any lost users. They would probably like nothing more than to be in the same business as idema and the other online identity/age-verification providers.
Critically think about this for a second before believing some ChatGPT-generated "OSINT" report on Reddit. Otherwise, you'll let corpos use your mob hatred against you.
The general public is being told they are faced with a crisis. This has been a problem for at least a decade, yet suddenly it's at the forefront and conveniently ties into ID verification for everyone to use general purpose computing.
I'm sorry, but if you don't think there's a conspiracy, I have a bridge to sell you. It has already come out that Meta spent billions lobbying to promote this legislative change.
Really? You still think you're the one looking at it all wrong? It's exactly what you think it is. Stop giving blatant malice the benefit of the doubt, especially the doubt they've directly instilled.
> The New Mexico attorney general’s office created multiple fake Facebook and Instagram profiles posing as children as part of its investigation into Meta. Those test accounts encountered sexually suggestive content and requests to share pornographic content, the suit alleges.
> The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
and
> “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
This is what it's about right? The article doesn't make it seem like encryption is meaningfully part of this case at all.
> Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
There's no indication that that decision, or the announcement, is directly related to the trial; they just happened at the same time. It's a link drawn by CNN without presenting any clear connection.
They have been under a lot of pressure for years to disable E2E messaging because it prevents them from monitoring messages for child abuse. This was a central point of the trial. While they haven't given a reason for the change, I think it's reasonable to infer it is a response to this pressure.
However there is another possible explanation
> Tom Sulston, head of policy at Digital Rights Watch, said rather than acceding to law enforcement demands, the move was more likely due to Meta deciding against moving messaging on WhatsApp, Facebook and Instagram to a single platform.
That's not how the legal framework in society works. Victims are compensated. The business pays. The precedent of wrongdoing is specifically established which means that further infringements can be quickly resolved.
The legal system does not seek to destroy the business, or individual criminal. Instead it wants them to be able to continue doing their other non-criminal stuff.
The legal system has two goals - to compensate individuals harmed and to discourage further violations of the law. This lawsuit seems to have fulfilled the first goal but fell flat on its face when it comes to punitive damages.
Meta knowingly hurt children for profit. It worked.
If we are in any way serious about technocratic solutions to social problems, this would be untenable: the company would be bankrupted and a new company would fill its place. No tears would be cried, nothing of value would be lost, and half of Hacker News would be champing at the bit to build a better alternative for the newly opened market.
But that's not what happened. We allowed children to be knowingly hurt for profit.
It's very hard to think they wouldn't do something harmful to children again if the economic incentives aligned. For corporations it's just so easy to say sorry, and in the worst case they know an irrelevant fine will be placed in order not "to destroy the business".
This represents 0.6% of Meta's 2025 profits, or 0.2% of revenue. Though presumably it was based on harms from previous years; I haven't read the lawsuit.
Well hopefully now that there's precedent, it will open them up to recurring repeat-offender lawsuits and legal action. The goal is to get them to stop doing predatory things now.
That's good, but it can be read as: "Everyone can be a first-time offender and get away with a slap on the wrist" -- where "everyone" is a tech company. Next they will find some other nefarious thing they don't need to properly check for, since that would be a new offense and again earn only a wrist slap. There is no signal in this fine other than: "Hey, it's OK, if you are big enough, you will get away with it. At least once, likely twice or more, depending on how big you are."
Those two things are unrelated to each other. And yes, we can do without age verification and we can have E2E encryption. Age verification is causing more harm than good. It also doesn't meaningfully help with any of the problems mentioned in the article.
Well, assuming you won't also think it's okay for Meta to just be held liable anyway.
There are people who are against age verification just on principle and others who are against it because they know any realistic implementation is going to be abused.
With E2EE and no age verification, there is no way Meta could have any control over messages sent to children, so it does not make sense to hold them responsible.
I think we don't want mandatory age verification or banned encryption for everything. However, you can't hide behind "it's not the law" as a shield for everything. Thanks to ubiquitous spyware, Meta knows damn well the age of almost all of its users, and if someone who's 40 is sending first-contact messages to 10 unknown 13-year-olds every day, it seems important to know what those messages say. They know this stuff is happening and they care about not being liable, not about your security.
We can assume Meta has backdoored its E2EE somehow anyway.
Also, “the total civil penalty of $375m was reached after the jury decided there were thousands of violations of the act, each with a maximum penalty of $5,000.
Meta is also involved in a separate trial in Los Angeles, in which a young woman claims that she became addicted to platforms like Instagram and YouTube, owned by Google, as a child because of how they are intentionally designed.
There are thousands of similar lawsuits winding their way through the US courts.”
Wait, what? This case's central argument was about propagating and promoting child sexual abuse material, but the maximum penalty was set to only $5000 per violation? Why?
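The quoted figures do answer part of the "how" question: taking the $375m total and the $5,000 statutory cap at face value, the violation count the jury settled on falls out directly. A back-of-envelope check, using only the numbers quoted in this thread:

```python
# Numbers as quoted above; both are rounded figures from the reporting.
penalty_total = 375_000_000      # total civil penalty in dollars
per_violation_max = 5_000        # statutory maximum per violation

violations = penalty_total // per_violation_max
print(violations)  # 75000 -- the "thousands of violations" the jury found
```

So "thousands of violations" at the per-violation maximum means roughly 75,000 counts; the cap per count is set by the statute, not by the jury.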
This fine from New Mexico is about 0.6% of Meta's annual profit.
If all 50 states sued at the same rate, that would be a 30% dent, and I'm sure states can sue for more than 0.6% too. That would be historic action against malfeasance and would send a strong FAFO signal to all corporations.
Social media for children should be moderated by their parents, full stop. End-to-end encryption exists. You cannot un-invent it. It is trivial to roll your own encrypted chat service.
I haven't read this article, but I can tell you for certain that no verdict was handed down that will punish them in any way that matters. They have and generate more money than they could ever spend and they're functionally above the law because of the money and lawyers they can afford. The law itself is broken in this country and when you get big enough you can literally get away with murder.
+1. If there's a dollar amount attached to a verdict for a company of this size, then it's just a complicated business expense and not an enforcement of a law.
> It's a $3 million verdict in compensatory damages. Even if reduced on appeal, that's a lot of money.
Where are you seeing that?
The article says:
> Jurors found there were thousands of violations, each counting separately toward a penalty of $375 million. That’s less than one-fifth of what prosecutors were seeking.
> Meta is valued at about $1.5 trillion and the company’s stock was up 5% in early after-hours trading following the verdict, a signal that shareholders were shrugging off the news.
> Juror Linda Payton, 38, said the jury reached a compromise on the estimated number of teenagers affected by Meta’s platforms, while opting for the maximum penalty per violation. With a maximum $5,000 penalty for each violation, she said she thought each child was worth the maximum amount.
They had to pay about $375 million. That's a lot of money, but I suspect that Facebook has made considerably more than that on targeting children.
I'm hardly the first person to use this logic, but if they make more money breaking the law than they have to pay in fines, then it's not a fine, it's a business expense.
Agree with your take. However, to put the amount in perspective: this is just New Mexico, so the per-capita fine is actually quite large, and, a big if, were it applied similarly nationally or globally, it could be a significant hit to their business and force some change.
This particular verdict is a long time coming. How it drives meaningful change is the bigger question.
One of the challenges we need to resolve is the race to the bottom for online communities - engagement metrics will always result in a pH level that supports more acerbic behavior.
There are multiple analyses you can find, if not your own experience, suggesting that we should be able to do better with our information commons.
Just today, I found a paper that studied a corpus of Twitter discussions and found that bad-faith interactions constituted 68.3% of all replies.
The engineer and analyst side of us will always question these types of analyses.
I’ve read enough papers at this point for the methods to matter more than the conclusion.
1) Meta and the other tech platforms need to open up their research and data. NDAs and business incentives prevent us from having the boring technical conversations.
2) tech needs someone else to be the bogeyman - the way we did for tobacco. The profit incentive ensures profitable predatory features pass review. Expecting firms to ignore quarterly shareholder reviews for warm fuzzies is … setting ourselves up for failure.
Regulators (with teeth) need to be propped up so that the right amount of predictable friction (liability) is introduced.
3) tech firms need an opportunity or forum to come clean. The sheer gap between the practical reality of something like content moderation vs the ignorance of users and regulators - results in surprise and outrage when people find out how the sausage is made.
4) Algorithm defaults decide the median experience for participants in our shared marketplace of ideas. The defaults need to be set in a manner that works for humans and society (whatever that might be).
Economies are systems to align incentives to achieve subjective goals.
When you see traffic between a 40-year-old man and a 12-year-old girl who don't have any common social connections, and the messages are initiated by the man, you don't have to crack E2E to suspect dick pics.
So you want the platform to be creepier and investigate connections more intensely? And you want to intercede on an arbitrary method you just made up, without examining all traffic first?
I seem to recall someone taking pictures of their baby, naked, because it was sick, and emailing them to the doctor -- and having their Google account terminated. Terminated, with the father flagged as a pedophile and the police contacted (all automatically).
Everyone was quite upset. Everyone felt it was too intrusive.
Frankly, communication platforms have no business trying to police anything at all. I wouldn't want the phone company recording all my conversations, hunting for trigger words, and then contacting the police or cutting off my phone if I said a "bad word".
Yet somehow it's OK to have this level of intrusion because.. um "computers".
The state has no business listening in on private citizen's communication.
Corporations have no business doing so.
To protect the 12-year-old girl, something called "her parents" needs to pay attention and watch what she does. That's their job. They're her guardians.
Some random corporation has no business in that. Some random corporation has no business being an 'algorithmic parent', an automated machine with no appeal.
Here's something I'd support -- a way for parents to prevent children from registering for accounts, and, to be able to examine children's accounts.
But... then we get into ID verification. Of course, surely you support ID verification for platforms, because if you support platforms knowing the age of people (40 and 12, you listed), then you therefore must support a way to verify those ages.
I cheer any decision that holds any private web property (like Facebook) accountable for its users' actions.
It helps reduce the hegemony of large social platforms and promotes privately owned websites. For example, I know everyone who has permission to post on my website (or I pre-moderate strangers' comments), and I am ready to take responsibility for what my website publishes.
Currently the legal stance seems strange to me -- large media platforms are allowed to store, distribute, rank, and sell strangers' data, while at the same time claiming they are not responsible for it.
If you haven't already, you should look at the court case that prompted the creation of the current legal framework of Section 230. Prodigy was sued because of the things being said in public chatrooms. Should the host for an IRC server be responsible for everything said on the IRC server? Should they pre-moderate all the messages being said there? Should dang premoderate every post on this site?
The reality is that people who cheer for this stuff are going to be unreasonably shocked when it comes to bite them later. Once the government's done going after the big guys, the little guys are next, and unlike the big guys, they can't absorb a few fines and judgments.
Meta's own research (and its use of it) has shown that it repeatedly ignores well-substantiated facts about the harms of its products. Now that Section 230 seems like a flawed shield, I fear the takeaway for other companies will be: never conduct honest research in the first place to preserve plausible deniability.
Meta has always wanted the appearance of caring about safety (it helps them attract talent and keep mission-related morale high) while nearly always prioritizing growth, save for tiny blips like 2017, when the Cambridge Analytica fallout was hitting a crescendo. Companies like X, by contrast, are run by people explicitly disinterested in putting significant resources into safety, especially research.
I will also add that, for the past few years, Meta and X both have become extremely hostile to external researchers of their platforms, shutting down access to tools and data.
Corporate liability isolation has become absurd. People who make decisions that harm people should be held to account for those decisions even if they structured their decision making apparatus in a legal way that makes it look like they're just following the orders of the shareholders.
Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.
If Meta did advertise the "safety of its platforms for young users" then they should be held accountable for that. It seems clear from the whistleblowers that Meta had internal data that they knew they were not safe for young users, but Zuck gotta get those ads($$$) in front of young kids.
You can't realistically make a space that's free from predators. The real answer is teaching children to recognize unacceptable behavior. But most abuse is from inside--typically adults that the parents put in a position of trust or quasi-trust.
I do not fault Meta for there being predators, I fault Meta for pretending they're being kept out.
Can one be opposed to age verification in the OS and yet totally happy that Meta got this fine? There is a very big difference between e2e encryption /telephone and social media. Social media is more akin to a phone book. I do not recall there ever being any phone books listing minors. That's completely unacceptable and unnecessary. I am totally OK with phonebooks (or their modern digital equivalents which enable people discovery and user generated content discovery) to abide by the same KYC rules as banks. And be only for adults. Your kids using e2e encrypted messaging to communicate with their friends whom they have met in person? Nothing wrong with that, we all have the right to privacy. Kids listing their contact information publicly? Absolute no.
As part of the ongoing enshittification of the internet, tragedy of the commons etc., these big centralized internet platforms decided that instead of being responsible and making their products *slightly* less terrible it was better to maximize short term engagement metrics, and that, egotistically, the chance of there being real consequences for their actions was near zero. (Or, even more cynically, that their yearly performance review was more important).
Now I'm afraid they've screwed everyone over and the idea of an anonymous open internet is now dead- we're gonna see age (read, real ID) verification gating on every site and app soon....
The dumb thing, looking back, is how unimportant it was for the Facebook feed algorithm to be this addictive. They already had the network effects and no real competitors. They could have just left it alone.
What's horribly frustrating about the age-ID stuff is that the issue with Meta wasn't that they didn't know what they were doing, or that they were doing it to children. They did know. This wasn't a case of "if only they had the age, they could have done the right thing."
The laws being passed target exactly the thing that wasn't the problem. They should have been passing "duty of care" laws aimed at social media companies, not "give me your age" laws.
I may have missed it, but almost all these laws being passed for this issue have been pretty much solely around data collection rather than modifying the behavior of the worst businesses in the game.
It would be like seeing a car wreck kill a bunch of pedestrians and then passing a law that pedestrians need to carry IDs on them.
Yea, in the end there will basically be no consequences for Meta: Facebook is already mostly dead, and the ad revenue from that time has already been collected.
Now we're just moving on to a kind of moral panic think-of-the-kids kind of moment that is thinly-veiled state surveillance.
Watching Mark testify before the Senate, it honestly appears that it may never have occurred to him that not offering a feature was an option. He treats the product as if it were some kind of inevitable outcome that was destined to exist.
Mass surveillance 'for your own good' instead of regulating social media in any way.
You can purchase a scam ad and it'll be up in 10 minutes. Lie to every anxious child that they have ADHD and need meth; lie to every dejected boy that they just need to manosphere up and buy supplements.
They think the public is stupid. They might be right.
I doubt that Zuckerberg really uses either Facebook or Instagram all that much. Maybe as a curated PR channel sure, but he's not doom scrolling Instagram at bedtime.
If you know what the platform is capable of, if you've seen how the sausage is made, you're probably not using it.
People are also a little naive in not seeing that these platforms aren't just bad for children; they are bad for adults as well. I'm not opposed to not "selling" them to children, but we also need to label them correctly for adults and have rules like those for alcohol, tobacco and gambling, so no or limited advertising. Scrub the public spaces of Facebook logos.
I'm not sure if it's naiveté; it's probably more that we are all complacent. If all Facebook/Instagram users (or perhaps even just those with children) stopped using them, that would be an actual stick, wouldn't it? But we don't (I'm not excluding myself).
Discussions from proper experts about the absolute toxicity of social networks as implemented are at least... 15 years old at this point? At least that, and I am not talking about a rare article here and there but an onslaught of articles in popular media from all sides. But parents... mostly didn't give a fuck.
Let's admit it: in the same vein that Trump is a symptom of current US society, the approach and effects of the social networks we allow them to be are a result of how lazy, and thus addicted, people got. With many of the parents doing exactly the same, don't expect miracles.
One thing that I don't understand: even here, some folks call that sociopathic, amoral piece of shit 'zuck' and treat his empire like some sort of semi-charity. When I attacked the Facebook company in the past, there was always a lot of defense ("look at this open-sourced stuff, look at that"... which I presume came from either direct employees or clueless stockholders). People are people, deeply flawed and often weak, without willingness to admit it to themselves.
Proportionally, it's as if an individual who makes $60K a year gets a speeding fine of $375. It might be moderately annoying, but it's not really going to be remembered in a month.
"We went a little over the line to figure out where the line is, so, we can now guarantee you, dear shareholder, that we're extracting the absolute maximum possible value! Isn't that splendid!"
Regulate and fine social media and adtech companies until it's no longer economically feasible to generate the massive profits and stock valuations that are prompting this garbage.
Just have to read the quarterly conference calls between Zuck and Wall Street. Both groups are in total denial. And will be till we never hear from Zuck ever again.
Just break them all up via antitrust enforcement. It's increasingly becoming clear that society will degenerate into cyberpunk technofeudalism otherwise.
So... Question. Seeing as Zuck is the majority voting shareholder and highest ranked executive, why isn't there a piercing of the corporate veil going on? This isn't some distributed blame case. Ultimately, his decision making led to what the jury finds objectionable. I find it absurd that somehow, the corporate veil is able to absorb even this? Somebody accepted the risk. That somebody is at the top of the pyramid. Want to send a message? Get 'em.
I really would love to be in the mind of Meta spokespeople who have to craft messages that completely hide the truth, sound convincing, and have to live with it, to understand how they do it without blowing up. I think that's also quite damaging for someone's mental health.
I don't know who they have to pay it to, but that's only for New Mexico, which has about two million people, which works out to about $187.50 per person.
That's pretty cheap when it comes to deception.
The eyes of Texas should be upon this, which is 15X the size and should not settle for less than $1000 per person, where deceptive trade practice is much more serious than other places.
Now that would set a $30 billion example which may not be enough of a deterrent either.
But there are probably plenty of people for whom a $5000 one-time payment might not come close to being fair compensation for what's already happened, especially with Meta allowed to continue as a going concern; that's got to be psychologically harmful.
To really fix it each state would have to follow "suit" while greatly upping the ante so there's at least hundreds of billions at stake.
Meta can afford it, and who else is responsible for so much widespread, sneaky deception at this scale for so long?
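The per-capita arithmetic in the comments above can be sanity-checked with rough round population figures (these numbers are approximations from the thread, not official census data):

```python
# Rough numbers from the comment thread: ~2M New Mexicans, Texas ~15x larger.
nm_fine = 375_000_000          # the $375M judgment
nm_pop = 2_000_000             # approximate New Mexico population
per_person = nm_fine / nm_pop
print(per_person)              # 187.5 -> about $187.50 per New Mexican

tx_pop = nm_pop * 15           # Texas at roughly 15x New Mexico's population
tx_total = tx_pop * 1000       # at $1000 per person
print(tx_total)                # 30_000_000_000 -> the "$30 billion example"
```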
>Now that would set a $30 billion example which may not be enough of a deterrent either.
Mark's personally worth more than 10x that, Facebook's got a 1.7 trillion market cap, so it really wouldn't move the needle for them. Cost of doing business and whatnot.
The same company intentionally driving minors towards this content (despite claiming to care about them) is also lobbying in secrecy for requiring all of us to scan our ID and face in order to use our phones and computers.
They don't care about child safety as long as it doesn't become so bad as to impact their revenue negatively. But they see that governments all over the world push for some kinds of age restrictions, and they know they are a prime target and it is hard for them to push back against that.
The reason they are (not so secretly) lobbying for requiring us to ID ourselves at the device level is that they don't want to be the gatekeepers. They want to make creating an account as effortless as possible, and having to prove your age is a barrier that may turn off some people, including adults, who may instead turn to services that don't require age verification. By moving age verification into the OS, not only does the responsibility shift to the OS or hardware vendor, but it also removes the disadvantage they have against services that don't require age verification.
If you read between the lines, you will see that they have the same stance: "put age verification at the OS level, so that people don't discriminate against us". They know they are not in a position to argue against "child safety" laws, so instead, they lobby for making it worse for everyone instead of just themselves.
Meta is like one giant cancer that grew a few small tumors of a benign nature, like some of their efforts in open source and open research (React, Llama, etc.).
Cancer is a great metaphor because it's a perversion of natural, healthy processes. So-called social media is nearly that, but actually grotesquely unhealthy.
People are dramatically unwell when they are not social, but that unregulated process can also be negative, up to and including being lethal.
Actually, Meta is spending millions to push the age verification requirement off to the app store providers, such as Google and Apple. It's an attempt to shield Meta from liability by transferring it to the app providers.
Having clear laws about what's allowed and what isn't is a lot cheaper than getting repeatedly sued for hundreds of millions for not doing things there was never a clear legal requirement to do.
>to push the age verification requirement off to the app store providers,
and it makes more sense: Apple and Google have your credit card, or, if you are a parent who bought some phone for your child, then at first boot it should be your job as a parent to set up a child account.
Most sites are not going to implement this themselves.
I think they're in prime position to become a key broker of identity in the same way that a lot of people already log in with their meta or google account to unrelated websites.
They become very entrenched and get a ton of data that way.
As more and more people essentially lock themselves in with these identity brokers, though, I imagine it has a very stifling effect on speech. Imagine getting banned from those.
Isn't this conversation, not publishing scientific hypotheses, theories and findings?
If so, it is customarily permissible to use rhetoric and sarcasm to more strongly emphasize a point. Or, to leave the conclusion as an exercise for the reader.
I mean, their telemetry crap is on a lot of apps too. I remember someone DMing me something very niche on Discord, and by chance I opened up Facebook, it gave me ads for that very, very niche thing I have never even looked up on Google, or Facebook, it was like IMMEDIATE. I opened up Facebook by chance, and voila.
The other one was the time I was speaking to my brother in law, who had just paved his driveway, he said "I could have used airport grade tar, but thought it was too much" and we were in front of his Nest security cam is the only thing I can think of, but the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this, I only heard it from them.
There's some serious shenanigans going on with ad companies, and we just seem to handwave it around.
Coincidentally, I remember both experiences very very vividly, because this was the last time I used either platform in any meaningful capacity.
> The other one was the time I was speaking to my brother in law, who had just paved his driveway, he said "I could have used airport grade tar, but thought it was too much" and we were in front of his Nest security cam is the only thing I can think of, but the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this, I only heard it from them.
Option A: The Nest camera not only listened to the conversation and picked out "Airport Grade Tar" and decided it needed to show adverts about it to people, but the camera also identified you to the point it could isolate your FB account in order to serve you those adverts.
(I'm making some assumptions but...)
Option B: Your brother-in-law had done various searches for airport grade tar from his home (in order to know how expensive it was). You, whilst visiting his home, were on his Wifi and therefore shared the same external IP address. Your phone did enough activity whilst at his house (the FB app checked in to their servers in the background, used Messenger, etc.) to get the "thinking of buying airport grade tar" signal associated with his external IP address, and thereby with your FB account that was temporarily on that IP.
I had a friend who was convinced that some device in his house was listening in on his conversations with his wife as he kept on getting adverts for things they'd been talking about buying the day before but he hadn't searched for. (But she was searching for it from their home wifi, which is why it appeared in his adverts afterwards.)
Basically these age attestation/verification laws are being pushed as a "save the children!" scenario. But if you read the laws - all they really do is shift responsibility around.
Currently, websites and apps are supposed to ensure they don't have kids under 13, or if they do - that they have the parents permission. That's federal law in the US.
These laws make the operating system or app store (depends on the particular law) responsible for being the age gate.
This doesn't stop the federal law from being enforced or anything, but the idea is apps/websites don't handle it directly, that's handled by the operating system or app store.
So now companies like Meta can throw up their hands and say "hey, the operating system told us they were of age, not our fault." It also makes some things murkier. Now if Meta gets sued, can they bring Google/Apple/Microsoft in as some kind of co-defendant?
I think that murkiness is the point. They don't need to create the most bullet-proof set of regulations that 100% absolves them of all responsibility, they just need to create enough to save some money next time they get sued.
I can think of a ton of regulations we could create to better help protect kids. We could mandate that mobile phones, upon first setup, tell the user about parental controls that are available on the device and ask if they'd like to be enabled. Establish a baseline set of parental controls that need to be implemented and available by phone manufacturers, like an approval process that you need to go through to hit store shelves.
We could create educational programs. Remember being in school and having anti-drug shit come through the school? It could be like that but about social media (and also not like that because it wouldn't just be "social media is bad," hopefully).
Again all these laws do is take what should be Meta's burden, and make it everybody else's burden.
> is also lobbying in secrecy for requiring all of us to scan our ID and face in order to use our phones and computers.
You’re conflating different things. The OS-level age setting proposals are not the same as scanning IDs and faces.
I’m anti age check legislation, too, but the misinformation is getting so bad that it’s starting to weaken the counter-arguments.
> Their stated reason? Child safety.
> Their actual reason? You can figure that out.
We're commenting under an article about one $375M lawsuit over child safety, with many more on the way. They are obviously being pressured on child safety by overzealous prosecutors. This is why they reversed course and removed end-to-end encryption from Instagram: because it was brought up as a threat to child safety.
Also your “you can figure that out” implication doesn’t even make sense. The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content. I’m not agreeing with the proposal, but it’s easy to see that it would be more privacy-preserving than having to submit your ID to Meta.
> The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content.
I find it hard to believe that meta doesn't already have a pretty good age estimate for 95%+ of their users.
What offloading the responsibility to the app stores (or OS vendors) gives Meta is exactly that, offloading responsibility. In a future lawsuit, they can say that someone else provided them with incorrect information.
It is most likely not them; they are a proxy for the US. Under another administration they would use an NGO to advance the agenda. The goal is to face-scan the world.
To be fair, they're just an evil corporation making lemonade out of lemons. I'm sure they'd be happier pushing porn and nazism to hundreds of millions of underage users, but if certain governments want them to write all that bunk code to verify everyone's ID, they might as well make money off the data.
We used to believe in freedom of speech and freedom of association.
Since the dawn of the Internet era, we've had a legal principle that platforms are relatively shielded from liability for what their users do.
It's the Internet. There's sexual content and sketchy characters on it. Occasionally people will encounter them -- even if they're under 18.
Anyone who grew up in the mid-1990s or later, think back to your own Internet usage when you were under 18. You probably found something NSFW or NSFL, dealt with it, and came out basically OK after applying your common sense. Maybe it was shocking and mildly traumatizing -- but having negative experience is how we grow. Part of growing up is honing one's sense of "that link is staying blue" or "I'm not comfortable with this, it's time to GTFO". And it seems a lot safer if you encounter the sketchy side of humanity from the other side of a screen. Think about how a young person's exposure to the underbelly of humanity might have gone in pre-Internet times: Get invited to a party, find out it's in the bad part of town and there are a bunch of sketchy people there -- well, you're exposed to all kinds of physical risks. You can't leave the party as easily as you can put your phone down.
I stopped logging onto Facebook regularly around 2009; I only log in a couple times a year. I hate what Facebook has become in the past decade and a half.
But giving a site with millions of users a multi-hundred-million-dollar fine because some of those users behave badly seems...asinine.
If your kid is old enough and responsible enough to be given unsupervised Internet access, you'd better teach them how to deal with the skeevy stuff they might encounter.
That’s not really true. Pre-internet we had relatively much stricter content controls. Fairness doctrine springs to mind, plus significant regulation of the movie industry.
Letting companies sell addiction has pretty significant negative externalities. That’s why we regulate gambling and drugs. Facebook sells addiction, so it makes sense to regulate it like we do drugs and gambling.
Most Facebook users are basically teenagers, so it's no wonder it took them this long to add any real restrictions...or maybe they just wanted us to think they cared.
What is so hard about teaching children not to e-message with strangers, just like not to snail-mail with strangers? Also, the parents should be able to join the conversation, just like in the analogue world. Call me backward, but I don't want to outsource parenting either to the government or to remote businesses.
In the analogue world, shops and pubs are responsible for not giving kids alcohol, porn, gambling access and whatever else. In the analogue world, parents are not expected to do perfect surveillance every minute of kids' lives.
Also, parents have in fact full control of snail mail.
Are the post office and postmen responsible for policing kids' snail mail?
Where does this "perfect surveillance" idea come from? I teach my children how to make acquaintances; first in a more direct, more supervised way, later letting them become more and more self-driving. Like anything else in parenting, e.g. riding a bicycle. But I guess urbanization diminished that skill as well. No need for "perfect surveillance"; no parent wants it. It's not only easier to pass on basic principles, but it also makes supervision gradually less necessary over time.
> parents have in fact full control of snail mail
What? Children using e-messaging could just as well do snail mail completely on their own (of course they don't, but it's not about going back to the analogue world, it's about forming the digital world on the same principles).
Well, I can imagine that in a highly urbanized environment, where children are forbidden to go outside and instead locked down together with family, making them more isolated, and trusted "to the phone" to cope with the daily frustration, phone usage and e-messaging may easily become completely unattended by and undisclosed to parents, while posting an envelope is at a level of expertise for them.
Parents' ability to be in control of e-messaging is the same as for snail mail.
1. This fine is 1/100th the size it should be. Make them pay, and break up Meta/facebook.
2. Age verification pushes coming from several different actors across gov't and private sector is worrying. I trust no actor here, and neither should you.
3. Zuck should be in jail.
This is one of the first times a court has found that the platform itself can be liable, overruling frequent industry claims that they just host content and are never responsible for it.
$375 million sounds big but is peanuts compared to their annual revenue. And of course Meta will appeal and then try to drag everything out for years and years. Expect copycat lawsuits.
These platforms expose minors to predators and bad actors, and Meta was proven to have lied about safety.
Meta can do more and should do more. I think that's the short of it. The company made $59 billion last year. It's completely reasonable to expect that they expend effort and budget on reducing their harm to children.
Meta has a way to read your E2EE messages. I don't know what it is, but if they couldn't, they wouldn't offer it.
There's a difference between E2EE between friends who want to remain secure, and E2EE between strangers in an attempt for the platform to avoid legal liability for spam.
> Another poster child for Meta's lobbying (bribery) to encourage OS level age verification. (numerous recent references in HN posts)
The references I saw showed Meta had lobbied for some of the laws that require age verification be done by the site or by third party ID services. They did not show that Meta lobbied for any of the OS bills.
Some showed that Meta had lobbied in some of the states with those bills, but they just showed Meta's total lobbying budget for those states.
Make the fine scale, and fit the severity of the issue. This should be $375 Billion not $375 Million. These are our future generations they're destroying.
If I were to put my tinfoil hat on, one could see a world where Facebook let this happen in the first place in order to have a case to make for less security in communications.
I don't like Meta in any sense of the word, and I think they've significantly degraded humanity and society as a whole for generations to come. But I hope my conspiratorial mind is just overreacting.
Also, various sources (websites) may present articles differently. Everything from fonts, colors, and formatting, to online ads and tracking, to access restrictions (enabling Javascript, CAPTCHAs, etc.) can vary across websites.
Until the fines are large enough to impact business and cause heads to roll, and maybe we even see some prison time for executives, companies will continue to not give a fuck. This is chump change for Meta.
As much as everyone hates Meta for selling people's personal data, this is absolutely ridiculous. The hysteria regarding forcing companies do parents' job doesn't make any sense whatsoever.
Requiring ID to browse the internet is doing the parents' job of managing what their kids are doing online.
Stopping misleading advertisements and mental health issues while claiming to protect children is not on the parents. The parents were given false information leading them to believe their kids would be safe.
I've never seen Meta advertising themselves as a kindergarten or a playground for kids. They have always been perceived as a public square or forum. It's wild to leave your child alone in a public place and expect safety.
Name and shame the managers and leadership at this time.
I dream of a world where they'd be recognized and shamed in the streets for all the damage they've done to society. Instead they get to do all kinds of side quests with their money.
I'd much rather they get personally fined and/or banned from holding leadership positions in the field (with varying timeframes depending on the level of responsibility).
Naming and shaming won't do much good. It could backfire and serve as a positive mark on their resume for other morally corrupt leaders.
Drop in the bucket for them. Giving Zuck some jail time would be the more appropriate message - there's no doubt he knows and approves of the kind of evil activity the New Mexico law enforcement dug up.
That would be a dream, but I cannot see it happening.
But totally agree with your theory- platforms should face genuine legal exposure for algorithmic harm to minors (as tobacco companies did for health harm).
Unfortunately, as we found out recently, Meta's lobbyists are a powerful force to contend with and I do not trust our governments to stand up to them.
lol. And you think we will ever legalize drugs (so people can take responsibility), when large companies are being sued for making people addicted to social media?
There's a vast difference between accurately advertising the effects of drugs and the risks involved in taking them, versus lying to you about the drugs and creating an environment that furthers addiction.
It all boils down to consent.
I might want to take some drugs that have some harmful side effects. But I knew about them, and I willingly made the choice because I valued the high more.
Contrast this with: I knew about the harmful side effects, told you they didn't exist, and said you should take more. And then I change the drug so it's even MORE harmful because it also makes you BUY more. That's what these social media sites do.
They use engineered sociology and psychology to create addictive products, and then refine them to maximize profit at the cost of anything they can pull a lever on.
What bothers me the most is not the vampires at the top sucking out every dollar they can extract out of vulnerable people, but the fact that so many engineers are supporting this. So much for engineering ethics. Why even bother teaching it anymore?
If you take actions to deliberately weaponize your product against children in particular, whatever it is -- you shouldn't be surprised when liability attaches. That's what this verdict is about.
Alternative headline: household spyware cash machine forced to pay $20 for being bad.
If you want to punish Meta then you have to punish the wonder boy who runs it. Not even shareholders can fight off the guy spending $80B on the metaverse.
Sadly, I don't think it's enough for Meta to change, because they have no business model if they are forced to be serious about online safety. That's probably also why they are pushing so hard for age verification: to make safety someone else's problem.
Is that the only factor? Is insider trading objective? (hint: it's not, read the law). It's objective only when we can attribute a quantitative measure to it? What's the relative "value" of $1M profit from insider trading vs a single child's destroyed psyche? How much value could that child have contributed to the society had it not been for the harm done to it? Is there really much subjectiveness in terms of the harm done to those kids?
All that to say: I don't think "objectivity" should be the (main) factor resulting in existence of adequate punishment.
It is, I agree. My point is that the proportionality of consequences is not there. We seem to be good at criminalizing discrete, individual financial acts, but not systemic corporate decisions that cause diffuse harm. That's even when the aggregate harm is arguably far greater.
Meta should be disbanded for the damage it caused to mankind. Age verification tainting Linux also is heavily attributable to Meta buying legislation; systemd already quickly went that path, in order to appease their corporate-gods. Private user data to be released to random actors willy-nilly style - and the constant appeasement "no, this is not what is happening". Until it suddenly is happening precisely as people predicted it to be happening. Everyone runs a meta-agenda nowadays, Meta more than most others.
Many will cheer for any case that hurts Meta without reading the details, but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption:
> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
The New York case has explicitly gone after their support of end-to-end encryption as a target: https://www.reuters.com/legal/government/meta-executive-warn...
The correct nuance here is...
* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the account that is moderated (including assigning other moderation tools of choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who allow their kids online like that, without supervision or some setting indicating that the user agent is operated by a child, intend to allow their children to interact with strangers. This tends to work out better in more controlled and limited circumstances where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
I think this is the way. Not control, but just make it simpler for parents to handle their children's devices.

You don't have to make everyone share their age; you just make it so that parents can, in a simpler way, choose what the children should be able to access.

Make it easy to do right, don't add more control.

It's kind of like the old anti-piracy copy protections. The pirates always cracked them, and in the end, the one who got to sit there trying to figure out what the word in the manual is was the user who actually paid for the game. So it made things worse for the ones who paid, and better for the cracked version.

So, make it simple.
I think getting the age thing correct is key to getting parental classification to work properly (I think platforms now just ask for a birth date, which is lame), e.g.:
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct (and AI cannot do this; it only works via community ombudsmen/age verifiers, in a privacy-first way*), the app stores can easily tell the app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass it and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but there are ways of keeping those to a minimum with existing tech.
It's very hard to control kids' internet access. Impossible, really. Even if you do it fine at home, once they go to school it's whatever policies the school has. Most require laptops and provide internet access.
> Classifying accounts as child accounts (moderated by a parent)
Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.
This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.
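The scheme the two comments above describe, where the parent (or a client company's administrator) can decrypt but the messaging service cannot, is essentially multi-recipient envelope encryption: a random content key encrypts the message, and a wrapped copy of that key is produced for each authorized reader. Below is a deliberately simplified, stdlib-only toy sketch of the idea (NOT real cryptography; a real system would wrap the content key to each reader's public key, e.g. with X25519, rather than using shared symmetric keys and a hash-based keystream):

```python
# Toy sketch of multi-recipient envelope encryption: the service stores
# the envelope but holds no key, so only listed readers can decrypt.
import secrets, hashlib

def _stream(key: bytes, n: int) -> bytes:
    # Derive n keystream bytes from a key (toy hash-counter construction).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    # XOR with a key-derived stream; applying it twice restores the input.
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

def seal(message: bytes, reader_keys: dict) -> dict:
    content_key = secrets.token_bytes(32)
    return {
        "ciphertext": _xor(message, content_key),
        # one wrapped copy of the content key per authorized reader
        "wrapped": {name: _xor(content_key, k) for name, k in reader_keys.items()},
    }

def open_(envelope: dict, name: str, reader_key: bytes) -> bytes:
    content_key = _xor(envelope["wrapped"][name], reader_key)
    return _xor(envelope["ciphertext"], content_key)

# A child's message is readable by the friend AND the parent, but the
# service, which stores `env`, has no entry in env["wrapped"].
friend_key = secrets.token_bytes(32)
parent_key = secrets.token_bytes(32)
env = seal(b"meet at the park at 4", {"friend": friend_key, "parent": parent_key})
assert open_(env, "friend", friend_key) == b"meet at the park at 4"
assert open_(env, "parent", parent_key) == b"meet at the park at 4"
```

The same shape covers the business case mentioned above: add the client company's administrator as a reader, and the messaging company still cannot decrypt anything.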
I don't agree we should treat everyone as an adult by default online. We wouldn't do that in any other circumstances.
> Classifying accounts as child accounts
It's ok to drive Dad's truck unless he catches you and tells you no.
That doesn't work, unless the system knows everyone's family relationships.
Not guesses. Not is told about and takes on trust. Knows.
There's nothing to stop a kid creating a fake adult account and using it as an adult, perhaps creating their own kid account for "official" use.
Ultimately this is an unsolvable problem without a single source of truth for verified ID and user age.
The only responsible way to do that is to create a global "ID escrow" agency, where ID details are private and aren't available to governments or corporations without a court order, but the agency can provide basic age checks and other privacy services of a limited nature.
Good luck with that idea in this culture.
Meanwhile we have the opposite - real ID is known to governments and corporations, personal habits and beliefs of all kinds can be tracked, there is zero expectation of privacy, and kids still aren't protected.
I’m actually okay with not letting under age people use e2e. I’m not okay with blocking everyone. I have 2 kids.
I'm not comfortable with the idea that children's private messages would be exposed to thousands of social media workers and government employees.
I understand the concern but then to make this available for adults you now have to provide proof of age to companies, which opens up another can of privacy worms.
I have kids. I don't want creeps and predators spying on their conversations with friends.
In a way, this is like saying that one trusts total strangers in some random large tech company and total strangers in government agencies to read and/or manipulate conversations that kids have. This also paves the way to disallow E2EE for other classes of people based on arbitrary criteria. I don’t believe this is good for society overall.
The problem is all these ‘for the children’ arguments contain collateral damage.
You just need to provide the government with your name and address and the name and address of the counter party every time you send an encrypted message.
If you don't support this you're obviously a pedo nazi terrorist.
There is no reason kids should use so called smart devices, except making certain companies richer. Kids have had a healthy development without such crap for thousands of years. We don't discuss what percentage of alcohol should be allowed in beer and wine for kids.
Centralized organizations with proprietary software can never offer meaningful end to end encryption because they can just ship an app update to disable or backdoor it at any time.
It is better for them to be forced to turn off the security theater so people that need actual privacy can research alternatives.
Well, name an example of a thing that can never change, then.
"research alternatives" meaning what exactly? You think open source is somehow not susceptible to the same issue, plus all of the malicious updates?
This is the core issue.
We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.
Why is it so important that kids have access to the internet anyway that we're willing to sacrifice both our privacy and freedom of speech rights for it when we already know it's damaging their mental health?
We don't need all this privacy invasion if we just didn't give kids a smartphone with a data plan.
Rock meet hard place?
Harm to kids is actually happening, and this is always going to be a hot button topic.
E2E is critical for our current ability to communicate online, but will be a lower priority when pitted against child safety.
Fighting the good fight is one thing, fighting for the sake of it, without a plan that addresses the tactical reality is another altogether.
Personally, I think E2E will be defended, but it’s becoming a lightning rod for attention. As if removing encryption will solve the emerging issues.
I suspect providing alternatives to champion, such as privacy-preserving ways to verify age, will force a conversation on whether E2E really needs to go.
> Many will cheer for any case that hurts Meta
Absolutely. Particularly where they've been found to be guilty.
> but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption
Why _social media_ companies are backtracking. I'm extremely nonplussed by this outcome.
> concerns that allowing teens
Yes, because that's what we all had in mind when considering the victims and perpetrators of these crimes.
The lawyers using the finding badly doesn’t mean the finding was fundamentally unsound, or that it won’t ultimately be a positive thing.
This is a good thing for “social” media. If you use any social media app (especially those owned by Meta) you should assume that absolutely everything you do is for full public consumption. Maybe these changes will make everyone stop thinking that anything is private when using “social” media apps.
It's illegal to hand a minor harmful material. Meta did exactly that. I support people's rights to make and buy sports cars, but it is illegal to hand the keys to a minor and leave them unsupervised.
If someone sends a child a dick pic by physical mail, is the post company responsible?
The Clipper chip is coming back.
How is the Clipper chip different from what online platforms claim to have: a curated kids-only section?
As a platform operator, I think end-to-end encryption does no good in free products. It just gets you blamed for liability you couldn’t have foreseen or mitigated.
> (my emphasis) Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
Whatsapp and messenger are still fine, then.
No. Meta is backtracking because the business case for end-to-end encryption is gone. They will willingly give the Trump administration whatever it wants, because they are not in the business of fighting authoritarian governments; they are in the virtue-signalling business, for when governments are constrained by the rule of law.
The business case was to be able to say “we don’t know”. That case is gone.
Only accounts that have existed for 14-plus years are eligible for E2E?
So a new service can't offer E2EE for 14 years?
Also, so an aspiring pedo who gets a job at the service can now read the messages of all the underaged kids?
However did we survive all of these years with unencrypted SMS or voice calls?!
Is it illegal or is it just illegal on general purpose platforms whose focus isn't extreme security?
We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.
> We all know Meta can still read E2EE chats
That can't be true, otherwise in what sense is it E2EE?
The first two E's in E2EE stand for end. From one end to the other. So no, Meta can't. Or put another way... if they can read those messages, then it's not E2EE.
Maybe I'm just getting old and cynical but, while I think current social media is bad for children, I'm very suspicious of the current international agreement that it's time to take action, especially with all the ID verification coming from multiple avenues.
Two things can be true, and I am in the same boat. Should the next generation have their brains fried by ad-tech corporations and their algorithms? Absolutely not. Should the overdue off-ramp from this trend be the on-ramp to mass-surveillance and government overreach? Also a firm no.
I really wish this take was more prominent. I really don't buy that mass-surveillance should be required for age verification. There are plenty of very smart people who have created much more complicated things than a digital age verification that doesn't track every time you use it.
This also isn't helpful, but I think the sudden push of urgency isn't helping. The internet has existed without any kind of age verification or safety measures for about 30 years. We could have used that time to have a sensible conversation about policy trade offs, but instead we've waited till now to decide that everything has to be rushed through with minimal consideration.
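The "much more complicated things" mentioned above do exist. One classic building block is the RSA blind signature: an issuer can attest "this person is over 18" without ever seeing the token it signs, and the site that later checks the token never contacts the issuer, so neither party can track usage. The sketch below uses a tiny textbook RSA key (p=61, q=53) purely for illustration; a real deployment would use a vetted anonymous-credential scheme, not these numbers or this code.

```python
# Textbook-sized RSA parameters, for illustration only.
N, E, D = 3233, 17, 2753  # modulus, public exponent, issuer's private exponent

def blind(token: int, r: int) -> int:
    # User hides the token with blinding factor r before showing it.
    return (token * pow(r, E, N)) % N

def issue(blinded: int) -> int:
    # Issuer signs without ever learning the real token value.
    return pow(blinded, D, N)

def unblind(blind_sig: int, r: int) -> int:
    # User strips the blinding factor, yielding an ordinary RSA signature.
    return (blind_sig * pow(r, -1, N)) % N

def verify(token: int, sig: int) -> bool:
    # Any site can check the signature offline; no callback to the issuer,
    # so the issuer cannot see where (or how often) the token is used.
    return pow(sig, E, N) == token % N
```

The point of the sketch is the information flow, not the crypto strength: age gets verified once, and no party ends up with a log of every site visit.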
A Kindernet would solve many problems. Hardware-gated access, local moderation and control, zero commerce or copyright, whatever you want to do to make the environment uninteresting to bad actors. Frame opposition to the concept as demand for access to your children.
Absolutely: I said something similar recently: https://news.ycombinator.com/item?id=46766649
Exactly. There's a clear alternative in my mind, one I'm sure is objectionable in its own way but I think is the least evil of the three: require providers to label their content and make them liable for it. This allows parents to do the censoring, which is functionally impossible now because no parent can fight the slippery power of multibillion dollar software investments designed to prevent them from having control over what their kids see.
So you're saying these corporations are responsible for verifying the age of their users without verifying the age of their users?
They’re the oil barons of our day. They frack our data and output psychological/social pollution.
That's because we should be regulating the social media industry rather than regulating social media users.
Unfortunately, social media users don't have billions of dollars to spend on lobbying and related activities around the world.
> That's because we should be regulating the social media industry rather than regulating social media users.
These lawsuits and regulations are against the industry, not the users.
The regulations and lawsuits are driving the pressure to ID check users and remove end-to-end encryption.
The ask is to treat users differently based on age. How can they do that without verifying their users age?
Meta spent $2bn lobbying for this ID verification stuff:
https://news.ycombinator.com/item?id=47361235
> I'm very suspicious of the current international agreement that it's time to take action
Especially since, when you look at the behavior of younger people, they're way more careful about social media than millennials were. My teenage child and their friends keep all of their conversations in a massive but private group chat. Any social media they consume is basically 'read only'. They don't post online; none of them have social media accounts where they post pictures of themselves, etc.
Same with all of my younger Gen Z coworkers. If they have socials, they post very selectively and all content is work-friendly.
The people I see who need "protection" are aging millennials who don't really understand how wildly they're exposing themselves and their families. I cringe at the amount of personal photos and information shared by the few millennials I know who still need their ego boost from these platforms (and that number itself is much smaller).
Younger people don't share their opinion and anything resembling private photos online any more.
I definitely would not agree with this, and the user metrics of platforms like TikTok and Instagram argue otherwise to your anecdote. Many are showing far more of an alleged window into their lives than ever before; key word being alleged, as it's always greatly curated in a way that often attempts to make everything look perfect and effortless.
There are absolutely a lot of Gen Z who avoid social media, but to pretend most are privately hunkered away ignores today's social media usage.
Given that it's happening simultaneously with the war on E2EE and general-purpose computing, their goals are as transparent as it gets. The West is at this point only a decade behind China.
Governments always want censorship and speech control. That never changes. The only difference is that now the general populace has accumulated enough disgruntlement with social media for it to be used against themselves.
No, the difference is that when governments are still constrained by the rule of law, it's cheap PR to fight the government on data-access claims; but once they are authoritarian fascists, industrialists fall over themselves to feed everything into Palantir.
I’m deeply worried by how uncritical these responses are. Meta is removing end-to-end encryption specifically because these lawsuits are trying to claim end-to-end encryption is a tool for child abuse.
The “think of the children” angle is the perfect angle to pressure companies to make communications readable by the government. And here tech audiences are welcoming it and applauding because they couldn’t read past the headline and they think anything that hurts Zuck is good.
How anyone can see this happening and not draw the connections to Discord and other services also pushing ID checks is beyond me. Believing that this will only apply to services that don’t affect you is short-sighted.
There's no agreement other than, maybe, that social media is bad for children. To get kids off of there you need to identify who's a kid and who isn't. Same with alcohol and tobacco. Obviously people shouldn't give their ID to Meta, and hopefully many will not; but for those that do, as someone who doesn't use social media, that's a small price to pay to keep kids off. Again, Meta is completely optional, it's a platform to share stupid videos, no one NEEDS to be there.
A lot of the ID verification stuff is coming FROM those companies
I’ve just been stung by iOS 26.4’s implementation of the age-gate. My only option has been to rollback with a 26.3.1 IPSW.
I unlurked and made a thread last night, but I think it might be hidden due to account age: https://news.ycombinator.com/item?id=47511919
Meta is lobbying to push age verification to the OS level.
I have read the OSINT report from Reddit. Its data is being interpreted as Meta orchestrating a global lobbying scheme.
However, the data is equally, if not more, supportive of Meta simply taking advantage of global political sentiment to position itself better.
I’ve mentioned this elsewhere, but the HN zeitgeist seems to be resistant to the idea that tech is the “bad guy” today.
I work in trust and safety, and have near front row seats to all the insanity playing out today.
Do you think Meta wouldn't want to be legally mandated to ask for your ID? The improvement to ad targeting alone would be enough to pay for any lost users. They would probably want nothing more than to be in the same business as IDEMIA and the other online identity/age-verification providers.
Think critically about this for a second before believing some ChatGPT-generated "OSINT" report on Reddit. Otherwise, you'll let corpos use your mob hatred against you.
because it is a false dilemma
Tech bros deliberately made digital crack for kids and corporations refuse to moderate online content.
There is no conspiracy; the general public is faced with a crisis and desperate for a solution.
The teen suicide statistics do not lie.
> The teen suicide statistics do not lie.
Teen suicide rates in the US are lower now than they were in the 1990s.
The general public is being told they are faced with a crisis. This has been a problem for at least a decade, yet suddenly it's at the forefront and conveniently ties into ID verification for everyone to use general purpose computing.
I'm sorry but if you don't think there's a conspiracy I have a bridge to sell you. It was already unveiled that Meta has lobbied billions towards promoting this legislative change
Really? You still think you're the one looking at it all wrong? It's exactly what you think it is. Stop giving blatant malice the benefit of the doubt, especially the doubt they've directly instilled.
> The New Mexico attorney general’s office created multiple fake Facebook and Instagram profiles posing as children as part of its investigation into Meta. Those test accounts encountered sexually suggestive content and requests to share pornographic content, the suit alleges.
> The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
and
> “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
This is what it's about right? The article doesn't make it seem like encryption is meaningfully part of this case at all.
> Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
There's no indication that that decision, or the announcement, is directly related to the trial; they just happened at the same time. It's a link drawn by CNN without presenting any clear connection.
They have been under a lot of pressure for years to disable E2E messaging because it prevents them from monitoring messages for child abuse. This was a central point of the trial. While they haven't given a reason for the change, I think it's reasonable to infer it is a response to this pressure.
However there is another possible explanation
> Tom Sulston, head of policy at Digital Rights Watch, said rather than acceding to law enforcement demands, the move was more likely due to Meta deciding against moving messaging on WhatsApp, Facebook and Instagram to a single platform.
$375 million awarded at $5,000 per child harmed implies that only 75,000 children were harmed.
Got away with it again, good profit, will repeat.
That's not how the legal framework in society works. Victims are compensated. The business pays. The precedent of wrongdoing is specifically established which means that further infringements can be quickly resolved.
The legal system does not seek to destroy the business, or individual criminal. Instead it wants them to be able to continue doing their other non-criminal stuff.
The legal system has two goals - to compensate individuals harmed and to discourage further violations of the law. This lawsuit seems to have fulfilled the first goal but fell flat on its face when it comes to punitive damages.
The function of a system is what it does.
Meta knowingly hurt children for profit. It worked.
If we are in any way serious about technocratic solutions to social problems, this would be untenable: the company would be bankrupted, and a new company would fill its place. No tears would be cried, nothing of value would be lost, and half of Hacker News would be champing at the bit to build a better alternative for the newly opened market.
But that's not what happened. We allowed children to be knowingly hurt for profit.
The system is functioning as intended.
It's very hard to think they wouldn't do something harmful to children again if the economic incentives aligned. For corporations it's just so easy to say sorry, and in the worst case they know an irrelevant fine will be placed in order not "to destroy the business".
>The legal system does not seek to destroy the business, or individual criminal.
The legal system, to this day, does in fact seek to destroy individual criminals on a regular basis.
8 Xboxes is a pretty small compensation for a sexual abuse case.
Just so I'm clear: What is Meta's non-criminal business?
They have enough lawyers that they can easily find another criminal avenue that doesn't step on the previous path.
This represents 0.6% of Meta's 2025 profits, or 0.2% of revenue. Though presumably it was based on harms from previous years; I haven't read the lawsuit.
> This represents 0.6% of meta's 2025 profits
By coincidence, New Mexico represents 0.6% of America's population.
Well hopefully now that there's precedent, it will open them up to recurring repeat-offender lawsuits and legal action. The goal is to get them to stop doing predatory things now.
That's good, but it can be read as: "Everyone can be a first time offender and get away with a slap on the wrist." -- where "everyone" is a tech company. Next they will find some other nefarious thing they don't need to check for properly, since that would be a new offense and again only get a wrist slap. There is no signal in this fine, other than "Hey it's OK, if you are big enough, you will get away with it. At least once, likely twice or more, depending on how big you are.".
$5000 is not even enough for trauma counseling, unless you have expensive insurance!
That’s half a day’s worth of revenue for Meta. Why don’t companies get fined billions?
We don't want age verification, and we do want E2E encryption. Yet, because Meta is an evil company, we cheer on this judgement.
Reality, folks: you can't have both.
Those two things are unrelated to each other. And yes, we can do without age verification and we can have E2E encryption. Age verification is causing more harm than good. It also doesn't meaningfully help with any of the problems mentioned in the article.
Well, assuming you won't also think it's okay for Meta to just be held liable anyway.
There are people who are against age verification just on principle and others who are against it because they know any realistic implementation is going to be abused.
Why can we not have both? I don't see the assertion backed up at all.
With E2EE and no age verification, there is no way Meta could have any control over messages sent to children, so it does not make sense to hold them responsible.
I think we don't want mandatory age verification or banned encryption for everything. However, you can't hide behind "it's not the law" as a shield for everything. Thanks to ubiquitous spyware, Meta knows damn well the age of almost all of its users, and if someone who's 40 is sending first-contact messages to 10 unknown 13-year-olds every day, it seems important to know what those messages say. They know this stuff is happening and they care about not being liable, not about your security.
We can assume Meta has backdoored its E2EE somehow anyway.
Can we just agree we just don’t want Meta?
Fines like this only work if they're large enough to change behavior. $375M for a company Meta's size is more of an accounting entry than a deterrent.
What is Meta’s revenue in New Mexico?
Also, “the total civil penalty of $375m was reached after the jury decided there were thousands of violations of the act, each with a maximum penalty of $5,000. Meta is also involved in a separate trial in Los Angeles, in which a young woman claims that she became addicted to platforms like Instagram and YouTube, owned by Google, as a child because of how they are intentionally designed.
There are thousands of similar lawsuits winding their way through the US courts.”
Wait, what? This case's central argument was about propagating and promoting child sexual abuse material, but the maximum penalty was set to only $5000 per violation? Why?
While true, this is just one pretty small state. There are others.
This fine from New Mexico is about 0.6% of Meta's annual profit.
If all 50 states sue at the same rate, that'll be a 30% dent, and I'm sure states can sue for more than 0.6% too. That would be historic action against malfeasance and would send a strong FAFO signal to all corporates.
Let's lobby for it.
Why stop at the 50 states? Loop in the rest of the world.
That fine is missing a few zeros on the right side
It's not a fine it's a fee
It takes 7 clicks to turn off ads that promote eating disorders. That's proof enough.
What's an example of an ad promoting an eating disorder? Are ads for eating disorders more difficult to turn off than other types of ads?
You can click "not interested" infinite times on dangerous or adult reels and they’ll just show up more and more.
Social media for children should be moderated by their parents, full stop. End-to-end encryption exists. You cannot un-invent it. It is trivial to roll your own encrypted chat service.
I haven't read this article, but I can tell you for certain that no verdict was handed down that will punish them in any way that matters. They have and generate more money than they could ever spend and they're functionally above the law because of the money and lawyers they can afford. The law itself is broken in this country and when you get big enough you can literally get away with murder.
If history is any indication, only demonstrable threat of personal erasure will affect the behavior of people on this scale.
By "erasure," I'm not referring to the death of the involved; I'm referring to the elimination of the individual's social capital.
When the privileged lose their ability to influence others, they tend to get rather distressed.
How would we do that here? Make Zuckerberg divest from FB or Meta as a whole? Would that be possible?
+1. If there's a dollar amount attached to a verdict for a company of this size, then it's just a complicated business expense and not an enforcement of a law.
they should give voting stock out as punishment.
It's a $3 million verdict in compensatory damages. Even if reduced on appeal, that's a lot of money.
This is really bad for Meta.
> It's a $3 million verdict in compensatory damages. Even if reduced on appeal, that's a lot of money.
Where are you seeing that?
The article says:
> Jurors found there were thousands of violations, each counting separately toward a penalty of $375 million. That’s less than one-fifth of what prosecutors were seeking.
> Meta is valued at about $1.5 trillion and the company’s stock was up 5% in early after-hours trading following the verdict, a signal that shareholders were shrugging off the news.
> Juror Linda Payton, 38, said the jury reached a compromise on the estimated number of teenagers affected by Meta’s platforms, while opting for the maximum penalty per violation. With a maximum $5,000 penalty for each violation, she said she thought each child was worth the maximum amount.
Meta has a net profit over $140 million _per day_. $3 million is absolutely nothing to them.
how many minutes of revenue is that?
they did $200 billion in revenue and $60 billion in net income last year.
a $3 billion fine would be barely more than a slap on the wrist.
They had to pay about $375 million. That's a lot of money, but I suspect that Facebook has made considerably more than that on targeting children.
I'm hardly the first person to use this logic, but if they make more money breaking the law than they have to pay in fines, then it's not a fine, it's a business expense.
Agree with your take. However, to put the amount in perspective, consider that this is just New Mexico, so the per-capita fine is actually quite large, and (big if) if it were applied similarly nationally or globally it could significantly impact their business, forcing some change.
Proportionally, it's as if an individual who makes $60K/yr got a speeding fine of $375. Kind of a drop in the bucket.
Especially if they were making $4,000 from street racing.
This particular verdict is a long time coming. How it drives meaningful change is the bigger question.
One of the challenges we need to resolve is the race to the bottom for online communities: engagement metrics will always result in a pH level that supports more acerbic behavior.
There are multiple analyses you can find, if not your own experience, supporting the belief that we should be able to do better with our information commons.
Just today, I found a paper that studied a corpus of Twitter discussions and found that bad-faith interactions constituted 68.3% of all replies.
The engineer and analyst side of us will always question these types of analyses.
I’ve read enough papers at this point for the methods to matter more than the conclusion.
1) Meta and the other tech platforms need to open up their research and data. NDAs and business incentives prevent us from having the boring technical conversations.
2) tech needs someone else to be the bogeyman - the way we did for tobacco. The profit incentive ensures profitable predatory features pass review. Expecting firms to ignore quarterly shareholder reviews for warm fuzzies is … setting ourselves up for failure.
Regulators (with teeth) need to be propped up so that the right amount of predictable friction (liability) is introduced.
3) tech firms need an opportunity or forum to come clean. The sheer gap between the practical reality of something like content moderation vs the ignorance of users and regulators - results in surprise and outrage when people find out how the sausage is made.
4) Algorithm defaults decide the median experience for participants in our shared marketplace of ideas. The defaults need to be set in a manner that works for humans and society (whatever that might be).
Economies are systems to align incentives to achieve subjective goals.
So... end to end message encryption means meta can't see messages child molesters are sending to children.
As it should. If they can read those messages they can read anyone's messages.
When you see traffic between a 40-year-old man and a 12-year-old girl who don't have any common social connections, and the messages are initiated by the man, you don't have to crack E2E to suspect dick pics.
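A sketch of the metadata-only check this comment describes: the field names and thresholds below are invented for illustration, and no message content is ever inspected, so the check is compatible with E2EE.

```python
def flag_risky_contact(sender_age: int, recipient_age: int,
                       mutual_connections: int,
                       sender_initiated: bool) -> bool:
    """Hypothetical heuristic using only metadata the platform already has.

    Flags an adult initiating contact with a young minor when the two
    share no social graph context. Real systems would score many more
    signals; this only illustrates that message content is not needed.
    """
    adult_to_minor = sender_age >= 18 and recipient_age < 16
    no_shared_context = mutual_connections == 0
    return adult_to_minor and no_shared_context and sender_initiated
```

The design choice worth noting is that the output is a signal for human review, not an automatic verdict, since metadata heuristics like this produce false positives (family members, teachers, and so on).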
So you want the platform to be creepier and investigate connections more intensely? And you want to intercede on an arbitrary method you just made up, without examining all traffic first?
I seem to recall someone taking pictures of their baby, naked, because it was sick, and emailing them to the doctor -- and having their Apple account terminated. Terminated, with the father being labeled a pedophile, and the police contacted (all automatically).
Everyone was quite upset. Everyone felt it was too intrusive.
Frankly, communication platforms have no business trying to police anything at all. I wouldn't want the phone company recording all my conversations, hunting for trigger words, and then contacting the police or cutting off my phone if I said a "bad word".
Yet somehow it's OK to have this level of intrusion because.. um "computers".
The state has no business listening in on private citizen's communication.
Corporations have no business doing so.
To protect the 12-year-old girl, something called "her parents" needs to pay attention and watch what she does. That's their job. They're her guardians.
Some random corporation has no business in that. Some random corporation has no business being an 'algorithmic parent', an automated machine with no appeal.
Here's something I'd support -- a way for parents to prevent children from registering for accounts, and, to be able to examine children's accounts.
But... then we get into ID verification. Of course, surely you support ID verification for platforms, because if you support platforms knowing the age of users (40 and 12, as you listed), then you must support a way to verify those ages.
> which don't have any common social connections
How would you actually know this? Facebook is a surveillance company, but they are not omniscient.
The fine is just one of the costs of doing business for these megacorps.
It's priced in.
Do we have to wait for any appeals before the performative mail out settlement checks for $1 routine?
Or the settlement goes to the state and no one ever sees a dollar.
$375M isn't even a slap on the wrist for a company that raked in $60B last year.
I cheer any decision that holds a private web property (like Facebook) accountable for its users' actions.
It helps reduce the hegemony of large social platforms and promotes privately owned websites. For example, I know everyone who has permission to post on my website (and I pre-moderate strangers' comments), and I am ready to take responsibility for what my website publishes.
Currently the legal stance seems strange to me: large media platforms are allowed to store, distribute, rank and sell strangers' data, while at the same time claiming they are not responsible for it.
If you haven't already, you should look at the court case that prompted the creation of the current legal framework of Section 230. Prodigy was sued because of the things being said in public chatrooms. Should the host for an IRC server be responsible for everything said on the IRC server? Should they pre-moderate all the messages being said there? Should dang premoderate every post on this site?
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
The reality is that people who cheer for this stuff are going to be unreasonably shocked when it comes to bite them later. Once the government's done going after the big guys, the little guys are next, and unlike the big guys, they can't absorb a few fines and judgments.
Meta's own research (and its use of it) has shown that it repeatedly ignores well-substantiated facts about the harms of its products. Now that Section 230 seems like a flawed shield, I fear the takeaway for other companies will be: never conduct honest research in the first place to preserve plausible deniability.
Meta has always wanted the appearance of caring about safety (it helps them attract talent and keep mission-related morale high), while nearly always prioritizing growth (save for tiny blips of time, like in 2017 when the fallout of the Cambridge Analytica stuff was hitting a crescendo), whereas companies like X are run by people explicitly disinterested in putting significant resources into safety, especially research.
I will also add that, for the past few years, Meta and X both have become extremely hostile to external researchers of their platforms, shutting down access to tools and data.
Wasn't Zuckerberg caught red-handed in emails signing off on this? When is he going to face consequences?
Corporate liability isolation has become absurd. People who make decisions that harm people should be held to account for those decisions even if they structured their decision making apparatus in a legal way that makes it look like they're just following the orders of the shareholders.
Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.
Consequences are for poor people.
If Meta did advertise the "safety of its platforms for young users" then they should be held accountable for that. It seems clear from the whistleblowers that Meta had internal data that they knew they were not safe for young users, but Zuck gotta get those ads($$$) in front of young kids.
Yup, this is the real issue.
You can't realistically make a space that's free from predators. The real answer is teaching children to recognize unacceptable behavior. But most abuse is from inside--typically adults that the parents put in a position of trust or quasi-trust.
I do not fault Meta for there being predators, I fault Meta for pretending they're being kept out.
Modern cigarette companies
Can one be opposed to age verification in the OS and yet totally happy that Meta got this fine? There is a very big difference between e2e encryption /telephone and social media. Social media is more akin to a phone book. I do not recall there ever being any phone books listing minors. That's completely unacceptable and unnecessary. I am totally OK with phonebooks (or their modern digital equivalents which enable people discovery and user generated content discovery) to abide by the same KYC rules as banks. And be only for adults. Your kids using e2e encrypted messaging to communicate with their friends whom they have met in person? Nothing wrong with that, we all have the right to privacy. Kids listing their contact information publicly? Absolute no.
Why can’t penalties be tied to a percentage of Revenue?
You think if a mom and pop shop did the same they'd be charged the same?
GDPR.
As part of the ongoing enshittification of the internet, tragedy of the commons etc., these big centralized internet platforms decided that instead of being responsible and making their products *slightly* less terrible it was better to maximize short term engagement metrics, and that, egotistically, the chance of there being real consequences for their actions was near zero. (Or, even more cynically, that their yearly performance review was more important).
Now I'm afraid they've screwed everyone over and the idea of an anonymous open internet is now dead- we're gonna see age (read, real ID) verification gating on every site and app soon....
The dumb thing is to look back and see how unimportant it was for the Facebook feed algorithm to be this addictive. They already had the network effects and no real competitors. They could have just left it alone.
What's horribly frustrating with the age ID stuff is that the issue at question with Meta wasn't that they didn't know what they were doing and that they were doing it to children. They did. This wasn't an issue of "If only they had the age, then they could have done the right thing".
The laws being passed target exactly the wrong thing that wasn't a problem. They should have been passing "duty to care" laws aimed at social media companies not "give me your age" laws.
I may have missed it, but almost all these laws being passed for this issue have been pretty much solely around data collection rather than modifying the behavior of the worst businesses in the game.
It would be like seeing a car wreck kill a bunch of pedestrians and then passing a law that pedestrians need to carry IDs on them.
Yea, in the end there will basically be no consequences for Meta- Facebook is already mostly dead, and the ad revenue from that time has already been collected.
Now we're just moving on to a kind of moral panic think-of-the-kids kind of moment that is thinly-veiled state surveillance.
Watching Mark testify before the senate it honestly appears like it may have never occurred to him that it is an option to have not offered a feature. He treats the product as if it is some kind of inevitable outcome that was destined to exist.
It's not just avoiding any responsibility?
Management comp is tied to numbers go up
You start slow, then push it to the limits.
Netflix: never ads, to some ads, then eventually it's just Adflix, after 20 years.
Each new manager wants that comp up. So ads go up by 5% every year.
Mass surveillance 'for your own good' instead of regulating social media in any way.
You can purchase a scam ad and it'll be up in 10 minutes. Lie to every anxious child that they have ADHD and need meth; lie to every dejected boy that they just need to manosphere up and buy supplements.
They think the public is stupid. They might be right.
>They already had the network effects and no real competitors.
Meta's biggest competitor was users' personal lives, not any other web service. They have been ruthless in crushing that competition.
The leaders of these companies don't let their kids use it.
I doubt that Zuckerberg really uses either Facebook or Instagram all that much. Maybe as a curated PR channel sure, but he's not doom scrolling Instagram at bedtime.
If you know what the platform is capable of, if you've seen how the sausage is made, you're probably not using it.
People are also a little naive in not seeing that these platforms aren't just bad for children, they are bad for adults as well. I'm not opposed to not "selling" them to children, but we also need to label them correctly for adults and have rules like those for alcohol, tobacco and gambling, so no or limited advertising. Scrub the public spaces of Facebook logos.
I'm not sure if it's naiveté; it's probably more that we are all complacent. If all Facebook/Instagram users (or perhaps even only those with children) stopped using them, that would be an actual stick, wouldn't it? But we don't (I'm not excluding myself).
1 reply →
Discussions from proper experts about the absolute toxicity of social networks as implemented are at least... 15 years old at this point? At least that, and I am not talking about a rare article here and there but an onslaught of articles in popular media from all sides. But parents... mostly didn't give a fuck.
Let's admit it: in the same vein that Trump is a symptom of current US society, the approach and effects of the social networks we allow are a result of how lazy and thus addicted people got. On top of that, with many of the parents doing exactly the same, don't expect miracles.
One thing that I don't understand - even here, some folks call that sociopathic amoral piece of shit 'zuck' and treat his empire like some sort of semi-charity. When I attacked the Facebook company in the past, there was always a lot of defense (look at this open sourced stuff, look at that... which I presume came from either direct employees or clueless stockholders). People are people, deeply flawed and often weak, without willingness to admit it to themselves.
and also https://news.ycombinator.com/item?id=47514916 It might be good to roll all the comments together.
two separate cases.
Both articles cite a New Mexico case about the Unfair Practices act.
Though I don't see a link to a specific case in either article, I don't think they're separate cases.
1 reply →
Cost of doing business...
This. Meta made $60B in net income in 2025.
Proportionally, it's as if an individual who makes $60K a year gets a speeding fine of $375. It might be moderately annoying, but it's not really going to be remembered in a month.
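The proportional comparison above can be sanity-checked with a few lines (figures taken from the comments, rounded; this is only an illustration of the ratio, not an official accounting):

```python
# Proportional-fine sanity check using the rough figures in the thread.
fine = 375_000_000           # judgment against Meta, USD
net_income = 60_000_000_000  # Meta's reported net income, USD (rounded)
ratio = fine / net_income    # fraction of a year's net income

individual_income = 60_000   # hypothetical individual's yearly income, USD
equivalent_fine = ratio * individual_income

print(f"{ratio:.4%}")             # 0.6250%
print(f"${equivalent_fine:.2f}")  # $375.00
```

So the "speeding ticket" analogy holds exactly: the fine is about 0.6% of a year's net income in both cases.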
Has anyone in leadership at Meta faced even the prospect of jail time for what they've done over all these years?
1 reply →
If you can make 60B and occasionally pay a few hundred million in fines, the math kind of answers itself
"We went a little over the line to figure out where the line is, so, we can now guarantee you, dear shareholder, that we're extracting the absolute maximum possible value! Isn't that splendid!"
More like “we found a company doing business in the EU who has deep pockets. I bet we can get 500 mil from them and they won’t leave.”
1 reply →
Regulate and fine social media and adtech companies until it's no longer economically feasible to generate the massive profits and stock valuations that are prompting this garbage.
Just have to read the quarterly conference calls between Zuck and Wall Street. Both groups are in total denial. And will be till we never hear from Zuck ever again.
Just break them all up via antitrust enforcement. It's increasingly becoming clear that society will degenerate into cyberpunk technofeudalism otherwise.
So... Question. Seeing as Zuck is the majority voting shareholder and highest ranked executive, why isn't there a piercing of the corporate veil going on? This isn't some distributed blame case. Ultimately, his decision making led to what the jury finds objectionable. I find it absurd that somehow, the corporate veil is able to absorb even this? Somebody accepted the risk. That somebody is at the top of the pyramid. Want to send a message? Get 'em.
They earn this in around 16 hours.
This is a good flag that you should be rolling your own safety checks. It's not hard, here's a writeup of an ancillary problem/solution: https://mixpeek.com/blog/ip-safety-pre-publication-clearance
I really would love to be inside the minds of Meta spokespeople who have to craft messages that completely hide the truth, sound convincing, and then live with it, to understand how they do it without blowing up. I think that's also quite damaging to someone's mental health.
I don't know who they have to pay it to, but that's only for New Mexico, which has about two million people, so it works out to about $187.50 per person.
That's pretty cheap when it comes to deception.
The eyes of Texas should be upon this: Texas is 15X the size and should not settle for less than $1000 per person, as deceptive trade practice is treated much more seriously there than in other places.
Now that would set a $30 billion example which may not be enough of a deterrent either.
But there are probably plenty of people for whom a $5000 one-time payment might not come close to being fair compensation for what's already happened, especially with Meta allowed to continue as a going concern; that's got to be psychologically harmful.
To really fix it, each state would have to follow "suit" while greatly upping the ante, so there are at least hundreds of billions at stake.
Meta can afford it, and who else is responsible for so much widespread sneaky deception at this scale for so long?
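The per-person arithmetic in the comment above checks out; a quick sketch using the same rough population figures (illustrative only):

```python
# Per-person math from the thread (population figures are rough).
nm_fine = 375_000_000
nm_population = 2_000_000
print(nm_fine / nm_population)  # 187.5 dollars per New Mexican

tx_population = 15 * nm_population  # "15X the size" per the comment
tx_fine = 1000 * tx_population      # $1000 per person
print(tx_fine)                      # 30000000000, i.e. the $30B example
```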
>Now that would set a $30 billion example which may not be enough of a deterrent either.
Mark's personally worth more than 10x that, Facebook's got a 1.7 trillion market cap, so it really wouldn't move the needle for them. Cost of doing business and whatnot.
0.6% of last year's profits.
> 0.6% of last year's profits
New Mexico is 0.6% of the U.S. population [1].
[1] https://en.wikipedia.org/wiki/New_Mexico 2.13mm
[2] https://www.census.gov/popclock/ 342mm
https://lite.cnn.com/2026/03/24/tech/meta-new-mexico-trial-j...
The same company intentionally driving minors towards this content (despite claiming to care about them) is also lobbying in secrecy for requiring all of us to scan our ID and face in order to use our phones and computers.
Their stated reason? Child safety.
Their actual reason? You can figure that out.
The actual reason: child safety regulations
They don't care about child safety as long as it doesn't become so bad as to impact their revenue negatively. But they see that governments all over the world push for some kinds of age restrictions, and they know they are a prime target and it is hard for them to push back against that.
The reason they are (not so secretly) lobbying for requiring us to ID ourselves at the device level is that they don't want to be the gatekeepers. They want to make creating an account as effortless as possible, and having to prove your age is a barrier that may turn off some people, including adults, who may instead turn to services that don't require age verification. By moving the age verification into the OS, not only does the responsibility shift to the OS or hardware vendor, but it also removes the disadvantage they have against services that don't require age verification.
For a similar issue, PornHub is currently blocked in France, because they don't want to comply with the law related to age verification. Here is their argument: https://www.aylo.com/newsroom/aylo-suspends-access-to-pornhu...
If you read between the lines, you will see that they have the same stance: "put age verification at the OS level, so that people don't discriminate against us". They know they are not in a position to argue against "child safety" laws, so instead, they lobby for making it worse for everyone instead of just themselves.
The other "real reason" is the solution will end up looking like a super cookie and enable machine-level tracking across every app.
Meta is like one giant cancer that grew a few small tumors of benign[1] nature, like some of their efforts in open source and open research (React, Llama, etc.).
[1]: I could be wrong thinking those are benign.
>Meta is like one giant cancer
Cancer is a great metaphor because it's a perversion of natural, healthy processes. So-called social media is nearly that, but actually grotesquely unhealthy.
People are dramatically unwell when they are not social, but that process, left unregulated, can also be negative, up to and including being lethal.
19 replies →
Facebook was the Eternal September of the Web. Netiquette died when it was made generally available, as did the culture that spawned it.
13 replies →
Everything consumer facing from meta is like a toxic waste hazard. It makes me sad seeing people stuck on those platforms.
1 reply →
I think Zstandard would be the most benign example.
1 reply →
A few weeks after they expanded access beyond .edu domains, I deleted my account. Haven't looked back since. Not an ounce of regret.
1 reply →
React benign? That’s the first time I’ve seen this suggestion on HN. Usually it’s held responsible for great crimes and wrongs.
2 replies →
Actually, Meta is spending millions to push the age verification requirement off to the app store providers, such as Google and Apple. It's an attempt to shield Meta from liability and transfer it to the app providers.
Having clear laws about what's allowed and what isn't is a lot cheaper than getting repeatedly sued for hundreds of millions for not doing things there was never a clear legal requirement to do.
They are winning.
In the UK, you cannot use App Store and iPhone (your own phone) without verifying your identity:
https://x.com/WindsorDebs/status/2036727466597712008
3 replies →
>to push the age verification requirement off to the app store providers,
And it makes more sense: Apple and Google have your credit card. Or, if you are a parent who bought some phone for your child, then at first boot it should be your job as a parent to set up a child account.
4 replies →
Of course it's for the protection of the children!
Why else would they want to sneakily add facial recognition to smart glasses?! /s https://www.businessinsider.com/meta-ray-ban-smart-glasses-f...
My guess: to discriminate whether traffic is from a human or bot to improve ad delivery metrics.
Most sites are not going to implement this themselves. I think they're in prime position to become a key broker of identity in the same way that a lot of people already log in with their meta or google account to unrelated websites. They become very entrenched and get a ton of data that way.
As more and more people essentially lock themselves in with these identity brokers, though, I imagine it has a very stifling effect on speech. Imagine getting banned from those.
Aren't they incentivized to treat bot impressions as real?
2 replies →
> Their actual reason? You can figure that out.
This is unfalsifiable. Just say what you think it is explicitly.
Isn't this a conversation, not the publishing of scientific hypotheses, theories and findings?
If so, it is customarily permissible to use rhetoric and sarcasm to more strongly emphasize a point. Or, to leave the conclusion as an exercise for the reader.
8 replies →
Why defend Zuck??
1 reply →
I mean, their telemetry crap is in a lot of apps too. I remember someone DMing me something very niche on Discord, and by chance I opened up Facebook and it gave me ads for that very, very niche thing, something I had never even looked up on Google or Facebook. It was IMMEDIATE. I opened up Facebook by chance, and voila.
The other one was the time I was speaking to my brother-in-law, who had just paved his driveway. He said "I could have used airport grade tar, but thought it was too much" -- we were in front of his Nest security cam, which is the only thing I can think of -- but the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this, I only heard it from them.
There's some serious shenanigans going on with ad companies, and we just seem to handwave it around.
Coincidentally, I remember both experiences very very vividly, because this was the last time I used either platform in any meaningful capacity.
> The other one was the time I was speaking to my brother-in-law, who had just paved his driveway. He said "I could have used airport grade tar, but thought it was too much" -- we were in front of his Nest security cam, which is the only thing I can think of -- but the very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport grade tar. Why? I didn't google this, I only heard it from them.
Option A: The Nest camera not only listened to the conversation and picked out "Airport Grade Tar" and decided it needed to show adverts about it to people, but the camera also identified you to the point it could isolate your FB account in order to serve you those adverts.
(I'm making some assumptions but...)
Option B: Your brother had done various searches for airport grade tar from his home (in order to know how expensive it was). You, whilst visiting his home, were on his WiFi and therefore shared the same external IP address. Your phone did enough activity whilst at his house (the FB app checked in to their servers in the background, or you used Messenger, etc.) to get "thinking of buying airport grade tar" associated with his external IP address, and from there with your FB account that was temporarily on that IP.
I had a friend who was convinced that some device in his house was listening in on his conversations with his wife as he kept on getting adverts for things they'd been talking about buying the day before but he hadn't searched for. (But she was searching for it from their home wifi, which is why it appeared in his adverts afterwards.)
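The Option B mechanism -- interest signals observed from a shared external IP getting associated with every account recently seen on that IP -- can be sketched as a toy model. This is purely hypothetical illustration (made-up names and structure, not Meta's actual system):

```python
# Toy sketch of IP-colocation ad attribution (hypothetical, illustrative only).
from collections import defaultdict

interests_by_ip = defaultdict(set)  # external IP -> observed interest topics
accounts_by_ip = defaultdict(set)   # external IP -> accounts seen on that IP

def record_search(ip, topic):
    """A search from some household gets logged against its external IP."""
    interests_by_ip[ip].add(topic)

def record_app_checkin(ip, account):
    """Any background app check-in ties an account to the same IP."""
    accounts_by_ip[ip].add(account)

def candidate_ads(account):
    """Every interest seen on an IP this account shared becomes a candidate."""
    return {topic
            for ip, accounts in accounts_by_ip.items()
            if account in accounts
            for topic in interests_by_ip[ip]}

record_search("203.0.113.7", "airport grade tar")  # brother-in-law's search
record_app_checkin("203.0.113.7", "visitor")       # guest's phone on his WiFi
print(candidate_ads("visitor"))                    # {'airport grade tar'}
```

The point of the sketch: no microphone is needed for the anecdote to happen; a shared IP plus one background check-in is enough to leak the interest to the visitor's account.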
2 replies →
No surprise there, Discord sells user data to Meta and X.
Easy: regulation always favors incumbents.
Only as long as corps are allowed to lobby or introduce financial incentives into policy making
1 reply →
I can't figure it out so please enlighten me.
Basically these age attestation/verification laws are being pushed as a "save the children!" scenario. But if you read the laws - all they really do is shift responsibility around.
Currently, websites and apps are supposed to ensure they don't have kids under 13, or if they do - that they have the parents permission. That's federal law in the US.
These laws make the operating system or app store (depends on the particular law) responsible for being the age gate.
This doesn't stop the federal law from being enforced or anything, but the idea is apps/websites don't handle it directly, that's handled by the operating system or app store.
So now - companies like Meta can throw up their hands and say "hey, the operating system told us they were of age, not our fault." It also makes some things murkier. Now if Meta gets sued, can they bring Google/Apple/Microsoft in as some kind of co-defendant?
I think that murkiness is the point. They don't need to create the most bullet-proof set of regulations that 100% absolves them of all responsibility, they just need to create enough to save some money next time they get sued.
I can think of a ton of regulations we could create to better help protect kids. We could mandate that mobile phones, upon first setup, tell the user about parental controls that are available on the device and ask if they'd like to be enabled. Establish a baseline set of parental controls that need to be implemented and available by phone manufacturers, like an approval process that you need to go through to hit store shelves.
We could create educational programs. Remember being in school and having anti-drug shit come through the school? It could be like that but about social media (and also not like that because it wouldn't just be "social media is bad," hopefully).
Again all these laws do is take what should be Meta's burden, and make it everybody else's burden.
1 reply →
Just remember that these capacities will never be used to exonerate - only crucify.
> is also lobbying in secrecy for requiring all of us to scan our ID and face in order to use our phones and computers.
You’re conflating different things. The OS-level age setting proposals are not the same as scanning IDs and faces.
I’m anti age check legislation, too, but the misinformation is getting so bad that it’s starting to weaken the counter-arguments.
> Their stated reason? Child safety.
> Their actual reason? You can figure that out.
We’re commenting under an article about one $375M lawsuit over child safety, with many more on the way. They are obviously being pressured on child safety by overzealous prosecutors. That is why they reversed course and removed end-to-end encryption from Instagram: it was brought up as a threat to child safety.
Also your “you can figure that out” implication doesn’t even make sense. The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content. I’m not agreeing with the proposal, but it’s easy to see that it would be more privacy-preserving than having to submit your ID to Meta.
> The proposal to move age verification to the OS level would give Meta less information about the user, because the OS, not Meta apps, would be responsible for gating age content.
I find it hard to believe that meta doesn't already have a pretty good age estimate for 95%+ of their users.
What offloading the responsibility to the app stores (or OS vendors) gives Meta is exactly that, offloading responsibility. In a future lawsuit, they can say that someone else provided them with incorrect information.
[dead]
I get the frustration, but I think it's worth separating two things: failing at moderation vs pushing for stricter identity controls
It is most likely not them, but they proxy for the US. Under another administration they would use an NGO to advance the agenda. The goal is to face-scan the world.
[dead]
To be fair, they're just an evil corporation making lemonade out of lemons. I'm sure they'd be happier pushing porn and nazism to hundreds of millions of underage users, but if certain governments want them to write all that bunk code to verify everyone's ID, they might as well make money off the data.
They're a lot more likely to push socialism than nazism. Hence all the socialism and the lack of nazism.
Oh no those pesky Europeans extorting money from US tech companies. No, wait..
We used to believe in freedom of speech and freedom of association.
Since the dawn of the Internet era, we've had a legal principle that platforms are relatively shielded from liability for what their users do.
It's the Internet. There's sexual content and sketchy characters on it. Occasionally people will encounter them -- even if they're under 18.
Anyone who grew up in the mid-1990s or later, think back to your own Internet usage when you were under 18. You probably found something NSFW or NSFL, dealt with it, and came out basically OK after applying your common sense. Maybe it was shocking and mildly traumatizing -- but having negative experience is how we grow. Part of growing up is honing one's sense of "that link is staying blue" or "I'm not comfortable with this, it's time to GTFO". And it seems a lot safer if you encounter the sketchy side of humanity from the other side of a screen. Think about how a young person's exposure to the underbelly of humanity might have gone in pre-Internet times: Get invited to a party, find out it's in the bad part of town and there are a bunch of sketchy people there -- well, you're exposed to all kinds of physical risks. You can't leave the party as easily as you can put your phone down.
I stopped logging onto Facebook regularly around 2009; I only log in a couple times a year. I hate what Facebook has become in the past decade and a half.
But giving a site with millions of users a multi-hundred-million-dollar fine because some of those users behave badly seems...asinine.
If your kid is old enough and responsible enough to be given unsupervised Internet access, you'd better teach them how to deal with the skeevy stuff they might encounter.
That’s not really true. Pre-internet we had relatively much stricter content controls. Fairness doctrine springs to mind, plus significant regulation of the movie industry.
Letting companies sell addiction has pretty significant negative externalities. That’s why we regulate gambling and drugs. Facebook sells addiction, so it makes sense to regulate it like we do drugs and gambling.
I think the difference is scale and targeting
>we've had a legal principle that platforms are relatively shielded from liability for what their users do.
...when they've made a good faith effort to address harms.
Most Facebook users are basically teenagers, so it's no wonder it took them this long to add any real restrictions...or maybe they just wanted us to think they cared.
"We remain confident in our record of protecting teens online," said the company that clearly was not punished enough to hurt their confidence.
That’s good! We need to protect our children.
But who gets the $375 million? Does anyone know the cut the law firm will get from this incredible amount of money?
$375M - That’s it?!
It should be a couple of billion, or 15% of the profit.
What is so hard about teaching children not to e-message with strangers, just like not snail-mailing with strangers? Also, the parents should be able to join the conversation just like in the analogue world. Call me backward, but I don't want to outsource parenting either to the government or to remote businesses.
In the analogue world, shops and pubs are responsible for not giving kids alcohol, porn, gambling access and whatever else. In the analogue world, parents are not expected to do perfect surveillance every minute of their kids' lives.
Also, parents do in fact have full control of snail mail.
Are the post office and postmen responsible for policing kids' snail mail?
Where does this "perfect surveillance" idea come from? I teach my children how to make acquaintances: first in a more direct, more supervised way, later letting them be more and more self-driving, like anything else in parenting, e.g. riding a bicycle. But I guess urbanization diminished that skill as well. There's no need for "perfect surveillance"; no parent wants it. It's not only easier to pass on basic principles, it also makes supervision gradually less necessary over time.
> parents have in fact full control of snail mail
What? Children using e-messaging can just as well do snail mail completely on their own (of course they don't, but it's not about going back to the analogue world, it's about forming the digital world on the same principles). I can imagine that in a highly urbanized environment, where children are forbidden to go outside and are locked down with the family, making them even more isolated, and are entrusted "to the phone" to cope with daily frustration, phone usage and e-messaging may easily end up completely unattended and undisclosed, while posting an envelope is beyond their expertise. But parents' ability to be in control of e-messaging is the same as with snail mail.
Happy to see it, but if a fine is the only consequence then they’re going to go back to doing the exact same thing tomorrow.
Age verification isn’t misleading is it?
Still just a drop in the bucket compared to their quarterly profits. When will regulators get wise?
"told to pay"? As in, they're not even fined? What a horrible choice of headline.
People trust third parties too much to manage the security of their communication
Why do we call this company "Meta"? It's the same old "Facebook".
The name of the company is literally Meta. That’s why people call it that…
Given that they just shuttered their "metaverse", I'm guessing we won't have to for much longer...
Tststs.. it's only allowed to harm adults and the environment for profit.
Don't forget democracy
1. This fine is 1/100th the size it should be. Make them pay, and break up Meta/Facebook.
2. The age verification pushes coming from several different actors across gov't and the private sector are worrying. I trust no actor here, and neither should you.
3. Zuck should be in jail.
This is one of the first times the court found the platform itself can be liable, overruling frequent industry claims that they just host content and are never responsible for the content. $375 million sounds big but is peanuts compared to their annual revenue. And of course Meta will appeal and then try to drag everything out for years and years. Expect copycat lawsuits.
These platforms expose minors to predators and bad actors, and Meta was proven to have lied about safety.
The state has a solution - force age verification for everyone on the platforms.
> The state will ask Biedscheid to direct Meta to make changes to its platforms, including adding effective age verification
After you’ve been complicit in genocide, lesser charges are just not that shocking.
They immunised us.
I wonder if this stands, and if it will lead to more suits against Meta.
Seems insufficient to keep Social Security solvent after 2040.
Are the kids alright?
Meta can do more and should do more. I think that's the short of it. The company made $59 billion last year. It's completely reasonable to expect them to expend effort and budget on reducing their harm to children.
Another poster child for Meta's lobbying (bribery) to encourage OS level age verification. (numerous recent references in HN posts)
They very much want to push this liability off onto someone else...
As far as end-to-end encryption, on SM sites (social media or SadoMasochism, however you want to read it) I don't really see the need.
> As far as end-to-end encryption, on SM sites (social media or SadoMasochism, however you want to read it) I don't really see the need.
You don't see any benefit to allowing people to encrypt their private communications in a way that can't be accessed by the company?
It's weird to see tech news commenters swing from being pro-privacy to anti-privacy when the topic of social media sites comes up.
Meta has a way to read your E2EE messages. I don't know what it is, but if they didn't have one, they wouldn't offer E2EE at all.
There's a difference between E2EE between friends who want to remain secure, and E2EE between strangers in an attempt for the platform to avoid legal liability for spam.
> Another poster child for Meta's lobbying (bribery) to encourage OS level age verification. (numerous recent references in HN posts)
The references I saw showed Meta had lobbied for some of the laws that require age verification be done by the site or by third party ID services. They did not show that Meta lobbied for any of the OS bills.
Some showed that Meta had lobbied in some of the states with those bills, but they just showed Meta's total lobbying budget for those states.
You were downvoted, but right. Meta wants to be able to say, "hey, the OS said she was 18!" and not get in trouble for it.
Online child exploitation should be a strict liability offense.
How does this apply to, say, Signal?
Make the fine scale, and fit the severity of the issue. This should be $375 Billion not $375 Million. These are our future generations they're destroying.
Right, and the same goes for private citizens. A flat fine is a month's income for one person and a minute's income for another; it isn't "fair" just because it's evenly applied.
As usual the company is going to financially shield those responsible, while they in turn shield the company from societal blame.
This is less than 4 days of profit.
Lots of negative meta sentiment the past few months. Feeling a bit like 2021 and wondering if it’s time to buy?
Who is getting paid the $375m?
The state of New Mexico presumably as they brought the suit.
...so, not only the EU does this kind of thing.
Calculated risk cost by them
If I were to put my tinfoil hat on, one could see a world where Facebook let this happen in the first place in order to build a case for weakening security in communications.
I don't like Meta in any sense of the word, and I think they've significantly degraded humanity and society as a whole for generations to come. But I hope my conspiratorial mind is just overreacting.
Another item on the subject of this verdict that, at present, has more points is
https://news.ycombinator.com/item?id=47519625
One is a story by a journalist at CNN, the other is a story by a journalist at the LA Times
Multiple articles on the same topic can sometimes offer different facts and opinions, different perspectives
Also various sources (websites) may present articles differently. Everything from fonts, colors, formatting, etc. to online ads and tracking to access restrictions (enable Javascript, CAPTCHAs, etc.) can vary across websites
Does this mean Apple, Nintendo, and Disney are at risk too?
I would love to see some justice.
can someone explain how the fine size is calculated?
Head of a chicken is cut off over a giant dart board, and wherever its headless body lands determines the fine
That penalty is about a couple orders of magnitude too small
Until the fines are large enough to impact business and cause heads to roll, and maybe we even see some prison time for executives, companies will continue to not give a fuck. This is chump change for Meta.
As much as everyone hates Meta for selling people's personal data, this is absolutely ridiculous. The hysteria over forcing companies to do parents' jobs doesn't make any sense whatsoever.
By this logic tobacco companies did nothing wrong when they pretended like smoking didn’t cause cancer for decades. Misleading users is harm.
Requiring ID to browse the internet is doing the parents' job of managing what their kids are doing online.
Stopping misleading advertisements and harms to mental health while claiming to protect children is not on the parents. The parents were given false information and led to believe their kids would be safe.
I've never seen Meta advertise itself as a kindergarten or a playground for kids. It has always been perceived as a public square or forum. It's wild to leave your child alone in a public place and expect safety.
Oh please! It’s not about parenting, it’s a cancer on society and now affecting the youngest and also the seniors.
What is so fucked up about this is that THEIR WHOLE RAISON D'ÊTRE is knowing more about you than you do.
You think they need this to know your age? Your gender? Your home, your birthplace, your political stance?
What about X?
and who gets that money ^^
the fine is 0.6% of last year's profit. the lobbying budget probably costs more.
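That figure is easy to sanity-check. A back-of-the-envelope sketch, assuming the roughly $59 billion annual profit cited elsewhere in this thread:

```python
# Back-of-the-envelope: the fine as a share of annual profit.
# The ~$59B profit figure is an assumption taken from another comment.
fine = 375e6    # $375 million
profit = 59e9   # ~$59 billion

ratio = fine / profit
print(f"{ratio:.2%}")  # roughly 0.64% of annual profit
```

So "0.6%" is about right; on those numbers the fine works out to roughly 0.64% of a single year's profit.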
Now sue them for lobbying against GNU/Linux with CSA, their front lobby.
Name and shame the managers and leadership at this time. I dream of a world where they'd be recognized and shamed in the streets for all the damage they've done to society. Instead they get to do all kinds of side quests with their money.
I'd much rather they get personally fined and/or banned from holding leadership positions in the field (with varying timeframes depending on the level of responsibility).
Naming and shaming won't do much good. It could backfire and serve as a positive mark on their resume for other morally corrupt leaders.
Short prison sentences would be a better deterrent for white collar crime than fines.
meh. hit the C suite and the board with life-altering punitive damages.
“Pay them, in the scheme of things it’s a speeding ticket”
Drop in the bucket for them. Giving Zuck some jail time would be the more appropriate message - there's no doubt he knows and approves of the kind of evil activity the New Mexico law enforcement dug up.
That would be a dream, but I cannot see it happening. I totally agree with your theory, though: platforms should face genuine legal exposure for algorithmic harm to minors (as tobacco companies did for health harms).
Unfortunately, as we found out recently, Meta's lobbyists are a powerful force to contend with and I do not trust our governments to stand up to them.
LLM slop
lol. And you think we will ever legalize drugs (so people can take responsibility), when large companies are being sued because people got addicted to social media?
There's a vast difference between accurately advertising the effects of drugs and the risks involved in taking them, versus lying to you about the drugs and creating an environment that fosters addiction.
It all boils down to consent.
I might want to take some drugs that have harmful side effects. But I knew about them, and I willingly made the choice because I valued the high more.
Contrast this with: I knew about the harmful side effects, told you they didn't exist, and said you should take more. And then I changed the drug so it's even MORE harmful, because it also makes you BUY more. That's what these social media sites do.
They use engineered sociology and psychology to create addictive products, and then refine them to maximize profit at the cost of anything they can pull a lever on.
What bothers me the most is not the vampires at the top sucking out every dollar they can extract out of vulnerable people, but the fact that so many engineers are supporting this. So much for engineering ethics. Why even bother teaching it anymore?
If you take actions to deliberately weaponize your product against children in particular, whatever it is -- you shouldn't be surprised when liability attaches. That's what this verdict is about.
Alternative headline: household spyware cash machine forced to pay $20 for being bad.
If you want to punish Meta then you have to punish the wonder boy who runs it. Not even shareholders can fight off the guy spending $80B on the metaverse.
You're not wrong, but the problem for Meta is that this, along with their other fine for mental harm, is setting a precedent.
This fine is somewhat larger, at $375 million, but the other one (https://www.msn.com/en-us/health/other/meta-and-youtube-fine...) basically opens the gates for millions of people to sue.
Sadly I don't think it's enough for Meta to change, because they have no business model if they're forced to take online safety seriously. That's probably also why they're pushing so hard for age verification: make safety someone else's problem.
Shareholders: Worth it!
Repeal section 230
Careful what you wish for https://www.techdirt.com/2020/06/23/hello-youve-been-referre...
Why do you dislike the Internet?
I love the internet. I hate what a lack of liability for platforms has done to the internet.
Why do we have prison sentences for insider trading, which is arguably (much) less harmful to the society, and not for this?
because the damage done is relatively objective?
Is that the only factor? Is insider trading objective? (hint: it's not, read the law). It's objective only when we can attribute a quantitative measure to it? What's the relative "value" of $1M profit from insider trading vs a single child's destroyed psyche? How much value could that child have contributed to the society had it not been for the harm done to it? Is there really much subjectiveness in terms of the harm done to those kids?
All that to say: I don't think "objectivity" should be the (main) factor in whether adequate punishment exists.
Insider trading is incredibly toxic to society. It is not a victimless crime. It is tantamount to stealing.
It is, I agree. My point is that the proportionality of consequences is not there. We seem to be good at criminalizing discrete, individual financial acts, but not systemic corporate decisions that cause diffuse harm. That's even when the aggregate harm is arguably far greater.
That's peanuts.[a]
[a] https://dictionary.cambridge.org/us/dictionary/english/peanu...
Meta should be disbanded for the damage it has caused to mankind. Age verification tainting Linux is also heavily attributable to Meta buying legislation; systemd already went down that path quickly in order to appease its corporate gods. Private user data gets released to random actors willy-nilly, with the constant appeasement of "no, this is not what is happening", until it suddenly is happening precisely as people predicted. Everyone runs a meta-agenda nowadays, Meta more than most.