I’ve recently had to deal with my father’s cognitive decline & his falling for scams left & right using Meta’s apps. This has been so hard on our family. I did a search the other day on Marketplace and 100% of the sellers were scams, 20-30 of them.
Meta is a cancer on our society, I’m shutting down all my accounts. Back when TV/radio/newspaper were how you consumed news, you couldn’t get scams this bad at this scale. Our parents had a much easier time dealing with their parents as they cognitively declined. We need legal protections for elders and youth online more than ever. Companies need to be liable for their ads and scam accounts. Then you’d see a better internet.
My grandmother has been through the same thing. She was scammed out of all of her savings by accounts impersonating a particular celebrity. Thankfully the bank returned all of the money, but the perpetrators will never be caught; they operate out of Nigeria (one of them attached their phone to her Google account).
Unfortunately these fake celebrity accounts are swarming her like locusts again. We tried to educate her about not using her real name online, not giving out information or adding unknown people as friends, but there's a very sad possibility that she doesn't fully understand what she's doing.
It was emotionally difficult going through her laptop to gather evidence for the bank. They know exactly how to romance and pull on heart strings, particularly with elderly people.
Meta's platforms are a hive of scammers and they should be held accountable.
The number of my outer circle of friends who fall for the “copied profile” adding of unknown people or accept a friend request from the attractive young woman who somehow is interested in them is shocking. (I’m gauging this from looking at the “mutual friends” in the friend request.)
Why can’t you get a power of attorney(?) over her finances, or move them into a living trust, etc.? It seems like there are legal protections out there if you can convince her it’s in her best interest to let her family manage her estate so she can focus on enjoying her final years (obviously don’t say it like that).
Unfortunately I have a similar experience. If someone is working at Meta right now, or has been at any point in the past 10 years, they're willingly and actively contributing to making society worse. Some open-source tech is not going to undo any of this, nor any of the past transgressions. I get the pay is probably great, but have some decency.
I suggested a hiring ban on anyone who ever worked at Meta some years back. It was not met with open arms. Going to try again here...
I think it's a valid suggestion that might result in people rethinking working for Meta if it was taken seriously.
Working for Meta is ethically questionable. The company does unspeakable damage to our country. It harms our kids, our elders, our political stability. Working for it, and a number of similar companies, is contributing to the breakdown of the fabric of our society.
Why not build a list of Meta employees and tell them they're not eligible for being hired unless they show some kind of remorse or restitution?
It could be an aggregation of LinkedIn profiles and would call attention to the quandary of hiring someone with questionable ethics to work at your organization. It might go viral on the audacity of the idea alone. That might cause some panic and some pause amongst prospective Meta hires and interns. They might rethink their career choices.
One must also check what YouTube recommends to their elderly parents, because it is easy for them to slide into being recommended harmful content, mostly psychological, religious, or alternative-medicine topics. Not all of these are harmful, but most of them are published by very odd channels.
Opening YouTube on a new machine / OS / browser / without login is eye opening in terms of the awful stuff that gets recommended by default and how quickly it tilts worse if you watch any of it.
In case anyone needs to help a relative without a Google account block YouTube channels or videos, the subreddit for uBlock Origin has a wiki that can help. You can block videos by channel or video title or URL using CSS rules. Removing the clickbait and watching a few videos of decent content with them helps a lot.
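As a concrete illustration of the kind of rules that wiki describes (the selector names here are assumptions; YouTube's markup changes often, so check the uBlock Origin wiki for current recipes), custom cosmetic filters can hide recommendation tiles whose text matches a channel name or a title pattern:

```adblock
! Hide homepage recommendations from a specific channel (case-insensitive regex)
www.youtube.com##ytd-rich-item-renderer:has-text(/Example Channel/i)
! Hide any recommendation whose title contains a clickbait phrase
www.youtube.com##ytd-rich-item-renderer:has-text(/you won't believe/i)
```

These go in uBlock Origin's "My filters" pane; the `:has-text()` procedural operator hides the whole tile when the pattern matches anywhere in its text.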
Have you seen some of the ads between the videos? There are some shady get rich quick types of influencers selling stuff that might really set them back financially as well.
One third of all scams in the US are operated on Meta platforms.
They have a policy that if a scammer’s ad spend makes up more than 0.15% of Meta revenue, moderators must protect the scammer instead of blocking it.
Meta is working hard to scam your dad for ad spend. It’s hugely profitable for them and they are helping to grow it per internal policy. They are only interested in fostering big-time scammers.
I would like to understand the downvotes: is it from doubting these facts? If so, I will post the sources (which were recent mainstream news on the front page of HN). Or is it because of the negative sentiment about Meta? Or disagreement that Meta has any responsibility over moderating scams they promote?
These are verified facts that make up the substance of my message:
- Meta protects their biggest scammers, per internal policy from leadership
- Meta makes a huge profit from these scammers (10% of total revenue; or in other words, their scam revenue is approximately 5x larger than the total Oculus revenue)
- The scams that Meta promotes represent one-third of the total online scams in the US
> One third of all scams in the US are operated on Meta platforms.
And 100% of all internet scam traffic in the US goes through either US ISPs or US cell carriers.
Should those entities be held liable instead? Or maybe, Meta instead should scan users' private messages on their platforms and report everything that might seem problematic (whatever the current US administration in power considers as problematic) to the relevant authorities?
My personal take: there should be more effort in going after the actual scammers, as opposed to going after the "data pipes" (of various abstraction levels) like Meta/ISPs/cell carriers/etc.
So many of us have been there - it is brutal. These platforms are ripping us apart from each other, providing criminals easy access to the most vulnerable, and concentrating wealth to an unimaginable degree.
My dad had fallen for two scams - one through WhatsApp, the other texts.
I’m not sure how much we can blame individual companies for this. Obviously they should be doing more - shutting down accounts that message people at random, for instance, but I feel like the scammers will find a way.
I also don’t know what else we can do. It should be easier for kids (or anyone else) to shut down their parent’s accounts at least once this happens, stop all wire and crypto transfers, etc.
I don't mean to be rude or anything - and I don't disagree with what you're suggesting - but don't you think at some point you have a responsibility to stop them accessing these platforms yourself?
Our own attempts to do something about (successful) scammers were met with utter indifference by my parents' state's (Arizona) attorney general, county sheriffs, and local police.
At this point, I think all of the big tech companies have had some accusations of them acting unethically, but usually, the accusations are around them acting anticompetitively or issues around privacy.
Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit. The mix is usually: grow at all costs mindset, being "data-driven", optimizing for engagement/addiction, and monetizing via ads. The center of gravity of this has all been Meta (and social media), but that thinking has permeated lots of other tech as well.
It's a well worn playbook by now. But Meta seems to be the only one where we now have proof of internal research being scuttled for showing the inconvenient truth.
What do you think the social effects of large scale advertising are? The whole point is to create false demand essentially driving discontent. I've no idea if Google et al have ever done a formal internal study on the consequences, but it's not hard to predict what the result would be.
The internet can provide an immense amount of good for society, but if we net it on overall impact, I suspect that the internet has overall had a severely negative impact on society. And this effect is being magnified by companies who certainly know that what they're doing is socially detrimental, but they're making tons of money doing it.
I agree false demand effects exist. But sometimes ads tell you about products which genuinely improve your life. Or just tell you "this company is willing to spend a lot on ads, they're not just a fly-by-night operation".
One hypothesis for why Africa is underdeveloped is they have too many inefficient mom-and-pop businesses selling uneven-quality products, and not enough major brands working to build strong reputations and exploit economies of scale.
The positive benefits in education, scientific research, and logistics are hard to overstate. Mass advertising existed before the internet. Can you be more explicit about which downsides you think the additional mass advertising on the internet caused that come anywhere close to the immeasurable benefits provided by the internet?
> Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit
Them doing nothing about hate speech that fanned the flames for a full blown genocide is pretty terrible too. They knew the risks, were warned, yet still didn't do anything. It would be unfair to say the Rohingya genocide is the fault of Meta, but they definitely contributed way too much.
> Meta are the only case where we have substantiated allegations of a company being aware of a large, negative impact on society
Robinhood has entered the chat
Why would one specific industry be better? The toxic people will migrate to that industry and profit at the expense of society. It’s market efficiency at work.
I do think an industry is often shaped by the early leaders or group of people around them. Those people shape the dominant company in that space, and then go off to spread that culture in other companies that they start or join. And, competitors are often looking to the dominant company and trying to emulate that company.
Also, tobacco companies and oil companies famously got into trouble from revelations that they were perfectly aware of their negative impacts. For the gambling and alcohol industry, it probably wouldn't even make the news if some internal report leaked that they were "aware" of their negative impact (as if anyone thought they would not be?)
Social media is way down on the list of companies aware of their negative impact. The negative impact arguably isn't even central to their business model, which it certainly is for the other industries mentioned.
The leaders and one of the announcers of Radio Télévision Libre des Mille Collines got 30 years to life sentences for their part in the Rwandan genocide.
We all know this. As people in the tech industry. As people on this website. We know this. The question is, what are we going to do about it? We spend enough time complaining or saying "I'm going to quit facebook" but there's Instagram and Threads and whatever else. And you alone quitting isn't enough. We have to help the people really suffering. We can sometimes equate social media to cigarettes or alcohol and relate the addictive parts of that but we have to acknowledge tools for communication and community are useful, if not even vital in this day and age. We have to find a way to separate the good from the bad and actively create alternatives. It does not mean you create a better cigarette or ban alcohol for minors. It means you use things for their intended purpose.
We can strip systems like X, Instagram, Facebook, YouTube, TikTok, etc. of their addictive parts and get back to utility and value. We can have systems not owned by US corporations that are fundamentally valuable to society. But it requires us, the tech-savvy engineering folk, to make those leaps. Because the rest of society can't do it. We are in the position of power. We have the ability.
Platforms that have the useful stuff from social media without the addictive part already exist:
Forums, micro-blogging, blogs, news aggregators, messaging apps, platforms for image portfolios, video sharing platforms.
And most of them have existed before the boom of social media, but they just don't get as huge because they are not addictive.
The useful part of a social media platform is so small that if you put it on its own you don't get a famous app; you get something that people use for a small part of their day and otherwise carry on with their life.
Social media essentially leverages the huge and constant need that humans have to socialize, and claims that you can do it better and more through the platform instead of in real life, and it does so by making sure that enough people in your social circle prioritise the platform over getting together in person.
And I believe this is also the main harmful part of it: people not getting actual real social time with their peers and then struggling with mental health.
At the moment the biggest hope I have is there’s client side tech that protects us from these dark patterns. But I suspect they’ll have their own dark patterns to make them profitable.
I guess we can speculate or theorise on potential strategies but beyond hope we should also try to do something. I have seen some X clones with variations but a lot of the same behaviour plays out when you have no rules around posting, moderation, types of content, etc. Effectively these platforms end up in the same place of gamification and driving engagement through addictive behaviours because they want users. Essentially I think true community is different, true community keeps each other accountable and in check. Somehow we need to get back to some of that. Maybe co-operative led tools. Non profits. I think Mastodon meant well and didn't end up in the right place. Element/Matrix is OK but again doesn't feel quite right. Maybe we should never try to replicate what was, I don't know. BitChat (https://bitchat.free/) is an interesting alternative from Jack Dorsey - who I think is trying to fix the loss of Twitter and the stronghold of WhatsApp.
Just not use those services. X is addictive, but otherwise utterly unnecessary. It seemed useful about 8 years ago, when you could get tech insights from industry veterans on a daily basis and then use them in your own company. Those days are long gone.
Just. Don't. Use. Those. Services.
Easiest life-hack ever for a happier and more productive life.
> Companies can't really be expected to police themselves.
Not so long as we fail to punish them for it. We need a corporate death penalty for an organization that, say, knowingly conspires to destroy the planet's habitability. Then the bean counters might calculate the risk of doing so as unacceptable. We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Corporate death penalty as in terminate the corporation?
Why not the actual death penalty? Or put another way, why not sanctions on the individuals these entities are made up of? It strikes me that qualified immunity for police/government officials and the protections of hiding behind incorporation serve the same purpose - little to no individual accountability when these entities do wrong. Piercing the corporate veil and pursuing a loss of qualified immunity are both difficult - in some cases, often impossible - to accomplish in court, thus incentivizing bad behavior for individuals with those protections.
Maybe a reform of those ideas or protocols would be useful and address the tension you highlight between how we treat "individuals" vs individuals acting in the name of particular entities.
As an aside, both protections have interesting nuances and commonalities. I believe they also highlight another tension (on the flip-side of punishment) between the ability of regular people to hold individuals at these entities accountable in civil suits vs the government maintaining a monopoly on going after individuals. This monopoly can easily lead to corruption (obvious in the qualified immunity case, less obvious but still blatant in the corporate case, where these entities and their officers give politicians and prosecutors millions and millions of dollars).
As George Carlin said, it's a big club. And you ain't in it.
> We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Surely "plot the downfall of civilization" is an exaggeration. Knowing that certain actions have harmful consequences for the environment or for humanity, and nevertheless persisting in them, is what many individuals lawfully do without getting together.
The group of pretty much all humans is such a group because we all conspire to burn fossil fuels. Do you really think a global civilization death penalty is a good idea? That's throwing out the baby with the bathwater.
Maybe more parallels to tobacco companies. Incredible amount of taxes and warnings and rules forbidding kids from using it are the solutions to the first problem and likely this second one too.
1. "The Tobacco Institute was founded in 1958 as a trade association by cigarette manufacturers, who funded it proportionally to each company's sales. It was initially to supplement the work of the Tobacco Industry Research Committee (TIRC), which later became the Council for Tobacco Research. The TIRC work had been limited to attacking scientific studies that put tobacco in a bad light, and the Tobacco Institute had a broader mission to put out good news about tobacco, especially economic news." [0]
2. "[Lewis Powell] worked for Hunton & Williams, a large law firm in Richmond, Virginia, focusing on corporate law and representing clients such as the Tobacco Institute. His 1971 Powell Memorandum became the blueprint for the rise of the American conservative movement and the formation of a network of influential right-wing think tanks and lobbying organizations, such as The Heritage Foundation and the American Legislative Exchange Council."
The problem is that our current ideology basically assumes they will be - either by consumer pressure, or by competition. The fact that they don't police themselves is then held as proof that what they did is either wanted by consumers or is competitive.
Deniers should watch the movie "The White House effect". It's a great documentary that shows where and how the strategies of the oil companies changed.
> Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold."
I don’t get it. Is sex-trafficking-driven user growth really so significant for Meta that they would have such a policy?
The "catching" is probably some kind of automated detection scanner with an algo they don't fully trust to be accurate, so they have some number of "strikes" that will lead to a takedown.
There is always a complexity to this (and don't think I'm defending Meta, who are absolutely toxic).
Like Apple's "scanning for CSAM", and people said "Oh, there's a threshold so it won't false report, you have to have 25+ images (or whatever) before it will"... Like okay, avoid false reporting, but that policy is one messy story away from "Apple says it doesn't care about the first 24 CSAM images on your phone".
Of course it's not. We could speculate about how to square this with reason and Meta's denial; perhaps some flag associated with sex trafficking had to be hit 17 times, and some people thought the flag was associated with too many other things to lower the threshold. But the bottom line is that hostile characterizations of undisclosed documents aren't presumptively true.
We don’t know. But as you read from the article, Meta’s own employees were concerned about it (and many other things). For Zuck it was not a priority, as he said himself.
We can speculate. I think they just did not give a fuck. Limiting grooming and abuse of minors usually requires limiting those minors' access to various activities on the platform, which means those kids go somewhere else. Meta specifically wanted to promote its use among children below 13 to stimulate growth; the fact that this often made the platform dangerous for minors was not seen as their problem.
If your company is driven by growth über alles à la venture capitalism, growth will come before everything else. Including child safety.
Reading Careless People by Sarah Wynn-Williams is eye-opening here, and it's pretty close to exactly that.
> I think they just did not give a fuck.
It's that people like Zuck and Sandberg were just so happily ensconced in their happy little worlds of private jets and Davos and etc., that they really could not care less if it wasn't something that affected them (and really, the very vast majority of issues facing Meta, don't affect them, only their bonuses and compensation).
Your actions will lead to active harm? "But not to me, so, so what, if it helps our numbers".
One of the worst outcomes of the last 20 years is how Big Tech companies have successfully propagandized us that they're neutral arbiters of information, successfully blaming any issues with "The Algorithm" [tm].
Section 230 is meant to be a safe harbor for a platform not to be considered a publisher but where is the line between hosting content and choosing what third-party content people see? I would argue that if you have sufficient content, you could de facto publish any content you want by choosing what people see.
"The Algorithm" is not some magical black box. Everything it does is because some human tinkered with it to produce a certain result. The thumb is constantly being put on the scale to promote or downrank certain content. As we're seeing in recent years, this is done to cozy up to certain administrations.
The First Amendment really is a double-edged sword here because I think these companies absolutely encourage unhealthy behavior and destructive content to a wide range of people, including minors.
I can't but help consider the contrast with China who heavily regulate this sort of thing. Yes, China also suppresses any politically sensitive content but, I hate to break it to you, so does every US social media company.
Your solution to the government putting pressure on social media companies to censor is to give the government more power over them by removing section 230?
I'm saying social media companies are using Section 230 as a shield with the illusion of "neutrality" when they're anything but. And if they're taking a very non-neutral stance on content, which they are, they should be treated as a publisher not a platform.
I predict that in much sooner than 100 years social media will be normalized and it will be common knowledge that moderating consumption is just as important as it is with video games, TV, alcohol, and every other chapter of societies going through growing pains of newly introduced forms of entertainment. If you look at some of the old moral panic content about violent video games or TV watching they feel a lot like the lamentations about social media today. Yet generations grew up handling them and society didn’t collapse. Each time there are calls that this time is different than the last.
In some spaces the moral panic has moved beyond social media and now it’s about short form video. Ironically you can find this panic spreading on social media.
We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws. We have regulations on what you can show on TV or in films. It is a complete misuse of the term to claim that a law prohibiting the sale of alcohol to minors is ‘moral panic’. It is not some individual decision, and we need those regulations to have a functioning society.
Likewise, in a few generations we will hopefully find a way to transfer the cost in medical bills of the mental-health harm caused by these companies onto those companies in taxes, like we did with tobacco. At that point, using these apps will hopefully be seen as being as lame as smoking is today.
I don't think any of those things have had the significance and divisiveness of social media, or have been controlled by billionaires who have corrupted the election systems.
Social media seems far more dangerous and harder to control because of the power it grants its "friends". It'll be much harder to moderate than anything else you mentioned.
In 100 years time they will be so fried by AI they won't be capable of being shocked. Everyone will just be swiping on generated content in those hover chairs from Wall E.
In Mad Men, we have these little mind-blown moments at the constant sexism, racism, smoking, alcoholism, even attitudes towards littering. In 2040 someone's going to make a show about the 2010s-2020s and they'll have the same attitude towards social media addiction.
So does this apply to all social media? (Threads, X, Bluesky, IG, etc.) How come they didn't have this evidence from their users as well? Or maybe they didn't bother to ask...
I suppose the harm from social networks is not as pronounced, since you generally interact only with people and content you opted to follow (e.g. Mastodon).
The harm is from designing them to be addictive. Anything intentionally designed to be addictive is harmful. You’re basically hacking people’s brains by exploiting failure modes of the dopamine system.
If I remember correctly, other research has shown that it's not just the addictive piece. The social comparison piece is a big cause, especially for teenagers. This means Instagram, for example, which is highly visual and includes friends and friends-of-friends, would have a worse effect than, say, Reddit.
I had a similar thought. I wonder if any social media on a similar scale as FB/IG would have the same problems and if it's just intrinsic to social media (which is really just a reflection of society where all these harms also exist)
I think group chats (per interest gathering places) without incentives for engagement are the most natural and least likely to cause harm due to the exposure alone.
I quit Facebook in the early to mid 2010s, well before social media became the ridiculously dystopian world it is today.
Completely coincidentally, I had quit smoking a few weeks before.
The feelings of loss, difficulty in sleeping, feeling that something was missing, and strong desire to get back to smoking/FB was almost exactly the same.
And once I got over the hump, the feelings of calm, relaxation, clarity of thought, etc were also similar.
It was then that I learnt, well before anyone really started talking about social media being harmful, that social media (or at least FB…I didn’t really get into any other social media until much later), was literally addictive and probably harmful.
I never really liked fb or any other big application that much, so kicking them after 2016 was not that bad, but I used to be heavy user or forums and kicking some of them felt pretty similar to kicking tobacco back in the day.
We are super social insane monkey creatures that get high on social interaction, which in many ways is a good thing, but can turn into toxic relationships towards family members or even towards a social media application. It is not very dissimilar how coin slot machines or casinos lure you into addiction. They use exactly the same means, therefore they should be regulated like gambling.
Which is why I found it so comparable to quitting smoking.
A smoker doesn’t feel “better” after quitting smoking. Even over a decade after having quit I bet if I smoked a cigarette right now I would feel much nicer than I did right before I smoked it. However, I would notice physiological changes, like a faster heart rate, slight increase in jumpiness, getting upset sooner, etc.
Quitting FB was similar. I didn’t feel “better”, but several psycho-physiological aspects of my body just went down a notch.
"Priorities" quote:
Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything. As an example, I doubt any of us would fault a novelist for not focusing on saving children more than on writing books...
> You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything
In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
Fair point, but the fuller context is absurd—the OP's rendering is correct in tone and emphasis.
> In a 2020 research project code-named “Project Mercury,” Meta (META.O) scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook and Instagram, according to Meta documents obtained via discovery.
Did they pick people at random and ask those people to stop for a while, or is this about people who choose to stop for their own reasons?
"Social media harm" sounds like one of these nebulous things which has no real definition
"Social media was a mistake, just like the internet" oh ok so we should just give up our gmails and reddits and everything because people insist on the widest possible swathe of categories
But actually when it comes to Metabook... I don't think Zuckerberg cares about anybody, and more to the point they refuse to give you a chronological service just for starters
People have died and their friends haven't known about it because the algorithm never showed them. People have noticed messages they've got from people trying to get in touch with them years later, because Zuck feels you should be using Facebook all the time, not email https://news.ycombinator.com/item?id=4151433
When your company is run by a megalomaniac this is what you get...
> To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.
I don't think it's even a stretch at this point to compare Meta to cigarette companies.
Complete with the very expensive defence lawyers, payoffs to government, and waxing poetic about the importance of the foundation of American democracy meaning they must have the freedom to make toxic, addictive products and market them to children, whilst they simultaneously claim of course they would never do that.
Journalists love that study but tend to ignore the likely causal reason for the improved outcomes: the users who were paid to stop using Facebook had much lower consumption of news, especially political news.
Big oil, big tobacco, big social, there seems to be a clear pattern of burying evidence of negative impacts of their products to satisfy some personal greed. These people are mentally ill and we need to help them.
These discussions never address the priors: is this harm on a different scale than what preceded it? Like, is social media worse than MTV or teen magazines?
I loved MTV as a kid but it was as different to social media as can be.
Half the time you would turn it on and not like the video playing then switch the channel. Even if you liked the video that was playing, half the time the next video was something you didn't like so you would switch the channel.
Now imagine if MTV had used machine learning to predict the next video to show me personally that would best cause me to not change the channel.
That is not even really a different scale but a different category.
Why does it matter? We can’t go back and retroactively punish MTV for its behavior decades ago. Not to mention we likely have a much better understanding of the impact of media on mental health now than we did then.
The best time to start doing the right thing is now. Unless the argument here is “since people got away with it before it’s not fair to punish people now.”
It matters because it points towards a common failure mode which we've seen repeatedly in the past. In the 1990s, people routinely published news articles like the OP (e.g. https://www.nytimes.com/1999/04/26/business/technology-digit...) about how researchers "knew" that violent video games were causing harm and the dastardly companies producing them ignored the evidence. In the 1980s, those same articles (https://www.nytimes.com/1983/07/31/arts/tv-view-the-networks...) were published about television: why won't the networks acknowledge the plain, obvious fact that showing violence on TV makes violence more acceptable in real life?
Is the evidence better this time, and the argument for corporate misconduct more ironclad? Maybe, I guess, but I'm skeptical.
What policy proposals would you have made with respect to MTV decades ago, and how would people at the time have reacted to them? MTV peaked (I think) before I was alive or at least old enough to have formative memories involving it, but people have been complaining about television being brain-rotting for many decades and I'm sure there was political pressure against MTV's programming on some grounds or another, by stodgy cultural conservatives who hated freedom of expression or challenges to their dogma. Were they correct? Would it have been good for the US federal government in the 80s and 90s to have actually imposed meaningful legal censorship on MTV for the benefit of the mental health of its youth audience?
> In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
> Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
Meanwhile I'm sitting here deliberating for the 200th time whether to delete my WhatsApp account, meaning I won't take part in group chats with my friends anymore ... in the end I won't delete it, and next up is deliberating for the 201st time whether to delete my WhatsApp account ...
Of course they did. Anyone not blind to what is going on already knows this. It is merely a matter of proving it in front of the law. That's all this is about. It's no longer a question of whether they acted despicably.
I doubt serious consequences will follow this time, since serious consequences haven't followed any of the previous times Meta/Facebook has been found guilty of crimes. However, it can serve as one more event to point out to naive people who don't want to believe the IT person that FB/Meta is evil, because they don't want to give up some interaction on it, or some comfort they get from using FB/Meta's apps or tools. I think it's a natural tendency most of us have: we use something, then we demand extra-good proof when someone claims that thing is bad, because we don't want to change and stop using it. Plus, FB/Meta will do anything they can to make people addicted to their platforms.
Meta isn't entirely at fault here. Not that they didn't do wrong, but they behaved naturally, as expected of them.
You should absolutely expect companies to do whatever it takes to make the most profit, so long as they don't break the law. As a society, this failure should be laid entirely at the feet of elected legislators who have been entrusted to pass laws to protect the public.
You shouldn't have to use less technology, quit social media, etc. These things keep happening again and again, but by the time there are laws to do something about it, it's too late. At first I thought this reminded me of the tobacco industry, but now that I think about it, it's more akin to alcohol: you can't prohibit it, and you can only restrict it so much because of how abundant its use has become. But still, a lot of lawmaking can be done.
They should have been shut down and all the C-level execs arrested after Cambridge Analytica. The weapons-grade psyops they used to get Trump elected are crimes against humanity.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." - Upton Sinclair
HN has seen this quote many times; tech workers willfully or naively ignore the harm their contributions cause as long as the life changing paychecks keep coming, letting them pretend that they are too far removed from the damage to be responsible.
Then comes the classic post “I’m leaving FAANG, so brave of me <quiet-part>funded entirely by the same extraction and harm I once insisted I didn’t see.</quiet-part>"
Meta is Zuck. Zuck is bad. Accept it, everyone. Why people hate Elon Musk but not Zuck is beyond me. Zuck has done real harm as well, some of it worse than Musk.
I don’t understand why things like social media are meant to be regulated by the government.
Isn’t religion where we culturally put “not doing things that are bad for you”? And everyone is allowed to have a different version of that?
Maybe instead of regulating social media, we should be looking at where the teeth of religion went even in our separation of church and state society. If everyone thinks their kids shouldn’t do something, enforcing that sounds like exactly what purpose religion is practically useful for.
Well, the more scientific and pluralistic our society becomes, the more religion is necessarily sapped of its ability to compel behavior. If you lived in 13th Century France, the Catholic Church was a total cultural force and thus could regulate behavior, but the very act of writing freedom of religion into law communicates a certain idea about religion: it's so unimportant that you can have whatever form of religion you want.
In any case, one ought to distinguish between "You shouldn't do things which are bad for you," and "You shouldn't do things you know are bad for others." Especially, "Giant corporations with ambiguous structures of responsibility shouldn't be allowed to do things which are bad for others."
13th century France is irrelevant because it was, religiously speaking, a different style of society from America since its founding.
In the past, America, unlike 13th century France, allowed multiple parallel religions who each enforced their own moral codes on top of the secular law using behavioral manipulation tactics including shame.
This seems to have worked up until quite recently. In the early 1900s religion was still massively influential in America. Your view on what freedom of religion means practically is a retcon, because people took it seriously up until universal mass secular schooling and electronic media.
I’m not saying we should return to Jesus or whatever. I’m just saying that there is a receptiveness in the human brain to having behavior enforced in a completely non-violent way, where the behavior code is entered into voluntarily and can be abandoned non-violently as well. I wonder if it makes sense to leverage that to solve problems we are currently leaning on the levers of violence to fix (in the sense that state-power enforcement is fundamentally rooted in violence, i.e. the threat of forced confinement at gunpoint).
On you vs others, I don’t have in mind some kind of religiously enforced corporate regulations, that’s obviously ridiculous. I’m referring to religiously enforced individual abstinence from social media, similar to religiously enforced abstinence from alcohol, or from casual sex, etc, all because they are considered harmful (by the people in the religion) to you, not (primarily) to others. If the abstinence was enforced socially the same way monogamy was in the early 1900s (yes, I know there were some exceptions, blah blah blah, it was basically ironclad relative to today), the social media companies would wither and blow away.
Without an explicit religion, the moral code of the group becomes some fuzzy, lowest common denominator Frankenstein.
Note that I’m not advocating for existing religions, just wondering about the use of religion as a tool (since it is baked into our legal code with an ability to use it for exactly this kind of thing).
Are you serious? People don't need religion to be moral. If what I see from religion these days is any indicator, I am extremely happy we kept our kids far far far away from it. From all of it. I will concede that not all religion is bad, but quite a lot of it is grift at best and cleverly disguised totalitarianism at worst. Many religious figures have absolutely no problem talking publicly about their "deity-given" right to dominate and control the lives of others for their own personal gain. I don't see how that fits inside any accepted definition of morality.
I am not referring to existing established religions, I am just talking about the construct of religion in general. We are allowed to invent new ones, you know.
There are certain statements that should make you wary of study findings.
People who x reported y is one of those phrases.
“people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,”
This is the same argument you see in cosmetic advertising as "Women who used this serum reported reduction in wrinkles"
If the study has evidence that people who x actually show y, it would be irresponsible not to say that directly. Dropping to "people reported" seems like an admission that there was no measurable effect other than the influence of the researchers on the opinions of the subjects.
Mental state can be difficult in this respect because it is much harder to objectively measure internal states. The fact that it is harder to do doesn't grant validity to subjective answers, though.
I was once part of a study that did this. It was fascinating seeing something that appeared to have no effect being written up using both "people reported" and "significant" (meaning, not likely by chance, but implying a large effect to the casual reader).
Dude, who cares about study design and methodological validity! Let's just burn Meta down and put Zuck to jail! /s
What you are saying is valid criticism of the study, but people here have already made up their minds, so they downvote.
Another point to add is that one week is way too short: assuming there is an effect, it might disappear or even reverse after a month.
To all downvoters: if you think of yourself as smart rational people, please just use search/AI to see for yourself whether there is high quality evidence of _causal_ impact of social media on kids/mental health. The results are mixed at best.
Interestingly, the post is climbing its way back to zero.
I find the downvote without counterargument to be an odd response to a good faith post. It seems like it would strengthen the argument if the message they send is "I don't have a counter to this but I don't like it and I don't like that others will see this point of view"
I have come to realise that I have a much higher threshold when it comes to upvoting, downvoting, or rating things. It seems like a lot of people freely upvote, like, heart, or downvote without a care. We live in a world where a 4.8 star rating (comprised entirely of an aggregate of zero and five star ratings) is considered a concern. So I try not to be bothered by it, but I'm pretty sure subconsciously a downvote hurts more than someone saying "I disagree"
The usual reminders apply: you can allege pretty much anything in such a brief, and "court filing" does not endow the argument with authority. And, the press corps is constrained for space, so their summary of a 230-page brief is necessarily lacking.
The converse story about the defendants' briefs would have the headline "Plaintiffs full of shit, US court filing alleges" but you wouldn't take Meta's defense at face value either, I assume.
Every time they contact me I tell Meta recruiters that I wouldn't stoop to work for a B-list chucklehead like Zuck, and that has been my policy for over 15 years, so no.
You're not speaking to a jury. Regular people just living their lives only have to use their best judgment and life experience to decide which side they think is right. We don't need to be coerced into neutrality just because neither side has presented hard proof.
I’ve recently had to deal with my father's cognitive decline and his falling for scams left and right using Meta’s apps. This has been so hard on our family. I did a search the other day on Marketplace and 100% of the sellers were scams, 20-30 of them.
Meta is a cancer on our society, and I’m shutting down all my accounts. Back when TV/radio/newspaper were how you consumed news, you couldn’t get scams this bad at this scale. Our parents had it so much easier dealing with their parents as they cognitively declined. We need legal protections for elders and youth online more than ever. Companies need to be liable for their ads and scam accounts. Then you’d see a better internet.
My grandmother has been through the same thing. She was scammed out of all of her savings by accounts impersonating a particular celebrity. Thankfully the bank returned all of the money, but the perpetrators will never be caught; they operate out of Nigeria (one of them attached their phone to her Google account).
Unfortunately these fake celebrity accounts are swarming her like locusts again. We tried to educate her about not using her real name online, not giving out information or adding unknown people as friends, but there's a very sad possibility that she doesn't fully understand what she's doing.
It was emotionally difficult going through her laptop to gather evidence for the bank. They know exactly how to romance and pull on heartstrings, particularly with elderly people.
Meta's platforms are a hive of scammers and they should be held accountable.
> adding unknown people as friends
The number of my outer circle of friends who fall for the “copied profile” adding of unknown people or accept a friend request from the attractive young woman who somehow is interested in them is shocking. (I’m gauging this from looking at the “mutual friends” in the friend request.)
Why can’t you get power of attorney over her finances, or move them into a living trust, etc.? It seems like there are legal protections out there if you can convince her it’s in her best interest to let her family manage her estate so she can focus on enjoying her final years (obviously don’t say it like that).
My friend is a bank manager. He says every day 2-3 elderly people come in confused about a scam.
This is a silent crisis impacting almost everyone. My grandma personally had her gold stolen by a scammer.
She is now in a home for dementia.
Unfortunately I have a similar experience. If someone's working at Meta right now, or has been in the past 10 years, they're willingly and actively contributing to making society worse. Some open-source tech is not going to undo any of this, nor any of the past transgressions. I get the pay is probably great, but have some decency.
I suggested a hiring ban on anyone who ever worked at Meta some years back. It was not met with open arms. Going to try again here...
I think it's a valid suggestion that might result in people rethinking working for Meta if it was taken seriously.
Working for Meta is ethically questionable. The company does unspeakable damage to our country. It harms our kids, our elders, our political stability. Working for it, and a number of similar companies, is contributing to the breakdown of the fabric of our society.
Why not build a list of Meta employees and tell them they're not eligible for being hired unless they show some kind of remorse or restitution?
It could be an aggregation of LinkedIn profiles and would call attention to the quandary of hiring someone with questionable ethics to work at your organization. It might go viral on the audacity of the idea alone. That might cause some panic and some pause amongst prospective Meta hires and interns. They might rethink their career choices.
But hey, at least the money is good..
One must also check what YouTube recommends to their elderly parents, because it is easy for them to slide into being recommended harmful content, mostly psychological, religious, or alternative-medicine topics. Note that not all of them are harmful, but most of them are published by very odd channels.
Opening YouTube on a new machine / OS / browser / without login is eye opening in terms of the awful stuff that gets recommended by default and how quickly it tilts worse if you watch any of it.
YouTube should be held liable for what it is pushing. It literally can kill and seriously harm people.
In case anyone needs to help a relative without a Google account block YouTube channels or videos, the subreddit for uBlock Origin has a wiki that can help. You can block videos by channel or video title or URL using CSS rules. Removing the clickbait and watching a few videos of decent content with them helps a lot.
https://old.reddit.com/r/uBlockOrigin/wiki/solutions/youtube
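As a rough sketch of what those rules look like (the channel handle and title keyword below are placeholders, and the `ytd-rich-item-renderer` element name reflects YouTube's current markup, which can change at any time), entries like these in uBlock Origin's "My filters" tab hide home-page recommendations by channel or by title:

```
! Hide home-page videos from a specific channel (placeholder handle)
www.youtube.com##ytd-rich-item-renderer:has(a[href^="/@ExampleChannel"])

! Hide home-page videos whose title matches a keyword (case-insensitive)
www.youtube.com##ytd-rich-item-renderer:has-text(/miracle cure/i)
```

The `:has()` and `:has-text()` procedural operators are uBlock Origin extensions to cosmetic filtering; the linked wiki covers more robust variants for search results and the sidebar.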
Have you seen some of the ads between the videos? There are some shady get-rich-quick influencers selling stuff that might really set them back financially as well.
The old, mentally disabled guy in New Jersey falling over and dying trying to get to a date with a Meta bot really broke something in me.
That was horrible. This also makes me think of all that research on "unhappiness vs. spending".
One third of all scams in the US are operated on Meta platforms.
They have a policy that if a scammer’s ad spend makes up more than 0.15% of Meta revenue, moderators must protect the scammer instead of blocking it.
Meta is working hard to scam your dad for ad spend. It’s hugely profitable for them and they are helping to grow it per internal policy. They are only interested in fostering big-time scammers.
I would like to understand the downvotes: is it from doubting these facts? If so, I will post the sources (which were recent mainstream news on the front page of HN). Or is it because of the negative sentiment about Meta? Or disagreement that Meta has any responsibility over moderating scams they promote?
These are verified facts that make up the substance of my message:
- Meta protects their biggest scammers, per internal policy from leadership
- Meta makes a huge profit from these scammers (10% of total revenue; or in other words, their scam revenue is approximately 5x larger than the total Oculus revenue)
- The scams that Meta promotes represent one-third of the total online scams in the US
> 0.15% of Meta revenue
That must be a gigantic amount of money. You (or someone else) wouldn't happen to know who any of those people (or organizations?) are?
> One third of all scams in the US are operated on Meta platforms.
And 100% of all internet scam traffic in the US goes through either US ISPs or US cell carriers.
Should those entities be held liable instead? Or maybe, Meta instead should scan users' private messages on their platforms and report everything that might seem problematic (whatever the current US administration in power considers as problematic) to the relevant authorities?
My personal take: there should be more effort in going after the actual scammers, as opposed to going after the "data pipes" (of various abstraction levels) like Meta/ISPs/cell carriers/etc.
So many of us have been there - it is brutal. These platforms are ripping us apart from each other, providing criminals easy access to the most vulnerable, and concentrating wealth to an unimaginable degree.
But hey, it's a free market /s
Maybe EU's regulation of digital markets isn't such a bad idea after all.
My dad has fallen for two scams - one through WhatsApp, the other through texts.
I’m not sure how much we can blame individual companies for this. Obviously they should be doing more - shutting down accounts that message people at random, for instance, but I feel like the scammers will find a way.
I also don’t know what else we can do. It should be easier for kids (or anyone else) to shut down their parents' accounts at least once this happens, stop all wire and crypto transfers, etc.
Past that, I really don’t know.
I don't mean to be rude or anything - and I don't disagree with what you're suggesting - but don't you think at some point you have a responsibility to stop them accessing these platforms yourself?
What did you search for on marketplace to find the scams?
> We need legal protections for elders and youth
Offline too.
Predation on the elderly is an industry.
Our own attempts to do something about (successful) scammers were met with utter indifference by the attorney general, county sheriffs, and local police in my parents' state (Arizona).
If you really want to hurt Meta, don't delete your accounts - sell these real, aged accounts to spammers for a few bucks.
That may hurt Meta, but not nearly as much as it hurts the elderly people who the spammers will defraud.
Why would that hurt Meta? The entire point here is that they don't care and if anything benefit from such activity.
I’m in a group chat and one member is a Cambodian slave who periodically tries to start romance scams,
and we’re like “you’re free now, go home” (because of the economic sanctions and raid).
We recently had a vote on whether she should be booted from the chat; we voted no, for the comedic value.
So anyway, sorry you’re going through that. It’s wild out there.
At this point, I think all of the big tech companies have faced some accusations of acting unethically, but usually the accusations are about them acting anticompetitively, or about issues around privacy.
Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit. The mix is usually: grow at all costs mindset, being "data-driven", optimizing for engagement/addiction, and monetizing via ads. The center of gravity of this has all been Meta (and social media), but that thinking has permeated lots of other tech as well.
We have evidence for this in other companies too. Oil & Gas and Tobacco companies are top of mind.
Don’t forget the All-Fats-Are-Bad sugar scam.
Petrochemical, Dow & Industrial Big Chem, Pharmaceutical companies, health insurance companies, finance companies, Monsanto, mining companies.
I mean, let's be real. There really isn't a big company that achieves scale without skeletons in the closet. Period.
It's a well worn playbook by now. But Meta seems to be the only one where we now have proof of internal research being scuttled for showing the inconvenient truth.
What do you think the social effects of large scale advertising are? The whole point is to create false demand essentially driving discontent. I've no idea if Google et al have ever done a formal internal study on the consequences, but it's not hard to predict what the result would be.
The internet can provide an immense amount of good for society, but if we net it on overall impact, I suspect that the internet has overall had a severely negative impact on society. And this effect is being magnified by companies who certainly know that what they're doing is socially detrimental, but they're making tons of money doing it.
I agree false demand effects exist. But sometimes ads tell you about products which genuinely improve your life. Or just tell you "this company is willing to spend a lot on ads, they're not just a fly-by-night operation".
One hypothesis for why Africa is underdeveloped is they have too many inefficient mom-and-pop businesses selling uneven-quality products, and not enough major brands working to build strong reputations and exploit economies of scale.
The positive benefits in education, science research, and logistics are hard to overstate. Mass advertising existed before the internet. Can you be more explicit about which downsides you think the additional mass advertising on the internet caused that come anywhere close to the immeasurable benefits provided by the internet?
It's on the same scale of chemical companies covering up cancerous forever chemicals.
Cigarette companies hiding known addictive effects?
> Meta (and social media more broadly) are the only case where we have (in my opinion) substantiated allegations of a company being aware of a large, negative impact on society (mental wellness, of teens no less), and still prioritizing growth and profit
Them doing nothing about hate speech that fanned the flames for a full blown genocide is pretty terrible too. They knew the risks, were warned, yet still didn't do anything. It would be unfair to say the Rohingya genocide is the fault of Meta, but they definitely contributed way too much.
> Meta are the only case where we have substantiated allegations of a company being aware of a large, negative impact on society
Robinhood has entered the chat
Why would one specific industry be better? The toxic people will migrate to that industry and profit at the expense of society. It’s market efficiency at work.
I do think an industry is often shaped by the early leaders or group of people around them. Those people shape the dominant company in that space, and then go off to spread that culture in other companies that they start or join. And, competitors are often looking to the dominant company and trying to emulate that company.
For the uninformed, what large negative impact has Robinhood had on society?
Also, tobacco companies and oil companies famously got into trouble from revelations that they were perfectly aware of their negative impacts. For the gambling and alcohol industry, it probably wouldn't even make the news if some internal report leaked that they were "aware" of their negative impact (as if anyone thought they would not be?)
Social media is way down on the list of companies aware of their negative impact. The negative impact arguably isn't even central to their business model, which it certainly is for the other industries mentioned.
The leaders and one of the announcers of Radio Télévision Libre des Mille Collines got 30 years to life sentences for their part in the Rwandan genocide.
We all know this. As people in the tech industry. As people on this website. We know this. The question is, what are we going to do about it? We spend enough time complaining or saying "I'm going to quit facebook" but there's Instagram and Threads and whatever else. And you alone quitting isn't enough. We have to help the people really suffering. We can sometimes equate social media to cigarettes or alcohol and relate the addictive parts of that but we have to acknowledge tools for communication and community are useful, if not even vital in this day and age. We have to find a way to separate the good from the bad and actively create alternatives. It does not mean you create a better cigarette or ban alcohol for minors. It means you use things for their intended purpose.
We can strip systems like X, Instagram, Facebook, Youtube, TikTok, etc of their addictive parts and get back to utility to value. We can have systems not owned by US corporations that are fundamentally valuable to society. But it requires us, the tech savvy engineering folk to make those leaps. Because the rest of society can't do it. We are in the position of power. We have the ability.
We can do something about it.
I wrote something to that effect two days ago on a platform I'm building. https://mu.xyz/post?id=1763732217570513817
Platforms that have the useful stuff from social media without the addictive part already exist: Forums, micro-blogging, blogs, news aggregators, messaging apps, platforms for image portfolios, video sharing platforms.
And most of them have existed before the boom of social media, but they just don't get as huge because they are not addictive.
The useful part of social media is so small that if you put it on its own you don't get a famous app; you have something that people use for a small part of their day and otherwise carry on with their life.
Social media essentially leverages the huge and constant need that humans have to socialize, and claims that you can do it better and more through the platform instead of in real life. It does so by making sure that enough people in your social circle prioritise the platform over getting together in real life. And I believe this is also the main harmful part: people not getting actual real social time with their peers and then struggling with mental health.
At the moment the biggest hope I have is there’s client side tech that protects us from these dark patterns. But I suspect they’ll have their own dark patterns to make them profitable.
I guess we can speculate or theorise on potential strategies but beyond hope we should also try to do something. I have seen some X clones with variations but a lot of the same behaviour plays out when you have no rules around posting, moderation, types of content, etc. Effectively these platforms end up in the same place of gamification and driving engagement through addictive behaviours because they want users. Essentially I think true community is different, true community keeps each other accountable and in check. Somehow we need to get back to some of that. Maybe co-operative led tools. Non profits. I think Mastodon meant well and didn't end up in the right place. Element/Matrix is OK but again doesn't feel quite right. Maybe we should never try to replicate what was, I don't know. BitChat (https://bitchat.free/) is an interesting alternative from Jack Dorsey - who I think is trying to fix the loss of Twitter and the stronghold of WhatsApp.
We can do something about it:
Just not use those services. X is addictive, but otherwise utterly unnecessary. It seemed useful about 8 years ago, when you could get tech insights from industry veterans on a daily basis and then use them in your own company. Those days are long gone.
Just. Don't. Use. Those. Services.
Easiest life-hack ever for a happier and more productive life.
Companies can't really be expected to police themselves.
I remember reading that oil companies were aware of global warming in internal literature even back in the '80s.
> Companies can't really be expected to police themselves.
Not as long as we don't punish them for failing to. We need a corporate death penalty for an organization that, say, knowingly conspires to destroy the planet's habitability. Then the bean counters might calculate the risk of doing so as unacceptable. We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Corporate death penalty as in terminate the corporation?
Why not the actual death penalty? Or put another way, why not sanctions on the individuals these entities are made up of? It strikes me that qualified immunity for police/government officials and the protections of hiding behind incorporation serve the same purpose - little to no individual accountability when these entities do wrong. Piercing the corporate veil and pursuing a loss of qualified immunity are both difficult - in some cases, often impossible - to accomplish in court, thus incentivizing bad behavior for individuals with those protections.
Maybe a reform of those ideas or protocols would be useful and address the tension you highlight between how we treat "individuals" vs individuals acting in the name of particular entities.
As an aside, both protections have interesting nuances and commonalities. I believe they also highlight another tension (on the flip-side of punishment) between the ability of regular people to hold individuals at these entities accountable in civil suits vs the government maintaining a monopoly on going after individuals. This monopoly can easily lead to corruption (obvious in the qualified immunity case, less obvious but still blatant in the corporate case, where these entities and their officers give politicians and prosecutors millions and millions of dollars).
As George Carlin said, it's a big club. And you ain't in it.
“It is forbidden to kill; therefore all murderers are punished unless they kill in large numbers and to the sound of trumpets.” ― Voltaire
> We're so ready and willing to punish individuals for harm they do to other individuals, but if you get together in a group then suddenly you can plot the downfall of civilization and get a light fine and carry on.
Surely "plot the downfall of civilization" is an exaggeration. Knowing that certain actions have harmful consequences to the environment or the humanity, and nevertheless persisting in them, is what many individuals lawfully do without getting together.
Well said, and yes, this is practically what must happen.
The group of pretty much all humans is such a group because we all conspire to burn fossil fuels. Do you really think a global civilization death penalty is a good idea? That's throwing out the baby with the bathwater.
even back in the 80's
The 1980s is when the issue was finally brought into the political conversation. Shell internal documents go back as far as 1962: https://www.desmog.com/2023/03/31/lost-decade-how-shell-down...
As for science itself: the first scientific theories on greenhouse effects were published in the 1850s -- and the first climate model was published in 1896: https://daily.jstor.org/how-19th-century-scientists-predicte...
No entity can police itself. Not even the police.
Companies, non-profits, regulators, legislative branches of government, courts, presidential administrations, corporate bureaucrats, government bureaucrats, entrepreneurs, regular citizens. They cannot self-police.
That's the motivation for having a system of _checks and balances_[a]: We want power, including the power to police, to be distributed in a society.
---
[a] https://www.britannica.com/topic/checks-and-balances
go further. humanity can't police itself.
1970s
https://news.harvard.edu/gazette/story/2023/01/harvard-led-a...
Global warming was understood for almost a century by 1980
Your second point is right, but depressingly it was the 50s instead of the 80s.
Maybe there are more parallels to tobacco companies. An incredible amount of taxes, warnings, and rules forbidding kids from using it were the solutions to the first problem, and likely to this second one too.
To your point...
1. "The Tobacco Institute was founded in 1958 as a trade association by cigarette manufacturers, who funded it proportionally to each company's sales. It was initially to supplement the work of the Tobacco Industry Research Committee (TIRC), which later became the Council for Tobacco Research. The TIRC work had been limited to attacking scientific studies that put tobacco in a bad light, and the Tobacco Institute had a broader mission to put out good news about tobacco, especially economic news." [0]
2. "[Lewis Powell] worked for Hunton & Williams, a large law firm in Richmond, Virginia, focusing on corporate law and representing clients such as the Tobacco Institute. His 1971 Powell Memorandum became the blueprint for the rise of the American conservative movement and the formation of a network of influential right-wing think tanks and lobbying organizations, such as The Heritage Foundation and the American Legislative Exchange Council."
[0] https://en.wikipedia.org/wiki/Tobacco_Institute
[1] https://en.wikipedia.org/wiki/Lewis_F._Powell_Jr.
>Companies can't really be expected to police themselves.
Companies can't. Employees can. If someone's still working at Meta, they are ok with it.
The problem is that our current ideology basically assumes they will be - either by consumer pressure, or by competition. The fact that they don't police themselves is then held as proof that what they did is either wanted by consumers or is competitive.
Deniers should watch the movie "The White House Effect". It's a great documentary that shows where and how the strategies of the oil companies changed.
"Companies can't really be expected to police themselves."
so does government
No one expects government to police itself.
Government in functioning democratic societies is policed by voters, journalists, and many independent watchdog groups.
> so does government
The public is supposed to police the government, and replace it if it acts against the public interest.
But now that you mention it, perhaps we should also give everyone an equal vote on replacing the boards of too-big-to-fail corporations
True that... but it seems they are fostering an environment for sexual assault and even pedophilia. Channel 4 News did a piece on it.
> Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold."

I don’t get it. Is sex-trafficking-driven user growth really so significant for Meta that they would have such a policy?
The "catching" is probably some kind of automated detection scanner with an algo they don't fully trust to be accurate, so they have some number of "strikes" that will lead to a takedown.
There is always a complexity to this (and don't think I'm defending Meta, who are absolutely toxic).
Like Apple's "scanning for CSAM", and people said "Oh, there's a threshold so it won't false report, you have to have 25+ images (or whatever) before it will"... Like okay, avoid false reporting, but that policy is one messy story away from "Apple says it doesn't care about the first 24 CSAM images on your phone".
Of course it's not. We could speculate about how to square this with reason and Meta's denial; perhaps some flag associated with sex trafficking had to be hit 17 times, and some people thought the flag was associated with too many other things to lower the threshold. But the bottom line is that hostile characterizations of undisclosed documents aren't presumptively true.
We don’t know. But as you read from the article, Meta’s own employees were concerned about it (and many other things). For Zuck it was not a priority, as he said himself.
We can speculate. I think they just did not give a fuck. Usually limiting grooming and abuse of minors requires limiting those minors' access to various activities on the platform, which means those kids go somewhere else. Meta specifically wanted to promote its use among children below 13 to stimulate growth; that this often resulted in the platform becoming dangerous for minors was not seen as their problem.
If your company is driven by growth über alles à la venture capitalism, it will mean the growth goes before everything else. Including child safety.
Reading Careless People by Sarah Wynn Williams is eye opening here, and it's pretty close to exactly that.
> I think they just did not give a fuck.
It's that people like Zuck and Sandberg were so comfortably ensconced in their happy little worlds of private jets and Davos and so on that they really could not care less if it wasn't something that affected them (and really, the vast majority of issues facing Meta don't affect them, only their bonuses and compensation).
Your actions will lead to active harm? "But not to me, so, so what, if it helps our numbers".
One of the worst outcomes of the last 20 years is how Big Tech companies have successfully propagandized us that they're neutral arbiters of information, successfully blaming any issues with "The Algorithm" [tm].
Section 230 is meant to be a safe harbor for a platform not to be considered a publisher but where is the line between hosting content and choosing what third-party content people see? I would argue that if you have sufficient content, you could de facto publish any content you want by choosing what people see.
"The Algorithm" is not some magical black box. Everything it does is because some human tinkered with it to produce a certain result. The thumb is constantly being put on the scale to promote or downrank certain content. As we're seeing in recent years, this is done to cozy up to certain administrations.
The First Amendment really is a double-edged sword here because I think these companies absolutely encourage unhealthy behavior and destructive content to a wide range of people, including minors.
I can't but help consider the contrast with China who heavily regulate this sort of thing. Yes, China also suppresses any politically sensitive content but, I hate to break it to you, so does every US social media company.
Your solution to the government putting pressure on social media companies to censor is to give the government more power over them by removing section 230?
I'm saying social media companies are using Section 230 as a shield with the illusion of "neutrality" when they're anything but. And if they're taking a very non-neutral stance on content, which they are, they should be treated as a publisher not a platform.
I just hope that in 100 years time, people will be shocked at the prevalence of social media these past 2 decades
I predict that in much sooner than 100 years social media will be normalized and it will be common knowledge that moderating consumption is just as important as it is with video games, TV, alcohol, and every other chapter of societies going through growing pains of newly introduced forms of entertainment. If you look at some of the old moral panic content about violent video games or TV watching they feel a lot like the lamentations about social media today. Yet generations grew up handling them and society didn’t collapse. Each time there are calls that this time is different than the last.
In some spaces the moral panic has moved beyond social media and now it’s about short form video. Ironically you can find this panic spreading on social media.
We moderate consumption of alcohol, sugar, gambling, and tobacco with taxes and laws. We have regulations on what you can show on TV or films. It is complete misuse of the term to claim a law prohibiting sale of alcohol for minors is ‘moral panic’. It is not some individual decision and we need those regulations to have a functioning society.
Likewise in few generations we hopefully find a way to transfer the cost in medical bills of mental health caused by these companies to be paid by those companies in taxes, like we did with tobacco. At this point using these apps is hopefully seen to be as lame as smoking is today.
"Society didn't collapse" is a very very low bar.
> Yet generations grew up handling them and society didn’t collapse.
Society did not collapse. That does not mean those things did not have negative effects on society.
I don't think any of those items have had the significance and divisiveness of social media, or have been controlled by billionaires who have corrupted the election systems.
Social media seems far more dangerous and harder to control because of the power it grants its "friends". It'll be much harder to moderate than anything else you mentioned.
Not only social media but addiction to phones too. The impact on kids and teenagers is well documented by now.
Where are the parents when you need them?
In 100 years time they will be so fried by AI they won't be capable of being shocked. Everyone will just be swiping on generated content in those hover chairs from Wall E.
In Mad Men, we have these little moments of mind=blown by the constant sexism, racism, smoking, alcoholism, even attitudes towards littering. In 2040 someone's going to make a show about the 2010s-2020s and they'll have the same attitude towards social media addiction.
So does this apply to all social medias? (Threads, X, Bluesky, IG, etc) how come they didn’t have this evidence from their users well? Or maybe they didn’t bother to ask..
I suppose the harm from social networks is not as pronounced, since you generally interact only with people and content you opted to follow (e.g. on Mastodon).
The harm is from designing them to be addictive. Anything intentionally designed to be addictive is harmful. You’re basically hacking people’s brains by exploiting failure modes of the dopamine system.
If I remember correctly, other research has shown that it's not just the addictive piece. The social comparison piece is a big cause, especially for teenagers. This means Instagram, for example, which is highly visual and includes friends and friends-of-friends, would have a worse effect than, say, Reddit.
What about it being addictive by its nature? I find myself spending too much time on HN and there’s no algorithm driving content to me specifically.
I had a similar thought. I wonder if any social media on a similar scale as FB/IG would have the same problems and if it's just intrinsic to social media (which is really just a reflection of society where all these harms also exist)
I think group chats (per interest gathering places) without incentives for engagement are the most natural and least likely to cause harm due to the exposure alone.
I quit Facebook in the early to mid 2010s, well before social media became the ridiculously dystopian world it is today.
Completely coincidentally, I had quit smoking a few weeks before.
The feelings of loss, difficulty in sleeping, feeling that something was missing, and strong desire to get back to smoking/FB was almost exactly the same.
And once I got over the hump, the feelings of calm, relaxation, clarity of thought, etc were also similar.
It was then that I learnt, well before anyone really started talking about social media being harmful, that social media (or at least FB…I didn’t really get into any other social media until much later), was literally addictive and probably harmful.
https://www.pnas.org/doi/10.1073/pnas.1320040111
In 2014, Facebook published a paper showing how they can manipulate users’ emotions with their news feed algorithm.
Facebook ran this test on 700k users without consent.
I deactivated my account the day I read that paper and never looked back.
I never really liked fb or any other big application that much, so kicking them after 2016 was not that bad, but I used to be heavy user or forums and kicking some of them felt pretty similar to kicking tobacco back in the day.
We are super-social, insane monkey creatures that get high on social interaction, which in many ways is a good thing, but it can turn into toxic relationships towards family members or even towards a social media application. It is not very dissimilar to how slot machines or casinos lure you into addiction. They use exactly the same means, therefore they should be regulated like gambling.
That's interesting. When I quit Facebook after years of heavy use, I felt no better or worse.
The News Feed killed the positive social interaction on the site, so it had essentially become a (very bad) news aggregator for me.
I wouldn’t say I felt better.
Which is why I found it so comparable to quitting smoking.
A smoker doesn’t feel “better” after quitting smoking. Even over a decade after having quit I bet if I smoked a cigarette right now I would feel much nicer than I did right before I smoked it. However, I would notice physiological changes, like a faster heart rate, slight increase in jumpiness, getting upset sooner, etc.
Quitting FB was similar. I didn’t feel “better”, but several psycho-physiological aspects of my body just went down a notch.
I quit Twitter/X about a month ago. Had the exact same feeling.
"It can make quite a difference not just to you but to humanity: the sort of boss you choose, whose dreams you help come true." -Vonnegut
Meta delenda est.
Ads delenda est
"Priorities" quote: Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything. As an example, I doubt any of us would fault a novelist for not prioritizing saving children over writing books...
But I get what you are saying.
> You need to be careful with those arguments because you can fall into the trap of "think of the children" for everything
In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
Fair point, but the fuller context is absurd—the OP's rendering is correct in tone and emphasis.
We need to boycott Meta. Otherwise, social media will destroy our children.
> In a 2020 research project code-named “Project Mercury,” Meta (META.O) scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook and Instagram, according to Meta documents obtained via discovery.
Did they pick people at random and ask those people to stop for a while, or is this about people who choose to stop for their own reasons?
"Social media harm" sounds like one of these nebulous things which has no real definition
"Social media was a mistake, just like the internet" oh ok so we should just give up our gmails and reddits and everything because people insist on the widest possible swathe of categories
But actually when it comes to Metabook... I don't think Zuckerberg cares about anybody, and more to the point they refuse to give you a chronological service just for starters
People have died and their friends haven't known about it because the algorithm never showed them. People have noticed messages they've got from people trying to get in touch with them years later, because Zuck feels you should be using Facebook all the time, not email https://news.ycombinator.com/item?id=4151433
When your company is run by a megalomaniac this is what you get...
> To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.
I don't think it's even a stretch at this point to compare Meta to cigarette companies.
Complete with the very expensive defence lawyers, payoffs to government, and waxing poetic about the importance of the foundation of American democracy meaning they must have the freedom to make toxic, addictive products and market them to children, whilst they simultaneously claim of course they would never do that.
Journalists love that study but tend to ignore the likely causal reason for the improved outcomes, which is that users who were paid to stop using Facebook had much lower consumption of news, especially political news.
Teens don't care about politics for the most part and have absolutely horrible outcomes from social media
That's a pretty good reason to leave FB though.
What does political news have to do with loneliness and social comparison?
Cigarettes aren't the only source of smoke
At minimum, stricter and revised gambling laws should certainly apply to attention consumption wherever recommendation algorithms are used.
I remember reading case studies on the Tylenol recall.
Meta leadership has had opportunity after opportunity to do the hard thing, and be the force for good in a manner that they can live with.
Even more frustrating - the decision making shares give Zuckerberg control.
Big oil, big tobacco, big social, there seems to be a clear pattern of burying evidence of negative impacts of their products to satisfy some personal greed. These people are mentally ill and we need to help them.
Relevant. CEOs lie. Boldly.
https://www.youtube.com/watch?v=e_ZDQKq2F08
Social media has become a tool box for powerful people. They use it for public manipulation. Why would any powerful entity do anything against that?
These discussions never discuss the priors: is this harm on a different scale than what preceded it? Like, is social media worse than MTV or teen magazines?
It is a completely different scale.
I loved MTV as a kid but it was as different to social media as can be.
Half the time you would turn it on and not like the video playing then switch the channel. Even if you liked the video that was playing, half the time the next video was something you didn't like so you would switch the channel.
Now imagine if MTV had used machine learning to predict the next video to show me personally that would best cause me to not change the channel.
That is not even really a different scale but a different category.
Why does it matter? We can’t go back and retroactively punish MTV for its behavior decades ago. Not to mention we likely have a much better understanding of the impact of media on mental health now than we did then.
The best time to start doing the right thing is now. Unless the argument here is “since people got away with it before it’s not fair to punish people now.”
It matters because it points towards a common failure mode which we've seen repeatedly in the past. In the 1990s, people routinely published news articles like the OP (e.g. https://www.nytimes.com/1999/04/26/business/technology-digit...) about how researchers "knew" that violent video games were causing harm and the dastardly companies producing them ignored the evidence. In the 1980s, those same articles (https://www.nytimes.com/1983/07/31/arts/tv-view-the-networks...) were published about television: why won't the networks acknowledge the plain, obvious fact that showing violence on TV makes violence more acceptable in real life?
Is the evidence better this time, and the argument for corporate misconduct more ironclad? Maybe, I guess, but I'm skeptical.
What policy proposals would you have made with respect to MTV decades ago, and how would people at the time have reacted to them? MTV peaked (I think) before I was alive or at least old enough to have formative memories involving it, but people have been complaining about television being brain-rotting for many decades and I'm sure there was political pressure against MTV's programming on some grounds or another, by stodgy cultural conservatives who hated freedom of expression or challenges to their dogma. Were they correct? Would it have been good for the US federal government in the 80s and 90s to have actually imposed meaningful legal censorship on MTV for the benefit of the mental health of its youth audience?
Plus if we don’t do anything about it now, rohan_2 twenty years from now will use the same argument about whatever comes next!
I already knew Zuck was a piece of shit before reading Careless People, but holy shit.
> In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.”
> Zuckerberg also shot down or ignored requests by Clegg to better fund child safety work.
Meanwhile I'm sitting here deliberating for the 200th time to delete my Whatsapp account, meaning I won't take part in group chats with my friends anymore ... in the end I won't delete it and next up is deliberating for the 201st time to delete my Whatsapp account ...
I deleted it a couple of years ago, leaving all group chats. Haven't looked back since.
Everyone that is important to me (and not a slave, nor enslaver of their friends) is on Signal anyways
Of course they did. Anyone not blind to what is going on knows this, of course. It is merely a matter of proving it in front of the law. That's all this is about. It's no longer about the question whether or not they acted despicably.
I doubt serious consequences will follow this time, as no serious consequences have followed all the previous times Meta/Facebook has been found guilty of crimes. However, it can serve as one more event to point out to naive people who don't want to believe the IT person that FB/Meta is evil, because they don't want to give up some interaction or comfort they get from FB/Meta's apps and tools. I think it's a natural tendency most of us have: we use something, then we want extra-good proof when someone claims that thing is bad, because we don't want to change and stop using it. Plus FB/Meta will do anything they can to make people addicted to their platforms.
Someone should include the owl really meme into this process
Meta isn't entirely at fault here. Not that they didn't do wrong, but they behaved naturally, as expected of them.
You should absolutely expect companies to do whatever it takes to make the most profit, so long as they don't break the law. As a society, this failure should be put entirely at the feet of elected legislators who have been entrusted to pass laws to protect the public.
You shouldn't have to use less technology, quit social media, etc. These things keep happening again and again, but by the time there are laws to do something about it, it's too late. At first I thought this reminded me of the tobacco industry, but now that I think about it, it is more akin to alcohol. You can't prohibit it, and you can only restrict it so much because of how abundant its use has become. But still, lots of lawmaking can be done.
They should have been shut down and all the C-level execs arrested after Cambridge Analytica. The weapons-grade psyops they used to get Trump elected are crimes against humanity.
Who is surprised? Fuck zuckerberg
Unironically, Zuckerberg and the rest of the top brass of Meta should be in the Hague.
Social media is worse than using fentanyl
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." - Upton Sinclair
HN has seen this quote many times; tech workers willfully or naively ignore the harm their contributions cause as long as the life changing paychecks keep coming, letting them pretend that they are too far removed from the damage to be responsible.
Then comes the classic post “I’m leaving FAANG, so brave of me <quiet-part>funded entirely by the same extraction and harm I once insisted I didn’t see.</quiet-part>"
Meta is Zuck. Zuck is bad. Accept it everyone. Why people hate Elon Musk but not Zuck is beyond me. Zuck has done real harm as well, some of it worse than Musk.
Sad thing is that nothing will come of this. Meta will go scot-free.
I don’t understand why things like social media are meant to be regulated by the government.
Isn’t religion where we culturally put “not doing things that are bad for you”? And everyone is allowed to have a different version of that?
Maybe instead of regulating social media, we should be looking at where the teeth of religion went even in our separation of church and state society. If everyone thinks their kids shouldn’t do something, enforcing that sounds like exactly what purpose religion is practically useful for.
Well, the more scientific and pluralistic our society becomes, the more religion is necessarily sapped of its ability to compel behavior. If you lived in 13th-century France, the Catholic Church was a total cultural force and thus could regulate behavior, but the very act of writing freedom of religion into law communicates a certain idea about religion: it's so unimportant that you can have whatever form of religion you want.
In any case, one ought to distinguish between "You shouldn't do things which are bad for you," and "You shouldn't do things you know are bad for others." Especially, "Giant corporations with ambiguous structures of responsibility shouldn't be allowed to do things which are bad for others."
13th century France is irrelevant because it was, religiously speaking, a different style of society from America since its founding.
In the past, America, unlike 13th century France, allowed multiple parallel religions who each enforced their own moral codes on top of the secular law using behavioral manipulation tactics including shame.
This seems to have worked up until quite recently. In the early 1900s religion was still massively influential in America. Your view on what freedom of religion means practically is a retcon, because people took it seriously up until universal mass secular schooling and electronic media.
I’m not saying we should return to Jesus or whatever. I’m just saying that there is a receptiveness in the human brain to having behavior enforced in a completely non-violent way, where the behavior code is entered into voluntarily and can be abandoned non-violently as well, and I wonder if it makes sense to leverage that to solve problems we are currently reaching for the levers of violence to fix (in the sense that state enforcement is fundamentally rooted in violence, i.e. the threat of forced confinement at gunpoint).
On you vs others, I don’t have in mind some kind of religiously enforced corporate regulations, that’s obviously ridiculous. I’m referring to religiously enforced individual abstinence from social media, similar to religiously enforced abstinence from alcohol, or from casual sex, etc, all because they are considered harmful (by the people in the religion) to you, not (primarily) to others. If the abstinence was enforced socially the same way monogamy was in the early 1900s (yes, I know there were some exceptions, blah blah blah, it was basically ironclad relative to today), the social media companies would wither and blow away.
> If everyone thinks their kids shouldn’t do something, enforcing that sounds like exactly what purpose religion is practically useful for.
Alternatively, being raised well by their parents and the community around them.
Religion is not a needed component of that.
Without an explicit religion, the moral code of the group becomes some fuzzy, lowest common denominator Frankenstein.
Note that I’m not advocating for existing religions, just wondering about the use of religion as a tool (since it is baked into our legal code with an ability to use it for exactly this kind of thing).
The Spanish Inquisition has entered the chat.
Are you serious? People don't need religion to be moral. If what I see from religion these days is any indicator, I am extremely happy we kept our kids far, far, far away from it. From all of it. I will concede that not all religion is bad, but quite a lot of it is grift at best and cleverly disguised totalitarianism at worst. Many religious figures have absolutely no problem talking publicly about their "deity-given" right to dominate and control the lives of others for their own personal gain. I don't see how that fits inside any accepted definition of morality.
I am not referring to existing established religions, I am just talking about the construct of religion in general. We are allowed to invent new ones, you know.
There are certain statements that should make you wary of study findings.
"People who x reported y" is one of those phrases.
“people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,”
This is the same argument you see in cosmetic advertising as "Women who used this serum reported reduction in wrinkles"
If the study had evidence that x actually causes y, it would be irresponsible not to say that directly. Dropping to "people reported" seems like an admission that there was no measurable effect other than the influence of the researchers on the opinions of the subjects.
Mental state can be difficult in this respect because it is much harder to objectively measure internal states. The fact that it is harder to do doesn't grant validity to subjective answers, though.
I was once part of a study that did this. It was fascinating seeing something that appeared to have no effect being written up using both "people reported" and "significant" (meaning, not likely by chance, but implying a large effect to the casual reader).
Dude, who cares about study design and methodological validity! Let's just burn Meta down and put Zuck to jail! /s
What you are saying is valid criticism of the study, but people here have already made up their minds, so they downvote.
Another point to add is that 1 week is way too short - assuming there is an effect it might disappear or go in reverse after 1 month.
To all downvoters: if you think of yourself as smart rational people, please just use search/AI to see for yourself whether there is high quality evidence of _causal_ impact of social media on kids/mental health. The results are mixed at best.
Interestingly, the post is climbing its way back to zero.
I find the downvote without counterargument to be an odd response to a good-faith post. It would seem to strengthen the argument, since the message it sends is "I don't have a counter to this, but I don't like it and I don't like that others will see this point of view."
I have come to realise that I have a much higher threshold when it comes to upvoting, downvoting, or rating things. It seems like a lot of people freely upvote, like, heart, or downvote without a care. We live in a world where a 4.8-star rating (composed entirely of an aggregate of zero- and five-star ratings) is considered a concern. So I try not to be bothered by it, but I'm pretty sure that, subconsciously, a downvote hurts more than someone saying "I disagree".
The usual reminders apply: you can allege pretty much anything in such a brief, and "court filing" does not endow the argument with authority. And, the press corps is constrained for space, so their summary of a 230-page brief is necessarily lacking.
The converse story about the defendants' briefs would have the headline "Plaintiffs full of shit, US court filing alleges" but you wouldn't take Meta's defense at face value either, I assume.
https://www.lieffcabraser.com/pdf/2025-11-21-Brief-dckt-2480...
This is a weird comment to make, given that they're citing "Meta documents obtained via discovery."
Doesn't seem like you're making this comment in good faith, and/or you're very invested in Meta somehow.
Every time they contact me I tell Meta recruiters that I wouldn't stoop to work for a B-list chucklehead like Zuck, and that has been my policy for over 15 years, so no.
You're not speaking to a jury. Regular people just living their lives only have to use their best judgment and life experience to decide which side they think is right. We don't need to be coerced into neutrality just because neither side has presented hard proof.