Some of the most interesting excerpts (although it's worth reading in its entirety):
> My path in technology started at Facebook where I was the first Director of Monetization. [...] we sought to mine as much attention as humanly possible and turn it into historically unprecedented profits. We took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset.
> Tobacco companies [...] added sugar and menthol to cigarettes so you could hold the smoke in your lungs for longer periods. At Facebook, we added status updates, photo tagging, and likes, which made status and reputation primary and laid the groundwork for a teenage mental health crisis.
> Allowing for misinformation, conspiracy theories, and fake news to flourish were like Big Tobacco’s bronchodilators, which allowed the cigarette smoke to cover more surface area of the lungs.
> Tobacco companies added ammonia to cigarettes to increase the speed with which nicotine traveled to the brain. Extreme, incendiary content—think shocking images, graphic videos, and headlines that incite outrage—sowed tribalism and division. And this result has been unprecedented engagement -- and profits. Facebook’s ability to deliver this incendiary content to the right person, at the right time, in the exact right way... that is their ammonia.
> The algorithm maximizes your attention by hitting you repeatedly with content that triggers your strongest emotions — it aims to provoke, shock, and enrage. All the while, the technology is getting smarter and better at provoking a response from you. [...] This is not by accident. It’s an algorithmically optimized playbook to maximize user attention -- and profits.
> When it comes to misinformation, these companies hide behind the First Amendment and say they stand for free speech. At the same time, their algorithms continually choose whose voice is actually heard. In truth, it is not free speech they revere. Instead, Facebook and their cohorts worship at the altar of engagement and cast all other concerns aside, raising the voices of division, anger, hate and misinformation to drown out the voices of truth, justice, morality, and peace.
This bit of dialog should be the smoking gun, in my opinion. Big Tobacco got taken to the woodshed over this very thing: making the product as addictive as possible. This should be the club used to beat social media platforms over the head. And as with Big Tobacco, I'm sure it's not just one of them doing it; they all are.
One problem with this is that it's easy to conflate "addictive" with "people like to use it". Should television shows have been punished for cliffhangers because they hook people into watching the next episode? Breaking Bad had an interesting plot and character progression that made me want to keep watching - are they addicting me?
One person might say "We created all these statuses and features to be addictive" but it seems just as true to say "We created this stuff because people liked it and we are trying to make something people like."
Tobacco use as a percentage of the population has consistently declined by about .5% per year since data started to be gathered in the 1960s [0].
The Master Settlement Agreement in 1998 [1] had no statistical impact on the reduction in smoking - the rate of decline in smokers is the same now as it was in 1965.
The tobacco industry is more profitable than ever, and it is diversifying into nicotine delivery vehicles like vapes and gum [2]. So the underlying goal - increasing nicotine dependence across the global population and capturing the nicotine consumption market - is still going strong.
Much like the desire to be intoxicated, the desire to influence people will never go away. It's baked into our biology. Everyone in this thread interacting with each other is trying to influence everyone else. Facebook etc... is just doing successfully what Bernays dreamed of.
You can beat these platforms all you want - just like the tobacco industry was beaten. The problems will just resurface elsewhere in a different form.
Attack the root issue: ban advertising. Oh, and do it in a way that allows for "free speech." The challenge of the century.
I disagree somewhat. The addiction argument is merely an extreme.
Suppose someone offered to mow your lawn for free. Great offer, so you take them up on it. Turns out they're also using the access you give them to mine gold you didn't know was in your backyard. Whether or not you were addicted to their mowing services is irrelevant, they're stealing from you.
The problem with Facebook is that they're taking your attention and monetizing it. There's no serious argument against requiring them to disclose their actions - particularly who is buying your attention. It doesn't make any difference if you're addicted or a mere user of their product, they're still using your attention without telling you. They simply know more about science.
I think it’s more than just making it addictive. It’s making choices that make the product more harmful in order to make it more addictive. Even that bar, though, hasn’t triggered action against food producers for sugaring things up. I think there also must be a critical mass of cultural anger.
They really didn't; cigarettes were allowed to flourish for decades, and legal action was only taken once their popularity started to wane. Don't expect any meaningful action from your government.
Re: the comments about incendiary content and maximizing attention.
This is what every news outlet tries to do. The only difference is that FB is better at it. It reminds me of the controversy about targeting ads toward protected categories (age, gender). This is something all media buyers do as well, based on location, event type, and so on; FB just has a better way.
I'm not saying it's right, or necessarily wrong, just that this seems to be more about them being good at something than about them operating in moral territory that is different from any other business.
Many news outlets try to do this, but not all of them. There are some that strive to be fair and prioritize informing rather than inflaming their audience. The problem is that there is more money in the latter and many investors are greedy.
The guy is complaining about incendiary content whilst repeatedly comparing Facebook to "Big Tobacco"...I think there is a lot of bombastic nonsense being thrown about.
And I agree Facebook is not the first company in the world to maximise attention with this kind of content. Go back to when political pamphlets started appearing in the 16th century, it was mostly salacious bullshit about well-known public figures being possessed by the devil or drinking the blood of orphans.
I am not even sure what the problem is anymore, let alone what the solution is...but this is not going to stop with Facebook, this is just a reflection of human nature (and yes, everyone has complained about this kind of "content", it ignores the fact that most humans enjoy consuming it).
(I think the most problematic part of Facebook is just that so many people get their news from there and, like every human that has ever existed, they have been unable to deal with that responsibility in an even-handed way... I don't know though. They are basically a dead platform anyway; it is mainly used by old people to keep up to date with their grandchildren, afaik. I don't really know anyone who uses it, and I have never used it myself.)
I used to smoke, and I also have (very mild) asthma that was diagnosed prior to me starting to smoke. I always said that I could breathe better after a cigarette, and people would laugh at me. It never occurred to me that of the thousands of chemicals in a cigarette, some of them might be geared specifically to "help" you take in more smoke, and by extension, more air after.
> misinformation, conspiracy theories, and fake news
It's amazing to see people casually use these words as if they still have universally meaningful definitions. Not anymore. What one half of the country considers misinformation another half of the country considers the truth. Not to mention that social media operates internationally.
You can't have a meaningful discussion without admitting this and doing something to escape the semantic trap of perfect ambiguity. In other words, you first need to establish some sort of information-processing principle that is unambiguously defined and that everyone (or at least the vast majority of people) agrees with.
> In 2016, internal analysis at Facebook found 64% of all extremist group joins were due to their own recommendation tools. Yet repeated attempts to counteract this problem were ignored or shut down.
That's pretty damning. Facebook execs knew that extremist groups were using their platform and Facebook's own tooling catalyzed their growth, and yet they did nothing about it.
On the surface it sounds pretty outrageous. My question would be though, what should Facebook do instead?
A recommendation engine is just an algorithm to maximize an objective function, that objective being to match users with content they enjoy and engage with. The algorithm has no built-in notion of political extremism. It almost assuredly seems to be the case that people with radical opinions prefer to consume media that matches their views. If Bob is a Three Percenter, it's highly unlikely he'd prefer to read the latest center-left think piece from The Atlantic.
Unless you're willing to ban recommendation engines entirely, the only possible alternative I can see is for Facebook to intentionally tip the scales: extremist political opinions would have to be explicitly penalized in the objective function.
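To make the objective-function framing concrete, here is a toy sketch (all names and numbers are hypothetical, not Facebook's actual system): a recommender just ranks candidate items by a predicted-engagement score, and "tipping the scales" amounts to someone choosing to subtract an explicit penalty term.

```python
# Toy recommender sketch (hypothetical names/numbers, not any real system).
# Items are ranked by predicted engagement; a penalty term for flagged
# content is zero unless a human explicitly chooses to add one.

def score(affinity: float, intensity: float, extremism: float = 0.0,
          penalty_weight: float = 0.0) -> float:
    # Engagement grows with user affinity and emotional intensity of
    # the item; the objective itself knows nothing about content.
    return affinity * (1.0 + intensity) - penalty_weight * extremism

def rank_feed(items, penalty_weight=0.0, k=3):
    scored = sorted(
        items,
        key=lambda it: score(it["affinity"], it["intensity"],
                             it["extremism"], penalty_weight),
        reverse=True,
    )
    return [it["name"] for it in scored[:k]]

items = [
    {"name": "calm_news",    "affinity": 0.5, "intensity": 0.1, "extremism": 0.0},
    {"name": "outrage_clip", "affinity": 0.6, "intensity": 0.9, "extremism": 0.8},
    {"name": "cat_video",    "affinity": 0.5, "intensity": 0.3, "extremism": 0.0},
]
print(rank_feed(items))                      # pure engagement: outrage_clip first
print(rank_feed(items, penalty_weight=1.0))  # penalized: outrage_clip drops to last
```

The point of the sketch is that the "neutral" version and the "scales-tipped" version differ only in whether `penalty_weight` is nonzero, and that setting it is an editorial decision some human has to make.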
But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion. It means some humans at Facebook are intentionally deciding what people should and should not read, watch and listen to. Remember Facebook as an organization is not terribly representative of the country as a whole. Fewer than 5% of Facebook employees vote Republican, compared to 50% of the country. Virtually no one is over 50. Males are over-represented relative to females. Blacks and hispanics are heavily under-represented. And that doesn't even get into international markets, where the Facebook org is even less representative.
The cure sounds worse than the disease. I really think it's a bad idea to pressure Facebook into the game of explicitly picking political winners and losers. A social media platform powerful enough to give you everything you want is strong enough to destroy everything you value.
> My question would be though, what should Facebook do instead?
What should Big Tobacco do? If your business is a net negative for the world... get out of business. This is not hard. Corporations are not precious endangered species that we have some moral obligation to keep alive.
> A recommendation engine is just an algorithm to maximize an objective function.
A cigarette is just dried leaves wrapped in paper. If the use and production of that device harms the world, stop using and producing it.
> But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion.
Facebook is already a non-neutral platform. Humans at Facebook chose to use an algorithm to decide recommendations and chose which datasets to use to train that algorithm.
Playing Russian roulette and pointing the gun at someone else before pulling the trigger does not absolve you of responsibility. Sure, the revolver randomly decided which chamber to stop at, but you chose to play Russian roulette with it.
But doing so would hurt engagement, and hence the bottom line!
Facebook have, perhaps accidentally, created a monster of perverse incentives. Not sure what the solution is, besides regulation (which would be extremely difficult).
I would be very careful about considering any actions that Facebook, in particular, takes to be accidental. From the very beginning, intentional recklessness ("Move fast and break things") has been their credo.
When you're being reckless on purpose, none of the damage you create is accidental.
> Not sure what the solution is, besides regulation (which would be extremely difficult).
The solution is only difficult if you start from the premise that Facebook must continue to exist. If they cannot run a profitable business that isn’t harmful, that’s no one’s problem but theirs.
How did they define "extremist" in that analysis? And how many total people are we talking about?
Seems like the relevance of that line really depends on the answers to both. I.e., if "extremist" is super narrow, we may be talking about 64 joins out of 100 total. If "extremist" is overly broad, then maybe all the recommendations were for groups that a majority of the population would not find offensive.
Just saying the line by itself without context doesn't convey as much information as it first appears.
Sadly unsurprising. I have Facebook sockpuppet accounts that I use just for researching extremist types and I am constantly amazed at how much of the work FB does for me.
According to Tim Kendall's LinkedIn, he stopped working at Facebook in 2010. So it's interesting that he claims to have internal information from 2016.
Maybe, but it's pretty common for people to keep up with former coworkers and happenings in the company, especially since he was an early employee with (I'm assuming) a fair amount of equity.
I could see an employee giving him that data out of concern, but that's a fair point.
It is really not that surprising. People talk. When you work for a company like this you end up with a large portion of your circle of friends being current or former co-workers. I knew things there were not NDA-cleared for years after I left various startups because people chat and if you know the right questions to ask or are reasonably good at appearing to know just a bit more than you happen to know then people will often fill in the blanks for you. The well really only runs dry when most of that cohort have also left the company.
Why would anyone think capitalists can actually practice morality? That has never been done in the hundreds of years of capitalism's history.
And capitalists can be quite moral personally. Throughout history, the rich and powerful have always had a positive image. But their enterprises have always required regulation.
I don't like Facebook, but why would they configure a recommendation engine to stop suggesting extremist groups to people with extremist affinities? There are no humans behind those wheels.
What else besides outright banning should they have done? (I think banning extremists wouldn't have impacted their revenues much, so they should have, but that's another debate.)
> I don't like Facebook, but why would they configure a recommendation engine to stop suggesting extremist groups to people with extremist affinities?
That's almost certainly not what they did. When you see someone ranting about the 5G turning the coronavirus communist or whatever, that person didn't generally come up with that idea themselves; they were exposed to it online, either via friends, or via this.
Their algorithm is likely pushing extremist nonsense on people whom it determines are vulnerable to believing it, which isn't the same as having an affinity for it. Obviously this isn't what they set out to do; they presumably set out to increase engagement, and if this happens to increase engagement, well...
There are very similar issues with YouTube's "Rabbit Hole of Extremism" [1]. YT's algorithm has noticed that "mild" extremist content gets views, and by feeding you progressively more extreme content in series, people get sucked in. I expect this wasn't even planned; it was emergent.
It got my father. Living in a rural area, cable/satellite TV became too expensive and low quality, so us kids paid for an internet connection for him. With only YouTube to inform him, he went from a generally relaxed redneck to talking about how the "black community is a lost cause" and how "we need to glass (nuke) the middle east and take their oil" in a very short time.
We got Netflix for him and he's calmed back down some. But, definitely not back to where he was before.
I've seen this too and it's really worrying. I don't understand what can be done about that. A legal solution seems difficult and will probably have some negative side effects. I think people just need to slowly learn how dangerous it is to fall into these traps.
There is no doubt that there's a lot wrong with social media, such as spreading fake information, privacy, etc...
Maybe they should have some kind of regulation specific to them.
But I fail to see how making your product as addictive as you can, without breaking laws, is terrible. I mean, no one is forced to create a FB/TW/IG profile, as far as I know.
I'm not defending social networks, or saying that a case against them should not be made. I'm just saying that I can't get behind the "your product is too addictive" argument.
Just my two cents. Maybe I'm missing something right now that will force me to change my mind later.
>But I fail to see how making your product as addictive as you can, without breaking laws, is terrible
This is an interesting take. Usually I suspect people would say something more like "Making your product as addictive as possible is terrible, but definitely not illegal. And, it's difficult to design laws against something that is addictive and destructive."
I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible. Again, I'm not sure that regulation can solve this problem in a constructive way (and I would love to be proven wrong here), but I fail to see how this isn't bad.
No one is forced to become obese, however it's definitely bad to have a nation full of obese people.
>I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible
Why? Honest question. For instance, you mentioned obesity. Should a restaurant that makes the most delicious and sugar loaded food be forbidden to do so because its customers can't stop eating it and are getting obese?
IMO obesity is an individual problem. I'm all for helping obese people who want to change, don't get me wrong. I'm just saying that they got themselves into that situation. The restaurant should not be punished for its clients' lack of control. They should, however, be forced to let clients know exactly what they're eating; after that, it's not their fault.
Near my office in SF there is a guy who sits on the street corner with his pants rolled up so you can see that his calves were pretty much just two big, open, leaking sores as a side effect of so many injections. I bought him some bandages but he wouldn't use them until the end of the day because showing them off got him more sympathy money that he needed in order to purchase more injections. The motivation center of his brain has been completely hijacked by a product. Suffering to death is no longer a concern for him. Only the product matters.
I don't know what physical processes are behind a Facebook addiction, but I doubt it's as serious a condition as one caused by a chemically addictive product. I would equate it more with gambling addiction. Not to say that it's not a problem, but I have a hard time equating the two. That might just be my naiveté, though. I've been lucky enough not to encounter either type of addiction.
To me, it’s not just that it’s addictive that is the problem; it’s that the addiction is accelerating the spread of misinformation and allowing national/global hate groups to not only exist but flourish.
Many have suspected it for a long while but this testimony proves that Facebook profits from hate groups and the spread of misinformation. That’s not hyperbole, that’s now fact.
It has also accelerated the pace at which good information can spread. What happened to the idea of free-speech and countering bad-ideas with better ones?
Perhaps the real acceleration is in the ballooning expansion of who we consider a "hate-group" -- which seems to have no fixed definition and is thrown around rather cavalierly.
> I fail to see how making your product as addictive as you can, without breaking laws, is terrible.
I think it's important to be clear about "addictive" because people use it in different ways. If by "addictive" you mean "really compelling" then, sure, it may not be intrinsically terrible. A product that, for example, makes it really compelling for users to improve their physical health or fight climate change is probably not terrible.
But the clinical definition of "addiction" - which is why the word carries such a strong negative connotation - is something so compelling that your need to use it causes significant disruption to your quality of life or that of those around you.
Read the testimony again. The argument here is not just that Facebook is super engaging. It's that Facebook use harms its users and the world at large and its level of engagement magnifies that.
For sure. But I mentioned the "too addictive" argument specifically. I understand and agree that facebook does more harm than good, and that is wrong and must be addressed. I just don't understand this addiction angle. Making your product as addictive as you can, without breaking laws, is not wrong IMO.
But I think I see where you're coming from. They're getting people addicted to something harmful; did I understand you correctly?
It's bad if you accept that people deserve agency: the ability to freely choose how they act.
The primary purpose of making an addictive product is to remove peoples' agency by hijacking known deficiencies in our minds/bodies. It's a form of coercion, because your goal is to prevent people from being able to choose whether they use your product or not.
But they can't do it without said people's help, correct?
If they aim to remove agency, it's because you have it in the first place, meaning you can stop it from happening with proper information.
I understand that some people might not understand they are being targeted and should be clearly told what could happen to them. But the majority of people must know FB is addictive.
After that, I can't see how people still getting addicted is the company's fault.
Wow, if this were not hosted on house.gov I wouldn't believe it was real.
Edit: A side note, Kendall's current venture is about "Break[ing] your screen and social media addiction". You're free to make any assumptions regarding that in connection to this hearing.
Just because it's on house.gov doesn't mean it's not fake. The house has a democrat majority, and they're all corrupt lying communists who are out to destroy Trump at all & any costs, so you clearly can't believe this just because it's on house.gov /s
To be clear, that was sarcasm. But sarcasm aside, this is exactly the stance that several members of my family would take if I shared this and asserted that it's not "fake news" because it's on house.gov. The problem is that we're so far through the looking glass that legitimate attempts to pull back the curtain face a huge uphill battle because of the very system that they're trying to expose.
>My path in technology started at Facebook where I was the first Director of Monetization.
While the title is a bit creepy, I can certainly see how it could take a while to start second-guessing the work when it's your first tech job. Making a platform more interesting, useful, and engaging would certainly be an interesting challenge, particularly at first.
I'm sure getting people into the ovens at Auschwitz presented a fascinating logistical challenge as well. They probably got some really good enterprise architects on that effort. Likely a great career start for many.
We know a few from a different company. They’re outraged by what they saw on The Social Dilemma, but they also have paid off mansions in extremely high cost of living areas thanks to the exact same companies they’re now troubled by.
Wonder what it's like to be a Facebook employee on HackerNews and see this
Anyone who joined, or stayed at Facebook, in the last say 5 years, 100% knew about it and was OK with it. They’re probably laughing at everyone else taking so long to figure it out!
I could almost see it being like Grisham's The Firm. As a new employee, you're all bright-eyed and bushy-tailed, thinking about what you're going to buy and where you're going to live with all the bags of cash you're earning. As you continue to work there, the bloom starts to come off the rose. Next thing you know, your conscience starts to itch. You're either too dependent on that salary to voluntarily leave, or you've just decided "it's not that bad" and stick your head back in the sand while cashing that paycheck.
Some part of the company is surely not okay with it. I assume many employees have been wrestling with their conscience for months or years.
I had my own experience, once, leaving a company due to ethical concerns: it took me a year and a half to finally follow through and quit. I had coworkers who felt the same who stayed on for years.
No doubt the thought that occurs to many FB employees when they read these articles is "I need to gather my resolve and get the hell out of here!"
That's not really true. Working at a big company like that usually leaves you with very little visibility into anything except your small domain and whatever news the company publicly releases. Sure, you can watch the news, if you aren't working yourself to death. But if someone is paying you bags of money every week, it skews your perception just a bit and changes what you want to hear.
This thread* is a good glimpse; they apparently have teams at FB dedicated to creating propaganda touting the great cause FB is working toward. Those who remain working there have seemingly bought in.
Building something like this intentionally not only contributed to societal breakdown but acutely impacted the mental health of millions of users. I wouldn’t be surprised if there is a link to the commensurate increase in suicide we’ve also seen.
What do you think about a dystopia were GPT-3 / GPT-4 bots post comments to Hacker News including references and links without being distinguished from real humans?
If indistinguishable, would that be a dystopia or a utopia[1]? At least this is Hacker News, not /r/totallynotrobots. Maybe if we gaze long enough into a procedural abyss, the abyss will gaze back?
The key to making a bot indistinguishable is to mix patterns in it along with machine generation.
For example, simply providing an alternative to a paywalled article is a recurring task people do here. It's easy to automate, doesn't raise eyebrows, and improves someone's perception of the profile if they were to do a quick check. Another one is providing alternatives to products; it's easy to search through Product Hunt or other sites for results. Or congratulating someone on their product launch/Show HN, which again doesn't require much contextual understanding.
Big tech, philosophical, news media, etc. threads are predictable. T5 and ELECTRA models from Google are good at filling in the blanks (in contrast to GPT, which generates text in a forward fashion), so they can be used to make unique sentences following a pattern. They are more meaningful at the cost of less randomness.
Many posts on HN appear first on Lobsters, small subreddits, GitHub trending, and popular Twitter accounts. You could simply fetch the links at a random interval within a timezone and post unique links here.
You can target a demographic that is least likely to suspect it's a bot. HN is siloed into many small parts despite having the same front page. You can predict which users are likely to post in certain threads and what their age demographic is (e.g., anything Emacs). The HN database is available on BigQuery.
You can train a response to a suspicious comment calling them a bot: "That hurts. I am not a native English speaker. Sorry if I offended you." Or: "Please check the guidelines..."
There are many techniques to make a sophisticated bot. ;)
>"In 2016, internal analysis at Facebook found 64% of all extremist group joins were due to their own recommendation tools. Yet repeated attempts to counteract this problem were ignored or shut down."
But then, plenty of people still work for Big Tobacco. Many do so voluntarily, not just because it's their only viable option. The trouble is that it comes down, in large part, to ethics and morals. And we don't all share the same moral compass.
The pay sounds really good. Not gonna lie, if I was offered a job there with 400k comp it would be hard to turn down, depending on what I would be working on
Anybody who continues to use Facebook is complicit in the downfall of democracy. That goes for Twitter and 24/7 news channels as well. Get off that crap; it’s rotting our minds.
On one hand, I agree that using Facebook and other social media can be destructive for most people. On the other hand, I see that democracy and many other institutions of our world have fallen behind the times. We can't stop that the world is becoming more connected. We can't stop that there are means to broadcast all kinds of information to millions and billions of people. Perhaps it's time we try to figure out how to upgrade existing systems or draft new ones.
I stopped using Facebook because I agree that it's a net negative for the world.
For what it's worth, I don't have that experience with Twitter. There, I seem to have enough control over who I follow and whose tweets Twitter shows me that my Twitter use is generally beneficial and healthy to me. Despite trying very hard to do so, I was never able to tune my Facebook feed to be healthy in that way.
I understand and fully agree with the damage that social media contributes to radicalization and extremism.
However, I think the safeties they place on this are going to contribute to regulatory capture. Facebook has already benefited from policies as is; changes that put a substantial cost on new media companies will just further aid Facebook's "clone, advertise and usurp" behaviours.
We need to empower people with these algorithms. Imagine the power of a TikTok/YouTube/etc. algorithm where you get to make the choices. I want: fewer cartoons, more long-form econ, more diverse recommendations, no repetition, etc., refined over decades of use and tweaks.
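The user-steered recommender described above could look something like this minimal sketch (all names, categories, and numbers are hypothetical): the platform still predicts engagement, but the final ranking multiplies that prediction by per-category weights the user sets themselves.

```python
# Sketch of a user-steerable recommender (all names/numbers hypothetical).
# The user's own per-category weights rescale the platform's predicted
# engagement before ranking; a weight of 0.0 mutes a category entirely.

def user_score(engagement: float, category: str, prefs: dict) -> float:
    # Categories the user hasn't configured keep a neutral weight of 1.0.
    return engagement * prefs.get(category, 1.0)

def rank(items, prefs, k=3):
    scored = sorted(
        items,
        key=lambda it: user_score(it["engagement"], it["category"], prefs),
        reverse=True,
    )
    return [it["title"] for it in scored[:k]]

items = [
    {"title": "cartoon_short", "category": "cartoons",       "engagement": 0.9},
    {"title": "econ_lecture",  "category": "long_form_econ", "engagement": 0.4},
    {"title": "repeat_meme",   "category": "repetition",     "engagement": 0.8},
]
# "fewer cartoons, more long-form econ, no repetition":
prefs = {"cartoons": 0.2, "long_form_econ": 3.0, "repetition": 0.0}
print(rank(items, prefs, k=2))  # the low-engagement lecture now outranks the meme
```

The design point is that the objective stays an objective, but its weights become user-owned configuration rather than a platform-owned secret.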
The problem I have with this is that while it is true that Facebook is optimizing for something harmful with their business model, these testimonies and congressional hearings are not aimed at solving the problem. They are aimed at scoring political points and huffing and puffing.
So it is a situation where an organization with shitty incentives that doesn't have good-faith alignment with society at large is regulated by another organization with shitty incentives that doesn't have good-faith alignment with society at large.
The whole process is completely illegitimate and basically a TV show. I don't have a solution for this; I just know that this is not it.
I view that the core issue is that democracy and many modern institutions as they exist today are hopelessly ill-equipped to deal with the reality where billions of people are connected. We live in a world of interconnected differences. Existing social and cultural norms basically compel us to synchronize everyone in the network to the same instance of truth, and this simply doesn't work anymore.
We are already seeing huge increases in support for things like systems thinking, ecological worldview, decentralization, holism, etc. The future is pluralistic and that's okay.
The biggest difference is that nicotine is administered via cigarettes, while social media doesn't administer anything; the addiction is self-generated.
First, I think the most-missed story about the 2016 election is the role that Groups played in Bernie Sanders's ascendance. The volume of meme content and direct voter contact that I received from Bernie volunteers and passive supporters from just a few major pro-Bernie groups alone (ones that I was not even part of) exceeds the volume that I have received from all other campaigns online to date.
Second, in the early days of Groups, FB decided I was a very far right-wing activist and recommended that I join a series of groups agitating for a US military coup. I still have screenshots of it. It eventually got better at guessing my tastes.
Yeah for sure. I found the post I made about it (01/24/2014) but I am going to have to dig through my messages/HD to find the screenshot. I'll post when I'm done w/ work
I hate fb as much as the next guy, however it seems like these "mea culpa" admissions might be motivated by raising the profile of the person making them. It feels like these public acts of self-flagellation are meant to shield them from scrutiny for doing these things in the first place. What brought about this change of heart? Why did you work there in the first place?
Unless you think he's lying, who fucking cares what his motivation for whistleblowing is? Many whistleblowers want to get back at companies that screwed them, or want to get interviewed on cable news, or want a book deal, or whatever. Who fucking cares if they're not ideologically pure altruistic saints? What matters is the truth, not the motivations of those telling the truth.
(Do people with exclusively pure selfless motivations even exist? Even people who donate to charities anonymously are plausibly motivated at least in part by the warm tingly feeling they enjoy when giving charitably.)
These things aren't mutually exclusive, a testimony in Congress is inherently a public memorialization.
Tim Kendall has been an outspoken critic for a long time and was also recently a central figure in the movie "The Social Dilemma", which is about the same thing and will lead to more speaking engagements on the topic.
That doesn't dilute the message. If you think it does, what does a better arbiter of this aspect of reality look like? Who would that person be and what would their credentials be?
What do you mean they have been an outspoken critic for a long time?
Honest question: Were they outspoken when they were a director at FB, or when they were president at pinterest? Or did it start two years ago when they became CEO of Moment selling an app to cut down on screen time?
In my mind, an ideal arbiter isn't also selling a product to fix the problem they are raising awareness about.
This doesn't mean what they are saying isn't true, or that they didn't have a real change of heart, but it is certainly a conflict of interest.
What evidence do you have to back up these claims?
“It seems like these mea culpa admissions might be motivated...”
It seems like you're not willing to state outright that there is another agenda, but you want to attack people speaking up anyway: because they were part of the problem or contributed to it, anything they say now doesn't matter.
Some of the most interesting excerpts (although it's worth reading in its entirety):
> My path in technology started at Facebook where I was the first Director of Monetization. [...] we sought to mine as much attention as humanly possible and turn into historically unprecedented profits. We took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset.
> Tobacco companies [...] added sugar and menthol to cigarettes so you could hold the smoke in your lungs for longer periods. At Facebook, we added status updates, photo tagging, and likes, which made status and reputation primary and laid the groundwork for a teenage mental health crisis.
> Allowing for misinformation, conspiracy theories, and fake news to flourish were like Big Tobacco’s bronchodilators, which allowed the cigarette smoke to cover more surface area of the lungs.
> Tobacco companies added ammonia to cigarettes to increase the speed with which nicotine traveled to the brain. Extreme, incendiary content—think shocking images, graphic videos, and headlines that incite outrage—sowed tribalism and division. And this result has been unprecedented engagement -- and profits. Facebook’s ability to deliver this incendiary content to the right person, at the right time, in the exact right way... that is their ammonia.
> The algorithm maximizes your attention by hitting you repeatedly with content that triggers your strongest emotions — it aims to provoke, shock, and enrage. All the while, the technology is getting smarter and better at provoking a response from you. [...] This is not by accident. It’s an algorithmically optimized playbook to maximize user attention -- and profits.
> When it comes to misinformation, these companies hide behind the First Amendment and say they stand for free speech. At the same time, their algorithms continually choose whose voice is actually heard. In truth, it is not free speech they revere. Instead, Facebook and their cohorts worship at the altar of engagement and cast all other concerns aside, raising the voices of division, anger, hate and misinformation to drown out the voices of truth, justice, morality, and peace.
This bit of testimony should be the smoking gun, in my opinion. Big Tobacco got taken to the woodshed over this very thing: making the product as addictive as possible. This should be the club used to beat social media platforms over the head. As with Big Tobacco, I'm sure it rings true across social platforms as well: it's not just one of them doing it, they all are.
One problem with this is that it's easy to conflate "addictive" with "people like to use it". Should television shows be punished for cliffhangers because they hook people into seeing the next episode? Breaking Bad had an interesting plot and character progression that made me want to keep watching; was it addicting me?
One person might say "We created all these statuses and features to be addictive" but it seems just as true to say "We created this stuff because people liked it and we are trying to make something people like."
Tobacco use as a percentage of the population has consistently declined by roughly 0.5% per year since data started to be gathered in the 1960s [0].
The Master Settlement Agreement in 1998 [1] had no statistical impact on the rate reduction of smoking - the rate of decline of smokers is the same now as it was in 1965.
The tobacco industry is more profitable than ever, and it is diversifying into nicotine delivery vehicles like vapes and gum [2]. So the underlying goal (increase nicotine dependence across the global population and capture the nicotine consumption market) is still going strong.
Much like the desire to be intoxicated, the desire to influence people will never go away. It's baked into our biology. Everyone in this thread interacting with each other is trying to influence everyone else. Facebook etc... is just doing successfully what Bernays dreamed of.
You can beat these platforms all you want - just like the tobacco industry was beat. The problems will just surface elsewhere in a different form.
Attack the root issue - ban advertising. oh and do it in a way that allows for "free speech." The challenge of the century.
[0] https://www.lung.org/research/trends-in-lung-disease/tobacco...
[1] https://en.wikipedia.org/wiki/Tobacco_Master_Settlement_Agre...
[2] https://www.wsj.com/articles/u-s-tobacco-industry-rebounds-f...
I disagree somewhat. The addiction argument is merely an extreme.
Suppose someone offered to mow your lawn for free. Great offer, so you take them up on it. Turns out they're also using the access you give them to mine gold you didn't know was in your backyard. Whether or not you were addicted to their mowing services is irrelevant, they're stealing from you.
The problem with Facebook is that they're taking your attention and monetizing it. There's no serious argument against requiring them to disclose their actions, particularly who is buying your attention. It doesn't make any difference whether you're addicted or a mere user of their product; they're still using your attention without telling you. They simply know more about the science of attention than you do.
I think it’s more than just making it addictive. It’s making choices that make the product more harmful in order to make it more addictive. Even that bar, though, hasn’t triggered action against food producers for sugaring things up. I think there also must be a critical mass of cultural anger.
They really didn't; cigarettes were allowed to flourish for decades and legal action was only taken once their popularity started to wane. Don't expect any meaningful action from your govt
> Big Tobacco got taken to the woodshed
And then nothing really happened in that woodshed other than some lousy warnings on a toxic product for the consumer and its surroundings.
Re: the comments about incendiary content and maximizing attention.
This is what every news outlet tries to do; the only difference is that FB is better at it. It reminds me of the controversy about targeting ads toward protected categories (age, gender). This is something all media buyers do as well, based on location, event type, etc.; FB just has a better way.
I'm not saying it's right, or necessarily wrong, just that this seems to be more about them being good at something than about them operating in moral territory different from that of any other business.
Many news outlets try to do this, but not all of them. There are some that strive to be fair and prioritize informing rather than inflaming their audience. The problem is that there is more money in the latter and many investors are greedy.
It is new to run targeted ads at protected classes. And it is new moral territory.
Example: the government of Iran used pizza ads targeted toward gay people to track down their identities. Is that still the same as other media?
The guy is complaining about incendiary content whilst repeatedly comparing Facebook to "Big Tobacco"...I think there is a lot of bombastic nonsense being thrown about.
And I agree Facebook is not the first company in the world to maximise attention with this kind of content. Go back to when political pamphlets started appearing in the 16th century, it was mostly salacious bullshit about well-known public figures being possessed by the devil or drinking the blood of orphans.
I am not even sure what the problem is anymore, let alone what the solution is...but this is not going to stop with Facebook, this is just a reflection of human nature (and yes, everyone has complained about this kind of "content", it ignores the fact that most humans enjoy consuming it).
(I think the most problematic part of Facebook is just that so many people get their news from there and, like every human that has ever existed, they have been unable to deal with that responsibility in an even-handed way...I don't know though. They are basically a dead platform anyway, it is mainly used by old people to keep up to-date with their grandchildren afaik...I don't really know anyone who uses it, and I have never used it myself).
> like Big Tobacco's bronchodilators
I used to smoke, and I also have (very mild) asthma that was diagnosed prior to me starting to smoke. I always said that I could breathe better after a cigarette, and people would laugh at me. It never occurred to me that of the thousands of chemicals in a cigarette, some of them might be geared specifically to "help" you take in more smoke, and by extension, more air afterward.
There were actual "medical cigarettes" sold in the past.
https://www.pharmacytimes.com/contributor/timothy-aungst-pha...
> misinformation, conspiracy theories, and fake news
It's amazing to see people casually use these words as if they still have universally meaningful definitions. Not anymore. What one half of the country considers misinformation another half of the country considers the truth. Not to mention that social media operates internationally.
You can't have a meaningful discussion without admitting this and doing something to escape the semantic trap of perfect ambiguity. In other words, you first need to establish some sort of information processing principle that is unambiguously defined and that everyone (or at least the vast majority of people) agrees with.
You don't need to fund wars or drugs, just use Facebook PS: The devil };)
I learned more about Big Tobacco here than fb.
Sure after fb made this guy a multimillionaire he grows a heart. Don't work for these dweebs in the first place next time...
> In 2016, internal analysis at Facebook found 64% of all extremist group joins were due to their own recommendation tools. Yet repeated attempts to counteract this problem were ignored or shut down.
That's pretty damning. Facebook execs knew that extremist groups were using their platform and Facebook's own tooling catalyzed their growth, and yet they did nothing about it.
On the surface it sounds pretty outrageous. My question would be though, what should Facebook do instead?
A recommendation engine is just an algorithm that maximizes an objective function: matching users with content that they enjoy and engage with. The algorithm has no built-in notion of political extremism. It is almost assuredly the case that people with radical opinions prefer to consume media that matches their views. If Bob is a Three Percenter, it's highly unlikely he'd prefer to read the latest center-left think piece from The Atlantic.
Unless you're willing to ban recommendation engines entirely, the only possible alternative I can see is for Facebook to intentionally tip the scales: extremist political opinions would have to be explicitly penalized in the objective function.
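To make the "tipping the scales" idea concrete: if the engine picks whatever maximizes a score, the only lever is adding a penalty term to that score, and choosing the penalty weight is itself a human, editorial decision. A toy sketch (all signals and weights here are invented for illustration):

```python
# Toy recommender objective: predicted engagement minus an explicit
# penalty term. All candidate names, signals, and weights are invented.
def objective(predicted_engagement, extremism_score, penalty_weight=0.0):
    """With penalty_weight == 0 this is pure engagement maximization;
    any positive weight is a deliberate editorial choice baked into the math."""
    return predicted_engagement - penalty_weight * extremism_score

candidates = {
    "local gardening group": {"engagement": 0.4, "extremism": 0.0},
    "outrage-bait group":    {"engagement": 0.9, "extremism": 0.8},
}

def best(candidates, penalty_weight):
    # Pick the candidate with the highest (penalized) objective value.
    return max(candidates,
               key=lambda name: objective(candidates[name]["engagement"],
                                          candidates[name]["extremism"],
                                          penalty_weight))

print(best(candidates, penalty_weight=0.0))  # pure engagement wins
print(best(candidates, penalty_weight=1.0))  # the penalty flips the pick
```

Note that someone still has to decide what counts toward `extremism_score` and how large `penalty_weight` should be, which is exactly the "arbiter of political opinion" problem raised above.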
But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion. It means some humans at Facebook are intentionally deciding what people should and should not read, watch and listen to. Remember Facebook as an organization is not terribly representative of the country as a whole. Fewer than 5% of Facebook employees vote Republican, compared to 50% of the country. Virtually no one is over 50. Males are over-represented relative to females. Blacks and hispanics are heavily under-represented. And that doesn't even get into international markets, where the Facebook org is even less representative.
The cure sounds worse than the disease. I really think it's a bad idea to pressure Facebook into the game of explicitly picking political winners and losers. A social media platform powerful enough to give you everything you want is strong enough to destroy everything you value.
> My question would be though, what should Facebook do instead?
What should Big Tobacco do? If your business is a net negative for the world... get out of business. This is not hard. Corporations are not precious endangered species that we have some moral obligation to keep alive.
> A recommendation engine is just an algorithm to maximize an objective function.
A cigarette is just dried leaves wrapped in paper. If the use and production of that device harms the world, stop using and producing it.
> But now you've turned Facebook from a neutral platform into an explicit arbiter of political opinion.
Facebook is already a non-neutral platform. Humans at Facebook chose to use an algorithm to decide recommendations and chose which datasets to use to train that algorithm.
Playing Russian roulette and pointing the gun at someone else before pulling the trigger does not absolve you of responsibility. Sure, the revolver randomly decided which chamber to stop at, but you chose to play Russian roulette with it.
> what should Facebook do instead?
The same thing social networks did before.
If I subscribed to 1000 people, show me whatever the hell they wrote, all of it, in chronological order.
Don't show me what my friends wrote on other pages; if they think it's important or interesting, they will link or share it manually.
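The "old" behavior described above is trivial to implement; a minimal sketch, assuming each post carries an author and a timestamp (field names are hypothetical):

```python
# Minimal chronological feed: every post from every subscription,
# newest first, with no ranking model involved.
from datetime import datetime

posts = [
    {"author": "alice", "text": "post A", "ts": datetime(2020, 9, 24, 9, 0)},
    {"author": "bob",   "text": "post B", "ts": datetime(2020, 9, 24, 12, 30)},
    {"author": "alice", "text": "post C", "ts": datetime(2020, 9, 23, 18, 5)},
]
subscriptions = {"alice", "bob"}

feed = sorted(
    (p for p in posts if p["author"] in subscriptions),
    key=lambda p: p["ts"],
    reverse=True,  # newest first; oldest-first is just reverse=False
)
for p in feed:
    print(p["ts"], p["author"], p["text"])
```

There is no objective function to game here, which is the commenter's point.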
We probably should ban recommendation engines for content.
Well, based on the original quote, it's not that they passively did nothing about it.
They consciously and proactively blocked attempts to fix it.
> attempts to counteract this problem were ignored or shut down.
I think you misinterpreted this?
But doing so would hurt engagement, and hence the bottom line!
Facebook have, perhaps accidentally, created a monster of perverse incentives. Not sure what the solution is, besides regulation (which would be extremely difficult).
I would be very careful about considering any actions that Facebook, in particular, takes to be accidental. From the very beginning, intentional recklessness ("Move fast and break things") has been their credo.
When you're being reckless on purpose, none of the damage you create is accidental.
> Not sure what the solution is, besides regulation (which would be extremely difficult).
The solution is only difficult if you start from the premise that Facebook must continue to exist. If they cannot run a profitable business that isn't harmful, that's no one's problem but theirs.
How did they define "extremist" in that analysis? And how many total people are we talking about?
Seems like the relevance of that line really depends on the answers to both. I.e., if "extremist" is super narrow, we may be talking about 64 joins out of 100 total. If "extremist" is overly broad, then maybe all the recommendations were for groups that a majority of the population would not find offensive.
Just saying the line by itself without context doesn't convey as much information as it first appears.
Sadly unsurprising. I have Facebook sockpuppet accounts that I use just for researching extremist types and I am constantly amazed at how much of the work FB does for me.
According to Tim Kendall's LinkedIn, he stopped working at Facebook in 2010. So it's interesting that he claims to have internal information from 2016.
Maybe, but it's pretty common for people to keep up with former coworkers and happenings in the company, especially since he was an early employee with (I'm assuming) a fair amount of equity.
I could see an employee giving him that data out of concern, but that's a fair point.
It is really not that surprising. People talk. When you work for a company like this you end up with a large portion of your circle of friends being current or former co-workers. I knew things there were not NDA-cleared for years after I left various startups because people chat and if you know the right questions to ask or are reasonably good at appearing to know just a bit more than you happen to know then people will often fill in the blanks for you. The well really only runs dry when most of that cohort have also left the company.
Do we know what percentage of group joins in general are due to recommendations? Would be an interesting data point to have.
Having all extremists in one directory must be handy for FBI/police's investigation
Who will sue the execs Zuckerberg/Sandberg at this point? It's about time.
Well, I don't think anyone should be surprised...
Why would anyone think capitalists can actually practice morality? That has never happened in the hundreds of years of capitalism's history.
And capitalists can be quite moral personally. Throughout history, the rich and powerful have always had a positive image. But their enterprises have always required regulation.
I don't like Facebook, but why would they configure a recommendation engine to stop suggesting extremist groups to people with extremist affinities? There are no humans behind those wheels.
What else besides outright banning should they have done? (I think banning extremists wouldn't have impacted their revenues much, so they should have, but that's another debate.)
> I don't like Facebook but why would they configure a recommendation engine to stop suggesting extremist groups to people with extremist affinities
That's almost certainly not what they did. When you see someone ranting about the 5G turning the coronavirus communist or whatever, that person didn't generally come up with that idea themselves; they were exposed to it online, either via friends, or via this.
Their algorithm is likely pushing extremist nonsense on people which it determines are vulnerable to believing it, which isn't the same as having an affinity for it. Obviously this isn't what they set out to do; they presumably set out to increase engagement, and if that happens to increase engagement, well...
There are very similar issues with YouTube's "Rabbit Hole of Extremism" [1]. YT's algo has noticed that "mild" extremist content gets views, and by feeding you progressively more extreme content in series, it sucks people in. I expect this wasn't even planned; it emerged from the optimization.
It got my father. Living in rural area, cable/satellite TV became too expensive and low quality. So, us kids paid for an internet connection for him. Given only YouTube to inform him, he went from a generally relaxed redneck to talking about how "black community is a lost cause" and "we need to glass (nuke) the middle east and take their oil" in a very short time.
We got Netflix for him and he's calmed back down some. But, definitely not back to where he was before.
[1] https://www.nytimes.com/interactive/2019/06/08/technology/yo...
I've seen this too and it's really worrying. I don't understand what can be done about that. A legal solution seems difficult and will probably have some negative side effects. I think people just need to slowly learn how dangerous it is to fall into these traps.
There is no doubt that there's a lot wrong with social media, such as spreading fake information, privacy, etc...
Maybe there should be some kind of regulation specific to them.
But I fail to see how making your product as addictive as you can, without breaking laws, is terrible. I mean, no one is forced to create a FB/TW/IG profile, as far as I know.
I'm not defending social networks, or saying that a case against them should not be made. I'm just saying that I can't get behind the "your product is too addictive" argument.
Just my two cents. Maybe I'm missing something right now that will force me to change my mind later.
>But I fail to see how making your product as addictive as you can, without breaking laws, is terrible
This is an interesting take. Usually I suspect people would say something more like "Making your product as addictive as possible is terrible, but definitely not illegal. And, it's difficult to design laws against something that is addictive and destructive."
I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible. Again, I'm not sure that regulation can solve this problem in a constructive way, (and would love to be proven wrong here) but I fail to see how this isn't bad.
No one is forced to become obese, however it's definitely bad to have a nation full of obese people.
>I think it's pretty clear that "making your product as addictive as you can" is absolutely terrible
Why? Honest question. For instance, you mentioned obesity. Should a restaurant that makes the most delicious and sugar loaded food be forbidden to do so because its customers can't stop eating it and are getting obese?
IMO obesity is an individual problem. I'm all for helping obese people who want to change, don't get me wrong; I'm just saying that they got themselves into that situation. The restaurant should not be punished for its clients' lack of control. It should, however, be forced to let clients know exactly what they're eating, but after that, it's not its fault.
Near my office in SF there is a guy who sits on the street corner with his pants rolled up so you can see that his calves were pretty much just two big, open, leaking sores as a side effect of so many injections. I bought him some bandages but he wouldn't use them until the end of the day because showing them off got him more sympathy money that he needed in order to purchase more injections. The motivation center of his brain has been completely hijacked by a product. Suffering to death is no longer a concern for him. Only the product matters.
I don't know what physical processes are behind a Facebook addiction, but I doubt it's as serious a condition as that caused by a chemically addictive product. I would equate it more with gambling addiction. Not to say that it's not a problem, but I have a hard time equating the two. That might just be my naiveté, though. I've been lucky enough not to encounter either type of addiction.
To me, It’s not just that it’s addictive that is the problem, it’s that the addiction is accelerating the spread of misinformation and allows national/global hate groups to not only exist but flourish.
Many have suspected it for a long while but this testimony proves that Facebook profits from hate groups and the spread of misinformation. That’s not hyperbole, that’s now fact.
It has also accelerated the pace at which good information can spread. What happened to the idea of free-speech and countering bad-ideas with better ones?
Perhaps the real acceleration is in the ballooning expansion of who we consider a "hate-group" -- which seems to have no fixed definition and is thrown around rather cavalierly.
> I fail to see how making your product as addictive as you can, without breaking laws, is terrible.
I think it's important to be clear about "addictive" because people use it in different ways. If by "addictive" you mean "really compelling" then, sure, it may not be intrinsically terrible. A product that, for example, makes it really compelling for users to improve their physical health or fight climate change is probably not terrible.
But the clinical definition of "addiction" (which is why the word carries a strong negative connotation) is something so compelling that your need to use it causes significant disruption to your quality of life or that of those around you.
Read the testimony again. The argument here is not just that Facebook is super engaging. It's that Facebook use harms its users and the world at large and its level of engagement magnifies that.
For sure. But I mentioned the "too addictive" argument specifically. I understand and agree that facebook does more harm than good, and that is wrong and must be addressed. I just don't understand this addiction angle. Making your product as addictive as you can, without breaking laws, is not wrong IMO.
But I think I see where you're coming from. They're getting people addicted to something harmful; did I understand you correctly?
It's bad if you accept that people deserve agency: the ability to freely choose how they act.
The primary purpose of making an addictive product is to remove peoples' agency by hijacking known deficiencies in our minds/bodies. It's a form of coercion, because your goal is to prevent people from being able to choose whether they use your product or not.
But they can't do it without said people's help, correct?
If they aim to remove agency, it's because you have it in the first place, meaning you can stop it from happening with proper information.
I understand that some people might not understand they are being targeted and should be clearly told what could happen to them. But the majority of people must know FB is addictive.
After that, I can't see how people still getting addicted is the company's fault.
This is a very philosophical view of why addiction is bad. I doubt you can convince politicians to act based on this reasoning.
Wow, if this were not hosted on house.gov I wouldn't believe it was real.
Edit: A side note, Kendall's current venture is about "Break[ing] your screen and social media addiction". You're free to make any assumptions regarding that in connection to this hearing.
Why not?
It could easily pass as sci-fi about some technological dystopia.
Just because it's on house.gov doesn't mean it's not fake. The house has a democrat majority, and they're all corrupt lying communists who are out to destroy Trump at all & any costs, so you clearly can't believe this just because it's on house.gov /s
To be clear, that was sarcasm. But sarcasm aside, this is exactly the stance that several members of my family would take if I shared this and asserted that it's not "fake news" because it's on house.gov. The problem is that we're so far through the looking glass that legitimate attempts to pull back the curtain face a huge uphill battle because of the very system that they're trying to expose.
I think it's interesting how folks always seem to grow a conscience only AFTER they made their riches.
>My path in technology started at Facebook where I was the first Director of Monetization.
While the title is a bit creepy, I can certainly see how it could take a while to start second-guessing the work when it's your first tech job. Making a platform more interesting, useful, and engaging would certainly be an interesting challenge, particularly at first.
I'm sure getting people into the ovens at Auschwitz presented a fascinating logistical challenge as well. They probably got some really good enterprise architects on that effort. Likely a great career start for many.
We know a few from a different company. They’re outraged by what they saw on The Social Dilemma, but they also have paid off mansions in extremely high cost of living areas thanks to the exact same companies they’re now troubled by.
Remember Agile? Often we don't know what we've built until we've built it. Add in some ground-breaking work, and here we are.
Plus, I believe he's selling products that'll help you combat the addiction he helped hook you on.
https://inthemoment.io/
Livestream for Hearing on “Mainstreaming Extremism: Social Media’s Role in Radicalizing America" is still going on: https://www.youtube.com/watch?v=mstNE5KIM-g
linking to youtube is such delicious irony.
Coincidentally, Behind the Bastards podcast has put out a two-part show on Mark Zuckerberg's background and the atrocities Facebook has let happen on their platform - https://www.iheart.com/podcast/105-behind-the-bastards-29236...
BTB is quite inflammatory, but the host eloquently puts together a lot of really damning and shocking stories from inside Facebook's doors.
Wonder what it's like to be a Facebook employee on HackerNews and see this
I imagine like any other day.
"Ex-facebooker blasts facebook to promote new venture"
This time they use a cigarette analogy
> Wonder what it's like to be a Facebook employee on HackerNews and see this
Anyone who joined, or stayed at Facebook, in the last say 5 years, 100% knew about it and was OK with it. They’re probably laughing at everyone else taking so long to figure it out!
I could almost see it being like Grisham's The Firm. As a new employee, you're all bright-eyed and bushy-tailed, thinking about what you're going to buy and where you're going to live with all the bags of cash you're earning. As you continue to work there, the bloom starts to come off the rose. Next thing you know, your conscience starts to itch. You're either too dependent on that salary to voluntarily leave, or you've just decided "it's not that bad" and stick your head back in the sand while cashing that paycheck.
Some part of the company is surely not okay with it. I assume many employees have been wrestling with their conscience for months or years.
I had my own experience, once, leaving a company due to ethical concerns: it took me a year and a half to finally follow through and quit. I had coworkers who felt the same who stayed on for years.
No doubt the thought that occurs to many FB employees when they read these articles is "I need to gather my resolve and get the hell out of here!"
I mean if you are still working there at this point you already knew similar things were going on
That's not really true. Working at a big company like that usually leaves you with very little visibility into anything except your small domain and whatever news the company publicly releases. Sure, you can watch the news, if you aren't working yourself to death. But if someone is paying you bags of money every week, it skews your perception just a bit and changes what you want to hear.
2 replies →
This thread* is a good glimpse, they apparently have teams at FB dedicated to creating propaganda touting the great cause FB is working towards. Those that remain working there have seemingly bought in.
* https://news.ycombinator.com/item?id=24510904
Tim Kendall also appears in the Netflix documentary "The Social Dilemma". I would highly recommend giving it a watch.
Building something like this intentionally not only contributed to societal breakdown, but acutely impacted the mental health of millions of users. I wouldn't be surprised if there is a link to the commensurate increase in suicide we've also seen.
We should hold the decision makers accountable.
A 1952 prediction of a dystopia we haven't yet reached: https://news.ycombinator.com/item?id=24576623
What do you think about a dystopia where GPT-3 / GPT-4 bots post comments to Hacker News, complete with references and links, without being distinguishable from real humans?
If indistinguishable, would that be a dystopia or a utopia[1]? At least this is Hacker News, not /r/totallynotrobots. Maybe if we gaze long enough into a procedural abyss, the abyss will gaze back?
https://news.ycombinator.com/item?id=24470017
1 reply →
The key to making a bot indistinguishable is to mix templated patterns with machine generation.
For example, posting a non-paywalled alternative to an article is a recurring task people do here. It's easy to automate and doesn't raise eyebrows; if anything, it improves how the profile looks if someone does a quick check. Another is suggesting alternatives to products: search Product Hunt or similar sites for results, or congratulate someone on their product launch / Show HN, which again doesn't require much contextual understanding.
Big-tech, philosophical, and news-media threads are predictable. The T5 and ELECTRA models from Google are good at filling in blanks (in contrast to GPT, which generates text left to right), so they can be used to produce unique sentences that follow a pattern. They are more coherent at the cost of less randomness.
Many posts on HN appear first on Lobsters, small subreddits, GitHub Trending, and popular Twitter accounts. You could simply fetch those links at random intervals within a timezone's waking hours and post them here.
You can target the demographic least likely to suspect a bot. HN is siloed into many small parts despite sharing the same front page; you can predict which users are likely to post in certain threads and what their age demographic is (e.g., anything Emacs-related). The HN database is available on BigQuery.
You can also train canned responses to comments accusing it of being a bot: "That hurts. I am not a native English speaker. Sorry if I offended you." Or: "Please check the guidelines..."
There are many techniques to make a sophisticated bot. ;)
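The non-ML plumbing described above (posting only during the claimed timezone's waking hours, random cadence, canned deflections) could be sketched roughly like this. All names here are illustrative assumptions, not a real HN API or an actual bot:

```python
import random

# Hypothetical sketch only: timezone-aware posting window, jittered
# cadence, and canned replies to bot accusations, as described in the
# comment above. No network calls, no real posting logic.

ACCUSATION_REPLIES = [
    "That hurts. I am not a native English speaker. Sorry if I offended you.",
    "Please check the guidelines before accusing other users.",
]

def within_waking_hours(utc_hour, tz_offset_hours, start=8, end=23):
    """True if the bot's claimed local time falls inside [start, end]."""
    local_hour = (utc_hour + tz_offset_hours) % 24
    return start <= local_hour <= end

def post_delay_minutes(rng=random):
    """Random jitter between posts so the cadence doesn't look scripted."""
    return rng.randint(20, 240)

def reply_to_accusation(rng=random):
    """Pick one of the canned deflections."""
    return rng.choice(ACCUSATION_REPLIES)
```

The actual text generation (the T5/ELECTRA fill-in-the-blank step) would slot in where the canned strings are; the point of the sketch is that most of the disguise is mundane scheduling, not the language model.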
https://ai.googleblog.com/2020/02/exploring-transfer-learnin...
https://github.com/fuzhenxin/Style-Transfer-in-Text
https://ai.googleblog.com/2020/03/more-efficient-nlp-model-p...
https://console.cloud.google.com/marketplace/details/y-combi...
It wouldn't surprise me if a not-insignificant number of users here were bots.
I am more interested in the question: does the difference even matter, especially in text, as long as a bot user is a more useful user?
1 reply →
>"In 2016, internal analysis at Facebook found 64% of all extremist group joins were due to their own recommendation tools. Yet repeated attempts to counteract this problem were ignored or shut down."
If that's accurate, it's freaking me out, thinking about Facebook's role in the Myanmar genocide: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
I don't understand how anyone could continue to work for Facebook, knowing this.
Neither do I.
But then, plenty of people still work for Big Tobacco. Many do so voluntarily, not just because it's their only viable option. The trouble is that it comes down, in large part, to ethics and morals. And we don't all share the same moral compass.
The pay sounds really good. Not gonna lie, if I was offered a job there with 400k comp it would be hard to turn down, depending on what I would be working on
Why was the title changed from 'Facebook director of monetization: "We took a page from Big Tobacco's playbook"' ?
That first title was what originally drew me to this story, and I find it to be more informative.
I saw the first one and didn't click; the new one made me click. To each their own I guess ;)
Anybody who continues to use Facebook is complicit in the downfall of democracy. That goes for Twitter and 24/7 news channels as well. Get off that crap, it's rotting our minds.
On one hand, I agree that using Facebook and other social media can be destructive for most people. On the other hand, I see that democracy and many other institutions of our world have fallen behind the times. We can't stop that the world is becoming more connected. We can't stop that there are means to broadcast all kinds of information to millions and billions of people. Perhaps it's time we try to figure out how to upgrade existing systems or draft new ones.
I stopped using Facebook because I agree that it's a net negative for the world.
For what it's worth, I don't have that experience with Twitter. There, I seem to have enough control over who I follow and whose tweets Twitter shows me that my Twitter use is generally beneficial and healthy to me. Despite trying very hard to do so, I was never able to tune my Facebook feed to be healthy in that way.
I understand and fully agree with the damage that social media contributes to radicalization and extremism.
However, I think the safeguards they place on this are going to contribute to regulatory capture. Facebook has already benefited from policy as it stands; changes that put a substantial cost on new media companies will just further aid Facebook's "clone, advertise, and usurp" behavior.
We need to put these algorithms in users' hands. Imagine the power of a TikTok/YouTube/etc. algo where you get to make the choices: less cartoons, more long-form econ, increased diversity of recommendations, no repetition, and so on, refined over decades of use and tweaks.
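The idea of user-steerable recommendations could be sketched as a simple greedy re-ranker: the user sets a per-category weight ("less cartoons, more long-form econ") and a repetition penalty, and the feed is re-sorted accordingly. This is a toy illustration under assumed names and numbers, not how any real platform implements it:

```python
# Hypothetical user-controlled re-ranking: per-category preference
# multipliers plus a decay that discounts a category each time it is
# picked, which dials down repetition.

def rerank(items, prefs, diversity_penalty=0.5):
    """items: list of (item_id, category, base_score) tuples.
    prefs: user-chosen multiplier per category (default 1.0).
    Returns item_ids, best first."""
    remaining = list(items)
    ranked = []
    picks_per_category = {}
    while remaining:
        def score(item):
            _, cat, base = item
            repeats = picks_per_category.get(cat, 0)
            return base * prefs.get(cat, 1.0) * (diversity_penalty ** repeats)
        best = max(remaining, key=score)
        remaining.remove(best)
        ranked.append(best[0])
        picks_per_category[best[1]] = picks_per_category.get(best[1], 0) + 1
    return ranked
```

Setting `prefs = {"cartoons": 0.2, "econ": 2.0}` pushes economics content up and cartoons down regardless of what the platform's base scores say, which is exactly the control the comment is asking for.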
The problem I have with this is that while it's true that Facebook is optimizing for something harmful with its business model, these testimonies and congressional hearings are not aimed at solving the problem. They are aimed at scoring political points and huffing and puffing.
So it's a situation where an organization with shitty incentives that doesn't have good-faith alignment with society at large is regulated by another organization with shitty incentives that doesn't have good-faith alignment with society at large.
The whole process is completely illegitimate, basically a TV show. I don't have a solution for this; I just know that this is not it.
I view that the core issue is that democracy and many modern institutions as they exist today are hopelessly ill-equipped to deal with the reality where billions of people are connected. We live in a world of interconnected differences. Existing social and cultural norms basically compel us to synchronize everyone in the network to the same instance of truth, and this simply doesn't work anymore.
We are already seeing huge increases in support for things like systems thinking, ecological worldview, decentralization, holism, etc. The future is pluralistic and that's okay.
The biggest difference is that nicotine is administered via cigarettes, while social media doesn't administer anything; the addiction is self-generated.
Reading this makes me grateful for HN and dang.
Two anecdotes on FB Groups:
First, I think the most-missed story about the 2016 election is the role that Groups played in Bernie Sanders's ascendance. The volume of meme content and direct voter contact that I received from Bernie volunteers and passive supporters from just a few major pro-Bernie groups alone, ones that I was not even part of, exceeds the volume that I have received from all other campaigns online to date.
Second, in the early days of Groups, FB decided I was a very far right-wing activist and recommended that I join a series of groups agitating for a US military coup. I still have screenshots of it. It eventually got better at guessing my tastes.
Can you share some of the screenshots?
Yeah for sure. I found the post I made about it (01/24/2014) but I am going to have to dig through my messages/HD to find the screenshot. I'll post when I'm done w/ work
1 reply →
What's his net worth?
I hate fb as much as the next guy, however it seems like these "mea culpa" admissions might be motivated by raising the profile of the person making them. It feels like these public acts of self-flagellation are meant to shield them from scrutiny for doing these things in the first place. What brought about this change of heart? Why did you work there in the first place?
Unless you think he's lying, who fucking cares what his motivation for whistleblowing is? Many whistleblowers want to get back at companies that screwed them, or want to get interviewed on cable news, or want a book deal, or whatever. Who fucking cares if they're not ideologically pure altruistic saints? What matters is the truth, not the motivations of those telling the truth.
(Do people with exclusively pure selfless motivations even exist? Even people who donate to charities anonymously are plausibly motivated at least in part by the warm tingly feeling they enjoy when giving charitably.)
These things aren't mutually exclusive, a testimony in Congress is inherently a public memorialization.
Tim Kendall has been an outspoken critic for a long time and is also a recent central figure in the movie "The Social Dilemma", which is about the same thing and will lead to more speaking engagements on the topic.
That doesn't dilute the message. If you think it does, what does a better arbiter of this aspect of reality look like? Who would that person be and what would their credentials be?
What do you mean they have been an outspoken critic for a long time?
Honest question: were they outspoken when they were a director at FB, or when they were president at Pinterest? Or did it start two years ago when they became CEO of Moment, selling an app to cut down on screen time?
In my mind, an ideal arbiter isn't also selling a product to fix the problem they are raising awareness about.
This doesn't mean what they are saying isn't true, or that they didn't have a real change of heart, but it is certainly a conflict of interest.
4 replies →
What evidence do you have to back up these claims?
“It seems like these mea culpa admissions might be motivated...”
It seems like you're not willing to claim there is another agenda, but you want to attack people speaking up anyway: because they were part of the problem or contributed to it, anything they say now doesn't matter.
I don’t think this is constructive.