Comment by mrwh

11 hours ago

Meta wants to be an impartial platform only and exactly when it suits them to be.

Yeah, glad to see Zuck is sticking with those strong free speech principles he couldn't wait to get back to last year.

  • Free speech which apparently includes https://news.ycombinator.com/item?id=42651178

    I'm actually more at odds with HN than many people might expect, because I think the lies surrounding covid and the censorship were absolutely wrong, and after things like that platforms could genuinely lay claim to being unfairly targeted. But you can tell Zuck doesn't actually care, because he immediately started doing the same thing himself.

Wow.

Does Zuckerberg have some kind of clinical condition where he just can't imagine how other people might see him?

Sure, this will slow down the personal injury lawyers finding clients, but it won't stop them. Meanwhile, it is more ammunition for Facebook's enemies to use against it.

It is one thing to do shady business; it is another thing to incriminate yourself. If you were involved with weed and somebody sent you an email asking if they could come around and pick up a Q.P. next Saturday, I'd expect you to correct that person face to face and tell them never to do that again.

Not to say you should be like Epstein, but he and the people he corresponded with had some sense, so there is very little evidence of criminal activity in millions of emails.

At Facebook, on the other hand, people constantly sent emails about things that could just as easily have been left as "dark matter": unexplained, minimally documented decisions. But no, it's like that MF DOOM song "Rapp Snitch Knishes", like a bunch of children with no common sense at all.

  • > Does Zuckerberg have some kind of clinical condition where he just can't imagine how other people might see him?

    Yeah, it's called having-too-much-money-to-careitis.

  • >Does Zuckerberg have some kind of clinical condition where he just can't imagine how other people might see him?

    Not sure he cares. He's literally got hundreds of billions of dollars to his name, and the corporation he founded is worth trillions.

  • > Does Zuckerberg have some kind of clinical condition where he just can't imagine how other people might see him?

    Nah, he just doesn't care. Nothing he does will ever get people (en masse, onesie, twosies don't matter) to stop using Meta products.

    People can/will complain about him forever, but shitty people will continue to help him build things, and shitty people will continue to use them.

    • Well, Meta products are decaying like Cobalt-60; people are leaving:

      https://trends.google.com/explore?q=facebook&date=all&geo=US

      and maybe Zuck doesn't think he can do anything about it. There are different theories, but I like this one:

      -- originally you would put some imagination and elbow grease into using Facebook and get some attention in return, which made it very attractive and interesting to people around 2010

      -- then it found a business model that depended on your not being able to use imagination and elbow grease to get attention, which made it less interesting in general, but still somewhat interesting because now you could put cash into the slot machine and get cash out

      -- over time they lowered the payout of the slot machine, which made the game less interesting and more dependent on 100%-profitable scams that could function no matter how bad the payout was; people lose trust in the platform and stop engaging with ads, real advertisers don't want to be seen next to scam ads (lest they be mistaken for scams themselves), which further lowers the payout and makes the game less interesting over time

      -- and now they won't even take your money... so who cares?

    • They captured the early market (against the inconsistent page styles of Myspace and the slowness of Friendster, and then they acquired FriendFeed) in an early internet. Anyone who captures the early market will have THE network effect for decades (plus shadow profiles): person x joins because person y is there, because person z is there. That effect is still young to this day, and they also apparently used to censor links to their competition.

      The game is rigged. The same goes for Instagram and WhatsApp (yes, companies get acquired, but WhatsApp's Acton was very explicit: "delete Facebook". He was pissed off at what happened. Also, ever tried deleting FB? Almost impossible; more network effects).

  • Yes, it's called being a billionaire. I'm sure if clinicians actually studied this group of people, they would find delusions of grandeur, paranoia, extreme risk-taking behavior, a lack of self-control and self-awareness, an inability to deal with adversity and setbacks without emotional outbursts, and an inability to contain and dismiss intrusive antisocial thoughts.

    I suspect the emotional maturity of most billionaires is at the toddler level or below, and I mean that quite seriously and literally.

One ToS clause and neutrality disappears. Now Meta decides which claims get reach.

Name one platform that doesn't, and I'm not just talking about lip service.

I mean, they spun up a bullshit "Oversight Board" whose demands they can fully, 100% choose to ignore and decline to implement.

Reddit is the same way. Poke a few sacred cows and suddenly you're banned for something you did 6 months ago that we aren't going to tell you about, and no, we don't want to discuss it.

Kafkaism is natural and organic.

[flagged]

  • Throwing the baby out with the bathwater?

    I believe we need to strengthen 230, but with an added caveat: affected platform owners must stop gaming the algorithms, and the law must require user-driven curation. Let me curate my own feed; stop shoving shit in front of my eyes. When you do so, you're making heavy editorial decisions and should be open to liability.

    • This is really the essence of it. Section 230 is critical to a healthy internet, but there is a large grey area between editor and platform. Places like YouTube, Meta, X, etc. are pretending to be platforms when really they are algorithmic editors, gatekeepers, and curators. They are much more like traditional newspapers than, say, your ISP, and they need to be treated as such.

  • A few years ago this seemed a bit too extreme for me. Now, with the web mostly burned down anyway, I see little to lose and lots to gain in a section 230 repeal. My, how the Overton Window changes on some ideas. And when it's changing on some things it tends to accelerate on others too, like a social momentum on reconsidering past norms.

    • My compromise pitch, since the "You need ID from your users" ship has sailed:

      Companies are not liable if they have proper ID of the person who submitted the content and can provide it to a plaintiff. If they have not made a good-faith effort to know who submitted the content (by taking ID, not just an email address), then they're taking responsibility for it.

      Which means sites that have responsible moderation can still allow anonymous contributions.

      The real problem is the inherent asymmetry of legal battles, where the wealthiest can fight forever with endless motions and have near-total impunity while a legal action would basically nuke a normal person's life. Not to mention the fact that an international border can often make this whole conversation moot.

  • The main problem with 230 is that the courts have decided to treat it as if it removes all legal liability from online platforms, rather than just publisher liability. The way the text was written seems to be intended to protect platform operators from publisher liability but still have them under distributor liability. For example, if you own a bookstore and carry a book that says something defamatory, you can be held liable if you don't remove the book after being informed about its contents. However, a court case soon after 230 passed created the precedent that it absolves online platforms of all forms of liability. This means that if a platform knows it hosts illegal or defamatory content and doesn't take it down, they aren't liable and any legal cases against them will get thrown out due to 230. One of the authors of section 230 later said that "the judge-made law has drifted away from the original purpose of the statute."

    • >For example, if you own a bookstore and carry a book that says something defamatory, you can be held liable if you don't remove the book after being informed about its contents.

      I don't think you can in the US. Maybe elsewhere, but in the US AFAIK the author is responsible for the content they publish, not the bookstores carrying the books.

      >This means that if a platform knows it hosts illegal or defamatory content and doesn't take it down, they aren't liable and any legal cases against them will get thrown out due to 230.

      No it doesn't. Section 230 doesn't allow sites to host illegal content, of course only "legality" within the framework of US law matters.

      All it says is that the liability for user posted content lies with the user posting the content, not the platform hosting it. Which to me seems appropriate.

I think there’s a clear difference in restricting advertising vs organic posts.

  • Meta does both. It has long been said that businesses have little organic reach in Meta’s platforms, as an incentive for them to use ads.

    • I’m not a big poster at all, but I ran into this precise issue.

      They analyze video posts on Instagram. If they detect that a video has even a small amount of commercial value, they classify it as branded content, and you need to pay for it to be promoted.