Comment by pjc50

1 year ago

Once you actually read the article, you see something similar to the complaints about YouTube demonetization or debanking. People are accused of fraud and have their access withdrawn, but nobody will explain what they allegedly did, because that would leak information about the fraud detection.

It's a kind of automated low trust economy. The drivers don't trust the apps, and the app doesn't trust the drivers, so the thing has to be held together by surveillance and micromanagement.

I am currently in a nightmare scenario at a new job. I just finished building their website, and it got flagged as a phishing site by Google Safe Browsing, because Google seems to think that our analytics subdomain, a self-hosted instance of Umami Analytics, is a phishing attempt.

I requested a review once, and they removed the flag. It came back a couple of days ago. I then had to move Umami to its own domain because I couldn't risk this ever happening again (visitors to our root domain were also getting the huge red warning, and our business was coming off as a scam).

Then they flagged the new domain as well. They've removed it again at my request, but I am just counting down the days until it happens again.

There is no way for me to get through to a human to talk about why this is happening.

  • Have legal send Google a C&D and shoot an email to the FTC about anticompetitive behavior. That's how you get a human involved.

    • Even if this works, it represents a failure in the system that needs to be fixed.

      (I assume you're just trying to help the parent solve their problem so I'm not trying to be dismissive of your comment)

      18 replies →

> The drivers don't trust the apps, and the app doesn't trust the drivers, so the thing has to be held together by surveillance and micromanagement.

Exactly. And a large dose of gaming the system (or trying to), which reduces trust even further. Why play fair with an unaccountable algorithm?

That and the use of black box models whose predictions are not explainable.

  • What's fun is you can still do black box probing. And guess what, spammers have done this.

    I get these emails that look like classic spam, like a link to a Home Depot or Walmart gift card, but they're addressed to someone who isn't me. After getting a bunch of these I decided to look at the original email. They are being sent to an Outlook address (e.g. notmyname@biggerish.someShortName01.shortname.outlook.com) and appear to come from something that looks like a store (e.g. contact_support.csz@fakestore.fr). They pass SPF and DMARC but fail DKIM.

    The content?

    It used to be PAGES of stuff like "here's your email password reset link" or "thank you for signing up at <legitimate place>". I was confused at first, but then realized that yeah, this stuff likely bypasses an ML filter. The spammers have gotten better at it, and now they can do it with only a page of content.

    Of course, I can easily filter these myself by just parsing the "To" address (I use Thunderbird). I reported tons of these and was deleting them, but in the middle of last year I decided to just start collecting them. I have over 50...

    This is low-hanging-fruit stuff... A Naive Bayes classifier could handle it. The current solution could probably handle it too if they started actually fucking labeling the examples as spam and assumed that the labeling process was noisy (dear god, I hope they use at least "legit", "unknown", and "spam", and don't assume legit if it isn't marked as spam...).

    I have EVEN TALKED TO A PERSON, and the issue couldn't be escalated... which IMO is being complicit in the spam.
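The "To" address check described above is simple enough to sketch. This is a minimal illustration, not Thunderbird's actual filter API; the addresses and mailbox set are made up:

```python
# Sketch of the "To address" check: flag any message whose To/Cc headers
# don't include one of my own addresses. All addresses here are illustrative.
from email import message_from_string
from email.utils import getaddresses

MY_ADDRESSES = {"me@example.com"}  # hypothetical owned addresses

def addressed_to_me(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    # Collect every (name, address) pair from the To: and Cc: headers.
    pairs = getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))
    return any(addr.lower() in MY_ADDRESSES for _name, addr in pairs)

spam = "To: notmyname@sub01.example.outlook.com\nSubject: Gift card\n\nClaim now"
ham = "To: Me <me@example.com>\nSubject: Lunch?\n\nNoon?"
```

Anything failing this check can be routed straight to a spam folder, independent of any content-based filtering.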

I disagree with using "debanking" as an example. At least in the US, banks are required by law (the Bank Secrecy Act (BSA), et al) to not divulge certain information. As far as I'm aware, YouTube et al are not under such a legal requirement.

  • While we did not divulge your certain information, we regret to tell you that your certain information was accessed in a hack that was discovered (9 months later).

    If they intentionally or unintentionally were the source of that certain information, there's little recourse for you after the fact.

A low trust economy is basically techno-fascism. This is what a pre-cyberpunk dystopia looks like, and while the first impacts may look like progress, it's unlikely to be about progress so much as about cementing the technocrats and oligarchs.

  • There is an old Charlie Chaplin movie, Modern Times, about turning factory workers into an extension of the machines.

    If an app pretty much tells you how to do your job, there's no place for personal expression, and you become a zombie.

There are more egregious cases, though, that I think illustrate the problem at large: no one wants accountability.

A very famous and egregious example is the Xbox Live user who got banned for listing "Fort Gay" as their place of residence[0]. This is a problem that was caused by automation and, honestly, could have been entirely resolved with automation too[1]. But it was also a problem that could have been resolved in under a minute if a human had been given real power to do anything (or if someone had recognized that the cheapest labor usually isn't the cheapest labor).

Another is the family suing Google for directing a man to drive off a bridge[2]. Hold your reservations, because this is kind of like the McDonald's coffee lawsuit[3]. The bridge had collapsed in 2013, and the man drove off it in 2022. Multiple parties share some of the fault here (like the city, for not marking and barricading the bridge[4]), but the issue was reported many times, and what kind of live map system doesn't update its maps within a decade?

I frequently report spam, phishing attacks, and all sorts of stuff. Nothing gets through. Same with Google Maps. Same with literally any app. I can even send patches to dev channels, and those often don't go through either. I can sit on a PR for months while others are asking for a merge; then a dev comes back and says "oh, change color to colour" or something, I repatch that night, and the dev goes radio silent (seriously, it is more work to ask me to make that change than it is to make it yourself...).

I have so many frustrations, but the root of it all is that I can't fix the problems I find. Even when I can create the fix myself, I can't get it upstream, so I have to re-apply my patch to every release that comes down. I think a lot of this comes down to our mentality of "move fast and break things." That's fine for learning but not fine for production. Who cleans up all the mess left behind? The debt just grows and compounds. I know mitigating future costs is "invisible," but often we're talking about 15 minutes of work. If you don't have that kind of slack in your system, you're doomed. It's like having exactly the number of lifeboats on a ship needed to accommodate every passenger. That's dumb. You have to over-accommodate, or else you get the Titanic (which under-accommodated, despite being capable of over-accommodating).

[0] https://kotaku.com/xbox-live-gamer-suspended-for-living-in-f...

[1] Step 1: Check the user's location. If they aren't masking it, you'll find that they are located in Fort Gay. Step 2: If it is masked, plug the fucking location into Google Maps or some database with a list of cities and check for a match. Done. Yay. Thirty minutes of programming, and you've saved the company hundreds of dollars in customer service fees and millions of dollars in reputation-rebuilding "fees".
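That two-step check boils down to a lookup against a list of real place names before the ban fires. A minimal sketch, with a tiny hard-coded set standing in for a real geographic database (GeoNames, Google Maps, whatever):

```python
# Before auto-banning over a "suspicious" location string, check whether it
# is a real place. The set below is a stand-in for a proper place database.
REAL_PLACES = {"fort gay", "scunthorpe", "hell"}  # all real towns

def ban_for_location(location: str) -> bool:
    """Return True only if the flagged location is not a known real place."""
    return location.strip().lower() not in REAL_PLACES
```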

[2] https://www.cnn.com/2023/09/21/us/father-death-google-gps-dr...

[3] https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau...

[4] I highly advocate citizen action here. If you live near there, put a pile of rocks or anything else in the way to make a barricade. The law comes after you? Fuck the law. Besides, I'm sure it'll make a great news story. We have those for people filling in potholes; this seems much more sensational.

  • Xbox's Fort Gay ban was a classic example of the Scunthorpe Problem[0]. I suppose we need a formal Scunthorpe Test, but it seems like you could solve the problem with a popup checkbox and text field whenever your filter flags an account.

    The seminal Falsehoods Programmers Believe About Names[1] looks at similar territory from a different perspective.

    [0] https://en.wikipedia.org/wiki/Scunthorpe_problem

    [1] https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...
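The Scunthorpe Problem in miniature: a naive substring filter can't tell an embedded match from a whole word, and even a word-boundary match still trips over legitimate place names, which is why a human override like the popup confirmation suggested above is needed. A sketch, with a purely illustrative blocklist:

```python
import re

BLOCKLIST = ["ass", "gay"]  # illustrative only

def naive_flag(text: str) -> bool:
    # Substring match: flags "Classic rock" because "cl-ASS-ic" contains "ass".
    return any(word in text.lower() for word in BLOCKLIST)

def word_boundary_flag(text: str) -> bool:
    # Whole-word match fixes the embedding problem, but still flags the real
    # town "Fort Gay", so a place-name allowlist or human review is needed.
    return any(re.search(rf"\b{re.escape(w)}\b", text.lower()) for w in BLOCKLIST)
```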

    • I agree. But also at the root of it is that a problem can't be escalated so that a thinking human with actionable power gets involved.

      The fallacy here is the belief that the filter is perfect. Or really, that any process can be perfect. Even if one could be perfect at a specific moment in time, time marches on and things change.

      I'm all for automation, but it has to be recognized that the thing will always break, and likely in a way you don't expect. Even in ways you __couldn't__ expect. So you have to design with that failure in mind. A lot of these "Falsehoods Programmers Believe About <X>" lists could be summarized as "Programmers Believe They Can Accurately Predict All Reasonable Situations". I added "reasonable" on purpose. The world is just complex, and we can only see a very limited amount of it. The best way to be accurate is to know that you're biased, even if you can't tell in which way.

      2 replies →