Comment by JoshTriplett

2 days ago

Good riddance to a system that would have set a precedent for client-side scanning for arbitrary other things, and that would likely have produced false positives.

> I wanted there to be a reasonable debate on it

I'm reminded of a recent hit-piece about Chat Control, in which one of the proponent politicians was quoted as complaining about not having a debate. They didn't actually want a debate, they wanted to not get backlash. They would never have changed their minds, so there's no grounds for a debate.

We need to just keep making it clear the answer is "no", and hopefully strengthen that to "no, and perhaps the massive smoking crater that used to be your political career will serve as a warning to the next person who tries".

This. No matter how cool the engineering might have been, from the perspective of what surveillance policies it would have (and very possibly did) inspire/set precedent for… Apple was very much creating the Torment Nexus from “Don’t Create the Torment Nexus.”

  • > from the perspective of what surveillance policies it would have (and very possibly did) inspire/set precedent for…

    I can’t think of a single thing that’s come along since that is even remotely similar. What are you thinking of?

    I think it’s actually a horrible system to implement if you want to spy on people. That’s the point of it! If you wanted to spy on people, there are already loads of systems that exist which don’t intentionally make it difficult to do so. Why would you not use one of those instead? Why would you take inspiration from this one in particular?

    • The problem isn’t the system as implemented; the problem is the very assertion “it is possible to preserve the privacy your constituents want, while running code at scale that can detect Bad Things in every message.”

      Once that idea appears, it allows every lobbyist and insider to say “mandate this, we’ll do something like what Apple did but for other types of Bad People” and all of a sudden you have regulations that force messaging systems to make this possible in the name of Freedom.

      Remember: if a model can detect CSAM at scale, it can also detect anyone who possesses any politically sensitive image. There are many in politics for whom that level of control is the actual goal.
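
      Concretely, here’s a minimal sketch (Python, with made-up hashes and database names; real systems use perceptual hashes like NeuralHash or PhotoDNA, but the point stands): the matcher is completely agnostic about what the blocklist contains.

      ```python
      # A hash-blocklist matcher has no idea what it is matching.
      # All hashes and database contents below are hypothetical.

      def scan(library_hashes: set[str], blocklist: set[str]) -> set[str]:
          """Return the user's hashes that appear in the supplied blocklist."""
          return library_hashes & blocklist

      csam_db = {"a3f9", "77c2"}    # hypothetical: hashes supplied by NCMEC
      flyer_db = {"b01d", "e4aa"}   # hypothetical: hashes of a protest flyer

      library = {"b01d", "9f9f"}
      print(scan(library, csam_db))   # set() -- nothing to report
      print(scan(library, flyer_db))  # {'b01d'} -- same code, new target
      ```

      Swap the database and the same pipeline detects whatever the operator wants; nothing in the code has to change.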

      1 reply →

    • > I can’t think of a single thing that’s come along since that is even remotely similar. What are you thinking of?

      Chat Control, and other proposals that advocate backdooring individual client systems.

      Clients should serve the user.

      1 reply →

I don’t think you can accurately describe it as client-side scanning, and false positives were not likely. Depending on how you view it, false positives were either extremely unlikely or 100% guaranteed for practically everybody. And if you think the latter is a problem, please read up on it!
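
To unpack that: under Apple’s published design, each matching photo contributed one share of a per-account key via threshold secret sharing, and devices also emitted synthetic matches, so in one sense every account produced “matches” constantly; the server could decrypt nothing until roughly 30 genuine matches had accumulated. Here’s a toy Shamir-style sketch of just the threshold part (Python; parameters and structure are illustrative, not Apple’s actual protocol):

```python
# Toy Shamir threshold sharing: the server can recover the per-account
# secret only once it holds >= T shares from genuine matches. Below T,
# shares (real or synthetic) reveal nothing. Illustrative only; Apple's
# published design layered this with private set intersection and more.

import random

P = 2**127 - 1   # prime field (toy choice)
T = 30           # Apple's published match threshold was about 30

def make_shares(secret: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any T of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
    def f(x: int) -> int:  # evaluate the degree-(T-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0; correct only with >= T shares."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = random.randrange(P)                  # per-account decryption key
shares = make_shares(key, n=1000)          # one share per matching voucher
assert reconstruct(shares[:T]) == key      # threshold met: key recovered
assert reconstruct(shares[:T - 1]) != key  # below threshold: noise (w.h.p.)
```

Below the threshold, interpolation yields noise, which is why single-image “matches” were meaningless by design.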

> I'm reminded of a recent hit-piece about Chat Control, in which one of the proponent politicians was quoted as complaining about not having a debate. They didn't actually want a debate, they wanted to not get backlash. They would never have changed their minds, so there's no grounds for a debate.

Right, well, I wanted a debate. And Apple changed their minds. So how is it reminding you of that? Neither of those things applies here.

  • Forgotten about the concept of bugs, have we? How about making Apple vulnerable to demands from every government where they do business?

    No thanks. I'll take a hammer to any device in my vicinity that implements police scanning.

    • > Forgot about the concept of bugs have we?

      No, but I have a hard time imagining a bug that would meaningfully compromise this kind of system. Can you give an example?

      > How about making Apple vulnerable to demands from every government where they do business?

      They already are. So are Google, Meta, Microsoft, and all the other giants we all use. And all those other companies are already scanning your stuff. Meta made two million reports in 2024Q4 alone.

      1 reply →