Comment by btown

2 days ago

This. No matter how cool the engineering might have been, from the perspective of what surveillance policies it would have (and very possibly did) inspire/set precedent for… Apple was very much creating the Torment Nexus from “Don’t Create the Torment Nexus.”

> from the perspective of what surveillance policies it would have (and very possibly did) inspire/set precedent for…

I can’t think of a single thing that’s come along since that is even remotely similar. What are you thinking of?

I think it’s actually a horrible system to implement if you want to spy on people. That’s the point of it! If you wanted to spy on people, there are already loads of systems that exist which don’t intentionally make it difficult to do so. Why would you not use one of those models instead? Why would you take inspiration from this one in particular?

  • The problem isn’t the system as implemented; the problem is the very assertion “it is possible to preserve the privacy your constituents want, while running code at scale that can detect Bad Things in every message.”

    Once that idea appears, it allows every lobbyist and insider to say “mandate this, we’ll do something like what Apple did but for other types of Bad People” and all of a sudden you have regulations that force messaging systems to make this possible in the name of Freedom.

    Remember: if a model can detect CSAM at scale, it can also detect anyone who possesses any politically sensitive image. There are many in politics for whom that level of control is the actual goal.

    • > The problem isn’t the system as implemented

      Great!

      > the problem is the very assertion “it is possible to preserve the privacy your constituents want, while running code at scale that can detect Bad Things in every message.”

      Apple never made that assertion, and the system they designed is incapable of doing that.

      > if a model can detect CSAM at scale, it can also detect anyone who possesses any politically sensitive image.

      Apple’s system cannot do that. If you change parts of it, sure. But the system they proposed cannot.

      To reiterate what I said earlier:

      > The vast majority of the debate was dominated by how people imagined it worked, which was very different to how it actually worked.

      So far, you are saying that you don’t have a problem with the system Apple designed, and you do have a problem with some other design that Apple didn’t propose, that is significantly different in multiple ways.

      Also, what do you mean by “model”? When I used the word “model” it was in the context of using another system as a model. You seem to be using it in the AI sense. You know that’s not how it worked, right?

  • > I can’t think of a single thing that’s come along since that is even remotely similar. What are you thinking of?

    Chat Control, and other proposals that advocate backdooring individual client systems.

    Clients should serve the user.

    • > Chat Control, and other proposals that advocate backdooring individual client systems.

      Chat Control is older than Apple’s CSAM scanning and is very different from it.

      > Clients should serve the user.

      Apple’s system only scanned things that were uploaded to iCloud.

      You missed the most important part of my comment:

      > I think it’s actually a horrible system to implement if you want to spy on people. That’s the point of it! If you wanted to spy on people, there are already loads of systems that exist which don’t intentionally make it difficult to do so. Why would you not use one of those models instead? Why would you take inspiration from this one in particular?
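For readers unfamiliar with the design being debated: a minimal sketch of why a match-against-known-hashes design can only flag images already in the database, and why a threshold matters. This is illustrative only — the hash function (plain SHA-256 standing in for Apple's perceptual NeuralHash), the database contents, and the threshold value are all hypothetical, and the real system wrapped the comparison in private set intersection and threshold secret sharing so the device never learned the match results.

```python
import hashlib

# Hypothetical database of hashes of specific known images,
# supplied by a third party. Matching is exact-lookup: the system
# can only flag entries already in this set. It has no classifier
# and cannot search for new categories of images.
KNOWN_HASH_DB = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}

# Apple's published design required on the order of 30 matches
# before any human review was possible; 30 is used here as an
# illustrative stand-in.
MATCH_THRESHOLD = 30

def count_matches(images: list[bytes]) -> int:
    """Count how many images match entries in the known-hash database."""
    return sum(
        hashlib.sha256(img).hexdigest() in KNOWN_HASH_DB
        for img in images
    )

def account_flagged(images: list[bytes]) -> bool:
    """An account is flagged only once matches reach the threshold."""
    return count_matches(images) >= MATCH_THRESHOLD
```

The point of the sketch: repurposing such a system for "politically sensitive images" would require inserting new hashes into the database and, in the real design, defeating the threshold and cryptographic layers — i.e., changing the system, which is the distinction being argued above.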