Comment by JimDabell

1 day ago

> The problem isn’t the system as implemented

Great!

> the problem is the very assertion “it is possible to preserve the privacy your constituents want, while running code at scale that can detect Bad Things in every message.”

Apple never made that assertion, and the system they designed is incapable of doing that.

> if a model can detect CSAM at scale, it can also detect anyone who possesses any politically sensitive image.

Apple’s system cannot do that. It didn’t analyse what an image depicted; it checked whether a photo’s perceptual hash matched a fixed database of already-known CSAM images, and nothing was revealed until roughly thirty matches had accumulated. If you change parts of it, sure. But the system they proposed cannot.
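To make that distinction concrete, here is a minimal sketch of a database-match design under illustrative assumptions. It is not Apple’s code: `perceptual_hash()` stands in for NeuralHash, the loading of the database is omitted, and the real proposal additionally wrapped the comparison in private set intersection and threshold secret sharing so the device itself never learned which photos matched.

```python
# A minimal sketch of hash-database matching, NOT Apple's implementation.
# All names and the hash function are illustrative stand-ins.
import hashlib
from typing import Iterable, Set

MATCH_THRESHOLD = 30  # Apple's stated threshold before any human review

def perceptual_hash(image_bytes: bytes) -> bytes:
    # Stand-in only: a real perceptual hash survives resizing and
    # recompression; SHA-256 just keeps the sketch self-contained.
    return hashlib.sha256(image_bytes).digest()

def flagged(photos: Iterable[bytes], known_hashes: Set[bytes]) -> bool:
    # A set-membership test, not a classifier: it can only recognise
    # near-duplicates of images already in the supplied database. It has
    # no notion of what any image depicts, so it cannot single out a
    # previously unseen "politically sensitive" image.
    matches = sum(1 for p in photos if perceptual_hash(p) in known_hashes)
    return matches >= MATCH_THRESHOLD
```

The only lever for repurposing a design like this is the database itself, and in Apple’s published threat model a hash had to appear in the databases of at least two child-safety organisations in separate jurisdictions before it could be matched at all. Swapping that database out is exactly the “change parts of it” scenario, not the system as proposed.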

To reiterate what I said earlier:

> The vast majority of the debate was dominated by how people imagined it worked, which was very different to how it actually worked.

So far, you are saying that you don’t have a problem with the system Apple designed; your problem is with some other design, one Apple never proposed, that differs from it in several significant ways.

Also, what do you mean by “model”? When I used the word “model”, it was in the context of using another system as a model. You seem to be using it in the AI sense. You know that’s not how it worked, right?