For context, I deeply hate the abuse of children, and I once worked on a contract that landed 12 human traffickers in custody who were smuggling sex slaves across borders. I didn't need to know details about the victims in question, but it's understood that they're often teenagers or children.
So my initial reaction when reading this Twitter thread was "let's get these bastards" but on serious reflection I think that impulse is wrong. Unshared data shouldn't be subject to search. Once it's shared, I can make several cases for an automated scan, but a cloud backup of personal media should be kept private. Our control of our own privacy matters. Not for the slippery slope argument or for the false positive argument, but for its own sake. We shouldn't be assuming the worst of people without cause or warrant.
That said, even though I feel this way a not-small-enough part of me will be pleased if it is deployed because I want these people arrested. It's the same way I feel when terrorists get captured even if intelligence services bent or broke the rules. I can be happy at the outcome without being happy at the methods, and I can feel queasy about my own internal, conflicted feelings throughout it all.
Having known many victims of sexual violence and trafficking, I feel for the folks that honestly want that particular kind of crime to stop. Humans can be complete scum. Most folks in this community may think they know how low we can go, but you are likely being optimistic.
That said, law enforcement has a nasty habit of having a rather "binary" worldview. People are either cops, or uncaught criminals. ...and they wonder why they have so much trouble making non-cop friends (DISCLAIMER: I know a number of cops).
With that worldview, it can be quite easy to "blur the line" between child sex traffickers and parking ticket violators. I remember reading an article in The Register about how anti-terrorism statutes are being abused by local town councils to do things like find zoning violations (for example, pools with no CO).
Misapplied laws can be much worse than letting some criminals go. This could easily become a nightmare, if we cede too much to AI.
And that isn't even talking about totalitarian regimes, run by people of the same ilk as child sex traffickers (only wearing Gucci, and living in palaces).
"Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced; rather, one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be."
> People are either cops, or uncaught criminals. ...and they wonder why they have so much trouble making non-cop friends (DISCLAIMER: I know a number of cops).
Ehh let's not make a habit of asserting anecdote as fact, please. Saying you know cops is like saying you know black people and that somehow it affords you some privilege others do not possess.
I'm not. I am very unambiguously against this and I think if word gets out Apple could have a real problem.
I would like to think I am against child porn, as any well-adjusted adult is. That does not mean I wish for all my files to be scanned, without my consent or even knowledge, for compliance: submitted to who knows where, matched against who knows what, and reported to, well, who knows.
That's crossing a line. You are now reading my private files, interpreting them, and doing something based on that interpretation. That is surveillance.
If you truly want to "protect the children", you should have no issue with the police visiting and inspecting your house, and all of your neighbors' houses. Every few days. Unannounced, of course. And if you were to resist, you MUST be a pedophile who is actively abusing children in their basement.
I'm actually more OK with unannounced inspections of my basement (within reason) than with some government agents reading through my files all the time.
“If you want a vision of the future, imagine a boot stamping on a human face - forever.” I always think about this Orwell quote: it's up to us to fight for what is good, but we were too busy doom-scrolling on Twitter to do anything about it.
The NCMEC database that Apple is likely using to match hashes contains countless non-CSAM pictures that are entirely legal, not only in the U.S. but globally.
This should be reason enough for you to not support the idea.
From day 1, it's matching legal images and phoning home about them. Increasing the scope of scanning is barely a slippery slope, they're already beyond the stated scope of the database.
The database seems legally murky. First of all, who would want to manually verify that there aren't any images in it that shouldn't be? If the public can even request to see it, which I doubt, would making that request get you added to a watch list of potentially dangerous people, or destroy your reputation? Who adds images to it, and where do they get those images from?
My point is that we have no way to verify the database wouldn't be abused or mistaken and a lot of that rests on the fact that CSAM is not something people want to have to encounter, ever.
NCMEC is a private organization created by the U.S. Government, funded by the U.S. Government, operating with no constitutional scrutiny and no oversight or accountability; it could be prodded by the U.S. Government, and they tell you to "trust them".
To be fair the Twitter thread says (emphasis mine) "These tools will allow Apple to scan your iPhone photos for photos that match a specific perceptual hash, and report them to Apple servers if too many appear."
I don't know what the cutoff is, but it doesn't sound like they believe that possession of a single photo in the database is inherently illegal. That doesn't mean this is overall a good idea. It simply weakens your specific argument about occasional false positives.
Since you worked on an actual contract catching these sorts of people you are perhaps in a unique position to answer the question: will this sort of blanket surveillance technique in general but also in iOS specifically - actually work to help catch them?
I have direct knowledge of examples of where individuals were arrested and convicted of sharing CP online and they were identified because a previous employer I worked for used PhotoDNA analysis on all user uploaded images. So yeah, this type of thing can catch bad people. I’m still not convinced Apple doing this is a good thing, especially on private media content without a warrant, even though the technology can help catch criminals.
Just as being banned from one social media platform for bad behavior pushes people to a different social media platform, this might very well push the exactly wrong sort of people from iOS to Android.
If Android then implements something similar, they have the option to simply run different software, as Android lets you run whatever you want so long as you sign the waiver.
"You're using Android?! What do you have to hide?"
-- Apple ad in 2030, possibly
I'm the person you're responding to, and I think so? My contract was on data that wasn't surveilled, it was willingly supplied in bad faith. Fake names, etc. And there was cause / outside evidence to look into it. I can't really go into more details than that, but it wasn't for an intelligence agency. It was for another party that wanted to hand something over to the police after they found out what was happening.
This scanning doesn't prevent the actual abuse, and all this surveillance doesn't get to the root of the problem, but it can be misused by authoritarian governments.
It's a Pandora's box.
You wouldn't allow the regular search of your home in real life.
I'm not at all conflicted about this. Obviously CSAM is bad and should be stopped, but it is inevitable this new feature will become a means for governments to attack material of far more debatable 'harm'.
Well, it's a lot like everything. No one wants abusers, murderers, and others out and about. But then, we can't search everyone's homes all of the time for dead bodies, or other crimes.
We would all be better off without these things happening, and anyone would want less of it to happen.
Since they are only searching for _known_ abusive content, by definition they can only detect data that has been shared, which I think is the important point here.
> Unshared data shouldn't be subject to search. Once it's shared, I can make several cases for an automated scan, but a cloud backup of personal media should be kept private. Our control of our own privacy matters. Not for the slippery slope argument or for the false positive argument, but for its own sake. We shouldn't be assuming the worst of people without cause or warrant.
I have a much simpler rule: Your device should never willingly* betray you.
*With a warrant, police can attempt to plant a bug, but your device should not help them do so.
I don't think this rule makes any sense, because it just abstracts all the argument into the word "betray".
The vast majority of iPhone users won't consider it a betrayal that they can't send images of child abuse, any more than they consider it a betrayal that it doesn't come jailbroken.
The victims of child abuse depicted in these images may well have considered it a betrayal by Apple that they allowed their privacy to be so flagrantly violated on their devices up until now.
So if I understand correctly, they want to scan all your photos, stored on your private phone, that you paid for, and they want to check if any of the hashes are the same as hashes of child porn?
So... all your hashes will be uploaded to the cloud? How do you prevent them from scanning other stuff (memes, leaked documents, trump-fights-cnn-gif,... to profile the users)?
Or will a huge hash database of child porn hashes be downloaded to the phone?
Honestly, I think it's one more abuse of terrorism/child porn as a pretext to take away people's privacy, and to mark everyone opposing the law as terrorists/pedos.
...also, as in the thread from the original url, making false positives and spreading them around (think 4chan mass e-mailing stuff) might cause a lot of problems too.
> and they want to check if any of the hashes are the same as hashes of child porn?
... without any technical guarantee or auditability that any of the hashes they're alerting on are actually of child porn.
How much would you bet against law enforcement abusing their ability to use this, adding hashes to find out who has anti-government memes or images of police committing murder on their phones?
And that's just in the "land of the free". How much worse will the abuse of this be in countries that, say, bonesaw journalists to pieces while they are still alive?
I remember the story where some large gaming company permanently banned someone because they had a file with a hash that matched a "hacking tool". Turns out the hash was for an empty file.
Malware will definitely be created, almost immediately, that downloads files intentionally crafted to match CP - either for the purposes of extortion or just to watch the world burn.
I'm usually sticking my neck out in defence of more government access to private media than most on HN because of the need to stop CP, but this plan is so naive, and so incredibly irresponsible, that I can't see how anyone with any idea of how easy it would be to manipulate would ever stand behind it.
Signal famously implemented, or at least claimed to implement, a rather similar-sounding feature as a countermeasure against the Cellebrite forensics tool.
If you can recreate a file so its hash matches known CP, then that file is CP, my dude. The probability of just two hashes accidentally colliding is approximately 4.3×10^-60.
Even if you do a content aware hash where you break the file into chunks and hash each chunk, you still wouldn’t be able to magically recreate the hash of a CP file without also producing part of the CP.
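For intuition, here is a back-of-envelope sketch of why accidental collisions of a full cryptographic hash are a non-issue (assuming an idealized 256-bit hash; this is illustrative math only, and perceptual hashes are much smaller and deliberately tolerant of near-matches, which is a separate question):

```python
from math import expm1

def any_collision_prob(n: int, bits: int = 256) -> float:
    """Birthday-bound probability that ANY two of n random
    `bits`-bit hashes collide: ~ 1 - exp(-n*(n-1) / 2^(bits+1)).
    Uses expm1 to stay accurate for tiny probabilities."""
    space = 2.0 ** bits
    return -expm1(-n * (n - 1) / (2.0 * space))

# Chance that two specific files share a 256-bit hash
p_pair = 2.0 ** -256

# Even across a trillion images, an accidental collision stays negligible
p_any = any_collision_prob(10**12)
```

The pairwise probability is around 10^-77, and even the any-pair birthday bound over 10^12 images stays below 10^-50, so accidental collisions of a true cryptographic hash really are out of the question.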
That document you downloaded that is critical of the party will land you and your family in jail. Enjoy your iPhone.
Seriously, folks, we shouldn't celebrate Apple's death grip over their platform. It's dangerous for all of us. The more of you that use it, the more it creates a sort of "anti-herd immunity" towards totalitarian control.
Apple talks "privacy", but jfc they're nothing of the sort. Apple gives zero shits about your privacy. They're staking more ground against Facebook and Google, trying to take their beachheads. You're just a pawn in the game for long term control.
Apple cares just as much for your privacy as they do your "freedom" to run your own (un-taxed) software or repair your devices (for cheaper).
And after Tim Cook is replaced with a new regime, you'll be powerless to stop the further erosion of your liberties. It'll be too late.
But is there a realistically better alternative? Pinephone with a personally audited Linux distro? A jailbroken Android device with a non-stock firmware that you built yourself? A homebuilt RaspberryPi based device? A paper notepad and a film camera and an out of print street map?
It always starts with child porn, and in a few years the offline Notes app will be phoning home if you write speech criticising the government in China.
This technology inevitably leads to the surveillance, suppression and murder of activists and journalists. It always starts with protecting the kids or terrorism.
Perceptual hashes like what Apple is using are already used in WeChat to detect memes that critique the CCP.
What happens on local end user devices must be off limits. It is unacceptable that Apple is actively implementing machine learning systems that surveil and snitch on local content.
I agree, I would add that people have generated legal images that match the hashes.
So I want to ask what happens if you have a photo that is falsely identified as one in question and then an automated mechanism flags you and reports you to the FBI without you even knowing. Can they access your phone at that point to investigate? Would they come to your office and ask about it? Would that be enough evidence to request a wiretap or warrant? Would they alert your neighbors? How do you clear your name after that happens?
Edit: yes, the hash database is downloaded to the phone and matches are checked on your phone.
Another point is that the photos used to generate the fingerprints are legal black holes that, I assume, the public is not allowed to inspect. No one wants to be involved in looking at them; no one wants to be known as someone who looks at them. I assume it could even be legally dangerous to request to find out what has been put into the image database.
>I would add that people have generated legal images that match the hashes.
That seems like a realistic attack. Since the hash list is public (it has to be for client-side scanning), you could likely set your computer to grind out an image of some meme whose hash matches, which you then distribute.
No need to upload every hash or download a huge database with every hash. If I were building this system, I'd make a Bloom filter of the hashes. This gives compact space and O(1)-time checking of a hash match, with a risk of false positives. I'd only send hashes that hit the filter on to be checked against a full database.
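A minimal sketch of that design (hypothetical parameter choices; nothing here reflects Apple's actual implementation):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k bit positions per item in an m-bit array.
    Membership tests are O(k); false positives are possible, false
    negatives are not, so only filter hits need a server-side recheck."""
    def __init__(self, m_bits: int = 1 << 20, k: int = 7):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k indices from one SHA-256 digest (double-hashing trick)
        d = hashlib.sha256(item).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big")
        for i in range(self.k):
            yield (h1 + i * h2) % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# A phone could hold this filter locally; only candidate hits would
# ever be sent on to be checked against the full database.
bf = BloomFilter()
bf.add(b"example-known-hash")
```

With these (made-up) parameters the whole filter is 128 KiB, regardless of how many full hashes the server-side database holds.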
No, your hashes are not uploaded to the cloud, yes, hashes are downloaded to your phone. Yes, it will be interesting to see if it gets spammed with false positives, although it seems as though that can easily be identified silently to the user.
That's the stated purpose, but keep in mind that these databases (NCMEC's in particular, which is used by FB and very likely Apple) contain legal images that are NOT child porn.
No-no-no. It's not your phone. If it were your phone, you would have root access to it. It's their phone. And it's their photos. They just don't like it when there's something illegal in their photos, so they'll scan them, just in case.
It's funny that anyone here could find this acceptable. I wonder what the comments would be after Apple starts to scan phones for anti-censorship or anti-CCP materials in China. Or for some gay porn in Saudi Arabia.
Because, you know, in some countries there are materials that the local government finds more offensive than mere child abuse. And once surveillance tech is deployed, it's certainly gonna be used to oppress people.
> Because, you know, in some countries there are materials that the local government finds more offensive than mere child abuse. And once surveillance tech is deployed, it's certainly gonna be used to oppress people.
In Saudi, Bahrain, and Iran there is no minimum age of consent – just a requirement for marriage. In Yemen, the age of consent for women is 9 (but they must be married first). In Macau, East Timor, and UAE, it's 14. [1]
I would allege that in all of those states they would probably find the perceptual hash of government criticism far more important to include on the "evil material" database than anything else.
Won't anyone think of the children! And Tim Cook personally promised to not look at anything in my unencrypted iCloud backup, they really care about privacy!
And you can be sure that there's no way for the PRC, that already runs its own iCloud, to use this. America's favorite company wouldn't allow that.
> I wonder what the comments would be after Apple starts to scan phones for anti-censorship or anti-CCP materials in China.
I'm cynical enough to wonder whether this isn't their actual commercial reason for developing this, with CSAM being a PR fig leaf. Apple is substantially more dependent on China than its major competitors.
Exactly, now Apple has this tech, shady governments know they can mandate Apple to use it for their own databases and Apple will have to do this if they want to keep operating within a territory.
It's quite easy to extrapolate this and in a few steps end up in a boring dystopia.
First it's iPhone photos, then it's all iCloud files. That spills into Macs using iCloud, then it's client-side reporting of local Mac files. And somewhere along the way, all the other Apple hardware I've filled my home with has received equivalent updates and is phoning home to verify that I don't have files, or whatever data they can see or hear, that some unknown authority has decided should be reported.
What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?
> What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?
Apple takes care of everything for you, and they have your best interests at heart. You will be safe, secure, private and seamlessly integrated with your beautiful devices, so you can more efficiently consume.
What's not to like about a world where child crime, terrorism, abuse, radical/harmful content and misinformation can be spotted in inception and at the source and effectively quarantined?
No one here has a problem with the worst criminals being taken out. The problem is the scope creep that always comes after.
In 2021 and 2020 we saw people being arrested for planning/promoting anti lockdown protests. Not for actually participating but for simply posting about it. The scope of what "harmful content" is is infinite. You might agree that police do need to take action against these people but surely you can see how the scope creeped from literal terrorists and pedophiles to edgy facebook mums and how that could move even further to simple criticisms of the government or religion.
It's difficult to say how we draw the line to make sure horrible crimes get punished while still protecting reasonable privacy and freedom. I'm guessing Apple's justification here is that they are not sending your photos to police but simply checking them against known bad hashes, and if you are not a pedophile, there will be no matches and none of your data will have been exposed.
I mean, Apple isn't too far from the Mac thing you mention. Since Catalina, running an executable on macOS phones home and checks for valid signatures on their servers.
> "What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?"
Basically victims of rape don't want imagery of their rape freely distributed as pornography. They consider that a violation of their rights.
It's interesting how many users in this thread are instinctively siding with the offenders in this, and not the victims. Presumably because they made it through their own childhoods without having imagery of their own abuse shared online.
You are actually creating a false dichotomy here. There are more than two sides to this, and you are painting a (as I said, false) black-and-white picture.
I strongly believe that nobody wants to further victimize people by publicly showing images of their abuse.
And I believe very strongly that putting hundreds of millions of people under blanket general suspicion is a dangerous first step.
Imagine if every bank had to search all documents in safe deposit boxes to see if people had committed tax evasion (or stored other illegal things like blood diamonds obtained with child labor). That would be an equivalent in the physical world.
Now add to this, as discussed elsewhere here, that the database in question contains not only images of victims, but also perfectly legal images. This can lead to people being "rewarded" with a house search because they have perfectly legal data stored in their cloud.
Furthermore, this means that a single country's understanding of the law is applied to a global user community. From a purely legal point of view, this is an interesting problem.
And yes: I would like to see effective measures to make the dissemination of such material more difficult. At the same time, however, I see it as difficult to use a tool for this purpose that is not subject to any control by the rule of law and cannot be checked if the worst comes to the worst.
I feel it's a little disingenuous to describe millions of innocent people being surveilled as "the offenders" because there are a handful of actual offenders among them.
It is a violation of their rights. But we have a justice system set up which makes distributing such images knowingly (and committing such acts with the intent to distribute those images) a crime.
It's also incredibly likely this could be used to send people you don't like to prison by sending their phone innocuous images that look like wrong images to an AI.
It's also also incredibly likely this will evolve into scanning for more than just abuse photos, especially in the hands of governments around the world.
A new aspect of this is that because this is self-reported, and the end goal is to involve the criminal justice system, there is now (essentially) an API call that causes law enforcement to raid your home.
What would be the result of 'curl'ing back a few random hashes as positives from the database? Do I expect to be handcuffed and searched until it's sorted out? What if my app decides to do this to users? A malicious CSRF request even?
A report to the cybertips line does not equal a police raid. Unfortunately the scale of the problem and the pace of growth is such that only the worst of the worst content is likely to be prosecuted.
If a phone calls the API "hello, I found some porn here", the phone (and/or its owner) becomes a "person of interest" very quickly.
(I'll wager) The majority of these calls will be false positives. Now a load of resources get deployed to keep an eye on the device's owner, wasting staff time and compute, wasting (tax funded) government budget that could have gone towards proper investigation.
Yeah and sadly many of those who are consumers of illicit content get away with it because it's much more important to target the creators. The unfortunate reality of finite resources.
Also, if they send perceptual hashes to your device - it's possible images could be generated back from those hashes. These aren't cryptographic hashes, so I doubt they are very good one-way functions.
Another thought - notice that they say "if too many appear". This may mean that the hashes don't store many bits of information (and would not be reversible) and that false positives are likely - ie, one image is not enough to decide you have a bad actor - you need more.
But at Apple's scale, statistically, some law-abiding users would likely get snagged with totally innocent images.
It's also just plain absurd. Hundreds of pictures of my own children at the beach in their bathing suits? No problem. Hundreds of photos of other peoples' children in bathing suits? Big problem. Of course, the algorithm is powerless to tell the difference.
In cryptography, creating a one-way function is not a problem. The only thing required for that is losing information, which is trivial. For example, taking the first n bytes of a file is a one-way hash function (for most files). So reversing the hashes is most definitely not a problem.
Creating collisions could be a problem, though: e.g., brute-forcing a normal picture into matching illegal content's hash by modifying random pixels bit by bit is a possibility.
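To make the contrast concrete, here is a toy perceptual hash in the spirit of "average hash" (purely illustrative; real systems like PhotoDNA or Apple's NeuralHash are far more elaborate, but share the key property that similar images produce similar hashes):

```python
def average_hash(pixels):
    """Toy perceptual hash: threshold each pixel of a tiny grayscale
    image against the image's mean, packing the 64 results into an int.
    Only 64 bits survive, so it is one-way but collision-prone."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# An 8x8 "image" and a uniformly brightened copy hash to the same value,
# since brightening preserves each pixel's relation to the mean.
img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
bright = [[p + 10 for p in row] for row in img]
print(hamming(average_hash(img), average_hash(bright)))  # prints 0
```

That tolerance to small edits is exactly what makes perceptual hashes matchable across recompression and resizing, and also what makes deliberately engineered collisions far easier than for a cryptographic hash.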
1) You willingly delegated the decision of what code is allowed to run on your devices to the manufacturer (2009). Smart voices warned you of today's present even then.
2) You willingly got yourself irrevocably vendor-locked by participating in their closed social networks, so that it's almost impossible to leave (2006).
3) You willingly switched over essentially all human communication to said social networks, despite the obvious warning signs. (2006-2021)
4) Finally you showed no resistance to these private companies when they started deciding what content should be allowed or banned, even when it got purely political (2020).
Now they're getting more brazen. And why shouldn't they? You'll obey.
Great, so what's the solution? What are you doing to fix it? Do you roll your own silicon? Do you grow your own food (we have no idea what someone could be putting in it)? Are you completely off-grid? Or are you as completely dependent on society writ large as everyone else?
Making holier-than-thou comments about everyone else being sheep isn't helpful or thought-provoking. Offer an alternative, even if it is a bad one (looking at you, Mastodon). So here's mine: we need to change the power of digital advertising. Most of the most rent-seeking companies generate revenue primarily by selling ads to get more people to buy more crap. I want a VAT on all revenue passing through the digital advertising pipeline. My hope is that if these things are less profitable, it will reduce the outsized impact these companies (social, infotainment [there is no news anymore], search, etc.) have on our economy and life. People are addicted to FOMO and outrage (faux?); I don't think that will ever change, but we can try to make it less profitable.
Seriously? Perhaps heed the warnings? Whenever Apple tightened the reins, thousands of apologists came to their defense. I wouldn't even have minded if they kept their obedience to personal decisions. But they extended their enlightenment to others.
The solution is to go back to the original spirit of the Internet, when it was a set of open standards connecting people and organizations. Somehow it got forgotten and now we have a bunch of commercial companies giving you the same stuff in exchange for your privacy and who increasingly control everything you do.
Dumping Facebook and its products, as I and others have done, is one strong step forward, but people can't even manage this. It's deeply disappointing. Techno has its roots in rebellion, yet everybody throwing techno parties is coordinating on what is essentially an Orwellian state. Punks too; they're cozied up to this framework of oppression and can't see it for what it is.
I think people have a hard time seeing the ethics in the technology they choose to use. Maybe the next wave of net-natives will be able to rediscover that common thread of rebellion and resist. It's insidious, I'll give you that. It's not obvious what is being surrendered with every participation on these platforms but it doesn't take a genius to see clearly.
What we have been doing would have worked if the majority had followed.
Choose open standards, use and contribute to FOSS, avoid social networks, get involved in your local community, etc.
No need to go to extremes or make complicated plans; corporations follow the customers.
But nobody listened. Quite the opposite. I never had a Facebook account, and now today people are boasting when they leave FB. But 10 years ago? Oh, we were the paranoid extremists.
Even today my friends regularly pressure me to get whatsapp.
The solution won't be technological, it will be in realm of laws and regulations. We are weak peasants and don't have any power over big tech, but we can change legal environment for them.
IANAL, but we (via our elected representatives) can push a law that prohibits restrictions on the execution of users' code on their own devices. Or we can split app stores from vendors and obligate them to provide access to third-party stores, like we did with IE and Windows.
Also, it's completely doable to stop NSA/Prism totalitarian nonsense.
What we can do as tech people?
- raise awareness
- help people to switch from big tech vendor locks
- help people harm Big Tech by installing ad blockers, Pi-hole, etc.
- participate in opensource (with donations or work)
I'd argue: avoid using proprietary networks, avoid vendor lock-in with software and hardware, and use hardware that one is allowed to use their own software on. Champion using open and federated protocols for social tools.
I think solutions exist, but honestly, it isn't easy.
> Making holier than thou comments about everyone else being sheep isn't helpful
I would offer the GP comment isn't necessarily a holier than thou comment, it's a comment of frustration. Frankly, I feel the same frustration.
It's tiring to hear snide remarks of "ohh yeah, we can't include you in something because you don't have an iPhone". Hell, I have openly heard, even on this forum, that people don't include folks in social conversations with EVEN THEIR OWN FAMILY because of the dreaded "green bubble". (FYI, MMS is entirely done over HTTP and SMS! How is Apple's MMS client so bad that it can't handle HTTP and SMS?)
Or there is the "why don't you have WhatsApp/Facebook/Instagram/etc." and people think you're some sort of weirdo because you don't want to use those networks.
So to be honest, when I see something like that, I think "Well I'm not surprised, this is what happens when you are locked out of your own hardware".
> What are you doing to fix it?
While GP may not be doing anything, others are helping and actively working on alternatives. For example, I have been working to get the Pinephone to have MMS and Visual Voicemail support so I can use it daily. I am very fortunate to work with a lot of very talented and motivated folks who want to see it succeed.
It's incredible how people are going to blame absolutely everything on ads. We're talking about a company for which ads are only a small part of their revenue doing something following government pressure, and somehow ads are the problem.
I'm dependent, just as you say, and have no illusions about that.
Getting into this situation wasn't my decision (it was a collective "decision" of our society), and getting out of this won't be due to anything I'll personally do either.
The only difference between me and the average joe is having understood that we have a problem earlier than most.
It’s not clear that governments would give the open social networks an easier ride either. It could be argued that distributed FOSS developers are easier to pressurise into adding back doors, unless we officially make EFF our HR/Legal department.
The other problem is workers have a right to be paid. The alternatives are FOSS and/or distributed social media. Who in good conscience would ask a tech worker to give away their labour for free, in the name of everyone else’s freedom?
In a world of $4k rent, who amongst us will do UX, frontend, backend, DevOps, QA, and Security for 7 billion people, for anything but the top market rate?
The real alternative is to attack the actual problem: state overreach. Don’t attack people for using SnapChat — get them to upend the government’s subservience to intrusive law enforcement.
imho, we have everything in the foss world working tightly except great UX/UI. in my experience in the open source world – which is not insignificant – great UX is the only thing stopping us from a paradigm shift to actual tech liberation.
even outside of corporate funded work/commits, we see an astounding number of people donating incredible amounts of their time towards great quality code. but we still thoroughly lack great UX/UI.
i’m not talking about “good”, we have some projects with “good” UX, but very very few with great.
there are many reasons and I’d be happy to share what some of them are, but in my mind great UX is unquestionably one of two primary things holding us back from actual truly viable software liberation.
> It could be argued that distributed FOSS developers are easier to pressurise into adding back doors, unless we officially make EFF our HR/Legal department.
>Who in good conscience would ask a tech worker to give away their labour for free, in the name of everyone else’s freedom?
Here's the hope: the tech workers doing it for 'free' because they're scratching their own itch. So it would not be an act of onerous charity. The techies make some free open source decentralised clone of Reddit, say, then some folks among knitting communities, origami enthusiasts, parents groups, etc. copy it for free and pay to run it on their own hardware.
If it seems like this scanning is working as advertised, this will be a great marketing stunt for Apple. Actual predators will stop using Apple products out of fear of getting caught and they will be forced to use Android.
Now any person who owns an Android is a potential predator. Also, if you are trying to jailbreak your iPhone, you are a potential predator.
Some 'predators' are dumb. They'll keep using iPhones, get caught, and have their mugshots posted in the press. Great PR for the policy makers who decided this. Such stories will be shoved in the faces of the privacy advocates who were against it, to the detriment of their credibility.
The twitter comments also mentioned scanning for political propaganda etc. This could work against Apple if normal folks don't want all their stuff scanned on behalf of unnamed agencies.
Or they will just get one step deeper into the dark, by using a designated device for the dirty stuff. Potentially only used within tor/vpn with no connection to "normal"-life.
Congratz, investigations got a bit harder, but now everyone has to live with a tool that will be used against them when needed. No sane person can believe that this won't be used for other "crimes" (however those are defined) tomorrow.
I doubt that having a manufacturer that is able to read the contents of your device at any point is good marketing. Although, I know some Apple users that would certainly buy that excuse.
This sort of scanning has existed for well over a decade, and was originally developed by Microsoft (search PhotoDNA).
The only thing that's changed here is that there is more encryption around, and so legal guidelines are being written to facilitate this, which has been happening for a long, long time.
(I don't disagree with your overall point, and child porn is definitely the thin edge of the wedge, but this isn't new and presumably shouldn't be too surprising for any current/former megacorp as they all have systems like this).
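For context on how PhotoDNA-style systems match images: they use perceptual hashes, which survive resizing and recompression, unlike cryptographic hashes. The real algorithms are proprietary, but the general idea can be sketched with a toy average hash (my own illustration, not PhotoDNA's actual algorithm):

```python
def average_hash(pixels):
    """Toy perceptual hash of an 8x8 grayscale image (list of rows of
    ints 0-255). Each bit records whether a pixel is brighter than the
    image's mean, so visually similar images get similar bit strings.
    Real systems (PhotoDNA, NeuralHash) are far more robust than this."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming(h1, h2):
    """Count of bits that differ; small distance = 'probably the same image'."""
    return sum(a != b for a, b in zip(h1, h2))
```

Matching is then a nearest-neighbour lookup: an image whose hash lands within some Hamming distance of a database entry is flagged, which is what lets a database of known images catch re-encoded copies.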
Nothing is ever new. You can always find some vague prototype of an idea that failed to become ubiquitous ten years ago.
When I read that this shouldn't be surprising, it has an aftertaste of "Dropbox is not interesting/surprising because ftpfs+CVS have existed for well over a decade"
> 1) You willingly delegated the decision of what code is allowed to run on your devices to the manufacturer (2009). Smart voices warned you of today's present even then.
99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware.
> 2) You willingly got yourself irrevocably vendor-locked by participating in their closed social networks, so that it's almost impossible to leave (2006).
"Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Not to mention that, again, 99% of people will get vendor-locked because in the end nobody wants to run their own instance of a federated social network.
> You willingly switched over essentially all human communication to said social networks, despite the obvious warning signs. (2006-2021)
Yes, it's been years since I talked someone face to face or on the phone and I cannot send letters anymore.
> 4) Finally you showed no resistance to these private companies when they started deciding what content should be allowed or banned, even when it got purely political (2020).
No resistance? I mean, there's been quite a lot of discussion and pushback on social networks for their decisions on content. Things move slowly, but "no resistance" is quite the understatement.
> Now they're getting more brazen. And why shouldn't they? You'll obey.
Is this Mr. Robot talking now?
But now more seriously, in December the European Electronic Communications Code comes into effect, and while it's true that there's a temporary derogation that allows these CSAM scanners, there's quite a big debate around it and things will change.
The main problem with privacy and computer control is a collective one that must be solved through laws. Thinking that individual action and free software will solve it is completely utopian. A majority of people will delegate control over their computing devices to another entity because most people don't have both the knowledge and the time to do it themselves, and that entity will always have the option to go rogue. And, unfortunately, regulation takes time.
Anyways, one should wonder why, after all these years of these kinds of smug messages, we're in this situation. Maybe the solutions and the way of communicating the problems are wrong, you know.
>99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware
I do not agree with this. You are saying people are too stupid to make decisions, and that is immoral in my opinion.
>"Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Data portability is a thing. This was the original problem with FB and that's how we got 'takeout'.
>Yes, it's been years since I talked someone face to face or on the phone and I cannot send letters anymore.
>Is this Mr. Robot talking now?
Using the extreme in arguments is dishonest. We are talking on HN, where it is a selective group of like-minded people (a bubble). How does your delivery driver communicate with their social circles? Or anyone that services you? You will find different technical solutions are used as you move up and down the social hierarchy.
>The main problem with privacy and computer control is a collective one that must be solved through laws.
Technology moves faster than what any law maker can create. We do not need more laws as technology advances but rather an enforcement of personal rights and protections enabling users to be aware of what is happening. It appears you are stating "people aren't smart enough to control their devices" and "We need laws to govern people" vs my argument that "people should be given the freedom to chose" and "existing laws should be enforced and policy makers should protect citizens with informed consent".
> "Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Not true. If you have interoperability between different networks, you can leave. This is how ActivityPub (e.g. Mastodon, PeerTube, PixelFed) works.
> Not to mention that, again, 99% of people will get vendor-locked because in the end nobody wants to run their own instance of a federated social network.
You just switch to any other instance, because Mastodon doesn't prevent you from doing that.
> The main problem with privacy and computer control is a collective one that must be solved through laws. Thinking that individual action and free software will solve it is completely utopic.
We need both. You cannot force Facebook to allow interoperability when there is no other social network.
You could go back even further if you wanted. Possibly to the first handwritten letter delivered by a third party. That's where all the potential for censorship and tampering started.
Truth is even if our tools evolve, our chains evolve faster.
Signals intelligence intercept and analysis centres have been called Black Chambers for a long time, including the first such group in the US, predecessor to the NSA:
I don't think this is a fair characterization; it should not be most people's life goal to fight for their privacy against big companies. Some people make it theirs, and that's fine, but I think it's def not something to expect from most people, in the same way that you don't expect everyone to be actively fighting for clean tap water or drivable roads.
Instead, we as a collective decided to offload these tasks to the government and make broad decisions through voting. This allows us to focus on other things (at work, with our actual job, at home, you can focus with what matters for you, whatever that might be).
For instance, I tried to avoid Facebook for a while and it was working well; I just missed a few acquaintances but could keep in touch with the people who matter to me. Then suddenly they acquired WhatsApp. What am I to do? Ask my grandmother and everyone in between to switch to Telegram? Instead, I'm quite happy as a European about the GDPR and how the EU is regulating companies in these regards. It's definitely not there yet, but IMHO we are going in the right direction.
Probably because most people estimate the risk to be low enough (correctly or not). If I were a politically sensitive person in China, for instance, I'd definitely be more wary.
(1) happened with the first multitasking OS, or possibly when CPUs got microcode; Android and iOS are big increases in freedom in comparison to the first phones.
(2) and (3) are bad, but a tangential bad to this: it’s no good having an untainted chat layer if it’s running on an imperfect — anywhere from hostile to merely lowest-bidder — OS. (And most of the problems we find in software have been closer to the latter than the former).
(4) for all their problems, the American ones held off doing that until there was an attempted coup, having previously resisted blocking Trump despite him repeatedly and demonstrably violating their terms.
Re (1): That's technically true, but misses the point when viewed holistically. Those first feature phones were mostly just used to make quick calls to arrange an appointment or discuss one or two things. They were not a platform mediating a majority chunk of our social lives like today's phones are.
Will this work differently depending on what country you are in?
For instance, back in 2010 there was that thing about Australia ruling that naked cartoon children count as actual child porn. [1]
It's perfectly legal elsewhere (if a bit weird) to have some Simpsons/whatever mash-up of sexualised images, but if I flew on a plane to the land down under, would I then be flagged?
edit:
If this is scanning stuff on your phone automatically, and you have WhatsApp or whatever messenger set to save media automatically, then by mass texting an image that is considered 'normal' in the sender's country but 'bad' in the recipient's, you could get a lot of people flagged just by sending a message.
Sorry to say that, but stuff like this has to happen at some point when people don't own their devices. Currently, nearly no one owns their phone and at least EU legislation is underway to ensure that it stays this way. The next step will be to reduce popular services (public administration, banking, medicine) to access through such controlled devices. Then we are locked in.
And you know what? Most people deserve to be locked in and subject to automatic surveillance. They will wake up when their phone creates a China-Style social score automatically, but then it will be far too late. It's a shame for those people that fought this development for years, though. But the "I have nothing to hide" crowd deserves to wake up in a world of cyber fascism.
When you describe that part of the population as deserving the consequences, it does not seem too far to me from "People who want trains instead of cars deserve trains". Is that relevant? The big problem is that people buy controversial services, hence finance, endorse, and strengthen them, and in some cases these services make the acceptable ones extinct: people do not refuse what is not sensible, and sensible people have to pay.
Already here where I live, I cannot get essential services¹ because that practice made them extinct!!!
¹(exactly: public administration, tick; banking, very big tick; medicine, not yet. And you did not mention ___the cars___, and more...)
Other note: you wrote
> nearly no one owns their phone
and some of us are stuck with more reliable older devices, which may soon need some kind of replacement. If you know exceptions to the untrustworthy devices, kindly share brand/model/OS/tweak.
Why do they DESERVE to be so? Despite what you say there was and is no mechanism to really change or affect the course of these affairs.
Apple? How? Your other option is Android, who do you choose when they start to do it?
Or when governments decide to mandate that ALL phones need to legally have "scanning all the files on it and report them back to the police database" mechanisms?
The EU? Particularly how? An organization that has been deliberately structured to supersede the legitimacy of nation states and export its power to all of its members at the whim -- sometimes it seems -- of some aging out-of-touch bureaucrats.
I'm not even a #brexiter, btw.
Should Scotland become an independent nation? There was a public debate and people had opinions -- and there were mechanisms in place to act on and make a change, as an example.
There has been no public debate on this in a national sense (anywhere), and also no mechanisms by which people could decide to change it. I'm not sure people deserve it.
Well this really debunks my common phrase “Apple is a Privacy company, not a Security company”
I can’t say I’m surprised they are implementing this (if true), under the radar. I can’t imagine a correct way or platform for Apple to share this rollout publicly. I’m sure nothing will come of this, press will ignore the story, and we all go back to our iPhones
Apple only claim to care about privacy because they couldn't manage to compete with Google on running an ad-serving cloud service. Since they couldn't sell ads in meaningful numbers, they figured they might as well brag about it.
But Apple iCloud for ex doesn't intrude on your privacy any more or less than Google Photos.
This is really a pointless comment - yes all companies are ultimately there to make money, but that does not mean always blindly doing whatever makes the most money in the short term. Clearly apple sees value in marketing themselves as privacy friendly.
The terrifying part about this is potential abuse. We have seen people arrested for having child porn in their web cache just from clicking on a bad link. I could inject your cache with any image I want using JS.
Presumably the same could apply to your phone. Most messengers save images automatically. I presume the images are immediately scanned against hashes once saved. And the report is immediately made if it passes the reported threshold. There’s no defence against this. Your phone number is basically public information and probably in a database somewhere. You have no protection here from abuse, if you’re a normal citizen. I bet most people don’t even turn the auto save setting off on WhatsApp.
This has worrying privacy implications. I hope Apple makes a public announcement about this but wouldn’t be surprised if they don’t. I also would expect EFF will get on this shortly.
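The flow described above (media auto-saved, hashed, compared against a blocklist, a report fired once enough files match) can be modeled in a few lines. This is purely a hypothetical sketch: it uses SHA-256 for brevity where real systems use perceptual hashes, and the report threshold here is an assumed value, not a disclosed one:

```python
import hashlib

REPORT_THRESHOLD = 3  # assumed value; the real threshold is not public

def scan_saved_media(files, blocklist):
    """Hash every auto-saved file and check it against a blocklist of
    known-bad digests; once enough distinct files match, a report would
    fire. `files` maps filename -> bytes. All names here are hypothetical."""
    matches = {name for name, data in files.items()
               if hashlib.sha256(data).hexdigest() in blocklist}
    return matches, len(matches) >= REPORT_THRESHOLD
```

The worry in the comment falls out of the model: with auto-save on, the sender, not the recipient, controls what enters this pipeline.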
> Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content.
> That’s the message they’re sending to governments, competing services, China, you.
False positives, what if someone can poison the set of hashes, engineered collisions, etc. And what happens when you come up positive - does the local sheriff just get a warrant and SWAT you at that point? Is the detection of a hash prosecutable? Is it enough to get your teeth kicked in, or get you informally labeled a pedo by your local police? On the flip side, since it's running on the client, could actual pedophiles use it to mutate their images until they can evade the hashing algorithm?
Ok. I can say this since I don't have anything to hide (edit: 1. that I am aware of and 2. yet).
I switched to the Apple ecosystem 2 years ago and have been extremely happy.
I couldn't see a single reason to switch back.
Today that reason came. What goes on on my phone is my business.
I guess fairphone next.
Again, I think I have nothing to hide now so I can say this loud and clear. Given what recent elections have shown us, we cannot know whether I'll have something to hide in a few years (political, religious? Something else? Not that I plan to change, but things have already changed a lot since I was a kid 30 years ago.)
At the end of the day laws are relative so to say. The thought behind such a system is noble indeed, but as we've seen, anything any government gets their hands on, they will abuse it. Classic example being PRISM et al. In theory it's great to be able to catch the bad guys, but it was clearly abused. This is from countries that are meant to be free, forward thinking etc, not any authoritarian regimes.
People in this thread are asking what Saudi Arabia, China etc will do with such power that Apple is adding, you bet your ass that they'll use it for their own gain.
I want to believe in such systems for the good. I want child abusers caught. But a system that equally can be abused by the wrong people (and I guarantee you that will be western countries too) ain't it.
It's not even hypothetical, it's already known that Apple has to use servers operated by China for their operations there [1] so this capability will be fully within their hands now too to arbitrarily censor and report iPhone users for any material they want to disallow.
how the fuck am i supposed to know if that image i downloaded from some random subreddit is of a girl who is 17.98 years old? how long until we just use a NN to identify images of children automatically? she looks pretty young so i guess you will get disemboweled alive in prison? what is stopping someone from planting an image on your phone or a physical picture somewhere on your property? im so tired of this fucking dogma around child porn. you can always identify the presence of dogma by the accompanying vacuum of logic that follows in its wake. a teenage girl can go to jail for distributing pictures that she took of herself. do i even need to say more?
And with this, the fear politics are in effect. Just from reading the comments, it seems one can no longer be 100% sure their phone is clean. So people will live in constant fear that on some random Tuesday the cops will come knocking; your reputation will be destroyed, and by the time you're cleared you will have incurred incredible financial and mental costs. This is all aside from the fact that your phone should be your phone, and no one else should be allowed in.
You demo this tech working on child porn, it maybe shows its worth with some Isis training videos, but before long China will be demanding access on their terms as a condition of accessing their markets.
And at that point the well meaning privacy advocate who worked hard to get some nice policies to protect users is booted off the project because you can hardly tell the shareholders and investors who own the company that you're going to ignore $billions in revenue or let your rival get ahead because of some irrelevant political movement on the other side of the world.
It's happened plenty of times before and it'll happen again.
What I find disturbing is that almost all commenters here took that rumour for a fact. There's nothing to substantiate it, there's no evidence of the scanning actually happening, and there's no historical precedent of Apple doing anything similar. And yet, people working in tech, with supposedly developed critical thinking, took the bait.
Why? Is it simply because it fits their world view?
You’re right of course but I think in this case it was due to the reputation of the poster on Twitter. At least, that’s the only reason I would take this rumor seriously. But yeah, a rumor is a rumor still.
I found another source (https://www.macobserver.com/analysis/apple-scans-uploaded-co...) saying apple was already running these scans on iCloud using homomorphic encryption… in 2019. It doesn’t really make sense for them to run it on device. Apple has the keys to unlock iCloud backups on their server and a sizable portion of users have those enabled, so why bother to run these on device?
I’m not sure if it’s a rumor or not but there was a thread on HN the other day about Facebook exploring homomorphic encryption for running ads on WhatsApp and I wonder if wires got crossed?
This matches up with how I view Apple's corporate thinking: "we know what's best", "the consumer is not to be trusted".
Apple limits access to hardware and system settings; they block apps that don't meet moral standards, are "unsafe", or just might cause Apple to not make as much money. They do not significantly care what people say they want; after all, they know best.
A lot of people love not having options and having these decisions made for them.
I would never want a device like that or with something that scans my device, but I think the vast majority of their customers, if they even hear about it, will think "I trust Apple, they know what's best, it won't affect me".
I'm ok with Apple doing it because I think most Apple users will be ok with it. I would not be ok with it if all Android devices started doing it, though.
That’s sort of the genius of Apple though, isn’t it? They make products that hold your hand tighter than any designer but their marketing department obscures that fact as much as possible with their “think different” campaigns.
It’s less “I trust Apple” and more “if I really cared, I’d have bought from another designer”.
This isn't exclusive to Apple - Microsoft recently decided that starting from August, Windows Defender will have the option for blocking PUAs enabled by default for users who don't have other third-party security software [1]. This also, I believe, falls under "we know what's best" and "the customer is not to be trusted" or "is too stupid to run things on their own".
This does look good on paper - caring for customers and their security and peace of mind - but tomorrow it might be a total vendor lock-in with no way of installing any software other than what is approved by the corporate entities.
I don’t see why a circumventable default block of random executables is bad. People (even technically adept ones, who are a small minority) are very easy to fool. Depending on how easy it is to allow execution of such a program (which is a UX problem) it can indeed be what prevents a botnet on a significant number of computers.
I'm a little bit confused here and hope maybe some of you can clear this up.
My parents took lots of photos of me as a baby/small child. Say lying naked on a blanket or a naked 2yr old me in a kiddie pool in the summer in our backyard. Those are private photos and because it was the 1970s those were just taken with a normal non-digital camera. They were OBVIOUSLY never shared with others, especially outside immediate family.
Transform that into the 2020s and today these type of pictures would be taken with your iPhone. Would they now be classified as child pornography even though they weren't meant to be shared with anyone nor were they ever shared with anyone? Just your typical proud parent photo of your toddler.
Sounds a bit like a slippery slope, but maybe I am misunderstanding the gravity here. I'm specifically highlighting private "consumption" (parent taking picture of their child who happens to be naked as 1yr olds tend to be sometimes) vs "distribution" (parent or even a nefarious actor taking picture of a child and sharing it with third parties). I 100% want to eliminate child pornography. No discussion. But how do we prevent "false positives" with this?
As with all horribly-ill-defined laws, it depends how the judge is feeling that day and their interpretation of the accused's intent. If the case can be made that the images arouse inappropriate gratification, they can be deemed illegal.
If that sounds absurd - most laws are like that. For better or worse, there's a human who interprets the law, not a computer. It's unfortunate Apple is choosing to elect a computer as the judge here, for exactly concerns like yours.
I believe there is a large database of known child pornography.
Unless someone has been distributing photos of your kids as child porn (which would probably be good to know) it's unlikely any of your photos will match the hashes of the photos in that database.
I'm not sure that's how it works, but that's what I've gathered from the other comments on this post.
So far. Many websites already use NN trained to detect any nudity. It is only a matter of time before it lands on all consumer computing devices. The noose will keep on tightening because people keep debating instead of protesting.
Well, this is very problematic for a privacy-concerned company. Under no circumstances do I want Apple to scan my private files/photos, especially so if an alarm means someone can inspect the content to determine whether it is a true or a false positive.
Also, this functionality isn't something they should be able to implement without telling their end users.
It is also problematic because it will just make cyber criminals more technically aware of what countermeasures they must take to protect their illegal data.
The consequence is very bad for the regular consumer: the cyber criminal will be able to hide, and the government has the possibility to scan your files. End consumer lose, again.
Every so often I feel a wave of revulsion that the computer I use the most — my iPhone — is an almost completely closed system controlled by someone else.
Contrast this with my desktop where, in the press of a few buttons, I am presented with the source code for the CPU frequency scaling code.
This will be used for anti-piracy, government censorship, and targeted attacks, as always. There's no such thing as "we're only scanning for CP". By creating the tool, the company can be compelled to use it in other ways by the U.S. or foreign governments. Apple already complies with anti-LGBT countries and will change their app store to suit each one of them. What happens when they're required to also scan for LGBT materials? They'll comply, because Apple doesn't actually have morals.
On top of this, it gives Apple far too much power. What happens when someone they don't like owns an iPhone? They can pull an FBI and put content onto the device, then have it "automatically detected".
Since Snowden, I use my phone in a minimalistic way. Phone calls. Minimal texting. No games. Banking apps if necessary.
Treat your phones as an enemy. Use real computers with VPN and software like Little Snitch when online. Use cameras for photography and video.
The benefits of this approach are immense. I have a long attention span. I don't have fear of missing out.
If governments want the future to be painted by tracking and surveillance mediated through big tech - let's make it mandatory by law. And since big tech will reap benefits from the big data, they must provide phones for free.
:)
>Treat your phones as an enemy. Use real computers with VPN and software like Little Snitch when online.
I'm assuming your "real computer" is a mac (since little snitch is mac only). What makes you think apple won't do the same for macos? Also, while you have greater control with a "real computer", you also have less privacy from the apps themselves, since they're unsandboxed and have full access to your system.
Yes, I am a dumb person obviously. Thanks for your invaluable input. Dude.
But my use case is not to hide or remove digital exhaust.
Creating a habit of limited usage is more important and realistic. The funny part is that as a side-effect I don't carry my smartphone around so much. I have a separate GPS system in my cars and a dumb phone for emergencies.
If you're treating your phone as hostile why would you skip gaming apps but use banking ones? That seems backwards if you're assuming your mobile is the weak point.
In the EU, the PSD2 directive obliged banks to provide strong authentication for the customer login process and various operations on the account, incl. payments ofc. Most of the time mobile applications are used as a result - for either login confirmation or as software OTP generators (biometric verification is also supported); the lists of printed codes are rather obsolete now and some banks may actually charge you extra for sending text messages with such codes. I know there are hardware security tokens but in all these years I haven't seen anyone using them here.
So, it's rather hard to avoid banking apps.
Also, the PSD2 directive implements the duty of providing API infrastructure for third-parties. [1]
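For the curious, the "software OTP generators" mentioned above are typically TOTP (RFC 6238): an HMAC over the current 30-second time window, truncated to a few digits. A minimal stdlib sketch (banking apps layer device binding and biometrics on top of this, but the code generation itself is this simple):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, step=30):
    """RFC 6238 time-based one-time password over a shared secret."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step  # index of the current 30-second window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F   # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the ASCII test secret "12345678901234567890" this reproduces the test vectors published in RFC 6238's appendix, which is a quick way to check an implementation.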
I was under the impression that one of the reasons why these tools aren’t available for public download is because the hashes and system can be used to design defeat mechanisms? Doesn’t this mean that someone who has an image and a jail broken device can just watch the system, identify how the photo is detected, and modify it so that it doesn’t trip the filter?
PhotoDNA and systems like it are really interesting, but it seems like clientside scanning is a dangerous decision, not just from the privacy perspective. It seems like giving a CSAM detector and hashes to people is a really risky idea, even if it’s perfect and it does what it says it does without violating privacy.
If the algorithm and the blocklists leaked, then not only it would be possible to develop tools that reliably modify CSAM to avoid detection, but also generate new innocent-looking images that are caught by the filter. That could be used to overwhelm law enforcement with false positives and also weaponized for SWAT-ing.
Fortunately, it seems that matching is split between client-side and server-side, so extraction of the database from the device will not easily enable generation of matching images.
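To make the evasion concern concrete: with the matcher and its threshold available on the client, an attacker can use it as an oracle and nudge pixels until the match breaks. A toy hill-climb against a stand-in average hash of my own (real perceptual hashes like PhotoDNA or NeuralHash are harder targets, but the principle is the same):

```python
import random

def ahash(pixels):
    """Stand-in perceptual hash: bit = pixel brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def evade(pixels, blocked_hash, threshold, max_steps=10000, seed=0):
    """Randomly nudge pixels until the image's hash falls outside the
    match radius, leaving it visually near-identical. If max_steps is
    exhausted, the evasion failed and the last attempt is returned."""
    rng = random.Random(seed)
    img = [row[:] for row in pixels]
    for _ in range(max_steps):
        if hamming(ahash(img), blocked_hash) > threshold:
            return img  # the matcher no longer flags this image
        r, c = rng.randrange(len(img)), rng.randrange(len(img[0]))
        img[r][c] = max(0, min(255, img[r][c] + rng.choice([-16, 16])))
    return img
```

The converse attack mentioned above, nudging an innocent image toward a blocked hash to manufacture false positives, is the same loop with the inequality reversed.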
I just assumed the entire FAANG group scanned user content for CP already. I mean, EU recently passed a vote that extended the permission for companies to be able to scan and report user content for this purpose without it being considered a privacy violation [1] (first introduced in 2020). And I recall MS[2]/Google[3] being open about this practice way in the past.
Personally I somehow doubt that MS/Google weren't scanning private content (aka not shared) for this type of material. But can't have transparency with these behemoths.
This might be an unpopular opinion, but catching people sharing CP images is like catching end users of drugs. Yes, it's illegal, but the real criminals are the producers. It's very difficult to get to them, though, so you just arrest end users.
Another side note about the near future: when someone comes up with synthetic CP images, will those also be criminalised?
It's not just unpopular, it's also wrong: when you use drugs, you are almost entirely harming yourself (leaving aside funding all sorts of illegal activity, just focusing on the act itself). When you propagate CSAM, you are causing psychological harm to the victims, and can cause them to physically harm themselves or be harmed by others. So you are a criminal, harming a victim as well.
How would a victim of CSA ever find out that I downloaded a particular file? Surely the harm there is caused by the distributor, not the consumer.
Conversely, when I use drugs, I'm paying someone, so I'm actually directly funding criminals. Depending on the country and the drugs, this is often putting cash in the hands of a very violent cartel.
I have so many questions about the implementation details.
1) Does this work only on iPhones or will it be iPads, as well?
2) Is this part of a system software update? I wonder if that will show up in the notes and how it would be spun. "In order to better protect our users ..."
3) If it is part of the system software update, will they be trying to make it run on older iDevices?
4) Is it just photos in your photo bin, iCloud, or does it start grabbing at network drives it attaches to? I could see the latter being prone to blowing up in their proverbial faces.
Even when this reaches its final conclusion, policing copyrighted and political content, people will still be content to use their i-spy-devices. The future is grim; it's now.
How do they determine if an image is child porn? My wife has an iPhone and we take pictures of our baby daughter on it, sometimes in diapers and sometimes naked. Our intentions are not pornographic but now I am worried about apple's algorithm flagging them as such.
It just gathers hashes without judging or interpreting. This is the first phase. When a child porn picture is discovered and inquired about, they just compare the hash of it with what they have on database and see who had that picture on their phone as well, allowing them to build a nicely timelined trace of it and even discover the first source.
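A minimal sketch of that tracing idea, with entirely hypothetical users, hashes, and timestamps (this is not a description of Apple's or anyone's actual pipeline):

```python
# Given per-user logs of (timestamp, image_hash), find everyone who held a
# known-bad hash and order them by first-seen time to guess the source.
from datetime import datetime

# hypothetical logs: user -> list of (timestamp, image_hash)
logs = {
    "user_a": [(datetime(2021, 8, 1), "h1"), (datetime(2021, 8, 2), "h9")],
    "user_b": [(datetime(2021, 8, 3), "h9")],
    "user_c": [(datetime(2021, 8, 5), "h9"), (datetime(2021, 8, 6), "h2")],
}

def trace(target_hash):
    """Return users holding target_hash, earliest first (likely original source)."""
    hits = [(ts, user) for user, entries in logs.items()
            for ts, h in entries if h == target_hash]
    return [user for ts, user in sorted(hits)]

print(trace("h9"))  # ['user_a', 'user_b', 'user_c']
```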
What happens when a theocracy demands that Apple check for hashes of images that disrespect their prophet? To me this sounds potentially more scary and dystopian than surveillance in China. But if I'm honest, I don't know that China isn't scanning citizens' devices for illegal hashes.
It is lovely when your "own" device is working against you to catch if you are in possession of illegal numbers https://en.wikipedia.org/wiki/Illegal_number.
And surely we can trust Apple that it will only be used for this kind of content instead of for example government leaks.
I would like to hear the strongest case for the privacy trade-off. How many more children will be physically recovered versus existing methods? What is the reduction in money flow to abduction activities?
This might be naive, but I would guess that the best way to fight this kind of thing is to let people know more of the case details. People would protect themselves, find the crimes, and stop unwittingly supporting them. For instance, if it can be shown that cryptocurrency or encrypted messengers are used to a significant extent, the community will either find a technical solution, or stop using it.
This is terrifying. The possibilities for extraordinary abuse are endless. What's surprising to me is the complete lack of media focus on this topic. Why isn't this being hotly debated on TV? Is scanning people's photos just OK now?
Back to an Android phone, once I confirm this story is true.
If you like this, I have some other innovations that you may be interested in:
* A car that automatically pulls over when a police cruiser attempts to intercept you
* A front door that unlocks when a cop knocks
* A camera that uses AI to detect and prevent the photography of minors, police, and critical infrastructure
* A Smart TV that counts the number of people in your living room to ensure you aren't performing an unauthorized public broadcast of copyrighted content
Surely, at least one of those sounds ridiculous to you. As well-intentioned as this scanning may be, it violates a core principle of privacy and human autonomy. Your own device should not betray you. As technologists, just because we can do something doesn't mean we should.
> The problem with allowing this is that you’re paving the way for future tyrants to use it against us.
It's funny how everybody talks about the future. This is happening now. Remember how a certain German guy took power some 90 years ago? He was elected.
People nowadays voluntarily carry tracking devices. This will not stop getting worse until that behavior is denormalized.
The power to be gained from abusing it is beyond irresistible. Expecting those in power not to abuse it is like expecting a heroin junkie to be a good pharmacist.
The technology is there, now we only need the motivation.
If politicians decide that they want it now, they can simply orchestrate a media campaign and have it. The next time an "outrageous" crime is committed, they can make sure it stays in the media spotlight, portrayed as "if we don't act now, very bad things will happen", then slide in their solution.
* Cars can be made to pull over automatically by installing a cheap fuel cut-off switch activated by short-range radio. In many places people are already used to adding devices for toll collection anyway, and to paying for regulatory inspections on their vehicles.
* For old cars, simply connect an NFC reader that unlocks the car's central locking system with a master key. For new cars, simply make manufacturers add a police master key.
* Commercial drones already stop their users from flying over forbidden areas; simply extend that to smartphones. Smartphones have enough power and sensors to identify forbidden locations and persons. Add an NFC kill switch, so the police can send a signal to lock down cameras.
* There have been reports of Smart TVs that record all the time; simply mandate that for all manufacturers and enforce automated inspection of the recordings.
Uneven application of the law seems crucial to keep the system functioning and technology can erode that. Many simple laws, if enforced thoroughly and without prejudice, would become absolutely draconian. It is not even possible for a human to know all the laws we are meant to follow at all times, yet computers can.
Apple devices already betray their "owners", and they've been doing it for a long time.
You can't repair them.
You can't run your own software.
You can't use a better, more compliant web browser.
Businesses have to pay a 30% tax.
Businesses are forced to use login with Apple and forfeit a customer relationship.
Businesses have to dance to appease Apple. Their software gets banned, randomly flagged, or unapproved for deployment, sometimes completely on a whim.
Soon, more iDevices and Apple Pay will lead to further entrenchment. Just like in the movie Demolition Man, everything will eventually be Apple. Your car, your movies, your music, your elected officials.
I live in a major city and I’m a fan of none of these things. I’m only one data point, but yours is a sweeping and inaccurate generalization that “cities are frighteningly unsafe”.
Maybe one trusts Apple more than <insert politician>, but they cannot so easily elect away Apple.
I've already been itching to de-cloud and de-tech my life.
If we're already getting to this stage of surveillance, I guess that's just another sign I should be getting on top of it.
Today it's CSAM. Tomorrow, "misleading information". Etc.
I'm looking to do the same - this pandemic has made me feel quite claustrophobic about the encroachment of tech and work into my personal life. I'm planning on getting a dumb-ish Nokia phone and leaving my smartphone at home to try to wean myself off it. What are your plans?
So many questions make this tweet look odd. It's a "client side tool" - so what? An app you install? That law enforcement can install? That Apple can silently install?
It lets "Apple scan"? So Apple is going to proactively scan your photos using a tool they install?
This is just horrible… the people who actually abuse children and download such photos will now stop using Apple devices, and the rest of us are left vulnerable to misuse/abuse/corruption.
Instead of specific suspects being targeted, everyone is surveilled by default. Welcome to a world of mass surveillance.
This is horrifying. Does this only affect iMessage, or the photos library? Is it remote? Does it require physical access?
As I understand it: it's a tool (that sends a command of some sort) that compels an iphone to perform the hashing match operation, and output results. Is that correct? Does it notify the user?
If I had to build it within Apple's privacy framework, that'd probably be my approach: a remote command causes a sepOS unlock of the photos library (maybe even running the job on sepOS?) to do the photo scanning; sepOS returns the hashes that match.
Soon enough this will scope creep into anything containing what a puritanic neoliberal corporation considers contraband. Time for a serious look at Linux phones.
I won't get into the CSAM discussion, but for anyone that has a stash of non-DRMed content, I think it's a good idea to look into alternatives to Apple devices. Sooner rather than later the same kind of system will be auto-deleting or alerting authorities about copyrighted material, and I doubt that much care will be taken to ensure that you didn't actually have a right to those copies.
In Apple's public comments they repeatedly say things like "Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account."
If they are only concerned with iCloud accounts then... why not scan in the cloud? Can anyone explain to me why client-side scanning is actually needed here? As far as I'm aware, Apple only E2E encrypts iMessages, not iCloud photos or backups.
US politicians (to say nothing of countries with less individual freedoms) already openly pressure tech companies to censor specific content. And tech companies will do so even to the point of blocking direct, private, 1-1 messages sharing specific news. In that light, Apple's crossing a line to client-side scanning seems deeply concerning.
I don't see how keeping this as narrowly-targeted as it's being advertised would ever be possible or even intended.
Isn't there the potential for abuse of this to track things like who you talk to in private? Even if the images on your phone do not contain CSAM, the hashes of all your images would need to be shared with Apple, the NCMEC, and who knows what other quasi-gov't agencies.
All it would take to build massive graphs of who talks to who, etc is to match up those hashes. It doesn't matter if they have no idea what image the hashes correspond to...
If they then take the simple step of generating hashes for common images found online, they could even track what sites you browse and such. Ignoring the potential for false positives and other negative side effects of the main goal, this is a horrific breach of privacy.
If you honestly think the gov't won't abuse any and all data they collect this way, I don't know what to say...
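The graph-building concern above can be sketched in a few lines: the hashes stay opaque, yet overlap between users' hash sets still links them. All names and hashes here are invented:

```python
# Build a "who shares images with whom" graph purely from hash overlap.
# No party ever needs to know what image a hash represents.
from itertools import combinations

users = {
    "alice":   {"0xaa", "0xbb", "0xcc"},
    "bob":     {"0xbb", "0xdd"},
    "charlie": {"0xee"},
}

# edge between any two users who possess at least one common hash
edges = [(u, v) for u, v in combinations(users, 2) if users[u] & users[v]]
print(edges)  # [('alice', 'bob')]
```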
How likely are perceptual hashes to give a false positive? If I take a picture of a tree, how likely is it that a few pixels are going to line up just right in a hashing algorithm and say it might be child porn? How likely is it that law enforcement is going to understand the limitations of this technology? How likely is it that the judicial system will understand the limitations?
I can see law enforcement showing up at my door one day with a search warrant demanding to have a look around, and I would have no idea why they’re there, but they’ll want to look through all my personal belongings.
Worse yet, I might come home from work one day, see my windows broken, see my place has been ransacked and my computers are missing. I would call the police to report a burglary, only to hear that I’m under investigation and they need me to give them the key to decrypt my hard drives.
The slippery slope argument is that using this method on private files, i.e. files not shared with anyone except the service provider, can legitimise the expansion of such scanning scopes.
While this can and indeed has happened in other instances, the argument is akin to saying that we should not give anyone any power to do anything, because it is a slippery slope that they can use to do bad things.
What, then, distinguishes a slippery slope from a non-slippery one? Checks and balances - and the history of the USA has shown that this is indeed what can rein in the worst instincts of any single entity. History has, of course, also shown when these failed; those failures should serve not as a reason to reject the idea of checks and balances, but as an acknowledgement of its imperfection and a prompt to think of ways to mitigate it.
Sure. And as I mentioned, there will be screw ups along the way. As with any new capability/tech be it nuclear power or recombinant DNA or ability to locate CP, there can be legitimate uses that we can rally behind and ways for them to be abused.
Checks and balances are never a done deal. If we reject them, and as a result reject new tech because of its abuse potential, how then should we as a civilisation advance?
I really have to wonder why Apple chose to do this.
As far as I know, this kind of scanning is not legally mandated. So, either they think that this will truly make the world a better place and are doing it out of some sense of moral responsibility, or they've been pressured into it as part of a sweetheart deal on E2E ("we won't push for crypto backdoors if you'll just scan your users' phones for us"). Either way it doesn't thrill me as a customer that my device is wasting CPU cycles and battery life under the presumption that I might possess data my current jurisdiction deems illegal.
For all the acclaim privacy-forward measures like GDPR get here, I'm surprised there isn't more outright repudiation of this frankly Orwellian situation.
I scanned the comments to find out who this person is and how they would have any inside info and found nothing. Why is this person’s claim being taken at face value? Before debating the merits of Apple scanning photos / hashes, why does anyone believe this is true?
How does one own/use an iPhone and help mitigate any issues from this? How does one help prevent this kind of sneaky photo crawling? I feel like in order to prevent people from spying on me I have to change _everything_ I do on my phone/computer.
Huh. I always took the cynical view and assumed that this was something every proprietary OS was already doing, and that this was part of why dark-web die-hards were so insistent on using TAILS. Guess "not yet."
On another note—OSes may only be starting to do this, but that same cynicism still leads me to presume that arbitrary closed-source third-party apps — or even closed-source binary distributions of open-source apps (e.g. the App Store versions of FOSS software) — could have been quietly scanning the files people are passing them for CSAM for years now, without telling users about it. It always seemed to me like the kind of thing it'd make sense to quietly slip into a media player like VLC.
Another real case: how will they handle photos of my own naked kids, of which I have plenty, because it's quite natural for my kids to run around and play naked? I want to capture the moments, not the nudity. I also have very close friends who visit us with their kids, and for us it's OK to see each other's children playing naked. We sometimes even share photos of nice moments with our kids, where the kids happen to be naked. Is this already CP? Our kids are 3 and 5 years old.
Will they have to update their EULA or something before they have it communicate to their servers? I hate everything about this and would like to know when it actually happens. So far it's just a rumor.
I think a lot of people are missing why Apple is doing this now. They're doing it because they have a fairly secure ecosystem. They have also created a proxy that makes it difficult (impossible, according to them) to know the client. More than likely this was implemented so that when it does go to Congress, they can say "look, we implemented a system". Otherwise the DOJ will continue to push for no encryption or backdoored encryption. There's no winning here.
Great. I'm curious what would happen if you have auto-save received images enabled in WhatsApp and someone would spam you with child pornography images.
Is there any actual evidence presented here? This is someone who's repeating "But it’s important to remember that this is the same company that won’t encrypt your iCloud backups because the FBI put pressure on them.", for which there is also no evidence and quite a lot of testimony to the contrary. His credibility seems questionable in the absence of evidence.
Maybe the part in every TOS that says the company can change the TOS at any time without warning and you should regularly check the TOS page and stop using the service if you saw a change and didn't like it?
The Indian government has recently introduced new laws that give it power to dictate terms to many online platforms and broaden its surveillance powers over online social media and messenger platforms. One of the laws dealing with messenger platforms requires the platform to track shared content, especially the "origin of content" (first originator) of any content shared through its network. (Facebook / WhatsApp has already gone to court to challenge this, claiming it would need to break end-to-end encryption and that the law thus violates Indian privacy laws.)
Apple's iMessage platform has more than 25 million users, and thus should come under the ambit of this law. But strangely, the Indian government seems to have given them an "exception" .... and now we know why.
So, while everyone discusses how horrible the future dystopia will be, I worry about the method itself: would the simplest of firewalls be effective against this? One that forbids any communication except with a few hosts, like a pair of them?
So what would the process be if, for example, an unwitting parent or relative has pictures of a victim on their iPhone that are perceptually similar to images of their loved one being molested at day care, or something like that?
How easy is it to generate an image that has the same “perceptual hash”, or whatever they are calling it? My guess is it has to be easier than cracking a non-fuzzy hash. Do we know the algorithm they are using?
Does this not encourage a new arms race? New or modified apps that randomly change the hashes of multimedia files as they are stored? If the CSAM DB is just simple hashes like SHA-256 or MD5/MD4, then evading detection would be trivial. Or would Apple block applications that could rewrite randomized data into files? People can be against CSAM and still dislike something scanning their devices, and many developers love puzzle challenges. I assume, perhaps incorrectly, that whatever app is doing the scanning could also accept additional hash DBs, allowing Apple to enable categories to detect per region. One of the iPhone emulators should facilitate reverse engineering the application.
The hash is a pictorial representation of the image, not a checksum of the raw file data (like MD5, etc.). I would expect that even photos of printed photos would have the same pictorial hash (if properly aligned), where obviously the cryptographic hash would be completely different (since it's not an exact replica of the original image); but in the ML's eyes (bearing in mind the pictorial hash is generated through machine learning, AFAIK), there would be a very strong match between visually similar images.
I suppose that it's a bit like when you do a reverse image search on your favourite search engine. When you upload an image, the engine will try and find images that the ML thinks look the same, even if the bits and bytes that make up the file are different. From what I can see, the similarity detection will be much more specific so as to not generate false positives. As you theorise though, it might be possible to modify images to evade detection if the hash's match specificity is high enough.
All bearing in mind that the pictorial hash also is supposedly designed to be a one-way function to ensure that those who know file hashes don't know what the original contents of the file are.
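To make the contrast concrete, here is a toy comparison between a cryptographic hash and an average-hash-style pictorial hash. The average hash is a stand-in of my own choosing; the real system reportedly uses an ML-derived hash whose details aren't public:

```python
# Cryptographic vs. perceptual hashing on a fake grayscale image and a
# near-copy of it (e.g. re-scan / re-compression noise of +3 per pixel).
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: 1 bit per pixel, set if above the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

img = [10, 240, 20, 230, 15, 250, 25, 235, 12]
near_copy = [p + 3 for p in img]

# Cryptographic hashes diverge completely on any change...
print(hashlib.sha256(bytes(img)).hexdigest() ==
      hashlib.sha256(bytes(near_copy)).hexdigest())  # False

# ...while the perceptual hash is identical for visually similar images.
print(average_hash(img) == average_hash(near_copy))  # True
```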
Remember earlier this year when bing.com returned an empty page for the image search of tank man? We’re now moving towards a world where your phone can show you that same blank page.
That tweet thread says it will scan for hashes client-side and upload the result, circumventing E2E encryption, but then says they're just going to do it on your iCloud backups because those don't have E2E encryption - so which is it?
It doesn't say that. Today Apple servers scan uploaded photos. Tomorrow Apple phones will scan uploaded photos. The next day Apple phones could scan not uploaded photos.
The next day Apple phones could periodically listen to the environment via the internal microphone. The next day Apple phones could take and upload photos of the environment by itself.
Why do people think this will result in immediate abuse by corrupt governments as opposed to any other Apple service? Just because anime avatar twitter says so?
If you have not already, please check out the wonderful alternatives to iOS/Android that do not trample over your privacy, see https://grapheneos.org or Debian for mobile (which is still very much a WIP).
It is simply unacceptable for a company or government to scan your property without your permission.
Been scanning the answers for a solution. I am 100% in the Apple ecosystem and I am all for protecting little children, but as others have pointed out, nothing is stopping them from colluding with the government in the future to scan for, say, political dissent:
- weaponizing memes against politicians as “hate speech”
- memes against government suppressing dissent
- and we can look what happened to Uighurs
Can Samsung phones that are de-Googled work? I am specifically interested in a new phone that Samsung launches - can it be de-Googled?
Everyone is worried about Apple, but Apple is a company and follows the laws of the countries it operates in.
If this is legal, or even required by government agencies, Apple will comply. And there is nothing wrong with that.
If you don't like the laws of your country, you need to work on that, not just go after Apple on social media.
CSAM? According to Google that's Confocal Scanning Acoustic Microscopy. Or something.
And what's with the tweets? I think my laptop just gave me thigh burns from the CPU bloat that clicking on that link caused. Eight seconds to render the page? Why do folks still use this twerker website?
I understand the hesitation here, but fundamentally this is like trying to close Pandora's box. If something is technically possible to do AND governments demand it be done, it will be done. If not by Apple, then by someone else.
Rather than complain about it, I am interested in what alternative solutions exist, or how concerns regarding privacy and abuse of this system could be mitigated.
I don't understand this argument at all. Look at the Clipper Chip debacle in the 90s. It was technically feasible and the government very much wanted to do it. And the reason they didn't is push back from the public, saying this is a bad idea that can easily be misused, even if it does make some law enforcement things easier. I don't see how this is any different.
Sacrificing the privacy of the many to help catch a relatively small number of (admittedly some of the worst possible) criminals, while simultaneously enabling yet more effective surveillance and oppression by governments so inclined (of which there are plenty), is a pretty terrible idea.
Eliminating the 4th amendment or mandating clear walls sure would make the cops' job easier. But no one thinks that's even a remotely good idea.
This argument falls on its head when confronted with reality. Either you have a trustworthy government that will respect your rights, shifting slowly as the dialogue within the courts evolves, or you already have a government that doesn't care about you or your laws and desires at all, and which will do what it wants anyway. Unless you're in the latter case, there's no reason to be so hostile to empowering technologies, especially when they're being used to fight some of the most heinous types of crime.
It’s not trying to close Pandora’s box. It’s liability limitation. They’re effectively saying that if you want to distribute CP don’t do it on an iPhone.
They're basically saying, that they're watching all your multimedia, to see if maybe you're distributing child porn. This is like doing rectal exams on everyone, every day, because someone might be hiding drugs there.
I'm really conflicted about this.
For context, I deeply hate the abuse of children, and I've worked on a contract before that landed 12 human traffickers in custody who were smuggling sex slaves across borders. I didn't need to know details about the victims in question, but it's understood that they're often teenagers or children.
So my initial reaction when reading this Twitter thread was "let's get these bastards" but on serious reflection I think that impulse is wrong. Unshared data shouldn't be subject to search. Once it's shared, I can make several cases for an automated scan, but a cloud backup of personal media should be kept private. Our control of our own privacy matters. Not for the slippery slope argument or for the false positive argument, but for its own sake. We shouldn't be assuming the worst of people without cause or warrant.
That said, even though I feel this way a not-small-enough part of me will be pleased if it is deployed because I want these people arrested. It's the same way I feel when terrorists get captured even if intelligence services bent or broke the rules. I can be happy at the outcome without being happy at the methods, and I can feel queasy about my own internal, conflicted feelings throughout it all.
Having known many victims of sexual violence and trafficking, I feel for the folks that honestly want that particular kind of crime to stop. Humans can be complete scum. Most folks in this community may think they know how low we can go, but you are likely being optimistic.
That said, law enforcement has a nasty habit of having a rather "binary" worldview. People are either cops, or uncaught criminals... and they wonder why they have so much trouble making non-cop friends (DISCLAIMER: I know a number of cops).
With that worldview, it can be quite easy to "blur the line" between child sex traffickers and parking ticket violators. I remember reading an article in The Register about how anti-terrorism statutes are being abused by local town councils to do things like find zoning violations (for example, pools with no CO).
Misapplied laws can be much worse than letting some criminals go. This could easily become a nightmare, if we cede too much to AI.
And that isn't even talking about totalitarian regimes, run by people of the same ilk as child sex traffickers (only wearing Gucci, and living in palaces).
"Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be."
-Lyndon B. Johnson
> People are either cops, or uncaught criminals... and they wonder why they have so much trouble making non-cop friends (DISCLAIMER: I know a number of cops).
Ehh let's not make a habit of asserting anecdote as fact, please. Saying you know cops is like saying you know black people and that somehow it affords you some privilege others do not possess.
This is a weak and ad-hom argument.
> I'm really conflicted about this.
I'm not. I am very unambiguously against this and I think if word gets out Apple could have a real problem.
I would like to think I am as against child porn as any well-adjusted adult. That does not mean I wish for all my files to be scanned for compliance without my consent or even knowledge - submitted who knows where, matched against who knows what, reported to, well, who knows.
That's crossing a line. You are now reading my private files, interpreting them, and doing something based on that interpretation. That is surveillance.
I am very not OK with this.
If you truly want to "protect the children", you should have no issue with the police visiting and inspecting your house, and all of your neighbors' houses. Every few days. Unannounced, of course. And if you were to resist, you MUST be a pedophile who is actively abusing children in their basement.
I'm actually more OK with unannounced inspections of my basement (within reason) than with some government agents reading through my files all the time.
- Don't you want to get the terrorists?
- Yea yeah
- Great. Give me access to every part of your life so I know you're not a terrorist.
“If you want a vision of the future, imagine a boot stamping on a human face - forever.” I always think about this Orwell quote and think it’s up to us to try to fight for what is good, but we were too busy doom-scrolling on Twitter to do anything about it.
The NCMEC database that Apple is likely using to match hashes contains countless non-CSAM pictures that are entirely legal, not only in the U.S. but globally.
This should be reason enough for you to not support the idea.
From day 1, it's matching legal images and phoning home about them. Increasing the scope of scanning is barely even a slippery slope; they're already beyond the stated scope of the database.
The database seems legally murky. First of all, who would want to manually verify that there aren't images in it that shouldn't be? If the public can even request to see it, which I doubt, would doing so put you on a watch list of potentially dangerous people, or destroy your own reputation? Who adds images to it, and where do they get those images from?
My point is that we have no way to verify the database wouldn't be abused or mistaken and a lot of that rests on the fact that CSAM is not something people want to have to encounter, ever.
Could you say more about these legal photos? That's a pretty big difference from what I thought was contained in the DB.
Why would there be legal images in the DB? Do you have a source for that?
Can you give more information about this? What kind of legal images might it match?
NCMEC is a private organization created by the U.S. Government, funded by the U.S. Government, operating with no constitutional scrutiny and no oversight or accountability; it could be prodded by the U.S. Government, and they tell you to "trust them".
To be fair the Twitter thread says (emphasis mine) "These tools will allow Apple to scan your iPhone photos for photos that match a specific perceptual hash, and report them to Apple servers if too many appear."
I don't know what the cutoff is, but it doesn't sound like they believe that possession of a single photo in the database is inherently illegal. That doesn't mean this is overall a good idea. It simply weakens your specific argument about occasional false positives.
1 reply →
Since you worked on an actual contract catching these sorts of people you are perhaps in a unique position to answer the question: will this sort of blanket surveillance technique in general but also in iOS specifically - actually work to help catch them?
I have direct knowledge of examples of where individuals were arrested and convicted of sharing CP online and they were identified because a previous employer I worked for used PhotoDNA analysis on all user uploaded images. So yeah, this type of thing can catch bad people. I’m still not convinced Apple doing this is a good thing, especially on private media content without a warrant, even though the technology can help catch criminals.
24 replies →
Just as being banned from one social media platform for bad behavior pushes people to a different social media platform, this might very well push the exactly wrong sort of people from iOS to Android.
If Android then implements something similar, they have the option to simply run different software, as Android lets you run whatever you want so long as you sign the waiver.
"You're using Android?! What do you have to hide?" -- Apple ad in 2030, possibly
I'm the person you're responding to, and I think so? My contract was on data that wasn't surveilled, it was willingly supplied in bad faith. Fake names, etc. And there was cause / outside evidence to look into it. I can't really go into more details than that, but it wasn't for an intelligence agency. It was for another party that wanted to hand something over to the police after they found out what was happening.
1 reply →
This scanning doesn't prevent the actual abuse and all this surveillance doesn't get to the root of the problem but can be misused by authoritarian governments.
It's a Pandora's box. You wouldn't allow regular searches of your home in real life.
If what kills my enemies also kills my friends, then I don't want it.
Let me help you, then, because you shouldn't be conflicted at all.
"Think of the children" is how they force changes that would otherwise be unconscionable.
They've done it with encryption and anonymity for years. Now they're doing it with the hardware in your pocket.
I'm not at all conflicted about this. Obviously CSAM is bad and should be stopped, but it is inevitable this new feature will become a means for governments to attack material of far more debatable 'harm'.
Is this confirmed yet or just something someone believes will happen? How credible are the sources of this twitter account?
Well, it's a lot like everything. No one wants abusers, murderers, and others out and about. But then, we can't search everyone's homes all of the time for dead bodies, or other crimes.
We would all be better off without these things happening, and anyone would want less of it to happen.
> Unshared data shouldn't be subject to search
Since they are only searching for _known_ abusive content, by definition they can only detect data that has been shared, which I think is the important point here.
There have been child abuse victims who have openly condemned this sort of intrusion on privacy, although they obviously don't speak for them all.
The road to hell is paved with good intentions...
> Unshared data shouldn't be subject to search. Once it's shared, I can make several cases for an automated scan, but a cloud backup of personal media should be kept private. Our control of our own privacy matters. Not for the slippery slope argument or for the false positive argument, but for its own sake. We shouldn't be assuming the worst of people without cause or warrant.
I have a much simpler rule: Your device should never willingly* betray you.
*With a warrant, police can attempt to plant a bug, but your device should not help them do so.
I don't think this rule makes any sense, because it just abstracts all the argument into the word "betray".
The vast majority of iPhone users won't consider it a betrayal that they can't send images of child abuse, any more than they consider it a betrayal that it doesn't come jailbroken.
The victims of child abuse depicted in these images may well have considered it a betrayal by Apple that they allowed their privacy to be so flagrantly violated on their devices up until now.
2 replies →
So if I understand correctly, they want to scan all your photos, stored on your private phone, that you paid for, and they want to check if any of the hashes are the same as hashes of child porn?
So... all your hashes will be uploaded to the cloud? How do you prevent them from scanning other stuff (memes, leaked documents, trump-fights-cnn-gif, ...) to profile the users?
Or will a huge hash database of child porn hashes be downloaded to the phone?
Honestly, I think it's one more abuse of terrorism/child porn to take away people's privacy, and to mark all those opposing the law as terrorists/pedos.
...also, as in the thread from the original url, making false positives and spreading them around (think 4chan mass e-mailing stuff) might cause a lot of problems too.
> and they want to check if any of the hashes are the same as hashes of child porn?
... without any technical guarantee or auditability that any of the hashes they're alerting on are actually of child porn.
How much would you bet against law enforcement abusing their ability to use this, adding hashes to find out who has anti-government memes or images of police committing murder on their phones?
And that's just in "the land of the free". How much worse will the abuse of this be in countries that, say, bonesaw journalists to pieces while they are alive?
I remember the story where some large gaming company permanently banned someone because they had a file with a hash that matched a "hacking tool". Turns out the hash was for an empty file.
This will end badly for humanity.
1 reply →
they don't check the unhashed bytes against the child porn bytes after a hash match?
This is the big one right here.
Malware will definitely be created, almost immediately, that downloads files intentionally crafted to match CP, either for the purposes of extortion or just watching the world burn.
I'm usually sticking my neck out in defence of more government access to private media than most on HN because of the need to stop CP, but this plan is so naive, and so incredibly irresponsible, that I can't see how anyone with any idea of how easy it would be to manipulate would ever stand behind it.
Signal famously implemented, or at least claimed to implement, a rather similar-sounding feature as a countermeasure against the Cellebrite forensics tool:
https://signal.org/blog/cellebrite-vulnerabilities/
2 replies →
If this was easy to do, it’d already be a problem because Apple is already scanning some iCloud services for CSAM per their terms of service.
If you can recreate a file so its hash matches known CP, then that file is CP, my dude. The probability of just two hashes accidentally colliding is approximately 4.3×10⁻⁶⁰.
Even if you do a content aware hash where you break the file into chunks and hash each chunk, you still wouldn’t be able to magically recreate the hash of a CP file without also producing part of the CP.
5 replies →
That document you downloaded that is critical of the party will land you and your family in jail. Enjoy your iPhone.
Seriously, folks, we shouldn't celebrate Apple's death grip over their platform. It's dangerous for all of us. The more of you that use it, the more it creates a sort of "anti-herd immunity" towards totalitarian control.
Apple talks "privacy", but jfc they're nothing of the sort. Apple gives zero shits about your privacy. They're staking more ground against Facebook and Google, trying to take their beachheads. You're just a pawn in the game for long term control.
Apple cares just as much for your privacy as they do your "freedom" to run your own (un-taxed) software or repair your devices (for cheaper).
And after Tim Cook is replaced with a new regime, you'll be powerless to stop the further erosion of your liberties. It'll be too late.
Stop. Using. Apple.
> Stop. Using. Apple.
But is there a realistically better alternative? Pinephone with a personally audited Linux distro? A jailbroken Android device with a non-stock firmware that you built yourself? A homebuilt RaspberryPi based device? A paper notepad and a film camera and an out of print street map?
24 replies →
>So... all your hashes will be uploaded to the cloud?
That isn't how I interpret "client-side".
The privacy implications are far more subtle.
It's still really, really bad.
It always starts with child porn, and in a few years the offline Notes app will be phoning home if you write speech criticising the government in China.
This technology inevitably leads to the surveillance, suppression and murder of activists and journalists. It always starts with protecting the kids, or with terrorism.
Perceptual hashes like what Apple is using are already used in WeChat to detect memes that critique the CCP.
What happens on local end user devices must be off limits. It is unacceptable that Apple is actively implementing machine learning systems that surveil and snitch on local content.
12 replies →
I agree, and I would add that people have generated legal images that match the hashes.
So I want to ask what happens if you have a photo that is falsely identified as one in question and then an automated mechanism flags you and reports you to the FBI without you even knowing. Can they access your phone at that point to investigate? Would they come to your office and ask about it? Would that be enough evidence to request a wiretap or warrant? Would they alert your neighbors? How do you clear your name after that happens?
edits: yes, the hash database is downloaded to the phone and matches are checked on your phone.
Another point is that the photos used to generate these fingerprints are legal black holes that, I assume, the public is not allowed to inspect. No one wants to be involved in looking at them; no one wants to be known as someone who looks at them. I assume it could even be legally dangerous to request to find out what has been put into the image database.
>I would add that people have generated legal images that match the hashes.
That seems like a realistic attack. Since the hash list is public (it has to be, for client-side scanning), you could likely set your computer to grind out an image of some meme whose hash matches, and then distribute it.
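As a rough illustration of how such grinding could work, here is a toy 16-bit "perceptual hash" and a hill-climbing search against it. Both are entirely made up for illustration; real perceptual hashes such as PhotoDNA or Apple's are far larger and harder targets, but the shape of the attack is similar in spirit:

```python
import random

def toy_phash(pixels):
    """Toy 16-bit 'perceptual hash': threshold a 4x4 grayscale grid at its mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p >= mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def grind_collision(target_hash, seed_pixels, max_iters=200_000):
    """Hill-climb: nudge random pixels, keeping only changes that move the
    toy hash closer (in Hamming distance) to the target hash."""
    img = list(seed_pixels)
    best = hamming(toy_phash(img), target_hash)
    rng = random.Random(0)
    for _ in range(max_iters):
        if best == 0:
            break  # exact match found
        i = rng.randrange(len(img))
        old = img[i]
        img[i] = max(0, min(255, old + rng.choice((-16, 16))))
        d = hamming(toy_phash(img), target_hash)
        if d <= best:
            best = d       # keep the change
        else:
            img[i] = old   # revert
    return img

# Grind a gradient "meme" toward the hash of an unrelated image.
target = toy_phash([(i * 91 + 13) % 256 for i in range(16)])
forged = grind_collision(target, [i * 16 for i in range(16)])
# On this toy hash the search typically reaches an exact match almost instantly.
```

The revert-on-worse step guarantees the forged image's hash never drifts further from the target than the seed's was.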
7 replies →
No need to upload every hash or download a huge database with every hash. If I were building this system, I'd make a bloom filter of the hashes. That gives constant-time, constant-space checking of a hash match, with some risk of false positives. I'd only send the hashes that pass the filter to be checked against the full database.
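A minimal sketch of that design. Nothing here reflects Apple's actual implementation; the filter size, number of index functions, and the SHA-256-derived indexing are all illustrative choices:

```python
import hashlib

class BloomFilter:
    """Fixed-size bit array with k index functions derived from SHA-256."""

    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k independent bit positions by prefixing a counter.
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent; True means "possibly present",
        # so a full-database lookup is still needed to rule out false positives.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"known-bad-hash-1")
assert bf.might_contain(b"known-bad-hash-1")    # always True for added items
assert not bf.might_contain(b"harmless-photo")  # almost certainly False
```

The point of the design is that the device only ever learns "maybe"; confirming a match requires the server-side database.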
No, your hashes are not uploaded to the cloud, yes, hashes are downloaded to your phone. Yes, it will be interesting to see if it gets spammed with false positives, although it seems as though that can easily be identified silently to the user.
Interesting? You think it will be interesting? False positives in this case cause swat teams to be sent to people’s houses.
How hard would it be to create a valid image that matches some 128-bit hash?
10 replies →
That's the stated purpose, but keep in mind that these databases (NCMEC's in particular, which is used by FB and very likely Apple) contain legal images that are NOT child porn.
Source for that info?
1 reply →
> So... all your hashes will be uploaded to the cloud?
No, it'll be done on-device.
> How do you prevent them from scanning other stuff (memes, leaked documents, trump-fights-cnn-gif
Nothing. Given that it's only done on their closed-source messaging platform though, nothing is preventing them from reading your messages already.
But yes, it could potentially be used to detect images that the current political party doesn't like.
No-no-no. It's not your phone. If it were your phone, you would have root access to it. It's their phone. And it's their photos. They just don't like when there's something illegal in their photos, so they will scan them, just in case.
your phone phones the phone manufacturer to phone the police to iphone you
It's funny that anyone here could find this acceptable. I wonder what the comments would be once Apple starts to scan phones for anti-censorship or anti-CCP materials in China. Or for some gay porn in Saudi Arabia.
Because you know in some countries there are materials that local government find more offensive than mere child abuse. And once surveillance tech is deployed it's certainly gonna be used to oppress people.
> Because you know in some countries there are materials that local government find more offensive than mere child abuse. And once surveillance tech is deployed it's certainly gonna be used to oppress people.
In Saudi, Bahrain, and Iran there is no minimum age of consent – just a requirement for marriage. In Yemen, the age of consent for women is 9 (but they must be married first). In Macau, East Timor, and UAE, it's 14. [1]
I would allege that in all of those states they would probably find the perceptual hash of government criticism far more important to include on the "evil material" database than anything else.
[1] https://en.wikipedia.org/wiki/Ages_of_consent_in_Asia
Won't anyone think of the children! And Tim Cook personally promised to not look at anything in my unencrypted iCloud backup, they really care about privacy!
And you can be sure that there's no way for the PRC, that already runs its own iCloud, to use this. America's favorite company wouldn't allow that.
> I wonder what the comments would be once Apple starts to scan phones for anti-censorship or anti-CCP materials in China.
I'm cynical enough to wonder whether this isn't their actual commercial reason for developing this, with CSAM being a PR fig leaf. Apple is substantially more dependent on China than its major competitors.
Exactly, now Apple has this tech, shady governments know they can mandate Apple to use it for their own databases and Apple will have to do this if they want to keep operating within a territory.
If you don't, you are suspicious.
It's quite easy to extrapolate this and in a few steps end up in a boring dystopia.
First it's iPhone photos, then it's all iCloud files, that spills into Macs using iCloud, then it's client side reporting of local Mac files, and somewhere along all other Apple hardware I've filled my home with have received equivalent updates and are phoning home to verify that I don't have files or whatever data they can see or hear that some unknown authority has decided should be reported.
What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?
> What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?
Apple takes care of everything for you, and they have your best interests at heart. You will be safe, secure, private and seamlessly integrated with your beautiful devices, so you can more efficiently consume.
What's not to like about a world where child crime, terrorism, abuse, radical/harmful content and misinformation can be spotted in inception and at the source and effectively quarantined?
No one here has a problem with the worst criminals being taken out. The problem is the scope creep that always comes after.
In 2021 and 2020 we saw people being arrested for planning/promoting anti lockdown protests. Not for actually participating but for simply posting about it. The scope of what "harmful content" is is infinite. You might agree that police do need to take action against these people but surely you can see how the scope creeped from literal terrorists and pedophiles to edgy facebook mums and how that could move even further to simple criticisms of the government or religion.
It's difficult to say how we draw the line to make sure horrible crimes get punished while still protecting reasonable privacy and freedom. I'm guessing Apple's justification here is that they are not sending your photos to police but simply checking them against known bad hashes, and if you are not a pedophile, there will be no matches and none of your data will have been exposed.
15 replies →
Who will be accountable for the creeps at apple? Or their overlords in the government?
I mean, Apple isn't too far from the Mac thing you mention. Since Catalina running an executable on macOS phones home and checks for valid signatures on their servers.
No, this is entirely different.
> What is the utopian perspective of this
You will make Apple tons of money.
> It's quite easy to extrapolate this and in a few steps end up in a boring dystopia.
It's only boring until we get another Hitler or equivalent.
> "What is the utopian perspective of this which counterbalances the risks for this to be a path worth taking?"
Basically victims of rape don't want imagery of their rape freely distributed as pornography. They consider that a violation of their rights.
It's interesting how many users in this thread are instinctively siding with the offenders in this, and not the victims. Presumably because they made it through their own childhoods without having imagery of their own abuse shared online.
You are actually creating a false dichotomy here. There are more sides to this, and you are painting a (as I said, false) black-and-white picture.
I strongly believe that nobody wants to further victimize people by publicly showing images of their abuse.
And I believe very strongly that putting hundreds of millions of people under blanket general suspicion is a dangerous first step.
Imagine if every bank had to search all documents in safe deposit boxes to see if people had committed tax evasion (or stored other illegal things like blood diamonds obtained with child labor). That would be an equivalent in the physical world.
Now add to this, as discussed elsewhere here, that the database in question contains not only images of victims, but also perfectly legal images. This can lead to people being subjected to a house search because they have perfectly legal data stored in their cloud.
Furthermore, this means that a single country's understanding of the law is applied to a global user community. From a purely legal point of view, this is an interesting problem.
And yes: I would like to see effective measures to make the dissemination of such material more difficult. At the same time, however, I see it as difficult to use a tool for this purpose that is not subject to any control by the rule of law and cannot be checked if the worst comes to the worst.
2 replies →
> It's interesting how many users in this thread are instinctively siding with the offenders in this, and not the victims.
That is infantile. Painting people advocating privacy as siding with offenders is highly insulting.
4 replies →
I feel it's a little disingenuous to describe millions of innocent people being surveilled as "the offenders" because there are a handful of actual offenders among them.
7 replies →
Presumptuous. I certainly dont want this pseudo-righteous power grab done for me.
What about the victims of the apple employees and government officials that exploit this?
It is a violation of their rights. But we have a justice system set up which makes distributing such images knowingly (and committing such acts with the intent to distribute those images) a crime.
It's also incredibly likely this could be used to send people you don't like to prison by sending their phone innocuous images that look like wrong images to an AI.
It's also also incredibly likely this will evolve into scanning for more than just abuse photos, especially in the hands of governments around the world.
A new aspect of this is that because this is self-reported, and the end goal is to involve the criminal justice system, there is now (essentially) an API call that causes law enforcement to raid your home.
What would be the result of 'curl'ing back a few random hashes as positives from the database? Do I expect to be handcuffed and searched until it's sorted out? What if my app decides to do this to users? A malicious CSRF request even?
A report to the cybertips line does not equal a police raid. Unfortunately the scale of the problem and the pace of growth is such that only the worst of the worst content is likely to be prosecuted.
If a phone calls the API with "hello, I found some porn here", the phone (and/or its owner) becomes a "person of interest" very quickly.
(I'll wager) The majority of these calls will be false positives. Now a load of resources get deployed to keep an eye on the device's owner, wasting staff time and compute, wasting (tax funded) government budget that could have gone towards proper investigation.
1 reply →
Yeah and sadly many of those who are consumers of illicit content get away with it because it's much more important to target the creators. The unfortunate reality of finite resources.
Also, if they send perceptual hashes to your device - it's possible images could be generated back from those hashes. These aren't cryptographic hashes, so I doubt they are very good one-way functions.
Another thought - notice that they say "if too many appear". This may mean that the hashes don't store many bits of information (and would not be reversible) and that false positives are likely - ie, one image is not enough to decide you have a bad actor - you need more.
But at Apple's scale, statistically, some law-abiding users would likely get snagged with totally innocent images.
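That scale argument can be made concrete with a back-of-envelope calculation. Every number below is a made-up assumption for illustration, not anything Apple has published:

```python
from math import comb

def p_flagged(per_image_fp, photos, threshold):
    """P(an innocent user accumulates >= threshold false matches),
    modeling each photo as an independent Bernoulli trial."""
    p = per_image_fp
    return 1 - sum(comb(photos, k) * p**k * (1 - p)**(photos - k)
                   for k in range(threshold))

# Assumed: 10,000 photos per library, a threshold of 5 matches,
# a billion users, and two guesses at the per-image false-positive rate.
for rate in (1e-6, 1e-4):
    per_user = p_flagged(rate, photos=10_000, threshold=5)
    print(f"fp rate {rate:g}: ~{per_user * 1e9:,.0f} innocent users flagged per billion")
```

With these made-up numbers, a one-in-a-million per-image rate flags essentially nobody, while one-in-ten-thousand flags millions of innocent users, which is exactly why the real, undisclosed false-positive rate and threshold matter so much.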
Just a bad idea all around.
It's also just plain absurd. Hundreds of pictures of my own children at the beach in their bathing suits? No problem. Hundreds of photos of other peoples' children in bathing suits? Big problem. Of course, the algorithm is powerless to tell the difference.
I believe it's built on hashing, so it'll only find images in the db they have with already known content. Your own photos won't get mixed up.
2 replies →
In cryptography, creating a one-way function is not a problem. The only thing required for that is losing information, which is trivial. For example, taking the first n bytes of a file is a one-way hash function (for most files). So making the hashes irreversible is most definitely not a problem.
Creating collision resistance could be, though: e.g. brute-forcing a normal picture into matching an illegal content's hash, by modifying random pixels a bit at a time, is a possibility.
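A toy illustration of that asymmetry, using truncated SHA-256 as a stand-in for any information-losing hash (the filenames and the 2-byte truncation are purely illustrative): reversing the hash stays infeasible, but once enough bits are discarded, finding a colliding input becomes cheap.

```python
import hashlib

def short_hash(data: bytes, n_bytes: int = 2) -> bytes:
    """Truncated SHA-256: still one-way, but only 16 bits of output survive,
    so a colliding input can be brute-forced in roughly 2**16 attempts."""
    return hashlib.sha256(data).digest()[:n_bytes]

target = short_hash(b"some original file")
collision = None
for suffix in range(1_000_000):  # expected ~65,536 attempts
    candidate = b"innocuous file " + str(suffix).encode()
    if short_hash(candidate) == target:
        collision = candidate
        break

assert collision is not None and short_hash(collision) == target
```

With the full 32-byte digest the same loop would never terminate in practice, which is the whole difference between "one-way" and "collision resistant".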
Dear humans,
1) You willingly delegated the decision of what code is allowed to run on your devices to the manufacturer (2009). Smart voices warned you of today's present even then.
2) You willingly got yourself irrevocably vendor-locked by participating in their closed social networks, so that it's almost impossible to leave (2006).
3) You willingly switched over essentially all human communication to said social networks, despite the obvious warning signs. (2006-2021)
4) Finally you showed no resistance to these private companies when they started deciding what content should be allowed or banned, even when it got purely political (2020).
Now they're getting more brazen. And why shouldn't they? You'll obey.
Great, so what's the solution? What are you doing to fix it? Do you roll your own silicon? Do you grow your own food (we have no idea what someone could be putting in it)? Are you completely off-grid? Or are you as completely dependent on society writ large as everyone else?
Making holier-than-thou comments about everyone else being sheep isn't helpful or thought-provoking. Offer an alternative, even if it is a bad one (looking at you, Mastodon). So here's mine: we need to change the power of digital advertising. The most rent-seeking companies generate revenue primarily by selling ads to get more people to buy more crap. I want a VAT on all revenue passing through the digital advertising pipeline. My hope is that if these things are less profitable, the outsized impact these companies (social, infotainment [there is no news anymore], search, etc.) have on our economy and life will shrink. People are addicted to FOMO and outrage (faux?); I don't think that will ever change, but we can try to make it less profitable.
> Great, so what's the solution?
Seriously? Perhaps heed the warnings? Whenever Apple tightened the reins, thousands of apologists came to their defense. I wouldn't even have minded if they had kept their obedience to personal decisions. But they extended their enlightenment to others.
27 replies →
The solution is to go back to the original spirit of the Internet, when it was a set of open standards connecting people and organizations. Somehow it got forgotten and now we have a bunch of commercial companies giving you the same stuff in exchange for your privacy and who increasingly control everything you do.
20 replies →
Dumping Facebook and its products, as I and others have done, is one strong step forward, but people can't even manage this. It's deeply disappointing. Techno has its seeds in rebellion, but everybody throwing techno parties is coordinating on what is essentially an Orwellian state. Punks too; they're cozied up to this framework of oppression and can't see it for what it is.
I think people have a hard time seeing the ethics in the technology they choose to use. Maybe the next wave of net-natives will be able to rediscover that common thread of rebellion and resist. It's insidious, I'll give you that. It's not obvious what is being surrendered with every participation on these platforms but it doesn't take a genius to see clearly.
3 replies →
What we have been doing would have worked if the majority had followed.
Choose open standards, use and contribute to FOSS, avoid social networks, get involved in your local community, etc.
No need to go to extremes or complicated plans; corporations follow the customers.
But nobody listened. Quite the opposite. I never had a Facebook account, and today people boast when they leave FB. But 10 years ago? Oh, we were the paranoid extremists.
Even today my friends regularly pressure me to get whatsapp.
The solution won't be technological, it will be in realm of laws and regulations. We are weak peasants and don't have any power over big tech, but we can change legal environment for them.
IANAL, but we (via our elected representatives) can push a law that prohibits restrictions on the execution of users' code on their own devices. Or we can split app stores from vendors and obligate them to provide access to third-party stores, like we did with IE and Windows.
Also, it's completely doable to stop NSA/Prism totalitarian nonsense.
What we can do as tech people?
- raise awareness
- help people to switch from big tech vendor locks
- help people push back against big tech by installing ad blockers, Pi-hole, etc.
- participate in opensource (with donations or work)
- probably something else
4 replies →
Do ANYTHING.
Buy a feature phone or a phone from a vendor who doesn't have this power.
Switch to Linux.
Stop buying from companies that abuse you.
Elect politicians who care about your rights.
You know what's not helpful? Attacking the messenger, regardless of how sanctimonious you think he is.
https://prism-break.org/en/ is a great start. in the absence of strong regulatory oversight, personal defense is a good measure.
> Great, so what's the solution?
I'd argue: avoid using proprietary networks, avoid vendor lock-in with software and hardware, and use hardware that one is allowed to use their own software on. Champion using open and federated protocols for social tools.
I think solutions exist, but honestly, it isn't easy.
> Making holier than thou comments about everyone else being sheep isn't helpful
I would offer the GP comment isn't necessarily a holier than thou comment, it's a comment of frustration. Frankly, I feel the same frustration.
It's tiring to hear snide remarks of "oh yeah, we can't include you in something because you don't have an iPhone". Hell, I have openly heard, even on this forum, that people don't include folks in social conversations with EVEN THEIR OWN FAMILY because of the dreaded "green bubble". (FYI, MMS is entirely done over HTTP and SMS! How is Apple's MMS client so bad that it can't handle HTTP and SMS?)
Or there is the "why don't you have WhatsApp/Facebook/Instagram/etc.", and people think you're some sort of weirdo because you don't want to use those networks.
So to be honest, when I see something like that, I think "Well I'm not surprised, this is what happens when you are locked out of your own hardware".
> What are you doing to fix it?
While GP may not be doing anything, others are helping and actively working on alternatives. For example, I have been working to get the Pinephone to have MMS and Visual Voicemail support so I can use it daily. I am very fortunate to work with a lot of very talented and motivated folks who want to see it succeed.
It's incredible how people are going to blame absolutely everything on ads. We're talking about a company for which ads are only a small part of their revenue doing something following government pressure, and somehow ads are the problem.
How about not supporting it as a start? Approximately half the country, and a majority of tech workers, were happy with #4 and in fact encouraging it.
What am I doing to fix it? Nothing!
I'm dependent, just as you say, and have no illusions about that.
Getting into this situation wasn't my decision (it was a collective "decision" of our society), and getting out of this won't be due to anything I'll personally do either.
The only difference between me and the average joe is having understood that we have a problem earlier than most.
I bought a pinephone recently, that's one fairly simple way to prevent corporations from scanning your life.
Pretty cheap, too.
> Great, so what's the solution? What are you doing to fix it?
Nothing, because my phone is rooted Android.
1 reply →
The fact that even the 'smart' people from HN can't wait for their new M1 laptop to arrive convinced me that humans are a lost cause.
Well, it’s not as if other platforms are any better!
Are you sure intel, AMD, Arm or windows TPM aren’t snitching on you? Do we need to make our own silicon from ingot?
There’s no technological solution to this problem, only social and legislative.
14 replies →
Not too lost 'cause you can run Linux on M1.
38 replies →
This is a moot point unless you always verify and check all hardware and software that you use, including communications devices.
20 replies →
FYI: M1 Macs can run Linux.
1 reply →
What do you recommend instead?
3 replies →
The fact that you think people waiting for Intel and Windows 11 are any better off makes me think the same.
"People like Coldplay and voted for the Nazis. You can't trust people" - Super Hans :)
> closed social networks
It’s not clear that governments would give the open social networks an easier ride either. It could be argued that distributed FOSS developers are easier to pressurise into adding back doors, unless we officially make EFF our HR/Legal department.
The other problem is workers have a right to be paid. The alternatives are FOSS and/or distributed social media. Who in good conscience would ask a tech worker to give away their labour for free, in the name of everyone else’s freedom?
In a world of $4k rent, who amongst us will do UX, frontend, backend, DevOps, UO, and Security for 7 billion people, for anything but the top market rate?
The real alternative is to attack the actual problem: state overreach. Don’t attack people for using SnapChat — get them to upend the government’s subservience to intrusive law enforcement.
> … who amongst us will do UX…
imho, we have everything in the foss world working tightly except great UX/UI. in my experience in the open source world – which is not insignificant – great UX is the only thing stopping us from a paradigm shift to actual tech liberation.
even outside of corporate funded work/commits, we see an astounding number of people donating incredible amounts of their time towards great quality code. but we still thoroughly lack great UX/UI.
i’m not talking about “good”, we have some projects with “good” UX, but very very few with great.
there are many reasons and I’d be happy to share what some of them are, but in my mind great UX is unquestionably one of two primary things holding us back from actual truly viable software liberation.
1 reply →
> It could be argued that distributed FOSS developers are easier to pressurise into adding back doors, unless we officially make EFF our HR/Legal department.
How could this be argued?
> It could be argued that distributed FOSS developers are easier to pressurise into adding back doors
All millions of them at the same time?
4 replies →
UO?
>Who in good conscience would ask a tech worker to give away their labour for free, in the name of everyone else’s freedom?
Here's the hope: the tech workers doing it for 'free' because they're scratching their own itch. So it would not be an act of onerous charity. The techies make some free open source decentralised clone of Reddit, say, then some folks among knitting communities, origami enthusiasts, parents groups, etc. copy it for free and pay to run it on their own hardware.
If it seems like this scanning is working as advertised, this will be a great marketing stunt for Apple. Actual predators will stop using Apple products out of fear of getting caught and they will be forced to use Android.
Now any person who owns an Android is a potential predator. Also, if you are trying to jailbreak your iPhone, you are a potential predator.
Some 'predators' are dumb. They'll keep using iPhones, get caught, and have their mugshots posted in the press. Great PR for the policy makers who decided this. Such stories will be shoved in the faces of the privacy advocates who were against it, to the detriment of their credibility.
3 replies →
The twitter comments also mentioned scanning for political propaganda etc. This could work against Apple if normal folks don't want all their stuff scanned on behalf of unnamed agencies.
Or they will just go one step deeper into the dark, by using a dedicated device for the dirty stuff. Potentially only used within tor/vpn with no connection to "normal" life.
Congratz, investigations got a bit harder, but now all people have to live with a tool that will be used against them when needed. No sane person can believe that this won't be used for other "crimes" (however those are defined) tomorrow.
I think having a manufacturer that is able to read the contents of your device at any point is good marketing. Although, I know some Apple users that would certainly buy that excuse.
This sort of scanning has existed for well over a decade, and was originally developed by Microsoft (search PhotoDNA).
The only thing that's changed here is that there is more encryption around, and so legal guidelines are being written to facilitate this, which has been happening for a long, long time.
(I don't disagree with your overall point, and child porn is definitely the thin edge of the wedge, but this isn't new and presumably shouldn't be too surprising for any current/former megacorp as they all have systems like this).
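For the curious, the mechanism behind PhotoDNA-style systems is perceptual hashing, not cryptographic hashing: visually similar images map to nearby hashes, which are compared by Hamming distance. Here's a minimal sketch using a toy average-hash — the real PhotoDNA algorithm is proprietary and far more robust, so treat this as an illustration of the idea only:

```python
# Toy "perceptual hash" in the spirit of PhotoDNA-style matching.
# This is NOT the real PhotoDNA algorithm; it's a minimal average-hash
# (aHash) sketch: similar images -> similar hashes, compared by
# Hamming distance rather than exact equality.

def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two "images": the second is the first with mild brightness noise,
# as you'd get from re-encoding or slight editing.
img = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
noisy = [[min(255, p + 3) for p in row] for row in img]

h1, h2 = average_hash(img), average_hash(noisy)
print(hamming(h1, h2))  # small distance: still "matches" under a threshold
```

The match decision is then just `hamming(h1, h2) <= threshold` against each database entry — which is also exactly where the false-positive and engineered-collision worries downthread come from.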
That's a very strong frog slowly boiling attitude.
"It's been happening for a long time already, the only difference now is a 0.1 degree increase", says the frog while being boiled alive.
1 reply →
Nothing is ever new. You can always find some vague prototype of an idea that failed to become ubiquitous ten years ago.
When I read that this shouldn't be surprising, it has an aftertaste of "Dropbox is not interesting/surprising because ftpfs+CVS have existed for well over a decade"
1 reply →
I honestly don't like these smug takes very much.
> 1) You willingly delegated the decision of what code is allowed to run on your devices to the manufacturer (2009). Smart voices warned you of today's present even then.
99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware.
> 2) You willingly got yourself irrevocably vendor-locked by participating in their closed social networks, so that it's almost impossible to leave (2006).
"Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Not to mention that, again, 99% of people will get vendor-locked because in the end nobody wants to run their own instance of a federated social network.
> You willingly switched over essentially all human communication to said social networks, despite the obvious warning signs. (2006-2021)
Yes, it's been years since I talked someone face to face or on the phone and I cannot send letters anymore.
> 4) Finally you showed no resistance to these private companies when they started deciding what content should be allowed or banned, even when it got purely political (2020).
No resistance? I mean, there's been quite a lot of discussion and pushback against social networks for their decisions on content. Things move slowly, but "no resistance" is quite the understatement.
> Now they're getting more brazen. And why shouldn't they? You'll obey.
Is this Mr. Robot talking now?
But now more seriously, in December the European Electronic Communications Code comes into effect, and while it's true that there's a temporary derogation that allows these CSAM scanners, there's quite a big debate around it and things will change.
The main problem with privacy and computer control is a collective one that must be solved through laws. Thinking that individual action and free software will solve it is completely utopian. A majority of the people will delegate control over their computing devices to another entity because most people don't have both the knowledge and the time to do it, and that entity will always have the option to go rogue. And, unfortunately, regulation takes time.
Anyways, one should wonder why, after all these years of these kinds of smug messages, we're in this situation. Maybe the solutions and the way of communicating the problems are wrong, you know.
Not GP but...
>99% of the population will delegate the decision of what code is allowed to run to someone, be it the manufacturer, the government, some guy on the Internet or whatever. For that 99% of the population, by the way, it's actually more beneficial to have restrictions on what software can be installed to avoid malware
I do not agree with this. You are saying people are too stupid to make decisions, and that is immoral in my opinion.
>"Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Data portability is a thing. This was the original problem with FB, and that's how we got 'takeout'.
>Yes, it's been years since I talked someone face to face or on the phone and I cannot send letters anymore.
>Is this Mr. Robot talking now?
Using the extreme in arguments is dishonest. We are talking on HN, which is a selective group of like-minded people (a bubble). How does your delivery driver communicate with their social circles? Or anyone that services you? You will find different technical solutions are used as you move up and down the social hierarchy.
>The main problem with privacy and computer control is a collective one that must be solved through laws.
Technology moves faster than any lawmaker can legislate. We do not need more laws as technology advances, but rather an enforcement of personal rights and protections enabling users to be aware of what is happening. It appears you are stating "people aren't smart enough to control their devices" and "we need laws to govern people" vs my argument that "people should be given the freedom to choose" and "existing laws should be enforced and policy makers should protect citizens with informed consent".
10 replies →
> "Impossible to leave" is not a matter of closed or open, but it's a matter of social networks in general. You could make Facebook free software and its problems wouldn't disappear.
Not true. If you have interoperability between different networks, you can leave. This is how ActivityPub (e.g. Mastodon, PeerTube, PixelFed) works.
> Not to mention that, again, 99% of people will get vendor-locked because in the end nobody wants to run their own instance of a federated social network.
You just switch to any other instance, because Mastodon doesn't prevent you from doing that.
> The main problem with privacy and computer control is a collective one that must be solved through laws. Thinking that individual action and free software will solve it is completely utopic.
We need both. You cannot force Facebook to allow interoperability when there is no other social network.
9 replies →
You could go back even further if you wanted. Possibly to the first handwritten letter delivered by a third party. That's where all the potential for censorship and tampering started.
Truth is even if our tools evolve, our chains evolve faster.
Case in point being the Black Chamber run by the Thurn und Taxis post in Brussels in the 16th Century.
https://link.springer.com/chapter/10.1057/9780230298125_11
Signals intelligence intercept and analysis centres have been called Black Chambers for a long time, including the first such group in the US, predecessor to the NSA:
https://en.wikipedia.org/wiki/Black_Chamber
It's over the top comments like this that make me visit ycombinator less and less each week.
What do you think is over the top about it? The manufacturer just told you it will implement complete read access to your device.
I don't think this is a fair characterization; it should not be most people's life goal to fight for their privacy against big companies. Some people make it theirs, and that's fine, but I think it's def not something to expect from most people, in the same way that you don't expect everyone to be actively fighting for clean tap water or drivable roads.
Instead, we as a collective decided to offload these tasks to the government and make broad decisions through voting. This allows us to focus on other things (at work, with our actual job, at home, you can focus with what matters for you, whatever that might be).
For instance, I tried to avoid Facebook for a while and it was working well; I just missed a few acquaintances but could keep in touch with the people who matter to me. Then suddenly they acquired Whatsapp. What am I to do? Ask my grandmother and everyone in between to switch to Telegram? Instead, I'm quite happy as a European about the GDPR and how the EU is regulating companies in these regards. It's definitely not there yet, but IMHO we are going in the right direction.
Is this anything new, though? The communications haven't been E2E protected ever since people started using phones.
No, but stock phones haven't always been spying on users client side.
Because E2E didn't exist when the phone was invented but there have been significant improvements since then.
3 replies →
Probably because most people estimate the risk to be low enough (correctly or not). If I was a politically sensitive person in China, for instance, I'd definitely be more wary.
First they came for the Communists, and I did not speak out, because I was not a Communist.
Then they came for the Socialists, and I did not speak out, because I was not a Socialist.
Then they came for the trade unionists, and I did not speak out, because I was not a trade unionist.
Then they came for the Jews, and I did not speak out, because I was not a Jew.
Then they came for me, and there was no one left to speak out for me.
I would trade all my personal privacy if that meant eliminating pompous drivel producing dolts like you from all aspects of my life.
Fuck you.
Could you please stop posting flamebait and unsubstantive comments? We ban accounts that post like this, obviously.
https://news.ycombinator.com/newsguidelines.html
(1) happened with the first multitasking OS, or possibly when CPUs got microcode; Android and iOS are big increases in freedom in comparison to the first phones.
(2) and (3) are bad, but a tangential bad to this: it’s no good having an untainted chat layer if it’s running on an imperfect — anywhere from hostile to merely lowest-bidder — OS. (And most of the problems we find in software have been closer to the latter than the former.)
(4) for all their problems, the American ones held off doing that until there was an attempted coup, having previously resisted blocking Trump despite him repeatedly and demonstrably violating their terms.
Re (1): That's technically true, but missing the point when viewed holistically. Those first feature phones were mostly just used to make quick calls to arrange an appointment or discuss one or two things. They were not a platform mediating a majority chunk of our social lives like today's phones are.
1 reply →
Will this work differently depending on what country you are in? For instance, back in 2010 there was that thing about Australia ruling that naked cartoon children count as actual child porn. [1]
It's perfectly legal elsewhere (if a bit weird) to have some Simpsons/whatever mash-up of sexualised images, but if I flew on a plane to the land down under, would I then be flagged?
edit: If this is scanning stuff on your phone automatically, and you have whatsapp or whatever messenger set to save media automatically, then by mass texting an image that is considered 'normal' in the sender's country but 'bad' in the recipient's, you could get a lot of people flagged just by sending a message.
[1] https://arstechnica.com/tech-policy/news/2010/01/simpsons-po...
Get ready for the witch hunts
Sorry to say that, but stuff like this has to happen at some point when people don't own their devices. Currently, nearly no one owns their phone and at least EU legislation is underway to ensure that it stays this way. The next step will be to reduce popular services (public administration, banking, medicine) to access through such controlled devices. Then we are locked in.
And you know what? Most people deserve to be locked in and subject to automatic surveillance. They will wake up when their phone creates a China-Style social score automatically, but then it will be far too late. It's a shame for those people that fought this development for years, though. But the "I have nothing to hide" crowd deserves to wake up in a world of cyber fascism.
Let us comb this a bit.
When you mention that set of population as deserving the consequences, it does not seem too far to me from "People who want trains instead of cars deserve trains". Is this relevant? The big problem is, people buy controversial services, hence finance them and endorse them, hence strengthen them, and in some cases these services make the acceptable ones extinct: the big problem is that people do not refuse what is not sensible, and sensible people have to pay.
Already here where I live, I cannot get essential services¹ because that practice made them extinct!!!
¹(exactly: public administration, tick; banking, very big tick; medicine, not yet. And you did not mention ___the cars___, and more...)
Other note: you wrote
> nearly no one owns their phone
and some of us are stuck with more reliable older devices, which soon may need some kind of replacement. If you know the exceptions to the untrustable devices, kindly share brand/model/OS/tweak.
> If you know the exceptions to the untrustable devices, kindly share brand/model/OS/tweak.
https://puri.sm/products/librem-5
1 reply →
Why do they DESERVE to be so? Despite what you say there was and is no mechanism to really change or affect the course of these affairs.
Apple? How? Your other option is Android, who do you choose when they start to do it?
Or when governments decide to mandate that ALL phones need to legally have "scanning all the files on it and report them back to the police database" mechanisms?
The EU? Particularly how? An organization that has been deliberately structured to supersede the legitimacy of nation states and export its power to all of its members at the whim -- sometimes it seems -- of some aging, out-of-touch bureaucrats.
I'm not even a #brexiter, btw.
Should Scotland become an independent nation? There was a public debate and people had opinions -- and there were mechanisms in place to act on and make a change, as an example.
There has been no public debate on this in a national sense (anywhere), and also no mechanisms by which people could decide to change it. I'm not sure people deserve it.
Yeah but I don’t and I don’t like being collateral damage.
>EU legislation is underway to ensure that it stays this way.
which one?
Radio Equipment Directive https://ec.europa.eu/growth/sectors/electrical-engineering/r...
Devices with radio capabilities (i.e., all mobile devices) must be designed to prevent executing "unauthorized" software.
4 replies →
Well this really debunks my common phrase “Apple is a Privacy company, not a Security company”
I can’t say I’m surprised they are implementing this (if true), under the radar. I can’t imagine a correct way or platform for Apple to share this rollout publicly. I’m sure nothing will come of this, press will ignore the story, and we all go back to our iPhones
Apple only claims to care about privacy because they couldn't manage to compete with Google on running an ad-serving cloud service. Since they couldn't sell ads in meaningful numbers, they figured they might as well brag about it.
But Apple iCloud for ex doesn't intrude on your privacy any more or less than Google Photos.
Apple, like all companies is a Money company.
This is really a pointless comment - yes all companies are ultimately there to make money, but that does not mean always blindly doing whatever makes the most money in the short term. Clearly apple sees value in marketing themselves as privacy friendly.
And a "subject to other powers" company.
This will go great with zero-click iMessage exploits like this one: https://9to5mac.com/2021/07/19/zero-click-imessage-exploit/
Edit: Actually, this won't even require an exploit if they also scan media for people who have enabled "iMessage in iCloud".
Just send someone an image in the DB (or an image that's been engineered to generate a false positive) and wait for them to get raided.
Yup, there is now a single API call for planting evidence on a dissident's phone and sending a SWAT team to retrieve him.
Authoritarian regimes love this.
One could envision false positive images that don’t even display in iMessage when sent or that are nested in other file types, etc.
The terrifying part about this is potential abuse. We have seen people arrested for having child porn in their web cache just from clicking on a bad link. I could inject your cache with any image I want using JS.
Presumably the same could apply to your phone. Most messengers save images automatically. I presume the images are immediately scanned against hashes once saved, and a report is immediately made once the matches cross the reporting threshold. There’s no defence against this. Your phone number is basically public information and probably in a database somewhere. You have no protection here from abuse if you’re a normal citizen. I bet most people don’t even turn the auto-save setting off on WhatsApp.
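To make that flow concrete, here is a hypothetical sketch of the save-scan-report loop described above. Every name in it (`KNOWN_HASHES`, `REPORT_THRESHOLD`, the report step) is invented for illustration; nothing here reflects Apple's actual implementation:

```python
# Hypothetical sketch of a scan-on-save flow. KNOWN_HASHES,
# REPORT_THRESHOLD, and the report step are invented names;
# this is NOT Apple's real implementation.

KNOWN_HASHES = {0xDEADBEEF, 0xCAFEBABE}  # stand-in for a CSAM hash database
REPORT_THRESHOLD = 3

class Device:
    def __init__(self):
        self.match_count = 0
        self.reported = False

    def on_media_saved(self, media_hash):
        # Runs automatically, e.g. when a messenger auto-saves an image,
        # with no action (or consent) from the phone's owner.
        if media_hash in KNOWN_HASHES:
            self.match_count += 1
        if self.match_count >= REPORT_THRESHOLD and not self.reported:
            self.reported = True  # report fires silently

d = Device()
for h in [0xDEADBEEF, 0x1234, 0xDEADBEEF, 0xCAFEBABE]:
    d.on_media_saved(h)
print(d.reported)  # True: three matches crossed the threshold
```

The point of the sketch is that the owner never appears in the loop: whoever can cause media to be saved (a group chat, an auto-download) controls the input.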
I've read that people intentionally bomb telegram groups with CP. Telegram automatically downloads shared images. This is going to be a crapfest.
Maybe this is a necessary crapfest though, to demonstrate how absurd these measures are.
This has worrying privacy implications. I hope Apple makes a public announcement about this but wouldn’t be surprised if they don’t. I also would expect EFF will get on this shortly.
What are the implications?
To quote another tweet from Matthew Green, the author of the Twitter thread (https://twitter.com/matthew_d_green/status/14231103447303495...):
> Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content.
> That’s the message they’re sending to governments, competing services, China, you.
5 replies →
False positives, what if someone can poison the set of hashes, engineered collisions, etc. And what happens when you come up positive - does the local sheriff just get a warrant and SWAT you at that point? Is the detection of a hash prosecutable? Is it enough to get your teeth kicked in, or get you informally labeled a pedo by your local police? On the flip side, since it's running on the client, could actual pedophiles use it to mutate their images until they can evade the hashing algorithm?
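On the engineered-collision and evasion questions: this is exactly the tension of perceptual hashing. An exact cryptographic hash changes completely on a one-pixel edit (so it's useless for matching re-encoded images), while a perceptual hash tolerates small edits — which also means someone can mutate an image until the hash drifts out of match range. A toy demonstration (the "perceptual hash" here is a deliberately crude brightness-threshold hash, not PhotoDNA):

```python
import hashlib

# Toy contrast between exact and perceptual hashing. The perceptual
# hash below is a crude brightness-threshold sketch, not PhotoDNA.

def crypto_hash(pixels):
    return hashlib.sha256(bytes(p for row in pixels for p in row)).hexdigest()

def perceptual_hash(pixels):
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(p >= avg for p in flat)

# 8x8 "image": dark left half, bright right half.
img = [[10] * 4 + [200] * 4 for _ in range(8)]

tweaked = [row[:] for row in img]
tweaked[0][0] = 11          # one-pixel edit (re-encode-style noise)

mutated = [[200] * 4 + [10] * 4 for _ in range(8)]  # heavy edit

# Exact hashing breaks on any change; the perceptual hash survives
# the small edit but not the heavy one.
print(crypto_hash(img) != crypto_hash(tweaked))          # True
print(perceptual_hash(img) == perceptual_hash(tweaked))  # True
print(perceptual_hash(img) == perceptual_hash(mutated))  # False: evades matching
```

So both attack directions in the parent comment are plausible in principle: an adversary iterating edits until the hash escapes the threshold, and (with knowledge of the algorithm) crafting unrelated images whose hashes land near a targeted entry.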
12 replies →
A country can collect a list of people sharing any content they put on a hash list.
Like gay porn, 'save Khashoggi' meme, or a photo from documentary about missing Uighurs.
It's hard to imagine how this could be misused, right?
3 replies →
Ok. I can say this since I don't have anything to hide (edit: 1. that I am aware of and 2. yet).
I switched to the Apple ecosystem 2 years ago and have been extremely happy.
I couldn't see a single reason to switch back.
Today that reason came. What goes on on my phone is my business.
I guess fairphone next.
Again, I think I have nothing to hide now, so I can say this loud and clear now. Given what recent elections have shown us, we cannot know if I will have something to hide in a few years (political, religious? Something else? Not that I plan to change, but things have already changed extremely much since I was a kid 30 years ago.)
I'm gonna go out on a limb here.
At the end of the day laws are relative so to say. The thought behind such a system is noble indeed, but as we've seen, anything any government gets their hands on, they will abuse it. Classic example being PRISM et al. In theory it's great to be able to catch the bad guys, but it was clearly abused. This is from countries that are meant to be free, forward thinking etc, not any authoritarian regimes.
People in this thread are asking what Saudi Arabia, China etc will do with such power that Apple is adding, you bet your ass that they'll use it for their own gain.
I want to believe in such systems for the good. I want child abusers caught. But a system that equally can be abused by the wrong people (and I guarantee you that will be western countries too) ain't it.
It's not even hypothetical, it's already known that Apple has to use servers operated by China for their operations there [1] so this capability will be fully within their hands now too to arbitrarily censor and report iPhone users for any material they want to disallow.
[1] https://www.businessinsider.com/apple-data-china-censors-app...
how the fuck am i supposed to know if that image i downloaded from some random subreddit is of a girl who is 17.98 years old? how long until we just use a NN to identify images of children automatically? she looks pretty young so i guess you will get disemboweled alive in prison? what is stopping someone from planting an image on your phone or a physical picture somewhere on your property? im so tired of this fucking dogma around child porn. you can always identify the presence of dogma by the accompanying vacuum of logic that follows in its wake. a teenage girl can go to jail for distributing pictures that she took of herself. do i even need to say more?
And with this, the fear politics are in effect. Just from reading the comments it seems one can no longer be 100% sure their phone is clean. So people will live in constant fear that on some random Tuesday the cops will come knocking, your reputation will be destroyed, and in the end, when you're cleared, you will have incurred incredible financial and mental costs. This is aside from the fact that your phone should be your phone, and no one else should be allowed in.
You demo this tech working with child porn, it maybe shows its worth with some Isis training videos, but before long China will be demanding access on their terms as a condition of accessing their markets.
And at that point the well meaning privacy advocate who worked hard to get some nice policies to protect users is booted off the project because you can hardly tell the shareholders and investors who own the company that you're going to ignore $billions in revenue or let your rival get ahead because of some irrelevant political movement on the other side of the world.
It's happened plenty of times before and it'll happen again.
What I find disturbing is that almost all commenters here took that rumour for a fact. There's nothing to substantiate it, there's no evidence of the scan actually happening, and there's no historical precedent of a similar thing being done by Apple. And yet, people working in tech with supposedly developed critical thinking took the bait.
Why? Is it simply because it fits their world view?
The world has changed. May not be completely accurate yet, but fits 2021 like a glove:
https://sneak.berlin/20210202/macos-11.2-network-privacy/
https://sneak.berlin/20201112/your-computer-isnt-yours/
You’re right of course but I think in this case it was due to the reputation of the poster on Twitter. At least, that’s the only reason I would take this rumor seriously. But yeah, a rumor is a rumor still.
I found another source (https://www.macobserver.com/analysis/apple-scans-uploaded-co...) saying apple was already running these scans on iCloud using homomorphic encryption… in 2019. It doesn’t really make sense for them to run it on device. Apple has the keys to unlock iCloud backups on their server and a sizable portion of users have those enabled, so why bother to run these on device?
I’m not sure if it’s a rumor or not but there was a thread on HN the other day about Facebook exploring homomorphic encryption for running ads on WhatsApp and I wonder if wires got crossed?
The article said they could see Apple using homomorphic encryption. Not currently.
Scanning on phones is probably a step toward end-to-end encryption for photos. Or toward scanning photos that are never uploaded. Or both.
This matches up with how I view Apple's corporate thinking: "we know what's best", "the consumer is not to be trusted". Apple limits access to hardware and system settings; they block apps that don't meet moral standards, are "unsafe", or just might cause Apple to not make as much money. They do not significantly care what people say they want; after all, they know best.
A lot of people love not having options and having these decisions made for them.
I would never want a device like that or with something that scans my device, but I think the vast majority of their customers, if they even hear about it, will think "I trust apple, they know what's best, it won't affect me".
I'm ok with apple doing it because I think most apple users will be ok with it. I would not be ok with it if all Android devices started doing it though.
That’s sort of the genius of Apple though, isn’t it? They make products that hold your hand tighter than any designer but their marketing department obscures that fact as much as possible with their “think different” campaigns.
It’s less “I trust Apple” and more “if I really cared, I’d have bought from another designer”.
This isn't exclusive to Apple - Microsoft recently decided that starting from August, Windows Defender will have the option for blocking PUAs enabled by default for those users who don't have other third-party security software [1]. This also, I believe, falls under "we know what's best" and "the customer is not to be trusted" or "is too stupid to run things on its own".
This does look good on paper - caring for customers and their security, peace of mind - but tomorrow it might be total vendor lock-in with no way of installing any software other than what's approved by the corporate entities.
[1] - https://www.ghacks.net/2021/08/03/windows-10-blocks-potentia...
I don’t see why a circumventable default block of random executables is bad. People (even technically adept ones, who are a small minority) are very easy to fool. Depending on how easy it is to allow execution of such a program (which is a UX problem) it can indeed be what prevents a botnet on a significant number of computers.
I'm a little bit confused here and hope maybe some of you can clear this up.
My parents took lots of photos of me as a baby/small child. Say lying naked on a blanket or a naked 2yr old me in a kiddie pool in the summer in our backyard. Those are private photos and because it was the 1970s those were just taken with a normal non-digital camera. They were OBVIOUSLY never shared with others, especially outside immediate family.
Transform that into the 2020s and today these type of pictures would be taken with your iPhone. Would they now be classified as child pornography even though they weren't meant to be shared with anyone nor were they ever shared with anyone? Just your typical proud parent photo of your toddler.
Sounds a bit like a slippery slope, but maybe I am misunderstanding the gravity here. I'm specifically highlighting private "consumption" (parent taking picture of their child who happens to be naked as 1yr olds tend to be sometimes) vs "distribution" (parent or even a nefarious actor taking picture of a child and sharing it with third parties). I 100% want to eliminate child pornography. No discussion. But how do we prevent "false positives" with this?
As with all horribly-ill-defined laws, it depends how the judge is feeling that day and their interpretation of the accused's intent. If the case can be made that the images arouse inappropriate gratification, they can be deemed illegal.
If that sounds absurd - most laws are like that. For better or worse, there's a human who interprets the law, not a computer. It's unfortunate Apple is choosing to elect a computer as the judge here, for exactly concerns like yours.
I believe there is a large database of known child pornography.
Unless someone has been distributing photos of your kids as child porn (which would probably be good to know) it's unlikely any of your photos will match the hashes of the photos in that database.
I'm not sure that's how it works, but that's what I've gathered from the other comments on this post.
The idea is it detects specific images humans classified. Not any unclothed child.
So far. Many websites already use NN trained to detect any nudity. It is only a matter of time before it lands on all consumer computing devices. The noose will keep on tightening because people keep debating instead of protesting.
1 reply →
Well, this is very problematic for a privacy-concerned company. Under no circumstances do I want Apple to scan my private files/photos, especially so if a flagged match means someone can look at the photo to determine whether it is a true positive or a false positive.
Also, this functionality isn't something they should be able to implement without telling their end users.
It is also problematic because it will just make cyber criminals more technically aware of what countermeasures they must take to protect their illegal data.
The consequence is very bad for the regular consumer: the cyber criminal will be able to hide, and the government has the possibility to scan your files. The end consumer loses, again.
Every so often I feel a wave of revulsion that the computer I use the most — my iPhone — is an almost completely closed system controlled by someone else.
Contrast this with my desktop where, in the press of a few buttons, I am presented with the source code for the CPU frequency scaling code.
Bring on the Linux phones.
Can you recommend one?
Either a PinePhone or a Librem 5. There's not much more choice than that atm.
This will be used for anti-piracy, government censorship, and targeted attacks, as always. There's no such thing as "we're only scanning for CP". By creating the tool, the company can be compelled to use it in other ways by the U.S. or foreign governments. Apple already complies with anti-LGBT countries and will tailor its App Store to suit each one of them. What happens when they're required to also scan for LGBT materials? They'll comply, because Apple doesn't actually have morals.
On top of this, it gives Apple far too much power. What happens when someone they don't like owns an iPhone? They can pull an FBI and plant content onto the device, having it then be "automatically detected".
Saudis: We want a list of everyone who ever shared a photo of Khashoggi (no matter in which app).
Apple: Say no more, here they are. Hope you won't imprison all of them, as that would decrease our services revenue substantially, lol.
Also Apple: Privacy is a human right, buy more iphones.
For those not familiar with the acronym, CSAM = Child Sexual Abuse Material
Since Snowden I use my phone in a minimalistic way. Phone calls. Minimal texting. No games. Banking apps only if necessary.
Treat your phones as an enemy. Use real computers with a VPN and software like Little Snitch when online. Use cameras for photography and video.
The benefits of this approach are immense. I have a long attention span. I don't have fear of missing out.
If governments want a future of tracking and surveillance mediated through big tech, let's make it mandatory by law. And since big tech will reap the benefits of the big data, they must provide the phones for free. :)
>Treat your phones as an enemy. Use real computers with VPN and software like Little Snitch when online.
I'm assuming your "real computer" is a Mac (since Little Snitch is Mac-only). What makes you think Apple won't do the same for macOS? Also, while you have greater control with a "real computer", you also have less privacy from the apps themselves, since they're unsandboxed and have full access to your system.
They said "_software like_ Little Snitch"... Don't assume.
Not the right logic here. Check your idea more seriously. macOS is just an example.
> Since Snowden I use my phone in minimalistic way.
Dude, you carry it around with you, with its radio enabled. You're just fooling yourself.
Yes, I am obviously a dumb person. Thanks for your invaluable input. Dude. But my use case is not to hide or erase my digital exhaust. Creating a habit of limited usage is more important and realistic. The funny part is that, as a side effect, I don't carry my smartphone around so much. I have a separate GPS system in my cars and a dumb phone for emergencies.
If you're treating your phone as hostile why would you skip gaming apps but use banking ones? That seems backwards if you're assuming your mobile is the weak point.
In the EU, the PSD2 directive obliges banks to provide strong authentication for customer logins and for various operations on the account, including payments of course. Most of the time, mobile applications are used as a result, either to confirm logins or as software OTP generators (biometric verification is also supported). The printed code lists are rather obsolete now, and some banks may actually charge you extra for sending such codes by text message. I know there are hardware security tokens, but in all these years I haven't seen anyone using one here.
So it's rather hard to avoid banking apps.
Also, the PSD2 directive implements the duty of providing API infrastructure for third-parties. [1]
https://www.ecb.europa.eu/paym/intro/mip-online/2018/html/18...
Actually we are the weak point. The phone stuff is just unregulated capitalism.
I was under the impression that one of the reasons these tools aren't available for public download is that the hashes and the system can be used to design defeat mechanisms. Doesn't this mean that someone who has an image and a jailbroken device can just watch the system, identify how the photo is detected, and modify it so that it doesn't trip the filter?
PhotoDNA and systems like it are really interesting, but it seems like clientside scanning is a dangerous decision, not just from the privacy perspective. It seems like giving a CSAM detector and hashes to people is a really risky idea, even if it’s perfect and it does what it says it does without violating privacy.
I see it as a huge risk too.
If the algorithm and the blocklists leaked, then not only would it be possible to develop tools that reliably modify CSAM to avoid detection, but also to generate new innocent-looking images that are caught by the filter. That could be used to overwhelm law enforcement with false positives and also weaponized for SWAT-ing.
Fortunately, it seems that matching is split between client-side and server-side, so extraction of the database from the device will not easily enable generation of matching images.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
These days you can trump on anyone's (right to) privacy, freedom or general rights for that matter with just 2 keywords:
- terrorism - child pornography
Try to protest it and you will be prompted with a nice "do you have anything to hide?" question by the masses.
The advertised intention of these tools could not be farther from the truth, and people happily fill their pockets at each new launch.
Four good reasons why you should buy an iPhone:
- (Check) fragile
- (Check) best monopoly app store
- (Check) high price
- (Check) phones the police in case of a hash collision
I seriously doubt Google is going to have a different opinion on this. They already scan google drive content for mere copyright violations.
I just assumed the entire FAANG group scanned user content for CP already. I mean, EU recently passed a vote that extended the permission for companies to be able to scan and report user content for this purpose without it being considered a privacy violation [1] (first introduced in 2020). And I recall MS[2]/Google[3] being open about this practice way in the past.
Personally I somehow doubt that MS/Google weren't scanning private content (aka not shared) for this type of material. But can't have transparency with these behemoths.
[1] https://www.euronews.com/2021/07/07/eu-adopts-temporary-rule...
[2] https://www.theverge.com/2014/8/7/5977827/microsoft-tips-off...
[3] https://www.theverge.com/2014/8/5/5970141/how-google-scans-y...
This might be an unpopular opinion, but catching people sharing CP images is like catching end users of drugs. Yes, it's illegal, but the real criminals are the ones producing the drugs. It's very difficult to get to them, so you just arrest end users.
Another side note, about the near future: when someone comes up with synthetic CP images, will they also be criminalised?
It's not just unpopular, it's also wrong: when you use drugs, you are almost entirely harming yourself (leaving aside funding all sorts of illegal activities, just focusing on the act itself). When you propagate CSAM, you are causing psychological harm to the victims, and can cause them to physically harm themselves or be harmed by others. So you are a criminal, harming a victim as well.
You can read about this directly from a victim via this NYT article: https://www.nytimes.com/2020/12/04/opinion/sunday/pornhub-ra...
How would a victim of CSA ever find out that I downloaded a particular file? Surely the harm there is caused by the distributor, not the consumer.
Conversely, when I use drugs, I'm paying someone, so I'm actually directly funding criminals. Depending on the country and the drugs, this is often putting cash in the hands of a very violent cartel.
Consumption encourages production.
I have so many questions about the implementation details.
1) Does this work only on iPhones or will it be iPads, as well?
2) Is this part of a system software update? I wonder if that will show up in the notes and how it would be spun. "In order to better protect our users ..."
3) If it is part of the system software update, will they be trying to make it run on older iDevices?
4) Is it just photos in your photo bin, iCloud, or does it start grabbing at network drives it attaches to? I could see the latter being prone to blowing up in their proverbial faces.
The Four Horsemen of the Infocalypse ride again!
I’m certain if it gets added to iPhones, it’ll get added to iPads. iPadOS is iOS.
Even when this reaches its final conclusion, policing copyrighted and political content, people will still be content to use their i-spy-devices. The future is grim; it's now.
The idea of this scanner is good, but as you say it's almost certainly bound to evolve into something else.
A court order should be mandatory, not this blanket scanning.
Thankfully, there is the option to steer clear of Apple devices.
How do they determine whether an image is child porn? My wife has an iPhone and we take pictures of our baby daughter on it, sometimes in diapers and sometimes naked. Our intentions are not pornographic, but now I am worried about Apple's algorithm flagging them as such.
Has Apple published its training data somewhere?
It just gathers hashes without judging or interpreting. This is the first phase. When a child porn picture is discovered and inquired about, they just compare its hash with what they have in the database and see who else had that picture on their phone, allowing them to build a nicely timelined trace of it and even discover the original source.
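Mechanically, the lookup described here is just set membership over digests. A minimal sketch, assuming a hypothetical `known_hashes` table (the values are made up, and Apple's system reportedly uses a proprietary perceptual "NeuralHash" rather than a raw file digest like this):

```python
import hashlib

# Hypothetical database of digests of known images (illustrative values only).
known_hashes = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def matches_database(image_bytes: bytes) -> bool:
    """Exact-match lookup: True only if the file bytes are byte-identical
    to a known entry. A perceptual hash would instead tolerate re-encoding."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

print(matches_database(b"example-known-image-bytes"))   # True
print(matches_database(b"example-known-image-bytes!"))  # False: one extra byte, new digest
```

The limitation the sketch makes obvious is the one raised elsewhere in the thread: any exact-digest scheme is defeated by trivial re-encoding, which is why deployed systems use perceptual hashes instead.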
Wouldn't their scanner be defeated by a tool that randomly modified rgba values for a small percentage of pixels in an image?
This is something that child porn enthusiasts using apple would invest their time and money into. A regular person would not think to do this (yet).
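Whether that defeats the scanner depends on the hash family. A toy average-hash, contrasted below with a cryptographic digest, illustrates the difference (purely a sketch; PhotoDNA and Apple's NeuralHash are far more elaborate, but share the goal of surviving small pixel changes):

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, thresholded at the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

img = [[10, 200], [15, 220]]      # tiny grayscale "image"
tweaked = [[11, 199], [15, 220]]  # a few values nudged slightly

# Cryptographic digests diverge completely after the tweak...
print(hashlib.sha256(bytes(p for r in img for p in r)).hexdigest()[:12])
print(hashlib.sha256(bytes(p for r in tweaked for p in r)).hexdigest()[:12])

# ...but the perceptual hash is unchanged.
print(average_hash(img) == average_hash(tweaked))  # True
```

So a few random pixel flips would evade an MD5/SHA-style database, but not necessarily a perceptual one.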
What happens when a theocracy demands that Apple check for hashes of images that disrespect their prophet? To me this sounds potentially more scary and dystopian than surveillance in China. But if I'm honest, I don't know that China isn't already scanning citizens' devices for illegal hashes.
I'm not worried about China. I'm worried about the U.S. This is a step along the path to the Buttle/Tuttle dystopia that Brazil warned us about.
I've filed a twenty seven B stroke 6 reporting this comment.
It is lovely when your "own" device works against you to check whether you are in possession of illegal numbers https://en.wikipedia.org/wiki/Illegal_number. And surely we can trust Apple that it will only be used for this kind of content, and not, for example, government leaks.
I would like to hear the strongest case for the privacy trade-off. How many more children will be physically recovered versus existing methods? What is the reduction in money flow to abduction activities?
This might be naive, but I would guess that the best way to fight this kind of thing is to let people know more of the case details. People would protect themselves, find the crimes, and stop unwittingly supporting them. For instance, if it can be shown that cryptocurrency or encrypted messengers are used to a significant extent, the community will either find a technical solution, or stop using it.
This is terrifying. The possibilities for extraordinary abuse are endless. What's surprising to me is the complete lack of media focus on this topic. Why isn't this being hotly debated on TV? Scanning people's photos is just OK now?
Back to an Android phone, once I confirm this story is true.
If you like this, I have some other innovations that you may be interested in:
* A car that automatically pulls over when a police cruiser attempts to intercept you
* A front door that unlocks when a cop knocks
* A camera that uses AI to detect and prevent the photography of minors, police, and critical infrastructure
* A Smart TV that counts the number of people in your living room to ensure you aren't performing an unauthorized public broadcast of copyrighted content
Surely, at least one of those sounds ridiculous to you. As well-intentioned as this scanning may be, it violates a core principle of privacy and human autonomy. Your own device should not betray you. As technologists, just because we can do something doesn't mean we should.
No need for hypotheticals, 2020 was a huge win for the police state.
The UK and Israel allowed cops to monitor cell phone locations to crack down on unlawful gatherings in private homes.
https://www.theguardian.com/world/2020/mar/17/israel-to-trac...
The problem with allowing this is that you’re paving the way for future tyrants to use it against us.
> The problem with allowing this is that you’re paving the way for future tyrants to use it against us.
It's funny how everybody talks about the future. This is happening now. Remember how a certain German guy took power some 90 years ago? He was elected.
Why does this surprise anybody?
People nowadays voluntarily carry tracking devices. This will not stop getting worse until that behavior is denormalized.
The power to be gained from abusing it is beyond irresistible. Expecting those in power not to abuse it is like expecting a heroin junkie to be a good pharmacist.
All this feels like it's just a matter of time.
The technology is there; now we only need the motivation. If politicians decide they want it, they can simply orchestrate a media campaign and have it. The next time an "outrageous" act of crime occurs, they can make sure it stays in the media spotlight, portrayed as "if we don't act now, very bad things will happen", then slide in their solution.
* Cars can be made to pull over automatically by installing a cheap fuel cut-off switch activated by short-range radio. In many places people already add devices for toll collection anyway, and they are used to paying for regulatory inspections on their vehicles.
* For old cars, simply connect an NFC reader that unlocks the car's central locking system with a master key. For new cars, simply make manufacturers add a police master key.
* Commercial drones already stop their users from flying over forbidden areas; simply extend that to smartphones. Smartphones have enough power and sensors to identify forbidden locations and persons. Add an NFC kill switch, meaning the police can send a signal to lock down cameras.
* There were reports of smart TVs that record all the time; simply mandate that for all manufacturers and enforce automated inspection of the recordings.
Uneven application of the law seems crucial to keep the system functioning and technology can erode that. Many simple laws, if enforced thoroughly and without prejudice, would become absolutely draconian. It is not even possible for a human to know all the laws we are meant to follow at all times, yet computers can.
Printers and scanners have refused to process images containing certain patterns of stars for decades, and it seems to have worked out OK.
No it really hasn't worked out 'OK' because here we are now.
Could you elaborate?
> Your own device should not betray you.
Apple devices already betray their "owners", and they've been doing it for a long time.
You can't repair them.
You can't run your own software.
You can't use a better, more compliant web browser.
Businesses have to pay a 30% tax.
Businesses are forced to offer Sign in with Apple and forfeit the customer relationship.
Businesses have to dance to appease Apple. Their software gets banned, randomly flagged, or unapproved for deployment, sometimes completely on a whim.
Soon, more iDevices and Apple Pay will lead to further entrenchment. Just like in the movie Demolition Man, everything will eventually be Apple. Your car, your movies, your music, your elected officials.
While these things are reprehensible, I don't see much of them as "betraying" me, the user, as much as I do this new tool.
A lot of folks here need to go into a city.
People WILL be fans of all of this if the alternative is lots of robbery, car theft, etc.
We see this globally. If the state is not providing security, people will accept incredible craziness in exchange for security.
One note: in most cases folks trust Apple MORE than they would, for example, the Trump administration. Food for thought.
I live in a major city and I’m a fan of none of these things. I’m only one data point, but yours is a sweeping and inaccurate generalization that “cities are frighteningly unsafe”.
Maybe one trusts Apple more than <insert politician>, but they cannot so easily elect away Apple.
I'm okay with all of those things. Keep everyone protected and safe.
That is a really esoteric flavor of boot you seem to enjoy licking there…
How is that fourth point keeping anyone safe?
On the lighter side, perceptual hashing is actually very interesting technology.
If you're interested, I suggest you also have a look at PhotoDNA [0].
[0]: https://www.microsoft.com/en-us/photodna
I've already been itching to de-cloud and de-tech my life. If we're already getting to this stage of surveillance, I guess that's just another sign I should get on top of it.
Today it's CSAM. Tomorrow "misleading information". Etc.
I'm looking to do the same: this pandemic has made me feel quite claustrophobic about the encroachment of tech and work into my personal life. I'm planning on getting a dumb-ish Nokia phone and leaving my smartphone at home to try and wean myself off it. What are your plans?
Similar. I've been looking into "dumb phone" options and moving many tasks to old-school solutions, e.g. desk calendars, notepads, physical books, etc.
For the tech stuff I do need, I'd like to pick dedicated, "appliance-like" tools, but each of those requires some research.
Follow-up thread: https://twitter.com/matthew_d_green/status/14230910979334266...
So many questions make this tweet look odd. It's a "client-side tool": so what? An app you install? That law enforcement can install? That Apple can silently install?
It lets "Apple scan"? So Apple is going to proactively scan your photos using a tool they install?
So many questions about this. It doesn't add up.
This is just horrible… the people who actually abuse children and download such photos will now stop using Apple devices, and the rest of us are left vulnerable to misuse/abuse/corruption.
Instead of specifically targeting suspects, everyone is surveilled by default. Welcome to a world of mass surveillance.
This is horrifying. Does this only affect iMessage, or the photos library? Is it remote? Does it require physical access?
As I understand it: it's a tool (that sends a command of some sort) that compels an iPhone to perform the hash-matching operation and output the results. Is that correct? Does it notify the user?
If I had to build it within apple's privacy framework, that'd probably be my approach: remote command causes sepos unlock of photos library (even running the job on sepos?) to do photo scanning. sepos returns hashes that match
Ah I’m done now. That’s one step too far. When something overreaches into my data and risks integrity and privacy problems it’s game over.
So which Linux desktop sucks the least at the moment?
Soon enough this will scope-creep into anything a puritanical neoliberal corporation considers contraband. Time for a serious look at Linux phones.
> Linux phones
Contraband detected
I won't get into the CSAM discussion, but for anyone who has a stash of non-DRMed content, I think it's a good idea to look into alternatives to Apple devices. Sooner rather than later, the same kind of system will be auto-deleting or alerting authorities about copyrighted material, and I doubt much care will be taken to ensure that you didn't actually have a right to those copies.
In Apple's public comments they repeatedly say things like "Apple can't access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account."
If they are only concerned with iCloud accounts then... why not scan in the cloud? Can anyone explain to me why client-side scanning is actually needed here? As far as I'm aware, Apple only E2E encrypts iMessages, not iCloud photos or backups.
US politicians (to say nothing of countries with less individual freedoms) already openly pressure tech companies to censor specific content. And tech companies will do so even to the point of blocking direct, private, 1-1 messages sharing specific news. In that light, Apple's crossing a line to client-side scanning seems deeply concerning.
I don't see how keeping this as narrowly-targeted as it's being advertised would ever be possible or even intended.
Isn't there the potential for abuse of this to track things like who you talk to in private? Even if the images on your phone do not contain CSAM, the hashes of all your images would need to be shared with Apple, the NCMEC, and who knows what other quasi-gov't agencies. All it would take to build massive graphs of who talks to who, etc is to match up those hashes. It doesn't matter if they have no idea what image the hashes correspond to... If they then take the simple step of generating hashes for common images found online, they could even track what sites you browse and such. Ignoring the potential for false positives and other negative side effects of the main goal, this is a horrific breach of privacy. If you honestly think the gov't won't abuse any and all data they collect this way, I don't know what to say...
How likely are perceptual hashes to give a false positive? If I take a picture of a tree, how likely is it that a few pixels are going to line up just right in a hashing algorithm and say it might be child porn? How likely is it that law enforcement is going to understand the limitations of this technology? How likely is it that the judicial system will understand the limitations?
I can see law enforcement showing up at my door one day with a search warrant demanding to have a look around, and I would have no idea why they’re there, but they’ll want to look through all my personal belongings.
Worse yet, I might come home from work one day, see my windows broken, see my place has been ransacked and my computers are missing. I would call the police to report a burglary, only to hear that I'm under investigation and they need me to give them the key to decrypt my hard drives.
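For context on where the false-positive rate gets set: perceptual matchers rarely test exact equality; they accept anything within some bit-distance of a known hash. A sketch with made-up 16-bit values (real hashes are much longer, and Apple has published neither its algorithm nor its threshold):

```python
def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hash values."""
    return bin(a ^ b).count("1")

def is_match(h1: int, h2: int, threshold: int = 4) -> bool:
    # A looser threshold catches more re-encoded copies of a known image,
    # but also raises the odds that an unrelated photo lands inside the ball.
    return hamming(h1, h2) <= threshold

print(is_match(0b1010101010101010, 0b1010101010101110))  # distance 1  -> match
print(is_match(0b1010101010101010, 0b0101010101010101))  # distance 16 -> no match
```

The threshold choice is exactly the trade-off the questions above worry about: recall against false accusations.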
The slippery slope argument is that using this method on private files, i.e. files not shared with anyone except the service provider, can legitimise the expansion of such scanning's scope.
While such expansions can and indeed have happened in other instances, this is akin to saying we should not give anyone any power to do anything, because it is a slippery slope that can be used to do bad things.
What then separates a slippery slope from a non-slippery one? Checks and balances. The history of the USA has shown that these are indeed what can rein in the worst instincts of any single entity. History has of course also shown when they failed, and those failures should serve not as a reason to reject the idea of checks and balances, but as an acknowledgement of its imperfection and a prompt to think of ways to mitigate it.
I think the checks and balances are pretty fragile and very susceptible to public opinion.
Two instances: 1) the post-9/11 Patriot Act, 2) the McCarthy era: https://www.e-ir.info/2011/11/03/the-extraordinary-injustice...
Sure. And as I mentioned, there will be screw ups along the way. As with any new capability/tech be it nuclear power or recombinant DNA or ability to locate CP, there can be legitimate uses that we can rally behind and ways for them to be abused.
Checks and balances are never a done deal. If we reject checks and balances and as a result reject new tech because of abuse potential, how then should we as a civilisation advance?
I really have to wonder why Apple chose to do this.
As far as I know, this kind of scanning is not legally mandated. So, either they think that this will truly make the world a better place and are doing it out of some sense of moral responsibility, or they've been pressured into it as part of a sweetheart deal on E2E ("we won't push for crypto backdoors if you'll just scan your users' phones for us"). Either way it doesn't thrill me as a customer that my device is wasting CPU cycles and battery life under the presumption that I might possess data my current jurisdiction deems illegal.
For all the acclaim privacy-forward measures like GDPR get here, I'm surprised there isn't more outright repudiation of this frankly Orwellian situation.
Sales in China and wealthy Arab totalitarian regimes. Customer asks, customer gets.
I scanned the comments to find out who this person is and how they would have any inside info and found nothing. Why is this person’s claim being taken at face value? Before debating the merits of Apple scanning photos / hashes, why does anyone believe this is true?
How does one own/use an iPhone and help mitigate any issues from this? How does one help prevent this kind of sneaky photo crawling? I feel like in order to prevent people from spying on me I have to change _everything_ I do on my phone/computer.
According to this, it has been a thing for quite a while:
https://fightthenewdrug.org/apple-fights-child-porn-by-scann...
Huh. I always took the cynical view and assumed that this was something every proprietary OS was already doing, and that this was part of why dark-web die-hards were so insistent on using TAILS. Guess "not yet."
On another note—OSes may only be starting to do this, but that same cynicism still leads me to presume that arbitrary closed-source third-party apps — or even closed-source binary distributions of open-source apps (e.g. the App Store versions of FOSS software) — could have been quietly scanning the files people are passing them for CSAM for years now, without telling users about it. It always seemed to me like the kind of thing it'd make sense to quietly slip into a media player like VLC.
This will end up being a nightmare of false positives, especially for parents with kids.
Another real case: how will they handle photos of my own naked kids, of which I have plenty, because it's quite natural for my kids to run around and play naked? I want to capture the moments, not the nudity. I also have very close friends who visit us with their kids, and for us it's fine to see each other's children playing naked. We sometimes even share photos of nice moments with our kids, where the kids happen to be naked. Is this already CP? Our kids are 3 and 5 years old.
Will they have to update their EULA or something before it starts communicating with their servers? I hate everything about this and would like to know when it actually happens. So far it's just a rumor.
I think a lot of people are missing why Apple is doing this now. They're doing it because they have a fairly secure ecosystem. They have also created a proxy that makes it difficult (impossible, according to them) to identify the client. More than likely this was implemented so that when it does go to Congress, they can say "look, we implemented a system". Otherwise the DOJ will continue to push for no encryption or backdoored encryption. There's no winning here.
Great. I'm curious what would happen if you have auto-save for received images enabled in WhatsApp and someone spams you with child pornography images.
Is there any actual evidence presented here? This is someone who's repeating "But it’s important to remember that this is the same company that won’t encrypt your iCloud backups because the FBI put pressure on them.", for which there is also no evidence and quite a lot of testimony to the contrary. His credibility seems questionable in the absence of evidence.
So, the Chinese government could force Apple to add the hash for the Tiananmen Square tank man picture and find it on all iPhones?
I wonder when I gave Apple permission to do this?
It's their phone, surely they can do whatever they want with it?
I’m sure It’s in the TOS.
Maybe the part in every TOS that says the company can change the TOS at any time without warning and you should regularly check the TOS page and stop using the service if you saw a change and didn't like it?
To start scanning my local photos on my local phone and report back?
Wow, this explains so much about why the Indian government "withdrew" a letter seeking Apple's compliance with new IT surveillance rules - https://thewire.in/government/centre-withdrew-letter-seeking... ...
The Indian government has recently introduced new laws that give it the power to dictate terms to many online platforms and broaden its surveillance powers over social media and messenger platforms. One of the laws dealing with messenger platforms requires the platform to track shared content, especially the "origin of content" (first originator) of anything shared through its network. (Facebook/WhatsApp has already gone to court to challenge this, claiming it would need to break end-to-end encryption, which would violate Indian privacy laws.)
Apple's iMessage platform has more than 25 million users and thus should come under the ambit of this law. But strangely, the Indian government seems to have given them an "exception"... and now we know why.
Betcha the Chinese government already has their own DB of hashes that they want to scan for.
So, while everyone discusses how horrible the future dystopia will be, I worry about the method itself: would the simplest of firewalls be effective against this? One that forbids any communication except with a few hosts, like a pair of them?
So what would the process be if, for example, an unwitting parent or relative has pictures on their iPhone that are perceptually similar to images of their loved one being molested at day care, or something like that?
How easy is it to generate an image that has the same "perceptual hash", or whatever they are calling it? My guess is it has to be easier than cracking a non-fuzzy hash. Do we know the algorithm they are using?
No, it's undisclosed:
>Hashes using a new and proprietary neural hashing algorithm Apple has developed, and gotten NCMEC to agree to use.
>We don’t know much about this algorithm. What if someone can make collisions?
https://twitter.com/matthew_d_green/status/14230792585163448...
How absurd, insecurity through obscurity.
> neural
Oh boy..
Thanks for letting me know. Now I know I'm not getting an Apple smartphone.
Honestly, it's probably about time to switch to GrapheneOS or LineageOS
Does this not encourage a new arms race? New or modified apps that randomly change the hashes of multimedia files as they are stored? If the CSAM DB is just simple hashes like SHA-256 or MD5/MD4, then evading detection would be trivial. Or would Apple block applications that could write randomized data into files? People don't have to be in favor of CSAM to dislike something scanning their devices, and many developers love puzzle challenges. I assume, perhaps incorrectly, that whatever app is doing the scanning could also accept additional hash DBs, allowing Apple to enable detection categories per region. One of the iPhone emulators should facilitate reverse engineering the application.
The hash is a pictorial representation of the image, and not quite a checksum of the raw file data (like MD5 etc.). I would expect that even photos of printed photos would still have the same pictorial hash (if the photos are properly aligned), where obviously the cryptographic hash would be much different (since it's not an exact replica of the original image) but in the ML's eyes (bearing in mind the pictorial hash is generated through machine learning afaik), there would be a very strong match between visually similar images.
I suppose that it's a bit like when you do a reverse image search on your favourite search engine. When you upload an image, the engine will try and find images that the ML thinks look the same, even if the bits and bytes that make up the file are different. From what I can see, the similarity detection will be much more specific so as to not generate false positives. As you theorise though, it might be possible to modify images to evade detection if the hash's match specificity is high enough.
All bearing in mind that the pictorial hash is also supposedly designed to be a one-way function, so that those who know a file's hash cannot recover the original contents of the file.
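A toy way to see the difference: below is a minimal "average hash" (aHash) over an 8x8 grayscale grid. This is far cruder than a learned perceptual hash like Apple's NeuralHash, but it shows why a small visual change (here, a slight brightness shift) leaves the hash nearly unchanged even though every raw byte differs. All names and pixel values here are illustrative.

```python
def average_hash(pixels):
    """Return a 64-bit hash: bit i is set if pixel i is above the mean.

    `pixels` is a flat list of 64 grayscale values (an 8x8 thumbnail).
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A synthetic 8x8 "image" and a slightly brightened copy of it.
original = [10 * i % 256 for i in range(64)]
brightened = [min(p + 5, 255) for p in original]

# Every byte of the brightened image differs from the original, so a
# cryptographic hash would be completely different -- but the perceptual
# hashes stay very close (small Hamming distance).
print(hamming(average_hash(original), average_hash(brightened)))
```

Real perceptual hashers (pHash, NeuralHash) compare against a distance threshold rather than requiring exact equality, which is what makes the "just flip one byte" evasion ineffective.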
This is terrifying. I'd be pissed if I owned any Apple hardware. Encrypt your systems and run your own private cloud.
Privacy is about the only thing keeping innocent people free in today's world.
Remember earlier this year when bing.com returned an empty page for the image search of tank man? We’re now moving towards a world where your phone can show you that same blank page.
It sucks that we pay for iCloud backups and get the lesser service.
This isn't even tied to iCloud, they can scan your device even if you don't use iCloud.
iOS downloads CSAM scanning code from Apple and runs it locally—hence, client-side tool.
You may turn off iCloud backups and use local backups instead.
I am done buying apple products. This is the final straw.
OK, I guess the temperature is high enough after ~20 years, frogs are boiled, it's time to move in and consolidate power everywhere.
The comments here suggest very few on HN have run a public service that allows arbitrary upload of photo and video content.
Can we get a statistic on how many people switch to Android tomorrow, broken down by profession? Politicians, for instance.
What about people storing their own children's photos on their iPhones? How would the system differentiate those?
Apple has separate operations in China; how long until anti-party content is on the hash list over there?
I really hope Apple has addressed hash collisions. Otherwise we are going to have a bad time.
Can activist shareholders mitigate changes like CSAM scanning?
That tweet thread says it will scan for hashes client-side and upload the result, circumventing E2E encryption, but then says they're just going to do it on your iCloud backups because those don't have E2E encryption. So which is it?
All?
It doesn't say that. Today Apple servers scan uploaded photos. Tomorrow Apple phones will scan uploaded photos. The next day Apple phones could scan not uploaded photos.
The next day Apple phones could periodically listen to the environment via the internal microphone. The day after that, they could take and upload photos of the environment by themselves.
CSAM also means Cyber Security Asset Management.
What if I just don't want this feature using my battery time or my network bandwidth?
They have to download the hashes in order to compare them; I wonder if a Pi-hole could help here?
Why do people think this will result in immediate abuse by corrupt governments as opposed to any other Apple service? Just because anime avatar twitter says so?
Now Pegasus can really cause some damage!
General Failure reading drive A:
>Twitter for iPhone Classic
Soviet Apple sees you.
Mac up next, I bet.
I'm assuming android/google must do this already?
Android uploads everything to the cloud and it gets scanned there. But maybe in the future the Tensor SoC can rat you out client-side.
No, it doesn't if you choose to keep it local.
iPhone only? Why not iPad and iMac too?
If you really believe this will stop at child porn, then I have a bridge to sell you.
If you have not already, please check out the wonderful alternatives to iOS/Android that do not trample over your privacy, see https://grapheneos.org or Debian for mobile (which is still very much a WIP).
It is simply unacceptable for a company or government to scan your property without your consent.
Tomorrow's headline: unexpected demand for freedom phones among the youth as FAANG roll out CSAM scanning.
Soon you'll be in jail for a meme against the leading political party and the leading line of thought that the press carries.
Whatever THAT is.
Been scanning the answers for a solution. I am 100% in the Apple ecosystem and I am all for protecting little children, but as others have pointed out, nothing is stopping them from colluding with a government in the future to, say, scan for political dissent:
- weaponizing memes against politicians as "hate speech"
- memes against a government suppressing dissent
- and we can look at what happened to the Uighurs
Can de-Googled Samsung phones work? I am specifically interested in a new phone that Samsung is launching; can it be de-Googled?
source: "trust me bro".
Source: "trust me bro". By all means don't let me interrupt the circle jerk.
Everyone is worried about Apple, but Apple is a company and follows the laws of the countries it operates in. If this is legal, or even required by government agencies, Apple will comply, and there is nothing wrong with that. If you don't like the laws of your country, you need to work to change them, not just go after Apple on social media.
Could we get a more useful link here, people?
CSAM? According to Google that's Confocal Scanning Acoustic Microscopy. Or something.
And what's with the tweeters? I think my laptop just gave me thigh burns from the CPU bloat that clicking on that link caused. Eight seconds to render the page? Why do folks still use this twerker website?
CSAM = Child Sexual Abuse Material
If your machine takes 8 seconds to render a Twitter page, perhaps you need a new machine.
Or maybe the web is getting too bloated?
https://idlewords.com/talks/website_obesity.htm
I understand the hesitation here, but fundamentally this is like trying to close Pandora's box. If something is technically possible to do AND governments demand it be done, it will be done. If not by Apple, then by someone else.
Rather than complain about it, I am interested in what alternative solutions exist, or how concerns regarding privacy and abuse of this system could be mitigated.
I don't understand this argument at all. Look at the Clipper Chip debacle in the 90s. It was technically feasible and the government very much wanted to do it. And the reason they didn't is push back from the public, saying this is a bad idea that can easily be misused, even if it does make some law enforcement things easier. I don't see how this is any different.
Sacrificing the privacy of the many to help catch a relatively small amount of (admittedly some of the worst possible) criminals, while simultaneously enabling yet more effective surveillance and oppression by those inclined governments (of which there are plenty) is a pretty terrible idea.
Eliminating the 4th amendment or mandating clear walls sure would make the cops' job easier. But no one thinks that's even a remotely good idea.
> Eliminating the 4th amendment or mandating clear walls sure would make the cops' job easier. But no one thinks that's even a remotely good idea.
Yet.
This argument falls on its head when confronted with reality. Either you already have a trustworthy government that will respect your rights, shifting slowly as the dialogue within the courts evolves, or you already have a government that doesn't care about you and your laws and desires at all, and which will do what it wants anyway. Unless you're in the latter case, there's no reason to be so hostile to empowering technologies, especially when they're being used to fight some of the most heinous types of crime.
No, we can complain about it, and we can win. Remember when the government tried to ban encryption in the early 1990s?
Remember SOPA that the internet killed?
And the Clipper chip!
>what alternative solutions exist,
stop trading freedom for security.
This is too vague. We need something concrete.
Apple beat government spying in the past so I don't see why they can't again.
It’s not trying to close Pandora’s box. It’s liability limitation. They’re effectively saying that if you want to distribute CP don’t do it on an iPhone.
They're basically saying that they're watching all your multimedia to see if maybe you're distributing child porn. This is like doing rectal exams on everyone, every day, because someone might be hiding drugs there.