Comment by themafia

3 days ago

"You can only turn off this setting 3 times a year."

Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that they'd have to go way out of their way to write the code which enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?

Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.

It's hilarious that they actually say that right on the settings screen. I wonder why they picked 3 instead of 2 or 4. Like, some product manager actually sat down and thought about just how ridiculous they could be and have it still be acceptable.

  • My guess is the number is arbitrary and the limit exists because turning the feature on triggers a mass scan of photos. Depending on whether they purge old data when it's turned off, toggling the switch could tell Microsoft's servers to re-scan every photo in your (possibly very large) library.

    Odd choice and poor optics (just limit the number of times you can enable it and add a warning screen), but I wouldn't assume this was intentionally evil bad faith.

    • I would be sceptical too, if I was still using Windows.

      I've seen reports in the past of people finding that syncing to the cloud was turned back on automatically after installing Windows updates.

      I would not be surprised if Microsoft accidentally flips the setting back on for people who opted out of AI photo scanning.

      And if you can only turn it back off three times a year, it only takes Microsoft messing up and opting you back in three times against your will, and then you are stuck opted in to AI scanning for the rest of the year.

      Like you said, they should be limiting the number of times it can be turned back on, not the number of times it can be turned off.

      8 replies →

    • Microsoft crossed that line many years ago with their constant re-enabling, without consent, of all the various anti-privacy stuff during upgrades.

    • If they are worried about the cost of initial ingestion then a gate on enabling would make a whole lot more sense than a gate on disabling.

    • > I wouldn't assume this was intentionally evil bad faith.

      Then you are hopelessly naive.

  • The number seems likely to be a deal that could be altered upward someday for those willing to rise above the minimal baseline tier.

    Right now it doesn't say whether these are supposed to be three different "seasons" of the year during which you are able to opt out, or three different "windows of opportunity".

    Or maybe it means your allocation is limited to three non-surveillance requests per year. Which should be enough for average users. People aren't so big on privacy any more anyway.

    Now would these be on a calendar year basis, or maybe one year after first implementation?

    And what about rolling over from one year to another?

    Or is it use it or lose it?

    Enquiring minds want to know ;)

  • 3 is the smallest odd prime number. 3 is a HOLY number. It symbolizes divine perfection, completeness, and unity in many religions: the Holy Trinity in Christianity, the Trimurti in Hinduism, the Tao Te Ching in Taoism (and half a dozen others)

    • I'd rather guess that they picked 3 as a passive-aggressive attempt to provide a false pretense of choice, in a "you can change it, but in the end it's gonna be our way" style, than think they attached some cultural significance to the number 3. But it's still an interesting concept, though.

      2 replies →

> Why is Microsoft so eager to also be able to know this?

A database of pretty much all Western citizens' faces? That's a massive sales opportunity for all oppressive and wannabe-oppressive governments. Also, ads.

  • Combine face recognition on personal photos with age checks that include photos, and you can link stuff directly to Microsoft/Google accounts for ads.

I agree with you, but there's nothing astonishing about any of this, unfortunately; it was bound to happen. Almost all cautionary statements about AI abuse fall on the deaf ears of HN's overenthusiastic and ill-informed rabble, stultified by YC tech lobbyists.

  • The worst part about it was that all the people fretting about ridiculous threats, like the chatbot turning into Skynet, sucked the oxygen out of the room for the more realistic corporate threats.

    • Right. But then the AI firms did that deliberately, didn't they? Started the big philosophical argument to move the focus away from the things they were doing (epic misappropriation of intellectual property) and the very things their customers intended to do: fire huge numbers of staff on an international, multi-industry scale, replace them with AI, and replace already limited human accountability with simple disclaimers.

      The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.

      And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.

      2 replies →

Actually, most users probably don't understand that this ridiculous policy takes extra effort to implement. They just blindly follow whatever MS prescribes and have long given up on making any sense of the digital world.

Can someone explain to me why the immediate perception is that this is some kind of bad, negative, evil thing? I don't understand it.

My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.

Their disclaimer already suggests they don't train on your photos.

  • This is Microsoft. They have a proven record of turning these toggles back on automatically without your consent.

    So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.

    And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.

    • A bug, or a dialog box that says "Windows has reviewed your photo settings and found possible issues. Press Accept now to reset settings to secure defaults"

      This is how my parents get Binged a few times per year

      10 replies →

    • There’s dark pattern psychology at play here. You are very likely to forget to do something that you can only do three times a year.

      The good news is that the power of this effect is lost when significant attention is placed on it as it is in this case.

    • Of course, the problem is that having your data available even for a day or so (let's say because you didn't read your e-mails that day) means your data will be trained on and used for M$'s purposes. They will have powerful server farms at the ready, holding your data at gunpoint, so that the moment they manage to fabricate fake consent they can process your data before you can even finish reading any late notification e-mail, if there is one.

      Someone show me any case where big tech has successfully removed such data from an already trained model, or, if they are unable to do that with the black boxes they create, removed the whole black box because a few people complained about their data being in it. No one can, because it has not happened. Just as ML models are used as laundering devices, they are also used as responsibility shields for big tech, who rake in the big money.

      This is M$'s real intention here. Let's not fool ourselves.

  • > to prevent wasted processing.

    If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.

    Also, if they are concerned about processing costs, the default for this should be off, NOT on. For any company that respects its customers' privacy, the default for any feature like this that uses customers' personal data should be OFF.

    > You are trying to reach really far out to find a plausible

    This behavior tallies up with other things MS have been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.

    Their spokesperson also avoided answering why they are doing this.

    On the other hand, your comment seems to be reaching really far to portray this as normal behavior.

  • That they limit opt-outs instead of opt-ins, when the opt-in is the only plausibly costly step, speaks for itself.

  • Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.

    They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.

    • > Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.

      Then you can guess Microsoft hopes to make more money from this feature than it costs them to run it.

    • This is irrelevant to opting out; nobody is forcing MS to scan the photos in the first place.

  • If it were that simple, there would be no practical reason to limit that scrub to three (and in such a confusion-inducing way). If I want to waste my time scrubbing, that should be up to me -- assuming it is indeed just scrubbing tagged data, because if anything should have been learned by now, it is that:

    worst possible reading of any given feature must be assumed to the detriment of the user and benefit of the company

    Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way damn it.

    • Just because you can't personally think of a reason why the number shall be 3, and no more than 4, accepting that thou hast first counted 1 and 2, it doesn't mean that the reason is unthinkable.

      I feel like you're way too emotionally invested in whatever this is to assess it without bias. I don't care what the emotions are around it, that's a marketing issue. I only care about the technical details in this case and there isn't anything about it in particular that concerns me.

      It's probably opt-out, because most users don't want to wait 24 hours for their photos to get analyzed when they just want to search for that dog photo from 15 years ago using their phone, because their dog just died and they want to share old photos with the family.

      This doesn't apply to your encrypted vault files. Throw your files in there if you don't want to toggle off any given processing option they might add 3 years from now.

      9 replies →

  • > Their disclaimer already suggests they don't train on your photos.

    We know all major GenAI companies trained extensively on illegally acquired material and hid this fact. Even the engineers felt it wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, unlike Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.

    • I am aware that many companies train on illegally acquired content and that bothers me too.

      There is that initial phase of potential fair use within reason, but the illegal acquisition is still a crime. Eventually after they've distilled things enough, it can become more firmly fair use.

      So they just take the legal risk and do it, because after enough training the legal challenges should be within an acceptable range.

      That makes sense for publicly released images, books and data. There exists some plausible deniability in sweeping up influences that have already been released into the world. Private data can contain unique things which the world has not seen yet, which becomes a bigger problem.

      Meta/Facebook? I would not and will never trust them. Microsoft? I still trust them a lot more than many other companies. The fact many people are even bothered by this, is because they actually use OneDrive. Why not Dropbox or Google Drive? I certainly trust OneDrive more than I trust Dropbox or Google Drive. That trust is not infinite, but it's there.

      If Microsoft abuses that trust in a truly critical way that resonates beyond the technically literate, that would not just hurt their end-user personal business, but it would hurt their B2B as well.

  • Then you would limit the number of times the feature can be turned on, not turned off. Turned off uses fewer resources, while turned on potentially keeps using their resources. Also, I doubt they actually remove data that requires processing to obtain; I wouldn't expect them to delete it until they're actually required to do so, especially considering the metadata obtained is likely insignificant in size compared to the average image.

  • It's an illusion of choice. For over a decade now, companies have either spammed you with modals/notifications until you give up and agree to privacy-compromising settings, or "accidentally" turned these on and pretended the change happened by mistake or due to a bug.

    The language used is deceptive and comes with "not now" or "later" options, never a permanent "no". Any disagreement is followed by some form of "we'll ask you again later" message.

    Companies are deliberately removing user's control over software by dark patterns to achieve their own goals.

    An advanced user may not want their data scanned, for whatever reason, and with this setting they cannot control the software, because the vendor decided it's just 3 times and afterwards the setting becomes permanently "on".

    And considering all the AI push within Windows and other Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.

    ---

    And I really don't know how else you can interpret this whole exchange with an unnamed "Microsoft publicist" when:

    > Microsoft's publicist chose not to answer this question

    and

    > We have nothing more to share at this time

    but as hostile behavior. Of course they won't admit they want your data, but they want it and will have it.

  • Your explanation would make sense if the limit was on turning the feature on. The limitation is on turning it off.

  • > So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.

    That would be a limit on how many times you can enable the setting, not preventing you from turning it off.

    • Both enabling and disabling incur a cost (because they delete the data, but then have to recreate it), but they wouldn't want to punish you for enabling it so it makes sense that the limitation is on the disabling side.

      3 replies →

  • > Can someone explain to me why the immediate perception is that this is some kind of bad, negative, evil thing? I don't understand it.

    I bet you "have nothing to hide".

    We work with computers. Everything that gets in the way of working wastes time and nerves.

  • It sounds like you have revoked their permission to tag (verb) the photos; why should this interfere with what tags (noun) the photos already have?

    But really, I know nothing about the process. I was going to make an analogy about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse. But then I realized that this is exactly the computing future these sorts of companies want, and my analogy is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.

  • > Their disclaimer already suggests they don't train on your photos.

    Did you read it all? They also suggest that they care about your privacy. /s

Facebook introducing photo tagging was when I exited Facebook.

This was pre-AI hype, perhaps 15 years ago. It seems Microsoft feels it is normalised. What's more, you are their product. It strikes me as great insecurity.

  • Honestly, I hated when they removed automatic photo tagging. It was handy as hell when uploading hundreds of pictures from a family event, which is about all I use it for.

> I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?

Presumably it can be used for filtering as well - find me all pictures of me with my dad, etc.

  • Sure, but if it were for your benefit, not theirs, they wouldn't force it on you.

    • Precisely. The logic could just as easily be "you can only turn this ON three times a year." You should be able to turn it off as many times as you want and no hidden counter should prevent you from doing so.

"You can only turn off this setting 3 times a year."

I look forward to getting a check from Microsoft for violating my privacy.

I live in a state with better-than-average online privacy laws, and scanning my face without my permission is a violation. I expect the class action lawyers are salivating at Microsoft's hubris.

I got $400 out of Facebook because it tagged me in the background of someone else's photo. Your turn, MS.

You seem to be implying that users won't accept this. But users have accepted all the other bullshit Microsoft has pulled so far. It genuinely baffles me why anyone would choose to use their products, yet many do, and keep making excuses for why alternatives are not viable.

Tip:

If you don't trust Microsoft but need to use OneDrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with OneDrive.
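
To be clear about what that buys you: these tools encrypt files locally before the sync client ever sees them, so Microsoft only ever stores ciphertext. A rough sketch of the same idea in Python (filenames and the OneDrive path are made-up examples, and Cryptomator itself uses its own vault format rather than Fernet):

    # Sketch: encrypt locally, sync only the ciphertext.
    # Requires the 'cryptography' package; paths here are hypothetical.
    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this key OUTSIDE the synced folder
    f = Fernet(key)

    photo = Path("family_photo.jpg").read_bytes()
    Path("OneDrive/family_photo.jpg.enc").write_bytes(f.encrypt(photo))

    # Later, on any machine that has the key:
    # original = f.decrypt(Path("OneDrive/family_photo.jpg.enc").read_bytes())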

  • It's rather annoying that high-entropy files (i.e. encrypted files, or files with unknown magic headers) in OneDrive trigger ransomware protection.

I assume this would be a ... let's call it a feature for now, so a feature not available in the EU due to GDPR violations.

My initial thought was that this is so they can scan for CSAM while pretending users have a choice not to have their privacy violated.

  • From my understanding, CSAM scanning is always considered a separate, always-on, mandatory subsystem in any cloud storage system.

    • Yes, any non-E2EE cloud storage system has strict scanning for CSAM. And it's based on perceptual hashes, not AI (because AI systems can be tricked with normal-looking adversarial images pretty easily).
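
      For anyone curious what "perceptual hash" means in practice, here is a toy average-hash sketch in Python using Pillow (a generic illustration only, not PhotoDNA or whatever Microsoft actually runs; the distance threshold is an arbitrary example):

          # Toy "average hash": visually similar images produce hashes
          # that differ in only a few bits.
          from PIL import Image

          def average_hash(path, size=8):
              # Downscale to greyscale, then set one bit per pixel depending
              # on whether it is brighter than the mean.
              img = Image.open(path).convert("L").resize((size, size))
              pixels = list(img.getdata())
              mean = sum(pixels) / len(pixels)
              bits = 0
              for p in pixels:
                  bits = (bits << 1) | (1 if p > mean else 0)
              return bits

          def hamming(a, b):
              # Number of differing bits between two hashes.
              return bin(a ^ b).count("1")

          # Images are flagged as matches when the distance is small, e.g.
          # hamming(average_hash("a.jpg"), average_hash("b.jpg")) <= 5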

      22 replies →