Comment by jchw

1 month ago

What I want is very simple: I want software that doesn't send anything to the Internet without some explicit intent first. All of that work to try to make this feature plausibly private is cool engineering work, and there's absolutely nothing wrong with implementing a feature like this, but it should absolutely be opt-in.

Trust in software will continue to erode until software stops treating end users and their data and resources (e.g. network connections) as the vendor's own playground. Local on-device data shouldn't be leaking out of radio interfaces unexpectedly, period. There should be a user intent tied to any feature where local data is sent out to the network.

So why didn't Apple just simply ask for user permission to enable this feature? My cynical opinion is that Apple knows some portion of users would instantly disallow this if prompted, but they feel they know better than those users. I don't like this attitude, and I suspect it is the same reason there is increasing discontent with opt-out telemetry, too.

This mindset is how we got those awful cookie banners.

Even more dialogs that most users will blindly tap "Allow" to will not fix the problem.

Society has collectively decided (spiritually) that it is okay to sign over data-access rights to third parties. Adding friction to this punishes the 98% of people in service of the 2% who aren't going to use these services anyway.

Sure, a more educated populace might tip the scales. But that's not reality, and the best UX reflects reality.

  • Nope, collective indifference to subpar user experiences has gotten us those lousy cookie banners.

    Web sites could legally use cookies for non-tracking purposes without cookie banners, but the fact that people have not stopped visiting sites despite the fugly click-through banners makes the banners a failure.

    All it takes is for 50% of the internet users to stop visiting web sites with them, and web site authors will stop tracking users with external cookies.

    • I read an article that argued people aren't prepared to pay for apps, so instead we get app-store-siloed, advert-supported crapware. And if it's not the apps, it's clickbait making fractional gains by being supported by ad networks. Some of us, but not all, recoil from that.

    • > All it takes is for 50% of the internet users to stop visiting web sites with them, and web site authors will stop tracking users with external cookies.

      How would content creators or news sites earn money then? The web is built on ads, and ads are built on tracking, since untargeted ads pay significantly less than targeted ones.

      2 replies →

  • No. A significant number of people care about privacy, which is why 1. Apple was targeting them with ads and 2. AdBlock did hurt Google's business. Also, caring is different from going to war (as in installing Linux and manually setting up a privacy shield + Tor + only transacting in Monero). Some people do that out of principle. Many people want the privacy features, but with the ease of use.

    • Define "significant," and do you have a source?

      I'd bet if you ask people "do you care about privacy?", close to 100% would say yes.

      If you ask "you have to give up privacy to be able to log in to your email automatically. Are you ok with that?", close to 100% would say yes.

      If you ask "we will give you this email service for free, but in exchange we get to squeeze every ounce of juice that we can out of it to persuade you to buy things you don't need. Are you ok with that?", close to 100% would say yes.

      It doesn't matter what people say they care about. Their actions say otherwise, if the privacy-friendly option is in any way less convenient.

      1 reply →

  • > This mindset is how we got those awful cookie banners.

    The only thing I've found awful is the mindset of the people implementing the banners.

    That you feel frustration over every company having a cookie banner is exactly the goal. The companies could decide that it isn't worth frustrating the user over something trivial like website analytics, which they could collect without having to show a cookie banner at all.

    But no, they want all the data, even though they most likely don't use all of it, and therefore are forced to show the cookie banner.

    Then you as a user see that banner, and instead of thinking "What a shitty company, that won't even do the minimal work to avoid having to show me the cookie banner", you end up thinking "What a bad law, forcing the company to inform me about what they do with my data". Sounds backwards, but you're not the first with this sentiment, so the companies' PR departments seem to have succeeded in redirecting the blame...

    • Seconded: and we need to have worthy competitors spring up without those bad practices and lousy cookie banners, and people to flock to them.

      Once that happens, the "originals" will feel the pressure.

      3 replies →

    • The non-use of collected data is the most ridiculous part of all this. I work with many companies that collect tons of data and only use a small percentage of it. All they're doing is building a bigger haystack.

      This is partially due to the fact that Google Analytics is free and the default for most website/app builders. But, still, it's ridiculous.

    • In my experience, most people that have semi or full decision-making control over this kind of thing have absolutely no idea if they even need cookie consent banners. They just fall for the marketing speak of every single SAAS product that sells cookie-consent/GDPR stuff and err on the side of caution. No one wants to be the guy that says: "hey, we're only logging X, Y and not Z. And GDPR says we need consent only if we log Z, so therefore we don't need cookie consent." For starters, they need a lawyer to tell them it's "A OK" to do it this way, and secondly it's plain old cheaper and a lot less political capital to just go with the herd on this. The cost of the banner is off-loaded outside of the company and, for the time being, the users don't seem to mind or care.

      This is why half the web has cookie-consent banners. No amount of developers who know the details screaming up the ladder will fix this. The emergent behavior put in place by the legal profession and corporate politics favors the SAAS companies that sell GDPR cookie banner products and libraries. Even if they're in the right, there is a greater-than-zero chance that if they do the wrong thing they'll go to court or be forced to defend themselves. And even if that defense is successful, the lawyers still need to be paid, and the company will look at "that fucking moron Joe from the website department" who caused all their hassles and countless hours of lost productivity as a result of being a "smart ass".

      1 reply →

    • People think in terms of what is inconveniencing them directly. Great examples are when consumers yell at low level workers when a company has horrible policies that run back to cost cutting...

      or union workers strike against Imaginary Mail Service Corp. because they are being killed on the job, and people (consumers) get angry at the workers because their package won't show up on time (or the railways aren't running, etc...) instead of getting mad at the company inflicting that damage on other people...

      or when [imaginary country] puts sanctions on [other poorer country] the people of that country blame the government in power instead of the people directly inflicting harm on them.

      I'm not sure why this is the case, but we have been conditioned to be resistant to the inconvenience and not the direct cause. Maybe it's because the direct cause tends to be a faceless, nameless entity that directly benefits from not being the target of ire.

    • It’s odd that you think the people implementing the banners want them so they can get more data. They want them because they provide a shield from litigation. I don’t know about you, but in the past year, most of my ads on Facebook are from law firms with headlines like “have you browsed (insert random minor e-commerce site) in the past two years? Your data may have been shared. You may be entitled to compensation.” If I’m a random mom and pop e-commerce site and I do not add a cookie banner, and I use any form of advertising at all, then I am opening myself up to a very expensive lawsuit - and attorneys are actively recruiting randos to serve as plaintiffs despite them never being harmed by “data collection.”

      It’s that simple. That’s the situation with CCPA. Not sure the exact form that GDPR penalties take because I’m not European. But it’s not a complicated issue. you have to display some stupid consent thing if you’re going to have the code that you’re required to have in order to buy ads which take people to your website.

      Note that plenty of these cookie banner products don’t actually work right, because they’re quite tricky to configure correctly, as they’re attempting to solve a problem within the webpage sandbox that should be solved in the browser settings (and could easily be solved there even today by setting it to discard cookies at close of browser). However, the legal assistants or interns at the law firm pick their victims based on who isn’t showing an obvious consent screen. When they see one, it’s likely that they will move onto the next victim because it’s much easier to prove violation of the law if they didn’t even bother to put up a cookie banner. A cookie banner that doesn’t work correctly is pretty easy to claim as a mistake.

      2 replies →

  • Actually, if my mindset were leading, we wouldn't have cookie consent banners because we would've just banned non-essential tracking altogether.

    • Now we just have to define what’s “essential” and how to identify it, across states, countries and jurisdictions. Should be easy. ;)

      2 replies →

  • With cookie banners, legislation said that every website needed to ask for consent -- a thousand sites, a thousand banners.

    Operating system level controls, though, provide a single control plane. One can very easily imagine OS-level toggles per application of:

    [No Internet, No Internet outside your own app-sandbox, Ask me every time, Everything is allowed].

    No opt in from apps required -- they might break if the network is disabled, but the user is still in control of their data.
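    As a toy sketch (the names and the decision function are mine, not any real OS API), those four per-app toggles reduce to a tiny policy check an OS network filter could apply:

```python
from enum import Enum

class NetPolicy(Enum):
    # the four hypothetical OS-level toggles listed above
    NO_INTERNET = "no internet"
    SANDBOX_ONLY = "no internet outside your own app-sandbox"
    ASK = "ask me every time"
    ALLOW = "everything is allowed"

def may_connect(policy, dest_in_sandbox, ask_user=lambda: False):
    """Toy per-app decision an OS network filter might apply.

    ask_user stands in for surfacing a prompt; it defaults to deny.
    """
    if policy is NetPolicy.NO_INTERNET:
        return False
    if policy is NetPolicy.SANDBOX_ONLY:
        return dest_in_sandbox
    if policy is NetPolicy.ASK:
        return ask_user()
    return True  # NetPolicy.ALLOW

print(may_connect(NetPolicy.SANDBOX_ONLY, dest_in_sandbox=False))  # → False
```

    The point is that the complexity lives in one place (the OS control plane), not in a per-app consent dialog.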

  • I think society has collectively "decided" in the same way they "decided" smoking in a restaurant is great.

    There's little to no conscious choice in this. But there is a lot of money in this. Like... a LOT of money. If I were trying to influence society to be okay with it, it would be a no-brainer.

    So, to me, it's obvious that society has been brainwashed and propagandized to accept it. But doing so generates hundreds of billions if not trillions of dollars. How, exactly, such manipulation is done is unknown to me. Probably meticulously, over the course of decades if not centuries. I know that the concept of privacy during the writing of the constitution was much, much more stringent than it was in the 70s, which is much more stringent than it is today.

    But, I am very confident it is happening.

  • I think it's clear that users should be able to have their own agents that make these decisions. If you want an agent that always defers to you and asks about Internet access, great. If you want one that accepts it all great. If you want one that uses some fancy logic, great.

  • uBlock Origin's annoyances filters take care of the cookie banners, giving the best of both worlds: no banners and a minimal amount of tracking.

    (The "I don't care about cookies" extension is similarly effective, but since I'm already running uBlock Origin, it makes more sense to me to enable its filter.)

    • > uBlock Origin's annoyances filters take care of the cookie banners, giving the best of both worlds: no banners and a minimal amount of tracking.

      Word of caution though, that might silently break some websites. I've lost count of the times some HTTP request silently failed because you weren't meant to be able to get some part of the website, without first rejecting/accepting the 3rd party cookies.

      Usually, disabling uBlock, rejecting/accepting the cookies and then enabling it again solves the problem. But the first time it happened, it kind of caught me by surprise, because why in holy hell would you validate those somehow?!

      1 reply →

  • Why does it have to be more friction?

    Users had a global way to signal “do not track me” in their browser. I don’t know why regulators didn’t mandate respecting that instead of cookie consent popups.

    Apple IDs could easily have global settings about what you are comfortable with, and then have their apps respect them.

  • I’m spitballing here, but wouldn’t another way to handle it be to return dummy / null responses by redirecting telemetry calls to something that will do so?

    This would have the added benefit of being configurable, and it would work on a bunch of apps at once instead of just one at a time.
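    A minimal sketch of such a dummy responder, assuming you redirect the telemetry hostnames to it yourself (e.g. via /etc/hosts or a local DNS rule; that redirection step is the hypothetical part, the server itself is just the Python stdlib):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class NullResponder(BaseHTTPRequestHandler):
    """Answers every request with an empty 204, so redirected telemetry
    calls appear to succeed without any data leaving the machine."""

    def _null(self):
        self.send_response(204)  # "No Content": a harmless dummy reply
        self.end_headers()

    # accept any common method with the same empty reply
    do_GET = do_POST = do_PUT = _null

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

# bind to an ephemeral local port and serve in the background
sink = HTTPServer(("127.0.0.1", 0), NullResponder)
threading.Thread(target=sink.serve_forever, daemon=True).start()
print("sink listening on port", sink.server_address[1])
```

    Anything pointed at that port gets a successful-looking but empty answer, which is roughly the behavior ad-blocking DNS sinkholes rely on.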

  • I use Firefox Focus on Android, and Firefox with uBO and other extensions.

    On desktop and in the Firefox app, I only browse in private browsing, so cookies are mostly irrelevant, as the session ends as soon as all windows close.

  • I always click disallow.

    And if you design software that uses tracking and what not. Go fuck yourself.

  • Not really. A mandatory opt-in option at the browser level would be the correct way to do it, but legislation forced instead those cookie banners onto the webpage.

    • No, legislation (the GDPR) doesn’t say anything about cookie pop ups. It says that private data (of any kind) can only be used with opt-in consent, given freely, with no strings attached, with the ability to be withdrawn, that it will be kept secure, deleted when not needed for the original purpose, etc. All very reasonable stuff. Tracking cookies are affected, but the legislation covers all private data (IP, email address, your location, etc) … And if browsers agreed on a standard to get and withdraw opt-in consent, it would be compatible with what the legislation requires.

Opt in doesn't work, it never did.

The vast majority (>95%) of users do not understand what those pop-ups say, seem fundamentally incapable of reading them, and either always accept, always reject, or always click the more visually appealing button.

Try observing a family member who is not in tech and not in the professional managerial class, and ask them what pop-up they just dismissed and why. It's one of the best lessons in the interactions between tech and privacy you can get.

  • Well, then >95% of users won't be using $FEATURE. Simple as that. The fact that users for some reason do not consent to $FEATURE the way corporations/shareholders would want them to does not give anyone the right to stop asking for consent in the first place.

  • When looked at from another angle, opt-in does work.

    By adding that extra step forcing users to be aware of (and optionally decline) the vendor's collection of personal data, it adds a disincentive to collecting the data in the first place.

    In other words, opt-in can be thought of as a way to encourage vendors to change their behaviour. Consumers who don't see an opt-in will eventually learn that the vendor, unlike others, isn't collecting their information, and will trust the product more.

  • As much as I hate cookie consent dialogs everywhere, the fact is that it is clearly working. Some companies are going as far as to force users to pay money in order to be able to opt out of data collection. If it wasn't so cumbersome to opt-out, I reckon the numbers for opt-out would be even higher. And if companies weren't so concerned about the small portion of users that opt-out, they wouldn't have invested in finding so many different dark patterns to make it hard.

    It is definitely true that most users don't know what they're opting out of, they just understand that they have basically nothing to gain anyway, so why opt-in?

    But actually, that's totally fine and working as intended. To be fair to the end user, Apple has done something extremely complicated here, and it's going to be extremely hard for anyone except an expert to understand it. A privacy-conscious user could make the best call by just opting out of any of these features. An everyday user might simply choose not to opt in because they don't really care about the feature in the first place. I suspect that's the real reason many people opt out: you don't need to understand privacy risks to know you don't give a shit about the feature anyway.

  • Opt in works!

    If you do not want it, it works (and that is >90% of people, who never asked for it and never requested it, but had these 'enriched' lies and this exposure to corporate greed forced upon them).

  • > Try observing a family member who is not in tech

    This is everyone, it is universal, I've met many people "in tech" who also click the most "visually appealing" button because they are trying to dismiss everything in their way to get to the action they are trying to complete.

    The microcosm that is HN users might not dismiss things at the 95%+ rate, but that is because we are fed, every day, stories of how our data is being misappropriated at every level. Outside of these tiny communities, I think even people in tech are just clicking the pretty button and making the dialog go away.

  • The issue really isn't opt-in itself but how the option is presented.

    I agree that a lot of people don't read, or attempt to understand the UI being presented to them in any meaningful manner. It really is frustrating seeing that happen.

    But, think about the "colorful" option you briefly mentioned. Dark patterns have promoted this kind of behaviour from popups. The whole interaction pattern has been forever tainted. You need to present it in another way.

  • Informed consent is sexy. In the Apple ecosystem, we’re literally paying customers. This is ridiculous. This line you parroted is ridiculous. This needs to stop.

  • [flagged]

    • Except that, still, to this day, most sexual consent is assumed, not explicit, even in the highest-brow circles where most people are pro-explicit-sexual-consent.

      The same way, most tech privacy consent is assumed, not explicit. Users dismiss popups because they want to use the app and don't care what you do with the data. Maybe later they will care, but not in the moment...

      2 replies →

> So why didn't Apple just simply ask for user permission to enable this feature?

That’s an interesting question. Something to consider: iOS Photos has allowed you to search for photos using the address the photo was taken at. To do that requires the Photos app to take the lat/long of a photo's location, and do a reverse-geo lookup to get a human-understandable address, something that pretty much always involves querying a global reverse-geo service.

Do you consider this feature to be a violation of your privacy, requiring an opt-in? If not, then how is a reverse-geo lookup service more private than a landmark lookup service?

  • > To do that requires the Photos app to take the lat/long of a photo's location, and do a reverse-geo lookup to get a human-understandable address.

    It seems trivially possible to do this in a more privacy preserving way: geocode the search query and filter photos locally.

    No idea how Apple implements it though.
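    A rough sketch of that approach (the sample coordinates and the 5 km radius are mine; a real implementation would still need one geocoding query for the search string, or an on-device geocoder):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two points, in kilometres
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def photos_near(photos, query_lat, query_lon, radius_km=5.0):
    # the filtering step runs entirely on device; only the search
    # string (not any photo data) would ever need to be geocoded
    return [p for p in photos
            if haversine_km(p["lat"], p["lon"], query_lat, query_lon) <= radius_km]

photos = [
    {"name": "tower.jpg", "lat": 48.8584, "lon": 2.2945},     # Eiffel Tower
    {"name": "bridge.jpg", "lat": 37.8199, "lon": -122.4783}, # Golden Gate
]
# pretend the query "Paris" geocoded to (48.8566, 2.3522)
print([p["name"] for p in photos_near(photos, 48.8566, 2.3522)])  # → ['tower.jpg']
```

    The privacy win is that the service only ever sees the query text, never which photos (if any) matched it.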

  • It's a complete violation if it's a new or changed setting whose default differs from the prior state, in which this wasn't possible at all.

    Something to consider: location is already geo-encoded into photos and doesn't need to be uploaded to Apple's servers. Searching by location can be done locally, on device.

    Apple even goes as far as to offer a setting that lets the user share photos with the location data stripped out.

    Offering a new feature should mean opt-in.

    Unfortunately, against my better wishes, this only erodes trust and confidence in Apple: if this is happening visibly, what could be happening that is unknown?

  • > Do you consider this feature to be a violation of your privacy, requiring an opt-in?

    I suppose in some sense it is, as it is a reverse-geo lookup service, but it's also nowhere near the front line in the location-privacy war.

    Cell phone providers basically know your exact position at all times when you have your phone on you, credit card companies know basically everything, cars track driving directly, etc. etc.

    I can see why some people would be up in arms but for me this one doesn't feel like missing the forest for the trees, it feels like missing the forest for the leaves.

    • I very much agree with your position. There are legitimate questions to be asked about this feature being opt-in, although we may find that you implicitly opt-in if you enable Apple Intelligence or similar.

      But the argument that this specific feature represents some new beachhead in some great war against privacy strikes me as little more than clickbait hyperbole. If Apple really wanted to track people’s locations, it would be trivial for them to do so, without all this cloak-and-dagger nonsense people seem to come up with. Equally, if a state entity wanted to track your location (or even track people’s locations at scale), there’s a myriad of trivially easy ways for it to do so, without resorting to forcing Apple to spy on its customers via a complex computer-vision landmark-lookup system.

  • You’re right. But anyone in IT or tech, thinking deeply about the raw facts, knows it always boils down to trust, not technology.

    The interesting thing is that Apple has created a cathedral of seemingly objective sexy technical details that feel like security. But since it’s all trust, feelings matter!

    So my answer is, if it feels like a privacy violation, it is. Your technical comparison will be more persuasive if you present it in Computer Modern in a white paper, or if you are an important Substack author or reply guy, or maybe take a cue from the shawarma guy on Valencia Street and do a hunger strike while comparing two ways to get location info.

    • Apple chose to implement things like OHTTP and homomorphic encryption when they could easily have done without it. Doesn't that count for something?

      2 replies →

      > So my answer is, if it feels like a privacy violation, it is. Your technical comparison will be more persuasive if you present it in Computer Modern in a white paper, or if you are an important Substack author or reply guy, or maybe take a cue from the shawarma guy on Valencia Street and do a hunger strike while comparing two ways to get location info.

      They’re broadly similar services, both provided by the same entity. Either you trust that entity or you don’t. You can’t simultaneously be happy with an older, less private feature that can’t be disabled, while criticising the same entity for creating a new feature (that carries all the same privacy risks) that’s technically more private and can be completely disabled.

      > The interesting thing is that Apple has created a cathedral of seemingly objective sexy technical details that feel like security. But since it’s all trust, feelings matter!

      This is utterly irrelevant, you’re basically making my point for me. As above, either you do or do not trust Apple to provide these services. The implementation is kinda irrelevant. I’m simply asking people to be a little more introspective, and take a little more time to consider their position, before they start yelling from the rooftops that this new feature represents some great privacy deception.

  • This would work only if you've already given the Camera app permission to geotag your photos, which I haven't, so it may be a nonissue.

    • It works if you use the Photos app to look at any image with a geo EXIF tag.

      But thank you for one more demonstration that even the HN crowd can’t reliably give or deny informed consent here.

      11 replies →

And the result is https://chromewebstore.google.com/detail/i-still-dont-care-a...

Personally I do not believe these popups serve any purpose, because I ultimately cannot (at least not in any reasonable way) prove that the website is acting in good faith. Asking me whether the app should phone home doesn't really guarantee that my pressing "no" will actually prevent the tracking.

I am continuously surprised at how we convince ourselves privacy at scale will work with a varying number of yes/no buttons. There are two ways to trust software: 1. be naive and check whether "privacy first" is written somewhere, or 2. understand the software you are running, down to the instructions it is able to execute.

The permission popups also lack granularity. When giving access to my contact list, which contacts does it actually access? Can I give access only to contact names and not phone numbers? Is it for offline or online processing? If online, should we have another popup for internet access? But then, can I filter what kind of internet stuff it does? You go down the rabbit hole and eventually end up with a Turing-complete permission system, and if you don't, your "privacy" will have some hole in it.

Even with opt-in a vendor will keep harassing the user until they tap "yes" in an inattentive moment.

And I've been in situations where I noticed a box was checked that I'm sure I didn't check. I want to turn these things off and throw away the key. But of course the vendor will never allow me to. Therefore I use Linux.

  • > I want to turn these things off and throw away the key. But of course the vendor will never allow me to. Therefore I use Linux.

    I hate to break it to you, but these things happen in Linux, too.

    It's not the operating system that's the problem. It's that the tech industry has normalized greed.

    • It is true that there are not absolutely zero instances of telemetry or "phoning home" in Linux, but Desktop Linux is not a similar experience to Windows or macOS in this regard, and it isn't approaching that point, either. You can tcpdump a clean install of Debian or what-have-you and figure out all of what's going on with network traffic. Making it whisper-quiet typically isn't a huge endeavor either; you usually just need to disable some noisy local-networking features.

      Try Wiresharking a fresh Windows install, after you've unchecked all of the privacy options and run some settings through Shutup10 or whatever. There's still so much crap going everywhere. It's hard to even stop Windows from sending the text you type into the start menu back to Microsoft: there's no option, you need to mess with Group Policy and hope they don't change the feature enough to need a different policy later to disable it again. macOS is probably still better (I haven't checked in a while), but there are still some features that basically can't be disabled and that leak information about what you're doing to Apple. For example, you can't stop macOS from phoning home to check OCSP status when launching software: there's no option to disable that.

      The reason why this is the case is that while the tech industry is rotten, the Linux desktop isn't really directly owned by a tech-industry company. There are a few tech companies that work on Linux desktop things, but most of them only work on it as a complement to other things they do.

      Distributions may even take it upon themselves to "fix" applications that have unwanted features. Debian is infamous for disabling the KeepassXC networking features, like fetching favicons and the browser integration, features a lot of users actually did want.

      2 replies →

    • Yes, but if it happens at least there is no greedy intent and it will be corrected by the community.

  • For what it's worth, I use Linux, too, but as far as phones go, stock phones that run Linux suffer from too many reliability and stability issues for me to daily drive them. I actually did try. So, as far as phones go, I'm stuck with the Android/iOS duopoly like anyone else.

> I want software that doesn't send anything to the Internet without some explicit intent first

I want this too, but when even the two most popular base OSes don't adhere to this, I feel like it's an impossible uphill battle to want the software running on those platforms to behave like that.

"Local-first" just isn't in their vocabulary or best interest, considering the environment they act in today, sadly.

Developers of software want, and feel entitled to, the data on your computer, both about your usage within the app, as well as things you do outside of the app (such as where you go and what you buy).

Software will continue to spy on people so long as it is not technically prohibited or banned.

  • I don’t.

    I highly suggest everyone else does their darnedest not to either. Don’t do it in your own software. Refuse and push back against it at $dayJob.

    I realize that my small contribution as a privacy and data-respecting SWE is extremely small, but if we all push back against the MBAs telling us to do these things, the world will be better off.

    • So long as a significant portion of companies harvest user data to provide “free” services, no well-meaning business can compete with their paid apps. Not in a real way.

      It’s the prisoner’s dilemma, but one vs many instead of one vs one. So long as someone defects, everyone either defects or goes out of business.

      It’s the same as with unethical supply chains. A business using slave labour in their supply chain will out-compete all businesses that don’t. So well-meaning business owners can’t really switch to better supply chains as it is the same as just dissolving their business there and then.

      Only universal regulation can fix this. If everyone is forced not to defect, we can win the prisoners dilemma. But so long as even 10% of big tech defects and creates this extremely lucrative business of personal data trade that kills every company not participating, we will continue to participate more and more.

      Read Meditations on Moloch for more examples.

    • Why do you assume it's MBA driven? As a software developer, I like knowing when my software crashes so that I can fix it. I don't care or even want to know who you are, your IP address, or anything that could be linked back to you in any way, but I can't fix it if I don't know that it's crashing in the first place.

      10 replies →

  • In the OP article it seems more like users demand to search their photos by text, and Apple has put in a huge effort to enable that without gaining access to your photos.

    • Seems?

      It's important to read, and not skim.

      This is extracting location-based data, not content-based data from an image (like searching for all photos with a cup in them).

      "Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. "

  • Years ago I developed for iOS as an employee. In my case, it was the product managers that wanted the data. I saw it as a pattern and I hated it. I made my plans to leave that space.

  • Only recently. If you've grown up in a world only knowing this, that might be part of why it doesn't stand out as much.

> So why didn't Apple just simply ask for user permission to enable this feature? My cynical opinion is that Apple knows some portion of users would instantly disallow this if prompted, but they feel they know better than those users. I don't like this attitude, and I suspect it is the same reason there is increasing discontent with opt-out telemetry, too.

I'm just not sure why Apple needed to activate this by default, other than to avoid drawing attention to it... as if doing so were more important than the user's right to the privacy they believe they are purchasing with their device.

I don't care what convenience I'm being offered or sold. If the user has decided what they want, and is paying a premium to Apple for it, that must be respected.

This makes me wonder if there is an app that can monitor all settings in an iPhone both for changes between updates, and also new features being set by default to be enabled that compromise the user's known wishes.

All this AI, and this is still overlooked.

I'm hoping it was an oversight.

Consent for complex issues is a cop out for addressing privacy concerns. Users will accept or reject these things without any understanding of what they are doing either way. Apple seems to have taken a middle ground where they de-risked the process and made it a default.

This is a “look at me, Apple bad” story that harvests attention. It sets the premise that this is an unknown and undocumented process, then proceeds to explain it from Apple documentation and published papers.

"What I want is very simple: I want software that doesn't send anything to the Internet without some explicit intent first."

It exists. I use such software everyday. For example, I am submitting this comment using a text-only browser that does not auto-load resources.

But this type of smaller, simpler software is not popular.

For example, everyone commenting in this thread is likely using a browser that auto-loads resources to submit their comments. HN is more or less a text-only website and this "feature" is not technically necessary for submitting comments. All so-called "modern" web browsers send requests to the internet without explicit intent first. In addition to auto-loading resources, these browsers automatically run Javascript which often sends further requests never intended by the web user.

Brand new Apple computers now send packets to the internet as soon as the owner plugs them in for the first time. This may enable tracking and/or data collection. Apple proponents would likely argue "convenience" is the goal. This might be true. But the goal is not the issue. The issue is how much the computer owner is allowed to control the computer they buy. Some owners might prefer that the computer should not automatically send packets to remote Apple servers. Often it is not even possible to disable this behaviour. Computer purchasers never asked for these "convenience" features. Like the subject of this submission, Apple Photos, these are Apple's decisions. The computer owner is not allowed to make decisions about whether to enable or disable "convenience" features.

As the court acknowledged in its opinion in US v Google, default settings are significant. In this case, it is more than a default setting. It is something the owner cannot change.

>I want software that doesn't send anything to the Internet without some explicit intent first.

I too want exactly that, which got me thinking, that's what firewalls are for! DROP OUTBOUND by default, explicit allow per-app.

On Android, iptables-based firewalls require root, which wasn't a good option for me (no TWRP support for my device), so after some searching I stumbled upon NetGuard: open source and rootless, it implements a firewall using Android's VPN service (you configure Android to route all traffic through this "VPN", which is actually a local firewall). The downside is that you can't use an actual VPN at the same time (except with some complicated setup involving work profiles and other apps). I've been using it for a couple of weeks and am very satisfied. I noticed apps phoning home that I did not want to allow, like a scanning app I had used to scan private documents in the past; perhaps an oversight on my part.
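For the curious, the default-deny approach described above can be sketched concretely. On Linux-based systems like Android, every app runs under its own uid, so "allow per-app" becomes a uid-owner match in iptables. The snippet below only prints hypothetical rules rather than applying them (the uids and app names are made up, and actually running these commands requires root, which is exactly why rootless tools like NetGuard emulate a firewall via the VPN service instead):

```python
# Sketch of a default-deny outbound firewall, expressed as iptables commands.
# On Linux/Android every app runs under its own uid, so a per-app allow
# becomes "-m owner --uid-owner <uid>". Uids and names below are hypothetical.
# The commands are printed, not executed: applying them requires root.

ALLOWED_APPS = {10083: "browser", 10112: "email client"}  # hypothetical uids

rules = ["iptables -P OUTPUT DROP"]                  # default: drop all outbound
rules.append("iptables -A OUTPUT -o lo -j ACCEPT")   # keep loopback traffic alive
for uid, name in ALLOWED_APPS.items():
    # explicit per-app allow, matched on the app's uid
    rules.append(f"iptables -A OUTPUT -m owner --uid-owner {uid} -j ACCEPT  # {name}")

for rule in rules:
    print(rule)
```

The key design point is that the policy is default-deny: anything not explicitly allowed never reaches the network, which is the inverse of how stock mobile OSes behave.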

Use a rooted Android phone with AFWall+ installed, with default block rules. Even just LineageOS allows you to set granular network settings per app, though it's not preemptive like AFWall.

  • Can't run various banking apps and can't run PagerDuty on a rooted device due to Google Play API Integrity Check. The ecosystem is closing in on any options to not send telemetry, and Google is leading the way in the restrictions on Freedom.

    • > Google is leading the way in the restrictions on Freedom.

      They're the ones allowing you to root your phone or flash a custom ROM in the first place, so that's not a fair characterisation. Banks have a vested interest in reducing fraud, and a rooted Android might allow for easier and additional attack vectors into their apps and thus systems.

    • Naw, using Magisk and its Zygisk denylist, it usually works. I haven't been blocked by an app yet, including PagerDuty.

IMO, there should be 3 categories of users, and they can choose a system wide setting that applies across all their apps and settings:

* Bulletproof

* Privacy Conscious

* Normal (recommended)

That way users are roughly opting in and opting out in a way that aligns with their desires

> Trust in software will continue to erode

> there is an increasing discontent growing towards opt-out telemetry

Really? That's news to me. What I observed is people giving up more and more privacy every year (or "delegating" their privacy to tech giants).

  • Absolutely! The important bit is that users have no choice in the matter. They're pushed into agreeing to whatever ToS and updating to whatever software version.

    The backlash against Microsoft's Windows Recall should serve as a good indicator of just how deeply people have grown to distrust tech companies. But Microsoft can keep turning the screws, and don't you know it, a couple years from now everyone will be running Windows 11 anyways.

    It's the same for Android. If you really want your Android phone to be truly private, you can root it and flash a custom ROM with microG and an application firewall. Sounds good! And now you've lost access to banking apps, NFC payments, games, and a myriad of other things, because your device no longer passes SafetyNet checks. You can play a cat-and-mouse game with breaking said checks, but the clock is ticking, as remote attestation will remove what remains of your agency as soon as possible. And all of that for a notably worse experience with fewer features and more problems.

    (Sidenote: I think banking apps requiring SafetyNet passing is the dumbest thing on planet earth. You guys know I can just sign into the website with my mobile browser anyways, right? You aren't winning anything here.)

    But most users are never going to do that. Most users will boot into their stock ROM, where data is siphoned by default and you have to agree to more data siphoning to use basic features. Every year, users will continue to give up every last bit of agency and privacy for as long as tech companies are allowed to continue to take it.

    • > Absolutely! The important bit is that users have no choice in the matter.

      If people don’t have a choice, then they’re not giving up privacy, like the person you’re agreeing with said, it’s being taken away.

      1 reply →

    • If you accept Android as an option, then GrapheneOS probably checks a lot of your boxes on an OS level. GrapheneOS developers sit between you and Google and make sure that shit like this isn't introduced without the user's knowledge. They actively strip out crap that goes against users' interests and add features that empower us.

      I find that the popular apps for basic operation from F-Droid do a very good job of not screwing with the user either. I'm talking about DAVx⁵, Etar, Fossify Gallery, K-9/Thunderbird, AntennaPod etc. No nonsense software that does what I want and nothing more.

      I've been running deGoogled Android devices for over a decade now for private use and I've been given Apple devices from work during all those years. I still find the iOS devices to be a terrible computing experience. There's a feeling of being reduced to a mere consumer.

      GrapheneOS is the best mobile OS I've ever tried. If you get a Pixel device, it's dead simple to install via your desktop web browser[1] and has been zero maintenance. Really!

      [1] https://grapheneos.org/install/web

      13 replies →

    • Completely agree, just one minor point:

      > I think banking apps requiring SafetyNet passing is the dumbest thing on planet earth. You guys know I can just sign into the website with my mobile browser anyways, right?

      No, you can't. For logging in, you need a mobile app used as an authentication token. Do not pass go, do not collect $200... (The current state of affairs in Czechia, at least; you still _do_ have the option of not using the app _for now_ in most banks, using password + SMS OTP, but you need to pay for each SMS and there is significant pressure to migrate away from it. The option will probably be removed completely in the future.)

      2 replies →

    • fwiw, on Android, you can install a custom certificate and have an app like AdGuard go beyond just DNS filtering, and actually filter traffic down to a request-content level. No root required. (iOS forbids this without jailbreaking though :/)

      1 reply →

  • One of the reasons is that telemetry and backdoors are invisible. If the phone showed a message like "sending your data to Cupertino", then users would be better aware of this. Sadly, I doubt there will ever be a legal requirement to do this.

    • Anything is possible through lobbying for regulation and policy.

      It's the same way that bills come out that crack down on people's privacy.

      Only people don't always know they can demand the opposite so it never gets messed with again, and instead get roped into fatigue of reacting to technology bills written by non-technology people.

  • Apple seems to be the best option here too. They seem to have put in a huge effort to provide features people demand (searching by landmarks in this case) without having to share your private data.

    It would have been so much easier for them to just send the whole photo as is to a server and process it remotely like Google does.

  • > What I observed is people giving up more and more privacy every year (or "delegating" their privacy to tech giants).

    Are people giving up their privacy? Looks to me it’s being taken without consent, via enormous legalese and techniques of exhaustion.

    • Totally.

      Individuals who grew up primarily as consumers of tech have also consented to a relationship in which they themselves are consumed, bought, and sold as the product.

      Those who grew up primarily as creators with tech, have often experienced the difference.

      This creates a really big blind spot potentially.

  • Whether or not people in general are aware of this issue and care about it, I think it's pretty disingenuous to characterize people as willfully giving up their privacy because they own a smartphone. When stuff like this is happening on both iOS and Android, it's not feasible to avoid it without opting out of having a smartphone entirely, and representing it as a binary choice of "choose privacy or choose not to care about privacy" is counterproductive, condescending, and a huge oversimplification.

    • Maybe not privacy in general but this is about location privacy.

      If you have a smartphone in your pocket, then, for better or worse, you're carrying a location tracker chip on your person because that's how they all work. The cell phone company needs to know where to send/get data, if nothing else.

      It seems disingenuous to put a tracker chip in your pocket and be up in arms that someone knows your location.

      Unless this kerfuffle is only about Apple.

  • Come on, being forced to give up privacy is eroding privacy and increasing discontent.

    "Forced" can also mean the whole no-privacy-by-default and dark patterns everywhere.

  • Do you honestly believe people understand what they’re doing?

    Nowhere in marketing materials or what passes for documentation on iOS do we see an explanation of the risks and what it means for one's identity to be sold off to data brokers. It's all "our 950 partners to enhance your experience" bs.

The shorter answer is that it's your data, but it's their service. If you want privacy, you should use your own service.

And for how cheap and trivial syncing photos is, any mandatory or exclusive integration of services between app/platform/device vendors needs to be scrutinized heavily by the FTC.

> Trust in software will continue to erode until software stops treating end users and their data and resources (e.g. network connections) as the vendor's own playground. Local on-device data shouldn't be leaking out of radio interfaces unexpectedly, period. There should be a user intent tied to any feature where local data is sent out to the network.

I find that there is a specific niche group of people who care very much about these things. But the rest of the world doesn't. They don't want to care about all these little settings; they're just, "Oh cool, it knows it's the Eiffel Tower." The only people becoming distrustful of software are a specific niche group, and I highly suspect they're always going to be mad about something.

> So why didn't Apple just simply ask for user permission to enable this feature?

Because most people don't even care to look at the new features of a software update. And let's be serious, that includes most of us here; otherwise this feature would have been obvious. So why create a feature that no one will use? It doesn't make sense. So you enable it for everyone, and those who don't want it opt out.

>> Trust in software will continue to erode until software stops treating end users and their data and resources

Trust in closed-source proprietary software. In other words: trust in corporate entities. Trust in open-source software is going strong.

  • Not a given though. Ubuntu phones home a lot by default.

    Try disabling the motd stuff - it's quite pernicious by design.

    And removing the ubuntu-advantage package disables the desktop. lol.

I want a hardware mic switch. We are an iHouse with one exception, and that's a Shield TV that is currently out of order because I want to reset it and haven't found time in, oh..., weeks. Anyway, out of the blue one of the kids asked about Turkish delights and wondered where the name came from. SO and I facepalm, then explain. Not an hour later she gets something in her Facebook feed: 15 interesting facts about Turkey.

This is just too much of a coincidence. I know, I know, this "... isn't Apple's fault" blah blah. Bullshit it's not. They can't have it both ways where they say their app store process is great and then they allow this shit.

So you don't want a browser?

  • A browser (without telemetry) is surely a good definition of something that doesn't initiate network calls before user intent

  • Browsing the Internet is explicit intent! Some of the stuff enabled by JavaScript definitely toes the line, but at the very least that's not really the direct fault of the browser.

Most people nowadays use Web-based apps, which don't even need to ask anything; who knows what the server side is doing.

Which is kind of ironic in places like HN, where so many advocate for Chromebooks.

  • Your location data, encoded in photos you take with the phone's camera, being extracted by Apple is what this article is about.

    How many people use a web based camera or web based photo album app?

Would you mind giving an example of something bad that could happen to somebody as a result of Apple sending this data to itself? Something concrete, where the harm would be realized, for example somebody being hurt physically, emotionally, psychologically, economically, etc

  • Once upon a time, I worked for a pretty big company (fortune 500ish) and had access to production data. When a colleague didn't show up at work as they were expected, I looked up their location in our tracking database. They were in the wrong country -- but I can't finish this story here.

    Needless to say, if an Apple employee wanted to stalk someone (say an abusive partner, creep, whatever), the fact that this stuff phones home means that the employee can deduce where they are located. I've heard stories from the early days of Facebook about employees reading partner's Facebook messages, back before they took that kind of stuff seriously.

    People work at these places, and not all people are good.

    • Your first story sounds like a good outcome.

      I doubt Apple employees could deduce location from the uploaded data. Having worked at FB I know that doing something like that would very quickly get you fired post 2016

      2 replies →

  • Easy, consider a parent taking pictures of their kid's genitals to send to their doctor to investigate a medical condition, the pictures getting flagged and reported to the authorities as being child pornography by an automated enforcement algorithm, leading to a 10-month criminal investigation of the parent. This exact thing happened with Google's algorithm using AI to hunt for CP[1], so it isn't hard to imagine that it could happen with Apple software, too.

    [1] https://www.koffellaw.com/blog/google-ai-technology-flags-da...

> and there's absolutely nothing wrong with implementing a feature like this, but it should absolutely be opt-in

This feature is intended to spy on the user. Those kinds of features can't be opt-in. (And yeah, the homomorphic "privacy preserving" encryption song-and-dance, I read about that when it came out, etc.)

  • This is an incredibly shallow dismissal that states the opposite of Apple's claim with zero evidence or reasoning and hand-waves away the very real and well-researched field of homomorphic encryption.
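For readers unfamiliar with the technique being argued over: homomorphic encryption lets a server compute on ciphertexts it cannot decrypt. Below is a toy Paillier cryptosystem in Python illustrating the core idea: multiplying two ciphertexts yields an encryption of the *sum* of the plaintexts, so the server combines values it never sees. The tiny hardcoded primes are purely illustrative, and note that Apple's Enhanced Visual Search is built on the lattice-based BFV scheme, not Paillier; this is just the simplest demonstration of the homomorphic property.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Multiplying two ciphertexts yields an encryption of the SUM of the
# plaintexts, so a server can combine encrypted values it cannot read.
# Tiny hardcoded primes for illustration only; real keys are ~2048 bits.
import math
import secrets

p, q = 2357, 2551                     # toy primes -- NOT secure
n = p * q
n2 = n * n
g = n + 1                             # standard generator choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    while True:                       # random blinding factor r, coprime to n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 17
combined = (encrypt(a) * encrypt(b)) % n2   # server-side: no plaintexts seen
assert decrypt(combined) == a + b           # homomorphic property holds
```

Whether shipping such a system opt-out was the right call is the actual disagreement here; the math itself is well established.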