Comment by cesarb

1 day ago

> Any power users who prefer their own key management should follow the steps to enable Bitlocker without uploading keys to a connected Microsoft account.

Once the feature exists, it's much easier to use it by accident. A finger slip, a bug in a Windows update, or even a cosmic ray flipping the "do not upload" bit in memory, could all lead to the key being accidentally uploaded. And it's a silent failure: the security properties of the system have changed without any visible indication that it happened.

There are a lot of sibling comments to mine here that are reading this literally; instead, I would suggest the following reading: "I never selected that option!" "Huh, must have been a cosmic ray that uploaded your keys ;) Modern OS updates never obliterate user-chosen configurations"

This is correct. While preparing several ThinkPads for a customer from a Windows 11 image I made, I also discovered that even if you have BitLocker disabled, you may need to check that hardware disk encryption is disabled as well (it was enabled by default in my case). Although this is different from BitLocker in that the encryption key is stored in the TPM, it is something to be aware of, as it may be unexpected.
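
If anyone wants to script that check while imaging machines, here's a minimal sketch in Python that just shells out to the built-in manage-bde tool. Run it from an elevated prompt, and note that the output strings (like "Hardware Encryption") can vary by Windows version and locale, so treat it as a starting point rather than a definitive check:

    import subprocess

    def bitlocker_report() -> str:
        # manage-bde -status prints per-volume details, including the
        # encryption method (software AES vs. hardware/self-encrypting drive).
        result = subprocess.run(
            ["manage-bde", "-status"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        report = bitlocker_report()
        print(report)
        # On the systems I've seen, hardware-encrypted volumes report
        # "Hardware Encryption" as the method; verify against your own image.
        if "Hardware Encryption" in report:
            print("At least one volume appears to use hardware disk encryption.")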

If users are so paranoid that they worry about a cosmic ray bit flipping their computer into betraying them, they're probably not using a Microsoft account at all with their Windows PC.

  • If your security requirements are such that you need to worry about legally-issued search warrants, you should not connect your computer to the internet. Especially if it's running Windows.

    • Right, this is just a variation on "If you have nothing to hide..."

      ETA: You're not wrong; folk who have specific, legitimate opsec concerns shouldn't be using certain tools. I just initially read your post a certain way. Apologies if it feels like I put words in your mouth.

>even a cosmic ray flipping the "do not upload" bit in memory

Stats on this very likely scenario?

  • > IBM estimated in 1996 that one error per month per 256 MiB of RAM was expected for a desktop computer.

    From the Wikipedia article on "Soft error", if anyone wants to extrapolate.

    • That makes it vanishingly unlikely. On a 16GB RAM computer with that rate, you can expect 64 random bit flips per month.

      Since 16 GB is roughly 1.4 × 10^11 bits, you could expect any one specific bit to flip roughly once every two hundred million years.

      Assuming there are about 2 billion Windows computers in use, that’s about 10 computers a year that experience this bit flip.
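
      Showing the arithmetic, as a rough sketch that assumes every flip is equally likely to land on any bit:

          MIB, GIB = 2**20, 2**30

          flips_per_month = (16 * GIB) / (256 * MIB)      # 64 flips/month across all of RAM
          bits = 16 * GIB * 8                             # ~1.4e11 bits
          p_one_bit_per_month = flips_per_month / bits    # ~4.7e-10

          years_per_hit = (1 / p_one_bit_per_month) / 12  # ~1.8e8 years, i.e. ~200 million
          machines = 2e9                                  # assumed Windows install base
          hits_per_year = machines / years_per_hit        # ~11 machines/year

          print(f"{flips_per_month:.0f} flips/month; one specific bit every {years_per_hit:.1e} years;")
          print(f"about {hits_per_year:.0f} machines/year out of {machines:.0e}")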


    • Rounding that to 1 error per 30 days per 256M, for 16G of RAM that would translate to 1 error roughly every half a day. I do not believe that at all, having done memory-testing runs for much longer on much larger amounts of RAM. I've seen the error counters on servers with ECC RAM, which remain at 0 for many months; and when they start increasing, it's because something is failing and needs to be replaced. In my experience, RAM failures are much rarer than HDD and SSD failures.
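
      (Spelling out that conversion as a quick Python check:)

          flips_per_30_days = (16 * 2**30) / (256 * 2**20)   # 64
          hours_between = 30 * 24 / flips_per_30_days        # ~11 hours, i.e. about half a day
          print(f"{flips_per_30_days:.0f} flips per 30 days -> one every {hours_between:.1f} hours")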

  • Given enough computers, anything will happen. Apparently enough bit flips happen in domains (or their DNS resolution) that registering domains one bit away from the most popular ones (e.g. something like gnogle.com for google.com) might be worth it for bad actors. There was a story a few years ago, but I can't find it right now; perhaps someone will link it.
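
    The trick is easy to sketch, if anyone's curious: enumerate every single-bit flip of each character and keep only the ones that are still valid hostname characters. A rough Python illustration (not the actual tooling anyone used):

        import string

        ALLOWED = set(string.ascii_lowercase + string.digits + "-")

        def bitsquats(domain: str) -> list[str]:
            """Variants of `domain` with one bit flipped in the label before the first dot."""
            name, dot, tld = domain.partition(".")
            variants = set()
            for i, ch in enumerate(name):
                for bit in range(8):
                    flipped = chr(ord(ch) ^ (1 << bit)).lower()
                    if flipped in ALLOWED and flipped != ch:
                        variants.add(name[:i] + flipped + name[i + 1:] + dot + tld)
            return sorted(variants)

        print(bitsquats("google.com"))   # includes 'gnogle.com', among others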

  • At google "more than 8% of DIMM memory modules were affected by errors per year" [0]

    More on the topic: Single-event upset[1]

    [0] https://en.wikipedia.org/wiki/ECC_memory

    [1] https://en.wikipedia.org/wiki/Single-event_upset

    • At the time, Google was taking RAM chips that had failed manufacturer QA (which they had gotten for cheap), sticking them on DIMMs themselves, and trying to self-certify them.

    • > At google "more than 8% of DIMM memory modules were affected by errors per year"

      That's all errors including permanent hardware failure, not just transient bit flips or from cosmic rays.


>A finger slip, a bug in a Windows update, or even a cosmic ray flipping the "do not upload" bit in memory, could all lead to the key being accidentally uploaded.

This is absurd, because it's basically a generic argument against any sort of feature that vaguely reduces privacy. Sorry guys, we can't have automated backups in Windows (even opt-in!), because if the feature exists, a random bit flip could cause everything to be uploaded to Microsoft against the user's will.

[flagged]

  • I can't believe it took this long.

    We have mandatory identification for all kinds of things that are illegal to purchase or engage in under a certain age. Nobody wants to prosecute 12-year-old kids for lying when they clicked the "I am at least 13 years old" checkbox when registering an account. The only alternative is to do what we do with R-rated movies, alcohol, tobacco, firearms, risky physical activities (e.g. a bungee-jumping liability waiver), etc.: we put the onus of verifying identification on the suppliers.

    I've always imagined this was inevitable.

    • I don't think that's quite right. The age-gating of the internet is part of a brand-new push; it's not just patching up a hole in an existing framework. At least in my Western country, all age-verified activities were things that could have put someone in direct, obvious danger: drugs, guns, licensing for something that could be dangerous, and so on. In the past, the 'control' of things that were just information was illusory. Movie theaters have policies not to let kids see high-rated movies, but they're not strictly legally required to do so. Video game stores may be bound by agreements or policy not to sell certain games to children, but those barriers were self-imposed, not driven by law. Pornography is really the only exception I can think of. So demanding age verification to access large swaths of the internet (in some cases including things as broad as social media) is a huge expansion over what existed before, not just a closing of some loopholes.

    • The problem is the implementation is hasty.

      When I go buy a beer at the gas station, all I do is show my ID to the cashier. They look at it to verify DOB and then that's it. No information is stored permanently in some database that's going to get hacked and leaked.

      We can't trust every private company that now has to verify age to not store that information with whatever questionable security.

      If we aren't going to do a national registry that services can query to get back only a "yes or no" on whether a user is of age or not, then we need regulation to prevent the storage of ID information.

      We should still be able to verify age while remaining pseudo-anonymous.
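
      To make the "yes or no only" idea concrete, this is all a service would ever need to call. The endpoint and token format here are purely hypothetical, just to show the shape of it:

          import json
          import urllib.request

          def user_is_of_age(opaque_token: str) -> bool:
              # The user proves their age to the registry out of band and hands the
              # service an opaque token; the service only ever learns a boolean.
              req = urllib.request.Request(
                  "https://age-registry.example/check",         # hypothetical endpoint
                  data=json.dumps({"token": opaque_token}).encode(),
                  headers={"Content-Type": "application/json"},
                  method="POST",
              )
              with urllib.request.urlopen(req) as resp:
                  return json.load(resp).get("of_age") is True  # no DOB, no name, no ID scan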


    • The problem is that nothing is being done to protect privacy.

      There are already plenty of entities that not only have a reliable way of proving it's you who has access to the account, but also enough info to return the user's age without disclosing anything else, like banks or government sites. They could (or, better, be forced to) provide an interface to that data.

      Basically "pick your identity provider" -> "auth on their site" -> "step showing that only age will be shared" -> response with user's age and the query's unique ID that's not related to the user account id


> a cosmic ray flipping the "do not upload" bit in memory, could all lead to the key being accidentally uploaded.

Nah, no shot.