Comment by gundmc

1 day ago

Not surprising. Search and browsing history has been used as evidence for some time.

Nearly anything that isn't end-to-end encrypted is fair game, assuming there is probable cause. Access to your physical location history (even if you weren't suspected of a crime) wasn't off limits until 2024 [1]. (It still isn't off limits if you are suspected of a crime, but is no longer collected at the scale of "most Android users" [2].)

[1] https://www.eff.org/deeplinks/2024/08/federal-appeals-court-...

[2] https://techcrunch.com/2023/12/16/google-geofence-warrants-l...

  • End-to-end encrypted data is also fair game -- the only difference is that there are simply fewer parties that have the data to give.

  • Chats with ChatGPT are end-to-end encrypted (it's HTTPS), but one of the ends is OpenAI.

    • While I understand (and agree with) the general sentiment of what you're saying, it isn't correct to call this end-to-end encryption, and HTTPS by itself does not guarantee that end-to-end encryption is in use.

      In this case, there's an explicit middle point: chatgpt.com resolves to a CloudFlare server, so CloudFlare, not OpenAI, is the other end of your HTTPS connection. It most likely acts as a reverse proxy, forwarding your requests to a separate, OpenAI-owned server - possibly over a new HTTPS connection, possibly over unencrypted HTTP.

      It really is important to emphasize this point. End-to-end encryption doesn't just mean your data is encrypted between you and the ultimate endpoint; it means the data cannot be decrypted anywhere along the way - and decrypting your HTTPS requests is exactly what CloudFlare has to do in order to do its job.

      (To be clear, I'm not accusing CloudFlare of anything shady here. I'm just saying that people have forgotten what end-to-end encryption really means.)
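      The distinction can be sketched in a few lines of Python. This is a toy illustration only - the XOR "cipher" and the key names are made up for the demo and are not real cryptography - but it shows why a reverse proxy can read transport-encrypted traffic while true end-to-end encryption keeps it opaque:

```python
# Toy illustration (NOT real crypto): transport encryption vs. end-to-end.
# The XOR "cipher" and key names below are invented for the demo.

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

xor_decrypt = xor_encrypt  # XOR is its own inverse

# --- Transport encryption (roughly what HTTPS gives you) ---
client_proxy_key = b"key-between-you-and-cloudflare"
proxy_origin_key = b"key-between-cloudflare-and-origin"

msg = b"my chat prompt"
on_the_wire = xor_encrypt(msg, client_proxy_key)

# The reverse proxy MUST decrypt in order to route/inspect the request...
seen_by_proxy = xor_decrypt(on_the_wire, client_proxy_key)
assert seen_by_proxy == msg  # plaintext is visible in the middle

# ...and then re-encrypts for the next hop to the origin server.
next_hop = xor_encrypt(seen_by_proxy, proxy_origin_key)

# --- End-to-end encryption ---
# Only the sender and the final recipient share this key; the proxy never has it.
e2e_key = b"key-only-endpoints-know"
sealed = xor_encrypt(msg, e2e_key)
# The proxy can forward `sealed`, but cannot recover `msg` without e2e_key.
```

      In real E2E systems (Signal, for example) the message is encrypted under keys only the endpoints hold, and TLS is merely an extra wrapper around already-opaque data.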


    • Yep. This is why the estimates of compute needed for AI (if it turns out to be useful) are many orders of magnitude too low: the technology isn't mature until it can actually succeed at tasks with fully homomorphic encryption covering everything from my prompt through the response.
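      For a flavor of what "computing on encrypted data" means, here is a toy sketch using unpadded textbook RSA, whose multiplicative homomorphism lets a server multiply two values without ever decrypting them. The parameters are deliberately tiny and insecure, and unlike fully homomorphic encryption this supports only one operation, but the idea is the same:

```python
# Toy textbook RSA (tiny, insecure parameters) showing the
# multiplicative homomorphic property. Illustration only.
p, q = 61, 53
n = p * q              # 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17
d = pow(e, -1, phi)    # modular inverse (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
# A server holding only ciphertexts can multiply them...
c = (enc(a) * enc(b)) % n
# ...and the key holder decrypts the product without the server
# ever seeing a or b in the clear.
assert dec(c) == (a * b) % n  # 42
```

      FHE schemes generalize this to arbitrary circuits (both addition and multiplication), which is what makes them so much more computationally expensive.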


It still boggles my mind that in this day and age most people use the one search engine that keeps the most copious records of everything entered into it, and ties those records to more information than any other corporation likely holds about any given person. I wouldn't be surprised if everyone moved to an NSA search engine if the agency ever released one.

Just for general peace of mind, use a privacy-oriented search engine. I usually use leta.mullvad.net or search.brave.com; I haven't used Google in years. And if you happen to be curious about something fringe that might be misinterpreted in the wrong circumstances, download an LLM and run it locally.

  • Not using Google but using a Mullvad or Brave search engine instead doesn't solve the problem: if you can't trust company A, you shouldn't trust company B either.

    If you want truly anonymous search, use a public computer.

Coming into the thread (and the general discussion about ChatGPT being used as evidence) with this context, I'm confused by the reactions to this. Online activity has been used as evidence for as long as I can remember. OpenAI also has a couple of high-profile cases against it in which ChatGPT history is the primary evidence.