Comment by Topfi
21 hours ago
Honest question, why consider the personal home, letters or spoken words at all, considering most countries around the world already have ample and far more applicable laws/precedent for cloud hosted private documents?
For the LLM input, that maps 1:1 to documents a person has written and uploaded to cloud storage. And I don't see how generated output could weigh into that at all.
A simple answer to this is: I use local storage or end-to-end encrypted cloud backup for private stuff, and I don't for work stuff. And I make those decisions on a document-by-document basis, since I have the choice of using both technologies.
The question you are asking is: should I approach my daily search tasks with the same degree of thoughtfulness and caution that I do with my document storage choices, and do I have the same options? And the answers I would give are:
* As a consumer I don't want to have to think about this. I want to be able to answer some private questions or have conversations with a trusted confidant without those conversations being logged to my identity.
* As an OpenAI executive, I would also probably not want my users to have to think about this risk, since a lot of the future value in AI assistants is the knowledge that you can trust them like members of your family. If OpenAI can't provide that, something else will.
* As a member of a society, I really do not love the idea that we're using legal standards developed for 1990s email to protect citizens from privacy violations involving technologies that can think and even testify against you.
> [...] should I approach my daily search tasks with the same degree of thoughtfulness and caution that I do with my document storage choices [...]
Then treat them with the same degree of thoughtfulness and caution you have treated web searches on Google, Bing, DuckDuckGo or Kagi for the last decade.
Again, there is no confidant or entity here, any more than the search algorithms we have been using for decades are one.
> I really do not love the idea that we're using legal standards developed for 1990s email to protect citizens [...]
Fair, but again, that is in no way connected to LLMs. I still see no reason presented why LLM input should be treated any differently to cloud hosted files or web search requests.
You want better privacy? Me too, but that is not in any way connected to, or changed by, LLMs becoming commonplace. By the same logic, I find any attempt to restrict a specific social media company over privacy and algorithmic concerns laughable if the laws still allow any local competitors to carry out the same invasions.
It's not at all clear how easy it is to obtain a user's search history when the user doesn't explicitly log in to those services (e.g., incognito/private browsing) and doesn't keep history on their local device. I've been trying to find a single example of a court case where this happened, and my Google/ChatGPT searches are coming up completely empty. Tell me if you can find one.
The closest I can find is "keyword warrants" where police ask for users who searched on a given term, but that's not quite the same thing as an exhaustive search history.
Certainly my personal intuition is that historically there has been a lot of default privacy for non-logged-in "incognito" web search, which used to be most search -- and is also, I think, why we came to trust search so much. I expect that will change going forward, and most LLMs require user logins right from the jump.
As for "I see no reason" why LLMs should be treated differently than email: there are plenty of good reasons why they should be. If you're saying "we can't change the law," you clearly aren't paying attention to how the law has been changing around tech priorities like cryptocurrency recently. AI is an even bigger priority, so there is a lot of opportunity for big legal changes. Now's the time to make proposals.