Comment by matthewdgreen
8 hours ago
Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them. I already think too much about everything I put into ChatGPT, since my default assumption is it will all be made public. Now I also have to consider the possibility that random discussions will be used against me and taken out of context if I'm ever accused of committing a crime. (Like all the weird questions I ask about anonymous communications and encryption!) So everything I do with these tools will be with an eye towards the fact that it's all preserved and I'll have to explain it, which has a huge chilling effect on using the system. Just make it easy for me not to log history.
> These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.
But you do, just like you have confidentiality in what you write in your diary.
> Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.
Don't expect that from products with advertising business models
OpenAI and Anthropic do not have advertising business models
OpenAI is clearly moving in that direction; look at their recent verbiage and hiring.
yet, but surely they will move that way over time?
If you're not the customer, you're most likely the product.
yet
Name a similar-sized tech company that hasn't.
I think there is a non-zero chance they had no idea about this guy until OpenAI employees uncovered this and reported it, and additional cell phone data backed up the entire thing.
Why do employees need to be involved? It's AI. It is entirely capable of doing the surveillance, monitoring, and reporting by itself. If not now, then in the near future.
Serious question. Why should someone have more privacy in a software system than they do within their home?
I have enormous privacy in my home. I can open up any book and read it with nobody logging what I read. I can destroy any notes I take and know they'll stay destroyed. I can even visit the library and do all these things in an environment with massive information access; only the card catalog usage might get logged, and I probably still don't have to tie usage to my identity, because once upon a time it was totally normal to make knowledge tools publicly accessible without the need for authentication credentials.
They maybe (not taking a stance) shouldn't, but I don't think this argument is as simple as it seems. Surveilling someone's home generally requires a court order beforehand. And depending on the country (I don't believe this applies to the US), words spoken at home also enjoy extended legal protection, i.e. the authorities can't subpoena a friend you had a discussion with.
Now the real question is whether you consider it a conversation or a letter. Any opened¹ letters you have lying around at home can be grabbed with a court-ordered search warrant. But for a conversation, you might need the warrant beforehand? It's tricky.
(Again, exact legal situation depends on the country.)
¹ Secrecy of correspondence frequently only applies to letters in sealed envelopes. But then you can get another warrant for the correspondence…
Honest question: why consider the personal home, letters, or spoken words at all, when most countries around the world already have ample and far more applicable laws/precedent for cloud-hosted private documents?
For the LLM input, that maps 1:1 to documents a person has written and uploaded to cloud storage. And I don't see how generated output could weigh into that at all.
Just give the AI-to-user relationship a protection like attorney-client privilege.
Edit: AI has already passed the bar exam.
Seems natural to extend privilege here. People are using it as a therapist.
There are a lot of counterarguments I could bring up, but just off the top: plainly, just because people use LLMs as therapists, lawyers, doctors, or deities doesn't make LLMs any of those things.
My personal beliefs (we should not rely on models for such things at this stage, let's not anthropomorphize, etc.) to one side, let me ask: do you think that if I used my friend Steve, who is not a lawyer but sounds very convincingly like one, to advise me on a legal dispute, that should be covered by attorney-client privilege?
Because even in the scenario where LLMs suddenly become reliable enough to verifiably carry out legal/medical/etc. services, to the point where they are actually accepted into day-to-day practice by actual professionals and the companies are willing to take on the financial risk of any malpractice from using their models in such areas (as part of enterprise offerings, for an extra fee, of course), that still wouldn't and shouldn't mean that your run-of-the-mill private ChatGPT instance has the same privileges or protections that we afford to, e.g., patient data when it is handled digitally as part of medical practice. At best (again, I dislike anthropomorphizing models, but it is easier to talk about such a scenario this way), a hypothetical ChatGPT that provides 100% accurate legal information would be akin to a private person who just happens to know a lot about the law but never got accredited and does not carry the same responsibilities.
Again though, we are far from that hypothetical anyway; "people" using LLMs that way does not change this fact. I know, unfortunately, there are people who are convinced that current-day LLMs have already attained godhood and are merely biding their time, but that doesn't become real either just because they act according to their assumptions.
I really struggle to understand, nor do I see any cogent arguments across this comment section, why current-day LLMs in such a scenario should be treated differently from, e.g., PKM software or a cloud-hosted diary rather than afforded the same legal protections (or lack thereof, depending on viewpoint, personal stance, and your local data privacy laws).
It only "passes the bar exam" when AI, or some other flawed process, is the examiner. See e.g. https://doi.org/10.1007/s10506-024-09396-9 for a debunk.
That's not a debunk. "Calls into question" does not equal "in truth, it failed the exam."
Attorney-client privilege has limits. For obvious reasons I haven’t read any affidavits associated with the warrant, but it sure sounds like this would fall outside the bounds of attorney-client privilege.
With an attorney you have a clear sense of when you pass outside of that privilege. With a friend or colleague you have a social sense of what's going to remain confidential, plus memories aren't perfect. "Preserving, recording and reporting every word" is not the same as any of these things. This cannot be the world we all have to live in going forward; it's not safe or healthy.