"The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations."
As might any plaintiff. NYT might be the first of many others and the lawsuits may not be limited to copyright claims
Why has OpenAI collected and stored 20 million conversations (including "deleted chats")
What is the purpose of OpenAI storing millions of private conversations
By contrast the purpose of NYT's request is both clear and limited
The documents requested are not being made public by the plaintiffs. The documents will presumably be redacted to protect any confidential information before being produced to the plaintiffs, the documents can only be used by the plaintiffs for the purpose of the litigation against OpenAI and, unlike OpenAI who has collected and stored these conversations for as long as OpenAI desires, the plaintiffs are prohibited from retaining copies of the documents after the litigation is concluded
The privacy issue here has been created by OpenAI for their own commercial benefit
It is not even clear what this benefit, if any, will be as OpenAI continues to search for a "business model"
The two documents you linked are responses to specific parts of OpenAI's objection. They're not good sources for the original order.
Nevertheless, you're generally correct but you don't realize why: A core feature of ChatGPT is that it keeps your conversation history right there so you can click on it, review it, and continue conversations across all of your devices. The court order is to preserve what is already present in the system even if the user asks to delete it.
For those who are confused: A core feature of ChatGPT and other LLM accounts is that your past conversations are available to return to, until you specifically delete them. The problem now is that if a user asks for the conversation to be deleted, OpenAI has to retain the conversation for the court order even though it appears deleted.
This article says nothing of the sort. The court order is to preserve existing logs they already have, not to disable logging, and hand all the logs over the plaintiffs. OpenAI's objections are mainly that 1/there are too many logs (so they're proposing a sample instead) and that 2/there's identifying data in the logs and so they are being "forced" to anonymize the logs at their expense (even though it's what they want as a condition of transferring the logs).
There is nothing in the article that mentions OpenAI being forced to create new logs they don't already have.
Is there a technical limitation that prevents chat histories from being stored locally on the user's computer instead of being stored on someone else's computer(s)
Why do chat histories need to be accessible by OpenAI, its service partners and anyone with the authority to request them from OpenAI
Presumably for cross-device interactivity. If I interact with ChatGPT on my phone, then open it on my desktop. I might be a bit frustrated that I can't get to the chat I was having on my phone previously.
OpenAI could store the chat conversation in an encrypted format that only you, the user, can decrypt, with the client-side determining the amount of previous messages to include for additional context, but there's plenty of user overhead involved in an undertaking like that (likely a separate decryption password would be needed to ensure full user-exclusive access, etc).
I'd appreciate and use a feature like that, but I doubt most "average" users would care.
> Is there a technical limitation that prevents chat histories from being stored locally on the user's computer
People access ChatGPT through different interfaces: Web, desktop app, their phones, tablets.
Therefore the conversations are stored on the servers. It's really not some hidden plot against users to steal their data. It's just how most users expect their apps to work.
> What is the purpose of OpenAI storing millions of private conversations
Your previous ChatGPT conversations show up right in the ChatGPT interface.
They have to store the private conversations to enable users to bring them up in the interface.
This isn't a secretive, hidden data collection. It's a clear and obvious feature right in the product. They're fighting for the ability to not retain secret records of past conversations that have been deleted.
The problem with the court order is that it requires them to keep the conversations even after a user presses the 'Delete' button on them.
Exactly. And the OpenAI corporates speak acting like they give a shit about our best interests. Give me a break, Sam Altman. How stupid do you think everyone is?
They have proven that they are the most untrustworthy company on the planet
And this isn't AI fear speaking. This is me raging at Sam Altman for spreading so much fear, uncertainty, and doubt just to get investments. The rest of us have to suffer for the last two years, worrying about losing our jobs, only to find out the AGI lie is complete bullsh*t.
To me, no company has the customers’ best interests in mind. This whole thing is akin to when Apple was refusing to unlock phones for the FBI. Of course, Apple profits by having people think that they take privacy seriously, and they demonstrate it by protecting users’ privacy. Same thing here; OpenAI needs chats to have some expectation of privacy, especially because a large use case of AI is personal advice on things. So they are fighting to make sure it's true.
NYT on the internet is pure garbage. I'm really sad that when they challenged Google they didn't just drop and ban them from the index. There's zero reason for NYT to pollute the internet for me and everyone like me that won't ever pay them a cent.
NYT links leading to a paywall are the worst kind of spam.
They should sell they stuff by mail if they hate open culture so much.
It's absolutely disgusting how they are allowed to freeload on the attention that the open culture provides contributing nothing but shot garbage blurbs that are worse than generated by LLMs.
I wouldn't want to make it out like I think OpenAI is the good guy here. I don't.
But conversations people thought they were having with OpenAI in private are now going to be scoured by the New York Times' lawyers. I'm aware of the third party doctrine and that if you put something online it can never be actually private. But I think this also runs counter to people's expectations when they're using the product.
In copyright cases, typically you need to show some kind of harm. This case is unusual because the New York Times can't point to any harm, so they have to trawl through private conversations OpenAI's customers have had with their service to see if they can find any.
NYTimes has produced credible evidence that OpenAI is simply stealing and republishing their content. The question they have to answer is "to what extent has this happened?"
That's a question they fundamentally cannot answer without these chat logs.
That's what discovery, especially in a copyright case, is about.
Think about it this way. Let's say this were a book store selling illegal copies of books. A very reasonable discovery request would be "Show me your sales logs". The whole log needs to be produced otherwise you can't really trust that this is the real log.
That's what NYTimes lawyers are after. They want the chat logs so they can do their own searches to find NYTimes text within the responses. They can't know how often that's happened and OpenAI has an obvious incentive to simply say "Oh that never happened".
And the reason this evidence is relevant is it will directly feed into how much money NYT and OpenAI will ultimately settle for. If this never happens then the amount will be low. If it happens a lot the amount will be high. And if it goes to trial it will be used in the damages portion assuming NYT wins.
The user has no right to privacy. The same as how any internet service can be (and have been) compelled to produce private messages.
>That's what NYTimes lawyers are after. They want the chat logs so they can do their own searches to find NYTimes text within the responses.
The trouble with this logic is NYT already made that argument and lost as applied to an original discovery scope of 1.4 billion records. The question now is about a lower scope and about the means of review, and proposed processes for anonymization.
They have a right to some form of discovery, but not to a blank check extrapolation that sidesteps legitimate privacy issues raised both in OpenAIs statement as well as throughout this thread.
> NYTimes has produced credible evidence that OpenAI is simply stealing and republishing their content. The question they have to answer is "to what extent has this happened?"
Credible to whom? In their supposed "investigation", they sent a whole page of text and complex pre-prompting and still failed to get the exact content back word for word. Something users would never do anyways.
And that's probably the best they've got as they didn't publish other attempts.
"Credible" my ass. They hired "experts" who used prompt engineering and thousands of repetitions to find highly unusual and specific methods of eliciting text from training data that matched their articles. OpenAI has taken measures to limit such methods and prevent arbitrary wholesale reproduction of copyrighted content since that time. That would have been the end of the situation if NYT was engaging in good faith.
The NYT is after what they consider "their" piece of the pie. They want to insert themselves as middlemen - pure rent seeking, second hander, sleazy lawyer behavior. They haven't been injured, they were already dying, and this lawsuit is a hail mary attempt at grifting some life support.
Behavior like that of the NYT is why we can't have nice things. They're not entitled to exist, and by engaging in behavior like this, it makes me want them to stop existing, the faster, the better.
Copyright law is what you get when a bunch of layers figure out how to encode monetization of IP rights into the legal system, having paid legislators off over decades, such that the people that make the most money off of copyrights are effectively hoarding those copyrights and never actually produce anything or add value to the system. They rentseek, gatekeep, and viciously drive off any attempts at reform or competition. Institutions that once produced valuable content instead coast on the efforts of their predecessors, and invest proceeds into lawsuits, lobbying, and purchase of more IP.
They - the NYT - are exploiting a finely tuned and deliberately crafted set of laws meant to screw actual producers out of percentages. I'm not a huge OpenAI fan, but IP laws are a whole different level of corrupt stupidity at the societal scale. It's gotcha games all the way down, and we should absolutely and ruthlessly burn down that system of rules and salt the ground over it. There are trivially better systems that can be explained in a single paragraph, instead of requiring books worth of legal code and complexities.
> The user has no right to privacy. The same as how any internet service can be (and have been) compelled to produce private messages.
The legal term is "expectation of privacy", and it does exist, albeit increasingly weakly in the US. There are exceptions to that, such as a subpoena, but that doesn't mean anyone can subpoena anything for any reason. There has to be a legal justification.
It's not clear to me that such a justification exists in this case.
> In copyright cases, typically you need to show some kind of harm.
NYT is suing for statutory copyright infringement. That means you only need to demonstrate that the copyright infringement, since the infringement alone is considered harm; the actual harm only matters if you're suing for actual damages.
This case really comes down to the very unsolved question of whether or not AI training and regurgitation is copyright infringement, and if so, if it's fair use. The actual ways the AI is being used is thus very relevant for the case, and totally within the bounds of discovery. Of course, OpenAI has also been engaging this lawsuit with unclean hands in the first place (see some of their earlier discovery dispute fuckery), and they're one of the companies with the strongest "the law doesn't apply to US because we're AI and big tech" swagger.
NYT doesn't care about regurgitation. When it was doable, it was spotty enough that no one would rely on it. But now the "trick" doesn't even work anymore (you would paste the start of an article and chatgpt would continue it).
What they want is to kill training, and more over, prevent the loss of being the middle-man between events and users.
> This case is unusual because the New York Times can't point to any harm
It helps to read the complaint. If that was the case, the case would have been subject to a Rule 12(b)(6) (failure to state a claim for which relief can be granted) challenge and closed.
It's a part of privacy policy boilerplate that if a company is compelled by the courts to give up its logs it'll do it. I'm sure all of OpenAI's users read that policy before they started spilling their guts to a bot, right? Or at least had an LLM summarize it for them?
This is it isn't it? For any technology, I don't think anyone should have the expectation of privacy from lawyers if the company who has your data is brought to court
The original lawsuit has lots of examples of ChatGPT (3.5? 4?) regurgitating article...snippets. They could get a few paragraphs with ~80-90% perfect replication. But certainly not full articles, with full accuracy.
This wasn't solid enough for a summary judgement, and it seems the labs have largely figured out how to stop the models from doing this. So it looks like NYT wants to comb all user chats rather than pay a team of people tens of thousands a day to try an coax articles out of ChatGPT-5.
Yeah, everyone else in the comments so far is acting emotionally, but --
As a fan and DAU of both OpenAI and the NYT, this is just a weird discovery demand and there should be another pathway for these two to move fwd in this case (NYT to get some semblance of understanding, OAI protecting end-user privacy).
It sounds like the alternate path you're suggesting is for NYT to stop being wrong and let OpenAI continue being right, which doesn't sound much like a compromise to me.
To show harm they need the proof, this is the point of the lawsuit. They have sufficient evidence that OpenAI was scraping the web and the NY Times.
When Altman says "They claim they might find examples of you using ChatGPT to try to get around their paywall." he is blatantly misrepresenting the case.
"The lawsuit focuses on using copyrighted material for AI training. The NYT says OpenAI and Microsoft copied vast amounts of its content. They did this to build generative AI tools. These tools can output near-exact copies of NYT articles. Therefore, the NYT argues this breaks copyright laws. It also hurts journalism by skipping paywalls and cutting traffic to original sites. The complaint shows examples where ChatGPT mimics NYT stories closely. This could lead to money loss and harm from AI errors, called hallucinations."
This has nothing to do with the users, it has everything to do with OpenAI profiting off of pirated copyrighted material.
Also, Altmans is getting scared because the NY Times proved to the judge that CahtGPT copied many articles:
"2025 brings big steps in the case. On March 26, 2025, Judge Sidney Stein rejected most of OpenAI’s dismissal motion. This lets the NYT’s main copyright claims go ahead. The judge pointed to “many” examples of ChatGPT copying NYT articles. He found them enough to continue. This ruling dropped some side claims, like unfair competition. But it kept direct and contributory infringement, plus DMCA breaches."
Training has sometimes been held to be fair use under certain circumstances, but in determining fair use, one of the four factors that is considered is how it affects the market for the work being infringed. I would expect that determining to what degree it's regurgitating the New York Times' content is part of that analysis.
Yeah I don’t get why more people don’t understand this - why would you think your conversation was private when it wasnt actually private. Have you not been paying attention.
> OpenAI had also shariah policed plenty of people for generating erotica.
That framing is retorically brilliant if you think about it. I will use that more. Chat Sharia Law for Chat Control. Mass Sharia Surveillance from flock etc.
This is about private chats, which are not used for training and only stored for 30 days.
Also, you need to understand, that for huge corps like OpenAI, the lying on your ToS will do orders of magnitude more damage to your brand than what you would gain through training on <1% more user chats. So no, they are not lying when they say they don't train on private chats.
Please correct me if I am wrong, but couldn't OpenAI just encrypt every conversation before saving them?
With each query to the model, the full conversation is fed into the model again, so I guess there is no technical need to store them unencrypted. Unless, of course, OpenAI wants to analyze the chats.
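For what it's worth, the commonly used chat completions API already works that way: the client resends the entire message history with every request, so the server doesn't strictly need to keep prior turns around just to generate the next reply. A rough sketch with the OpenAI Python SDK (the model name and prompts here are placeholders, not anything from the case):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The client holds the whole conversation; every call resends it in full.
    history = [{"role": "user", "content": "Summarize the idea of fair use in one sentence."}]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})

    # Follow-up turn: the full history goes over the wire again.
    history.append({"role": "user", "content": "Now expand that into a short paragraph."})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)

Server-side storage of chats is a product choice (history synced across devices), not something the completion protocol itself requires.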
The way I see it, the problem is that OpenAI employees can already look at the chats; the fact that some NYT lawyer can look at them too doesn't make me any more uncomfortable.
Insane argumentation. It's like saying an investigator with a court order should not be allowed to look at stored copies of letters, even though the company sending those letters a) looks at them regularly and b) stores those copies in the first place.
This screams just as genuine as Google saying anything about Privacy.
Both companies are clearly wrong here. There is a small part of me that kinda wants OpenAI to lose this, just so maybe it will be a wake-up call to people putting way too much personal information into these services? Am I too hopeful here that people will learn anything...
Fundamentally I agree with what they are saying though, just don't find it genuine in the slightest coming from them.
It's clearly propaganda. "Your data belongs to you." I'm sure the ToS says otherwise, as OpenAI likely owns and utilizes this data. Yes, they say they are working on end-to-end encryption (whatever that means when they control one end), but that is just a proposal at this point.
Also their framing of the NYT intent makes me strongly distrust anything they say. Sit down with a third party interviewer who asks challenging questions, and I'll pay attention.
"Your data belongs to you" but we can take any of your data we can find and use it for free for ever, without crediting you, notifying you, or giving you any way of having it removed.
…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”
Edit: honestly I’m surprised I left out the bit where they just indiscriminately scraped everything they could online to train these models. The stones to go “your data belongs to you” as they clearly feel entitled to our data is unbelievably absurd
I got one sentence in and thought to myself, "This is about discovery, isn't it?"
And lo, complaints about plaintiffs started before I even had to scroll. If this company hadn't willy-nilly done everything they could to vacuum up the world's data, wherever it may be, however it may have been protected, then maybe they wouldn't be in this predicament.
Are these supposed to be examples of things that shouldn't be found out about? This has to be the worst pro-privacy argument I've ever seen on the internet. "Privacy is good because they will find out about our crimes"
Remember, a corporation is generally an object owned by some group of people. Do you trust "unspecified future group of people" with your privacy? You can't. The best we can do is understand the information architecture and act accordingly.
So why aren’t they offering for an independent auditor to come into OpenAI and inspect their data (without taking it outside of OpenAI’s systems)?
Probably because they have a lot to hide, a lot to lose, and no interest in fair play.
Theoretically, they could prove their tools aren't being used to do anything wrong, but practically, we all know they can't because they are actually in the wrong (in both the moral and, IMO though IANAL, the legal sense). They know it, we know it; the only problem is breaking the ridiculous walled garden that stops the courts from 'knowing' it.
By the same token, why isn't NYT proposing something like that rather than the world's largest random sampling?
You don't have to think that OpenAI is good to think there's a legitimate issue with exposing data to a third party for discovery. One could see the Times discovering something in private conversations outside the scope of the case and, through their own interpretation of journalistic necessity, believing it's something they're obligated to publish.
Part of OpenAI holding up their side of the bargain on user data, to the extent they do, is that they don't roll over like a beaten dog to accommodate unconditional discovery requests.
>By the same token, why isn't NYT proposing something like that rather than the world's largest random sampling?
It's OpenAI's data, there is a protective order in the case and OpenAI already agreed to anonymize it all.
>Part of OpenAI holding up their side of the bargain on user data, to the extent they do, is that they don't roll over like a beaten dog to accommodate unconditional discovery requests.
Why should OpenAI keep those conversations in the first place? (Of course the answer is obvious.) If they didn't keep them, they wouldn't have anything to hand over, and they would have protected users' privacy MUCH better. This is about as genuine as Facebook or Google caring about their users' privacy.
>This chat won't appear in history, use or update ChatGPT's memory, or be used to train our models. For safety purposes, we may keep a copy of this chat for up to 30 days.
But AFAIK it was this way before the lawsuit as well.
If OpenAI has to get to this level of pitch, herding its users against their opponent in a legal case, I think they have already lost the battle and reputation. What are they expecting users to do? Revolt against the courts and newspapers?
Wondering if anyone here has a good answer to this:
what protection does user data typically have during legal discovery in a civil suit like this where the defendant is a service provider but relevant evidence is likely present in user data?
Does a judge have to weigh a user's expectation of privacy against the request? Do terms of service come into play here (who actually owns the data? what privacy guarantees does the company make?).
I'm assuming in this case that the request itself isn't overly broad and seems like a legitimate use of the discovery process.
I fully believe that OpenAI is essentially stealing the work of others by training their models on it without permission. However, giving a corporation infamous for promoting authoritarianism full access to millions of private conversations is not the answer.
OpenAI is right here. The NYT needs to prove their case another way.
I'll bet you're right in some cases. I don't think that it is as pervasive as it has been made out to be though, but the argument requires some framing and current rules, regulation, and laws aren't tuned to make legal sense of this. (This is a little tangential, because the complaint seems to be about getting ChatGPT to reproduce content verbatim to a third party.)
There are two things I think about:
First, and generally, an AI ought to be able to ingest content like news articles because it's beneficial for users of AI. I would like to question an AI about current events.
Secondly, however, the legal mechanism by which it does that isn't clear. I think it would be helpful if these outlets would provide the information as long as the AI won't reproduce the content verbatim. If that does not happen, then another framing might liken AI ingestion to an individual going to the library to read the paper. In that case, we don't require the individual to retroactively pay for the experience or unlearn what he may have learned while at the library.
Well the court disagrees with you and found that this is evidence that the NYT needs to prove its case. No surprise, considering its direct evidence of exactly what OpenAI is claiming in its defense...
Of course this principle applies to Gmail too, if you’re willing to accept the absurdity. I could copy-paste copyrighted NYT snippets into emails and send them to everyone I know. Under the same logic, the NYT would be entitled to have access to everyone's Gmail account in order to verify who's sending what and get compensated if anyone is infringing their copyright.
That’s not justice. That’s legal extortion.
I get that people are angry at OpenAI. But let’s not confuse outrage over one company with support for broken systems. Patent and copyright trolls thrive when we normalize overreach, whether it’s AI training data or email threads. If we let corporations weaponize IP law to control every digital whisper, we’re not protecting creators, we’re burying free expression under a mountain of lawsuits.
If it's about* proving that people are getting around the paywall with OpenAI, won't it be much easier to prove this with a live reproduction in the court?
* I am not too familiar with this matter and hence definitely am not rooting for one party or another. Asking this just out of technical curiosity.
Standard tech scaling playbook, page 69420: there is a function f(x) whereby if you're growing fast enough, you can ignore the laws, then buy the regulators. This is called "The Uber Curve"
psychopath Scam Altman does not give a rat's behind about your "privacy"; he is merely trying to keep the grift going and avoid responsibility for his unethical behavior (see also: Scarlett Johansson's voice)
One reason that people make cynical, deceptive claims is that it doesn't impact their credibility later. The next time they say something, people don't respond with 'well, you deceived us last time'; meanwhile, when the honest person says something, others don't give them much credibility for it.
That little bit of morality - truth, honesty, integrity, etc. - is essential to a functioning society that leans toward good outcomes. (Often it seems that many just assume we'll get good outcomes, not that they must work hard to make it happen.)
I keep asking ChatGPT how to get NYT articles for free and then add lots of vulgar murderous things about their lawyers in the same message. It’s a private thought to an AI, so the attorneys can’t complain, right?
Almost every comment (five) so far is against this: 'An incredibly cynical attempt at spin', 'How dare the New York Times demand access to our vault of everything-we-keep to figure out if we're a bunch of lying asses', etc.
In direct contrast: I fully agree with OpenAI here. We can have a more nuanced opinion than 'piracy to train AI is bad therefore refusing to share chats is bad', which sounds absurd but is genuinely how one of the other comments follows logic.
Privacy is paramount. People _trust_ that their chats are private: they ask sensitive questions, ones to do with intensely personal or private or confidential things. For that to be broken -- for a company to force users to have their private data accessed -- is vile.
The tech community has largely stood against this kind of thing when it's been invasive scanning of private messages, tracking user data, etc. I hope we can collectively be better (I'm using ethical terms for a reason) than the other replies show. We don't have to support OpenAI's actions in order to oppose the NYT's actions.
I suspect that many of those comments are from the Philosopher's Chair (aka bathroom), and are not aspiring to be literal answers but are ways of saying "OpenAI Bad". But to your point there should be privacy preserving ways to comply, like user anonymization, tailored searches and so on. It sounds like the NYT is proposing a random sampling of user data. But couldn't they instead do a random sampling of their most widely read articles, for positive hits, rather than reviewing content on a case by case basis?
I hadn't heard of the philosopher's chair before, but I laughed :) Yes, I think those views were one-sided (OpenAI Bad) without thinking through other viewpoints.
IMO we can have multiple views over multiple companies and actions. And the sort of discussions I value here on HN are ones where people share insight, thought, show some amount of deeper thinking. I wanted to challenge for that with my comment.
_If_ we agree the NYT even has a reason to examine chats -- and I think even that should be where the conversation is -- I agree that there should be other ways to achieve it without violating privacy.
> In direct contrast: I fully agree with OpenAI here. We can have a more nuanced opinion than 'piracy to train AI is bad therefore refusing to share chats is bad', which sounds absurd but is genuinely how one of the other comments follows logic.
These chats only need to be shared because:
- OpenAI pirated masses of content in the first place
- OpenAI refuse to own up to it even now (they spin the NYT claims as "baseless").
I don't agree with them giving my chats out either, but the blame is not with the NYT in my opinion.
> We don't have to support OpenAI's actions in order to oppose the NYT's actions.
Well, the NYT action is more than just its own. It will set a precedent if they win, which means other news outlets can get money from OpenAI as well. Which makes a lot of sense: after all, they have billions to invest in hardware, why not in content?
And what alternative do they have? Without OpenAI giving access to the source materials used (I assume this was already asked for because it is the most obvious route) there is not much else they can do. And OpenAI won't do that because it would prove the NYT's point and cause them to have to pay a lot to half the world.
It's important that this case is made, not just for the NYT but for journalism in general.
WTF with all these comments. Regardless of OpenAI's reputation and practices, I don't want the NYT or anyone else to see my conversations. I completely agree with OpenAI here.
> Q: Is the NYT obligated to keep this data private?
> A: Yes. The Times would be legally obligated at this time to not make any data public outside the court process.
The NY Times has built over a century a reputation for fiercely protecting its confidential sources. Why are they somehow less trustworthy than OpenAI is?
If the NY Times leaked the customer information to a third party, they'd be in contempt of court. On the other hand, OpenAI is bound only by their terms of service with its customers, which they can modify as they please.
I generally agree, but publicizing the data is only a small part of the risk. The NYT could use the data for journalism research, then perform parallel construction of it for the public news article:
For example, if they find Mayor X asking ChatGPT about fraud, porn, DUI, cancer diagnoses, murder, etc. - maybe even mentioning names, places, etc. - they could then investigate that issue, find other evidence, and publish that.
First, the logs are supposed to be anonymized before being sent over. Second, the court can order the company's lawyers to "firewall" the logs from the newsroom so that their journalists can't get access to it, under penalty of contempt and potential disbarment.
20M seems like a low number, and I'm guessing they were selected because they all used citations or similar content somewhere on the back end that would map to NYTimes content, as a result of a legal discovery request.
Also down to 20M from 120M per court order.
Sorry, but this seems a completely reasonable standard for discovery to me given the total lack of privacy on the platform - especially for free users.
Also sorry it probably means you’re going to owe a lot of money to the Times.
"How dare the New York Times demand access to our vault of everything-we-keep to figure out if we're a bunch of lying asses. We must resist them in the name of user privacy! Signed, the people who have scraped literally everything to incorporate it into the products we make."
OpenAI may be trying to paint themselves as the goody-two-shoes here, but they're not.
But that vault can contain conversations between me and ChatGPT, which I willingly had, but with the expectation that only OpenAI has access to them. Why should some lawyer working for the NYT have access to them? OpenAI is precisely correct, no matter what other motives could be there.
> We may use Personal Data for the following purposes: [...] To comply with legal obligations and to protect the rights, privacy, safety, or property of our users, OpenAI, or third parties.
OpenAI outright says it will give your conversations to people like lawyers.
If you thought they wouldn't give it out to third parties, you not only have not read OpenAI's privacy policy, you've not read any privacy policy from a big tech company (because all of them are basically maximalist "your privacy is important, we'll share your data only with us and people who we deem worthy of it, which turns out to be everybody.")
> but with the expectation that only openai has access to it
You can argue about "the expectation" of privacy all you want, but this is completely detached from reality. My assumption is that almost no third parties I share information with have magic immunity that prevents the information from being used in a legal action involving them.
Maybe my doctor? Maybe my lawyer? IANAL but I'm not even confident in those. If I text my friend saying their party last night was great and they're in court later and need to prove their whereabouts that night, I understand that my text is going to be used as evidence. That might be a private conversation, but it's not my data when I send it to someone else and give them permission to store it forever.
> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
The constitution is clear that the purpose of intellectual property is to promote progress. I feel that OpenAI is on the right side of that, and this is not IP theft as long as they aren't reproducing others' work in a non-transformative way.
Training the AI is clearly transformative (and lossy to boot). Giving the AI the ability to scrape and paraphrase others' work is less clear, and both sides have valid arguments. I don't envy the judges that must make that call.
This is BS. It’s like saying “We robbed a jewelry store and sold the jewelry. Now the police are poking around to see if anyone is wearing the jewelry we stole. Blasphemy! But don’t worry we will protect your privacy!”
Of course the Times wants more evidence that the content OpenAI allegedly stole is ending in things OpenAI is selling.
It's more like a torrent tracker telling users that a newspaper wants to know what people are torrenting because they "claim" people are torrenting the newspaper, but investigating this would be an invasion of privacy of the users of the torrent tracker.
This isn't even a hyperbole. It's literally the same thing.
> The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations. They claim they might find examples of you using ChatGPT to try to get around their paywall.
Let me rewrite this without propaganda:
Despite spending hundreds of millions of dollars on lawyers, we couldn't persuade the judge that our malfeasance should be kept from the light of day.
Man, maybe I'm getting old and jaded, but it's not often that I read a post that literally makes my skin crawl.
This is so transparently icky. "Oh woe is us! We're being sued and we're looking out for YOU the user, who is definitely not the product. We are just a 'lil 'ol (near) trillion-dollar business trying to protect you!"
Come ON.
Look, I don't actually know who's in the right in the OAI vs. NYT dispute, and frankly I personally lean more toward the side that says you are allowed to train models on the world's information as long as you consume it legally and don't violate copyright.
But this transparent attempt to get user sympathy under insanely disingenuous pretenses is just absurd.
OpenAI has seemingly done everything they can to put publishers in a position to make this demand, and they've certainly not done anything to make it impossible for them to respond to it. Is there a better, more privacy minded way for NYT to get the data they need? Probably, I'm not smart enough to understand all the things that go into such a decision. But I know I don't view them as the villain for asking, and I also know I don't view OpenAI as some sort of guardian of my or my data's best interests.
The NYT used to market itself to advertisers with the observation that "our readers have the highest disposable income of any paper in the US".
It gives an interesting insight into politics and the modern Democrat party that the newspaper of the wealthy leans so strongly left. This was even before Trump came to power.
Cynicism aside, this seems like an attempt to prune back a potentially excessive legal discovery demand by appealing to public opinion.
> The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations. They claim they might find examples of you using ChatGPT to try to get around their paywall.
If there's one thing I've learned about Sam Altman it's that he's a shrewd political manipulator and every public move is in service of a hidden agenda[1]. What is it here?
- Is it part of a slow process of eroding public expectations of data privacy while blaming it on an external actor?
- Is it to undermine trust in traditional media, in an effort to increase dependence on AI companies as a source of truth?
- Is something else I'm not seeing?
I'm guessing it's all three of these?
[1] Those emails that came up in the suit with Elon Musk, followed by his eventual complete takeover of OpenAI, and the elaborate process of getting himself installed as chairman of the Reddit board to get the original founders back in control are prominent examples.
>They claim they might find examples of you using ChatGPT to try to get around their paywall.
Is this a joke? We all know people do this. There is no "might" in it. They WILL find it.
OpenAI is trying to make it look like this is a breach of users' privacy, when the reality is that it's operating like a pirate website, and if it were investigated, that would be proven.
I'm sorry, but we've made a lot of conversations illegal and pretended like that was all right. I'm sure we've made advising people how to dodge paywalls illegal as part of DMCA and/or some anti-hacking law, or some other garbage. I'm also sure that you run an automated service that will advise and has advised people on how to dodge paywalls. Even if there are exceptions for individuals giving advice to friends, or people giving advice for free, you are neither of those: you are a profit-making paid corporation that is automating this process which may be illegal. You may be a hacking endorser, a hacking advisor, and a hacking tool.
Under those circumstances, why wouldn't NYT have a case? I advise everybody who employs some sort of DRM or online system that limits access to ask for every chat that every one of these companies has ever had with anyone. Why are they the only people who get to break copyright and hacking laws? Why are they the only people who get to have private conversations?
I might also check if any LLMs have ever endorsed terrorist points of view (or banned political parties) during a chat, because even though those points of view may be correct (depending on the organization), endorsing them may be illegal and make you subject to sanctions or arrest. If people can't just speak, certainly corporate LLMs shouldn't be able to.
OpenAI is so full of shit, this is incredible. There is a protective order and the logs are anonymized. Yet they would happily give this all to the gov't under a warrant. Incredibly self serving bs from them. The court ordered the production, I'm not sure what OpenAI is even trying to sell people exactly.
If Donald Trump used this OpenAI product to-- who knows-- brainstorm Truth Social content, and his chats were produced to the NYT as well as its consultants and lawyers, who would believe Mr. Trump's content remained secure, confidential and protected from misuse against his wishes?
That's simply a function of the fact it's a controversial news organization running a dragnet on private communications to a technology platform.
"The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations."
As might any plaintiff. NYT might be the first of many, and the lawsuits may not be limited to copyright claims.
Why has OpenAI collected and stored 20 million conversations (including "deleted chats")?
What is the purpose of OpenAI storing millions of private conversations?
By contrast, the purpose of NYT's request is both clear and limited.
The documents requested are not being made public by the plaintiffs. They will presumably be redacted to protect any confidential information before being produced to the plaintiffs; they can only be used by the plaintiffs for the purpose of the litigation against OpenAI; and, unlike OpenAI, which has collected and stored these conversations for as long as it desires, the plaintiffs are prohibited from retaining copies of the documents after the litigation is concluded.
The privacy issue here has been created by OpenAI for their own commercial benefit.
It is not even clear what this benefit, if any, will be as OpenAI continues to search for a "business model".
Wanton data collection
NB. There is no order to "collect". The order is to preserve what is already being collected and stored in the ordinary course of business
https://ia801404.us.archive.org/31/items/gov.uscourts.nysd.6...
https://ia801404.us.archive.org/31/items/gov.uscourts.nysd.6...
The two documents you linked are responses to specific parts of OpenAI's objection. They're not good sources for the original order.
Nevertheless, you're generally correct but you don't realize why: A core feature of ChatGPT is that it keeps your conversation history right there so you can click on it, review it, and continue conversations across all of your devices. The court order is to preserve what is already present in the system even if the user asks to delete it.
For those who are confused: A core feature of ChatGPT and other LLM accounts is that your past conversations are available to return to, until you specifically delete them. The problem now is that if a user asks for a conversation to be deleted, OpenAI has to retain it under the court order even though it appears deleted.
No it's not. It's literally a court order mandating them to collect this data.
- [1] https://arstechnica.com/tech-policy/2025/08/openai-offers-20...
This article says nothing of the sort. The court order is to preserve existing logs they already have (not to disable logging) and to hand those logs over to the plaintiffs. OpenAI's objections are mainly that 1) there are too many logs (so they're proposing a sample instead) and 2) there's identifying data in the logs, so they are being "forced" to anonymize the logs at their expense (even though that's what they want as a condition of transferring the logs).
There is nothing in the article that mentions OpenAI being forced to create new logs they don't already have.
This is an excellent article and source. Thank you.
>What is the purpose of OpenAI storing millions of private conversations
Have you used ChatGPT? Your conversation history is on the left rail
> What is the purpose of OpenAI storing millions of private conversations
It's needed for the conversation history feature, a core feature of the ChatGPT product.
It's like saying "What is the purpose of Google Photos storing millions of private images"
This is true but why retain deleted conversations?
Is there a technical limitation that prevents chat histories from being stored locally on the user's computer instead of being stored on someone else's computer(s)?
Why do chat histories need to be accessible by OpenAI, its service partners, and anyone with the authority to request them from OpenAI?
Presumably for cross-device interactivity. If I interact with ChatGPT on my phone, then open it on my desktop, I might be a bit frustrated that I can't get to the chat I was having on my phone previously.
OpenAI could store the chat conversation in an encrypted format that only you, the user, can decrypt, with the client-side determining the amount of previous messages to include for additional context, but there's plenty of user overhead involved in an undertaking like that (likely a separate decryption password would be needed to ensure full user-exclusive access, etc).
I'd appreciate and use a feature like that, but I doubt most "average" users would care.
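A minimal sketch of that idea, assuming a password-derived key held only by the client (the parameters, storage format, and recovery story are all hand-waved here):

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password: str, salt: bytes) -> bytes:
        # Derive a symmetric key from a passphrase the server never sees.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    salt = os.urandom(16)  # stored alongside the ciphertext
    f = Fernet(key_from_password("user-only passphrase", salt))

    ciphertext = f.encrypt(b"chat turn: how do I negotiate a raise?")  # what the server would store
    plaintext = f.decrypt(ciphertext)  # only a client holding the passphrase can do this

The catch is exactly the overhead mentioned above: lose the passphrase and the history is gone, and the server can no longer do anything useful with the contents (search, memory, abuse review).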
> Is there a technical limitation that prevents chat histories from being stored locally on the user's computer
People access ChatGPT through different interfaces: Web, desktop app, their phones, tablets.
Therefore the conversations are stored on the servers. It's really not some hidden plot against users to steal their data. It's just how most users expect their apps to work.
They're very valuable data, and it's convenient to log in to see a previous chat.
If you have ever played with the API, it's clear as day that the protocol itself is stateless.
> What is the purpose of OpenAI storing millions of private conversations
Your previous ChatGPT conversations show up right in the ChatGPT interface.
They have to store the private conversations to enable users to bring them up in the interface.
This isn't a secretive, hidden data collection. It's a clear and obvious feature right in the product. They're fighting for the ability to not retain secret records of past conversations that have been deleted.
The problem with the court order is that it requires them to keep the conversations even after a user presses the 'Delete' button on them.
If OpenAI hadn't used data from the NYT without permission in the first place this wouldn't have happened. That is the root cause of all this.
I'm glad the NYT is fighting them. They've infringed the rights of almost every news outlet but someone has to bring this case.
Exactly. And then there's the OpenAI corporate speak, acting like they give a shit about our best interests. Give me a break, Sam Altman. How stupid do you think everyone is?
They have proven that they are the most untrustworthy company on the planet.
And this isn't AI fear speaking. This is me raging at Sam Altman for spreading so much fear, uncertainty, and doubt just to get investments. The rest of us have had to suffer for the last two years, worrying about losing our jobs, only to find out the AGI lie is complete bullsh*t.
To me, no company has the customers’ best interests in mind. This whole thing is akin to when Apple was refusing to unlock phones for the FBI. Of course, Apple profits by having people think that they take privacy seriously, and they demonstrate it by protecting users’ privacy. Same thing here; OpenAI needs chats to have some expectation of privacy, especially because a large use case of AI is personal advice on things. So they are fighting to make sure it's true.
NYT on the internet is pure garbage. I'm really sad that when they challenged Google they didn't just drop and ban them from the index. There's zero reason for NYT to pollute the internet for me and everyone like me that won't ever pay them a cent.
NYT links leading to a paywall are the worst kind of spam.
They should sell their stuff by mail if they hate open culture so much.
It's absolutely disgusting how they are allowed to freeload on the attention that the open culture provides, contributing nothing but short garbage blurbs that are worse than those generated by LLMs.
I wouldn't want to make it out like I think OpenAI is the good guy here. I don't.
But conversations people thought they were having with OpenAI in private are now going to be scoured by the New York Times' lawyers. I'm aware of the third party doctrine and that if you put something online it can never be actually private. But I think this also runs counter to people's expectations when they're using the product.
In copyright cases, typically you need to show some kind of harm. This case is unusual because the New York Times can't point to any harm, so they have to trawl through private conversations OpenAI's customers have had with their service to see if they can find any.
It's quite literally a fishing expedition.
I get the feeling, but that's not what this is.
NYTimes has produced credible evidence that OpenAI is simply stealing and republishing their content. The question they have to answer is "to what extent has this happened?"
That's a question they fundamentally cannot answer without these chat logs.
That's what discovery, especially in a copyright case, is about.
Think about it this way. Let's say this were a book store selling illegal copies of books. A very reasonable discovery request would be "Show me your sales logs". The whole log needs to be produced otherwise you can't really trust that this is the real log.
That's what NYTimes lawyers are after. They want the chat logs so they can do their own searches to find NYTimes text within the responses. They can't know how often that's happened and OpenAI has an obvious incentive to simply say "Oh that never happened".
And the reason this evidence is relevant is it will directly feed into how much money NYT and OpenAI will ultimately settle for. If this never happens then the amount will be low. If it happens a lot the amount will be high. And if it goes to trial it will be used in the damages portion assuming NYT wins.
The user has no right to privacy. The same as how any internet service can be (and have been) compelled to produce private messages.
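As an aside, "doing their own searches to find NYTimes text within the responses" is mechanically straightforward at log scale. A toy sketch of the kind of verbatim-overlap check an expert might run over anonymized logs (the window size and threshold are invented for illustration):

    def shingles(text: str, n: int = 8) -> set:
        # Overlapping n-word windows ("shingles") of the text.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(article: str, response: str, n: int = 8) -> float:
        # Fraction of the article's n-grams reproduced verbatim in a response.
        a, b = shingles(article, n), shingles(response, n)
        return len(a & b) / len(a) if a else 0.0

    def flag_matches(article: str, responses: list, threshold: float = 0.3) -> list:
        # Responses that reproduce a large share of the article word for word.
        return [r for r in responses if overlap_score(article, r) >= threshold]

Whether a court would accept any particular method like this is a separate question; the point is only that the analysis itself isn't exotic.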
>That's what NYTimes lawyers are after. They want the chat logs so they can do their own searches to find NYTimes text within the responses.
The trouble with this logic is NYT already made that argument and lost as applied to an original discovery scope of 1.4 billion records. The question now is about a lower scope and about the means of review, and proposed processes for anonymization.
They have a right to some form of discovery, but not to a blank-check extrapolation that sidesteps legitimate privacy issues raised both in OpenAI's statement and throughout this thread.
> The user has no right to privacy
The correct term for this is prima facie right.
You do have a right to privacy (arguably) but it is outweighed by the interest of enforcing the rights of others under copyright law.
Similarly, liberty is a prima facie right; you can be arrested for committing a crime.
> NYTimes has produced credible evidence that OpenAI is simply stealing and republishing their content. The question they have to answer is "to what extent has this happened?"
Credible to whom? In their supposed "investigation", they sent a whole page of text and complex pre-prompting and still failed to get the exact content back word for word. Something users would never do anyways.
And that's probably the best they've got as they didn't publish other attempts.
You don't hate the media nearly enough.
"Credible" my ass. They hired "experts" who used prompt engineering and thousands of repetitions to find highly unusual and specific methods of eliciting text from training data that matched their articles. OpenAI has taken measures to limit such methods and prevent arbitrary wholesale reproduction of copyrighted content since that time. That would have been the end of the situation if NYT was engaging in good faith.
The NYT is after what they consider "their" piece of the pie. They want to insert themselves as middlemen - pure rent seeking, second hander, sleazy lawyer behavior. They haven't been injured, they were already dying, and this lawsuit is a hail mary attempt at grifting some life support.
Behavior like that of the NYT is why we can't have nice things. They're not entitled to exist, and by engaging in behavior like this, it makes me want them to stop existing, the faster, the better.
Copyright law is what you get when a bunch of lawyers figure out how to encode monetization of IP rights into the legal system, having paid legislators off over decades, such that the people who make the most money off of copyrights are effectively hoarding those copyrights and never actually produce anything or add value to the system. They rent-seek, gatekeep, and viciously drive off any attempts at reform or competition. Institutions that once produced valuable content instead coast on the efforts of their predecessors and invest the proceeds into lawsuits, lobbying, and the purchase of more IP.
They - the NYT - are exploiting a finely tuned and deliberately crafted set of laws meant to screw actual producers out of percentages. I'm not a huge OpenAI fan, but IP laws are a whole different level of corrupt stupidity at the societal scale. It's gotcha games all the way down, and we should absolutely and ruthlessly burn down that system of rules and salt the ground over it. There are trivially better systems that can be explained in a single paragraph, instead of requiring books worth of legal code and complexities.
> The user has no right to privacy. The same as how any internet service can be (and have been) compelled to produce private messages.
The legal term is "expectation of privacy", and it does exist, albeit increasingly weakly in the US. There are exceptions to that, such as a subpoena, but that doesn't mean anyone can subpoena anything for any reason. There has to be a legal justification.
It's not clear to me that such a justification exists in this case.
> The user has no right to privacy. The same as how any internet service can be (and have been) compelled to produce private messages.
This is nonsense. I’ve personally been involved in these things, and fought to protect user privacy at all levels and never lost.
> In copyright cases, typically you need to show some kind of harm.
NYT is suing for statutory copyright infringement. That means you only need to demonstrate the infringement itself, since the infringement alone is considered harm; actual harm only matters if you're suing for actual damages.
This case really comes down to the very unsolved question of whether or not AI training and regurgitation is copyright infringement, and if so, whether it's fair use. The actual ways the AI is being used are thus very relevant for the case, and totally within the bounds of discovery. Of course, OpenAI has also been engaging in this lawsuit with unclean hands in the first place (see some of their earlier discovery dispute fuckery), and they're one of the companies with the strongest "the law doesn't apply to us because we're AI and big tech" swagger.
NYT doesn't care about regurgitation. When it was doable, it was spotty enough that no one would rely on it. And now the "trick" doesn't even work anymore (you would paste the start of an article and ChatGPT would continue it).
What they want is to kill training and, moreover, prevent the loss of their position as the middleman between events and users.
> This case is unusual because the New York Times can't point to any harm
It helps to read the complaint. If that was the case, the case would have been subject to a Rule 12(b)(6) (failure to state a claim for which relief can be granted) challenge and closed.
Complaint: https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec20...
See pages 60ff.
It's a part of privacy policy boilerplate that if a company is compelled by the courts to give up its logs it'll do it. I'm sure all of OpenAI's users read that policy before they started spilling their guts to a bot, right? Or at least had an LLM summarize it for them?
This is it isn't it? For any technology, I don't think anyone should have the expectation of privacy from lawyers if the company who has your data is brought to court
The original lawsuit has lots of examples of ChatGPT (3.5? 4?) regurgitating article...snippets. They could get a few paragraphs with ~80-90% perfect replication. But certainly not full articles, with full accuracy.
This wasn't solid enough for a summary judgment, and it seems the labs have largely figured out how to stop the models from doing this. So it looks like NYT wants to comb all user chats rather than pay a team of people tens of thousands of dollars a day to try and coax articles out of ChatGPT-5.
Yeah, everyone else in the comments so far is acting emotionally, but --
As a fan and DAU of both OpenAI and the NYT, this is just a weird discovery demand and there should be another pathway for these two to move fwd in this case (NYT to get some semblance of understanding, OAI protecting end-user privacy).
It sounds like the alternate path you're suggesting is for NYT to stop being wrong and let OpenAI continue being right, which doesn't sound much like a compromise to me.
To show harm they need proof; that is the point of the lawsuit. They have sufficient evidence that OpenAI was scraping the web and the NY Times.
When Altman says "They claim they might find examples of you using ChatGPT to try to get around their paywall." he is blatantly misrepresenting the case.
https://smithhopen.com/2025/07/17/nyt-v-openai-microsoft-ai-...
"The lawsuit focuses on using copyrighted material for AI training. The NYT says OpenAI and Microsoft copied vast amounts of its content. They did this to build generative AI tools. These tools can output near-exact copies of NYT articles. Therefore, the NYT argues this breaks copyright laws. It also hurts journalism by skipping paywalls and cutting traffic to original sites. The complaint shows examples where ChatGPT mimics NYT stories closely. This could lead to money loss and harm from AI errors, called hallucinations."
This has nothing to do with the users, it has everything to do with OpenAI profiting off of pirated copyrighted material.
Also, Altman is getting scared because the NY Times proved to the judge that ChatGPT copied many articles:
"2025 brings big steps in the case. On March 26, 2025, Judge Sidney Stein rejected most of OpenAI’s dismissal motion. This lets the NYT’s main copyright claims go ahead. The judge pointed to “many” examples of ChatGPT copying NYT articles. He found them enough to continue. This ruling dropped some side claims, like unfair competition. But it kept direct and contributory infringement, plus DMCA breaches."
Training has sometimes been held to be fair use under certain circumstances, but in determining fair use, one of the four factors that is considered is how it affects the market for the work being infringed. I would expect that determining to what degree it's regurgitating the New York Times' content is part of that analysis.
>But conversations people thought they were having with OpenAI in private
...had never been private in the first place.
not only is the data used for refining the models, OpenAI had also shariah policed plenty of people for generating erotica.
Yeah, I don't get why more people don't understand this - why would you think your conversation was private when it wasn't actually private? Have you not been paying attention?
> OpenAI had also shariah policed plenty of people for generating erotica.
That framing is rhetorically brilliant if you think about it. I will use that more. Chat Sharia Law for Chat Control. Mass Sharia Surveillance from flock etc.
This is about private chats, which are not used for training and only stored for 30 days.
Also, you need to understand that for huge corps like OpenAI, lying in your ToS will do orders of magnitude more damage to your brand than what you would gain by training on <1% more user chats. So no, they are not lying when they say they don't train on private chats.
100% agreed. In the time you wrote this, I also posted: https://news.ycombinator.com/item?id=45901054
I felt quite some disappointment with the comments I saw on the thread at that time.
Please correct me if I am wrong, but couldn't OpenAI just encrypt every conversation before saving it? With each query to the model the full conversation is fed into the model again, so I guess there is no technical need to store them unencrypted. Unless, of course, OpenAI wants to analyze the chats.
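Roughly what I have in mind is something like this minimal sketch, assuming a user-supplied passphrase and the Python cryptography library; none of this is OpenAI's actual design, and the function names are made up for illustration:

    import json
    from base64 import urlsafe_b64encode
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        # Derive a 32-byte key from the user's passphrase client-side;
        # the salt is not secret and could live on the server.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return urlsafe_b64encode(kdf.derive(passphrase.encode()))

    def encrypt_conversation(messages: list[dict], key: bytes) -> bytes:
        # Serialize and encrypt the whole conversation before upload,
        # so the server only ever stores ciphertext.
        return Fernet(key).encrypt(json.dumps(messages).encode())

    def decrypt_conversation(blob: bytes, key: bytes) -> list[dict]:
        # Only a client holding the passphrase-derived key can read it back.
        return json.loads(Fernet(key).decrypt(blob))

The obvious catch is that the server then can't read the history either, so anything that depends on server-side access (search, memory, abuse review) would have to move to the client.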
The way I see it, the problem is that OpenAI employees can look at the chats and the fact that some NYT lawyer can look at it doesn't make me more uncomfortable. Insane argumentation. It's like saying an investigator with a court-order should not be allowed to look at stored copies of letters, although the company sending those letters a) looks at them regularly b) stores these copies in the first place.
>With each query to the model the full conversation is fed into the model again, so I guess there is no technical need to store them unencrypted.
I am pretty sure this isn't true. They have to have some sort of K-V cache system to make continuing conversations cheaper.
Encryption that you have the keys to won't save you from a court order
> "The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations."
Private? Aren’t they stored in a third party server, subject to OpenAI terms of service and all sorts of relevant laws?
This screams just as genuine as Google saying anything about Privacy.
Both companies are clearly wrong here. There is a small part of me that kinda wants OpenAI to lose this, just so maybe it will be a wake-up call to people putting way too much personal information into these services? Am I too hopeful here that people will learn anything...
Fundamentally I agree with what they are saying though, just don't find it genuine in the slightest coming from them.
It's clearly propaganda. "Your data belongs to you." I'm sure the ToS says otherwise, as OpenAI likely owns and utilizes this data. Yes, they say they are working on end-to-end encryption (whatever that means when they control one end), but that is just a proposal at this point.
Also their framing of the NYT intent makes me strongly distrust anything they say. Sit down with a third party interviewer who asks challenging questions, and I'll pay attention.
"Your data belongs to you" but we can take any of your data we can find and use it for free for ever, without crediting you, notifying you, or giving you any way of having it removed.
>your data belongs to you
…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”
Edit: honestly I’m surprised I left out the bit where they just indiscriminately scraped everything they could online to train these models. The stones to go “your data belongs to you” as they clearly feel entitled to our data is unbelievably absurd
I got one sentence in and thought to myself, "This is about discovery, isn't it?"
And lo, complaints about plaintiffs started before I even had to scroll. If this company hadn't willy-nilly done everything they could to vacuum up the world's data, wherever it may be, however it may have been protected, then maybe they wouldn't be in this predicament.
Honestly the sooner OpenAI goes bankrupt the better. Just a totally corrupt firm.
I really should take the "invest in companies you hate" advice seriously.
An incredibly cynical attempt at spin from a former non-profit that renounced its founding principles. A class act, all around.
"Heartbreaking: The worst person you know just made a great point."
Can I just say that everyone sucks here and I hope they both lose somehow?
OpenAI is deservedly getting a beating in this HN comments section, but are there any comments about NYT overreach and what it means in general?
And what if they for example find evidence of X other thing such as:
1. Something useful for a story, maybe they follow up in parallel. Know who to interview and what to ask?
2. A crime.
3. An ongoing crime.
4. Something else they can sue someone else for.
5. Top secret information
> 5. Top secret information
https://en.wikipedia.org/wiki/Pentagon_Papers
1. That sounds useful.
2. That sounds useful.
3. That sounds useful.
4. That sounds useful.
5. That sounds useful.
Are these supposed to be examples of things that shouldn't be found out about? This has to be the worst pro-privacy argument I've ever seen on the internet. "Privacy is good because they will find out about our crimes"
I’ll trust the people not asking for a Government bailout thank you very much.
So much talk about privacy and how this is my private data that the NYT has no right to access.
If this is truly my data then it should be okay for me to download it and train my own model on it right?
Nope, that would explicitly be disallowed under the terms OpenAI has made me sign and they would ban my account and maybe even sue me for it.
So yeah, they are full of shit.
> Trust, security, and privacy guide every product and decision we make.
-- openai
- any corporation
Remember, a corporation is generally an object owned by some group of people. Do you trust "unspecified future group of people" with your privacy? You can't. The best we can do is understand the information architecture and act accordingly.
> Trust, security, and privacy guide every product and decision we make except ones that involve money.
-- openai, probably.
You know you have a branding problem when (1) you have to say that at the outset, and (2) it induces more eyerolls than a gaggle of golf dads.
The same with Google "don't be evil" these days.
Stopped reading at this line
So why aren’t they offering for an independent auditor to come into OpenAI and inspect their data (without taking it outside of OpenAI’s systems)?
Probably because they have a lot to hide, a lot to lose, and no interest in fair play.
Theoretically, they could prove their tools aren't being used to do anything wrong, but practically, we all know they can't because they are actually in the wrong (in both the moral and, IMO though IANAL, the legal sense). They know it, we know it; the only problem is breaking the ridiculous walled garden that stops the courts from 'knowing' it.
By the same token, why isn't NYT proposing something like that rather than the world's largest random sampling?
You don't have to think that OpenAI is good to think there's a legitimate issue over exposing data to a third party for discovery. One could see the Times discovering something in private conversations outside the scope of the case, but through their own interpretation of journalistic necessity, believe it's something they're obligated to publish.
Part of OpenAI holding up their side of the bargain on user data, to the extent they do, is that they don't roll over like a beaten dog to accommodate unconditional discovery requests.
>By the same token, why isn't NYT proposing something like that rather than the world's largest random sampling?
It's OpenAI's data, there is a protective order in the case and OpenAI already agreed to anonymize it all.
>Part of OpenAI holding up their side of the bargain on user data, to the extent they do, is that they don't roll over like a beaten dog to accommodate unconditional discovery requests.
lol... what?
Why should OpenAI keep those conversations in the first place? (Of course the answer is obvious.) If they didn't keep them, they wouldn't have anything to hand over, and they would have protected users' privacy MUCH better. They care about their users' privacy about as much as Facebook or Google do.
They didn't keep temporary chats. They were ordered to keep those as part of this case.
>They didn't keep temporary chats
I thought they did? The warning currently says
>This chat won't appear in history, use or update ChatGPT's memory, or be used to train our models. For safety purposes, we may keep a copy of this chat for up to 30 days.
But AFAIK it was this way before the lawsuit as well.
If OpenAI has to get to this level of pitch, herding its users against their opponent in a legal case, I think they have already lost the battle and reputation. What are they expecting users to do? Revolt against the courts and newspapers?
Wondering if anyone here has a good answer to this:
what protection does user data typically have during legal discovery in a civil suit like this where the defendant is a service provider but relevant evidence is likely present in user data?
Does a judge have to weigh a user's expectation of privacy against the request? Do terms of service come into play here (who actually owns the data? what privacy guarantees does the company make?).
I'm assuming in this case that the request itself isn't overly broad and seems like a legitimate use of the discovery process.
it is dramatically determined by the state and the judge
Says the people who scraped as much private information as they could get their hands on to train their bots in the first place.
Hypocrisy at best; this wall of text is not even penned by a human, and yet they want us to believe they care about user privacy...
I fully believe that OpenAI is essentially stealing the work of others by training their models on it without permission. However, giving a corporation infamous for promoting authoritarianism full access to millions of private conversations is not the answer.
OpenAI is right here. The NYT needs to prove their case another way.
> giving a corporation infamous for promoting authoritarianism
The NYT is certainly open to criticism along many fronts, but I don't have the slightest idea what you mean in claiming it promotes authoritarianism.
Well, the sponsors of the 1619 Project really don’t have a leg to stand on when it comes to ethics.
1 reply →
I'll bet you're right in some cases. I don't think that it is as pervasive as it has been made out to be though, but the argument requires some framing and current rules, regulation, and laws aren't tuned to make legal sense of this. (This is a little tangential, because the complaint seems to be about getting ChatGPT to reproduce content verbatim to a third party.)
There are two things I think about:
First, and generally, an AI ought to be able to ingest content like news articles because it's beneficial for users of AI. I would like to question an AI about current events.
Secondly, however, the legal mechanism by which it does that isn't clear. I think it would be helpful if these outlets would provide the information as long as the AI won't reproduce the content verbatim. If that does not happen, then another framing might liken the AI ingestion as an individual going to the library to read the paper. In that case, we don't require the individual to retroactively pay for the experience or unlearn what he may have learned while at the library.
> infamous for promoting authoritarianism
what are you referencing here?
Well, the court disagrees with you and found that this is evidence the NYT needs to prove its case. No surprise, considering it's direct evidence of exactly what OpenAI is claiming in its defense...
What a joke. It's like burglarizing someone's house and then calling the cops when someone else takes your ill-gotten gains.
Can this legal principle be used on Gmail too?
Of course this principle applies to Gmail too, if you’re willing to accept the absurdity. I could copy-paste copyrighted NYT snippets into emails and send them to everyone I know. Under the same logic, the NYT would be entitled to have access to everyone's Gmail account in order to verify who's sending what and get compensated if anyone is infringing their copyright.
That’s not justice. That’s legal extortion.
I get that people are angry at OpenAI. But let’s not confuse outrage over one company with support for broken systems. Patent and copyright trolls thrive when we normalize overreach, whether it’s AI training data or email threads. If we let corporations weaponize IP law to control every digital whisper, we’re not protecting creators, we’re burying free expression under a mountain of lawsuits.
If the information is really that sensitive, why did they keep it in the first place?
> Each week, 800 million people use ChatGPT to think...
I think I have enough with the first sentence, no need to read more. The narration is clear, we are the brain and no one can stop us.
If it's about* proving that people are getting around the paywall with OpenAI, won't it be much easier to prove this with a live reproduction in the court?
* I am not too familiar with this matter and hence definitely am not rooting for one party or another. Asking this just out of technical curiosity.
As in every other dealing, OpenAI would have you believe they are so important that they are exempt from the legal discovery process.
Standard tech scaling playbook, page 69420: there is a function f(x) whereby if you're growing fast enough, you can ignore the laws, then buy the regulators. This is called "The Uber Curve"
psychopath Scam Altman does not give a rat's behind about your "privacy"; he is merely trying to keep the grift going and avoid responsibility for his unethical behavior (see also: Scarlett Johansson's voice)
One reason people make cynical, deceptive claims is that it doesn't impact their credibility later. The next time they say something, people don't respond with "well, you deceived us last time"; meanwhile, when the honest person says something, others don't give them any extra credibility.
That little bit of morality - truth, honesty, integrity, etc. - is essential to a functioning society that leans toward good outcomes. (Often it seems that many just assume we'll get good outcomes, not that they must work hard to make it happen.)
your data belongs to you, just like our data about you belongs to us.
I keep asking ChatGPT how to get NYT articles for free and then add lots of vulgar murderous things about their lawyers in the same message. It’s a private thought to an AI, so the attorneys can’t complain, right?
It’s a mystery to me why companies that know they’re pushing a line of fair use or regulation are suddenly “surprised” when they get sued.
They could’ve asked permission. They could have worked with content providers instead of scraping. But they didn’t - and they knew what could happen.
FA (with fair use boundaries) and FO
Almost every comment (five) so far is against this: 'An incredibly cynical attempt at spin', 'How dare the New York Times demand access to our vault of everything-we-keep to figure out if we're a bunch of lying asses', etc.
In direct contrast: I fully agree with OpenAI here. We can have a more nuanced opinion than 'piracy to train AI is bad therefore refusing to share chats is bad', which sounds absurd but is genuinely how one of the other comments follows logic.
Privacy is paramount. People _trust_ that their chats are private: they ask sensitive questions, ones to do with intensely personal or private or confidential things. For that to be broken -- for a company to force users to have their private data accessed -- is vile.
The tech community has largely stood against this kind of thing when it's been invasive scanning of private messages, tracking user data, etc. I hope we can collectively be better (I'm using ethical terms for a reason) than the other replies show. We don't have to support OpenAI's actions in order to oppose the NYT's actions.
I suspect that many of those comments are from the Philosopher's Chair (aka bathroom), and are not aspiring to be literal answers but are ways of saying "OpenAI Bad". But to your point there should be privacy preserving ways to comply, like user anonymization, tailored searches and so on. It sounds like the NYT is proposing a random sampling of user data. But couldn't they instead do a random sampling of their most widely read articles, for positive hits, rather than reviewing content on a case by case basis?
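Concretely, the kind of "positive hit" check I have in mind might look like this rough sketch; the n-gram length and threshold are arbitrary, and this is certainly not how either party actually runs discovery:

    def ngrams(text: str, n: int = 12) -> set[tuple[str, ...]]:
        # Word-level n-grams; long exact matches are unlikely to be coincidence.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(article: str, chat: str, n: int = 12) -> float:
        # Fraction of the article's n-grams that appear verbatim in the chat log.
        a, c = ngrams(article, n), ngrams(chat, n)
        return len(a & c) / len(a) if a else 0.0

    # Flag chats whose verbatim overlap with a sampled article crosses a threshold:
    # flagged = [chat for chat in chats if overlap_score(article_text, chat) > 0.05]

That way the review starts from the NYT's own articles rather than from reading user conversations case by case.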
I hadn't heard of the philosopher's chair before, but I laughed :) Yes, I think those views were one-sided (OpenAI Bad) without thinking through other viewpoints.
IMO we can have multiple views over multiple companies and actions. And the sort of discussions I value here on HN are ones where people share insight, thought, show some amount of deeper thinking. I wanted to challenge for that with my comment.
_If_ we agree the NYT even has a reason to examine chats -- and I think even that should be where the conversation is -- I agree that there should be other ways to achieve it without violating privacy.
> The tech community has largely stood against this kind of thing when it's been invasive scanning of private messages, tracking user data
The tech community has been doing the scanning and tracking.
> In direct contrast: I fully agree with OpenAI here. We can have a more nuanced opinion than 'piracy to train AI is bad therefore refusing to share chats is bad', which sounds absurd but is genuinely how one of the other comments follows logic.
These chats only need to be shared because:
- OpenAI pirated masses of content in the first place
- OpenAI refuse to own up to it even now (they spin the NYT claims as "baseless").
I don't agree with them giving my chats out either, but the blame is not with the NYT in my opinion.
> We don't have to support OpenAI's actions in order to oppose the NYT's actions.
Well the NYT action is more than just its own. It will set a precedent if they win which means other news outlets can get money from OpenAI as well. Which makes a lot of sense, after all they have billions to invest in hardware, why not in content??
And what alternative do they have? Without OpenAI giving access to the source materials used (I assume this was already asked for because it is the most obvious route) there is not much else they can do. And OpenAI won't do that because it will prove the NYT point and will cause them to have to pay a lot to half the world.
It's important that this case is made, not just for the NYT but for journalism in general.
WTF is with all these comments. Regardless of OpenAI's reputation and practices, I don't want NYT or anyone else to see my conversations. I completely agree with OpenAI here.
Maybe they should release some kind of NYT browser add-on, so users can cooperatively share their OpenAI data?
OpenAI would/could say the data is biased (maybe even purposefully).
“NYTimes fights blatant and obvious copyright infringement with legal processes to assess damage” - another angle.
From the FAQ:
> Q: Is the NYT obligated to keep this data private?
> A: Yes. The Times would be legally obligated at this time to not make any data public outside the court process.
The NY Times has spent over a century building a reputation for fiercely protecting its confidential sources. Why are they somehow less trustworthy than OpenAI is?
If the NY Times leaked the customer information to a third party, they'd be in contempt of court. On the other hand, OpenAI is bound only by their terms of service with its customers, which they can modify as they please.
I generally agree, but publicizing the data is only a small part of the risk. The NYT could use the data for journalism research, then perform parallel construction of it for the public news article:
For example, if they find Mayor X asking ChatGPT about fraud, porn, DUI, cancer diagnoses, murder, etc. - maybe even mentioning names, places, etc. - they could then investigate that issue, find other evidence, and publish that.
First, the logs are supposed to be anonymized before being sent over. Second, the court can order the company's lawyers to "firewall" the logs from the newsroom so that their journalists can't get access to it, under penalty of contempt and potential disbarment.
That's an absolutely disgusting framing by openai. This really is about openai stealing.
20M seems like a low number and I’m guessing they all used citations or similar content somewhere on the back-end that would map to NYTimes content as a result of a legal discovery request.
Also down to 20M from 120M per court order.
Sorry, but this seems a completely reasonable standard for discovery to me given the total lack of privacy on the platform - especially for free users.
Also sorry it probably means you’re going to owe a lot of money to the Times.
"How dare the New York Times demand access to our vault of everything-we-keep to figure out if we're a bunch of lying asses. We must resist them in the name of user privacy! Signed, the people who have scraped literally everything to incorporate it into the products we make."
OpenAI may be trying to paint themselves as the goody-two-shoes here, but they're not.
But that vault can contain conversations between me and ChatGPT, which I willingly had, but with the expectation that only OpenAI has access to them. Why should some lawyer working for NYT have access to them? OpenAI is precisely correct, no matter what other motives could be there.
https://openai.com/policies/privacy-policy/
> We may use Personal Data for the following purposes: [...] To comply with legal obligations and to protect the rights, privacy, safety, or property of our users, OpenAI, or third parties.
OpenAI outright says it will give your conversations to people like lawyers.
If you thought they wouldn't give it out to third parties, you not only have not read OpenAI's privacy policy, you've not read any privacy policy from a big tech company (because all of them are basically maximalist "your privacy is important, we'll share your data only with us and people who we deem worthy of it, which turns out to be everybody.")
> but with the expectation that only openai has access to it
You can argue about "the expectation" of privacy all you want, but this is completely detached from reality. My assumption is that almost no third parties I share information with have magic immunity that prevents the information from being used in a legal action involving them.
Maybe my doctor? Maybe my lawyer? IANAL but I'm not even confident in those. If I text my friend saying their party last night was great and they're in court later and need to prove their whereabouts that night, I understand that my text is going to be used as evidence. That might be a private conversation, but it's not my data when I send it to someone else and give them permission to store it forever.
Always funny to see this kind of article behind a cookie banner. So much hypocrisy.
Another good reason to stay logged out when asking ChatGPT questions.
It's common and trivial to identify you by other means.
Indeed, but one more step (staying logged out) absolutely cannot hurt, and can help.
The heroic fight for privacy apparently includes having an ex-NSA director on the board and building user dossiers:
https://www.schneier.com/blog/archives/2025/06/what-llms-kno...
At some point they'll monetize these dossiers.
This is the basic discovery process when OpenAI commits IP theft. They're trying to misinform the public about how the justice process works.
> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
The constitution is clear that the purpose of intellectual property is to promote progress. I feel that OpenAI is on the right side of that, and this is not IP theft as long as they aren't reproducing others' work in a non-transformative way.
Training the AI is clearly transformative (and lossy to boot). Giving the AI the ability to scrape and paraphrase others' work is less clear, and both sides have valid arguments. I don't envy the judges who must make that call.
If they're reproducing NY Times articles, in full, then that is non-transformative. That's the point of the case.
This is BS. It’s like saying “We robbed a jewelry store and sold the jewelry. Now the police are poking around to see if anyone is wearing the jewelry we stole. Blasphemy! But don’t worry we will protect your privacy!”
Of course the Times wants more evidence that the content OpenAI allegedly stole is ending in things OpenAI is selling.
It's more like a torrent tracker telling users that a newspaper wants to know what people are torrenting because they "claim" people are torrenting the newspaper, but investigating this would be an invasion of privacy of the users of the torrent tracker.
This isn't even a hyperbole. It's literally the same thing.
No, it's not. OpenAI is a commercial enterprise selling the stolen data.
> The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations. They claim they might find examples of you using ChatGPT to try to get around their paywall.
Let me rewrite this without propaganda:
Despite spending hundreds of millions of dollars on lawyers, we couldn't persuade the judge that our malfeasance should be kept from the light of day.
Man, maybe I'm getting old and jaded, but it's not often that I read a post that literally makes my skin crawl.
This is so transparently icky. "Oh woe is us! We're being sued and we're looking out for YOU the user, who is definitely not the product. We are just a 'lil 'ol (near) trillion-dollar business trying to protect you!"
Come ON.
Look, I don't actually know who's in the right in the OAI vs. NYT dispute, and frankly I personally lean more toward the side that says you are allowed to train models on the world's information as long as you consume it legally and don't violate copyright.
But this transparent attempt to get user sympathy under insanely disingenuous pretenses is just absurd.
Why is it absurd? A conversation between me and ChatGPT can be read by a lawyer working for NYT, and that is what is absurd.
OpenAI has seemingly done everything they can to put publishers in a position to make this demand, and they've certainly not done anything to make it impossible for them to respond to it. Is there a better, more privacy minded way for NYT to get the data they need? Probably, I'm not smart enough to understand all the things that go into such a decision. But I know I don't view them as the villain for asking, and I also know I don't view OpenAI as some sort of guardian of my or my data's best interests.
"they're invading your privacy by requesting access to our invasion of your privacy!"
The NYT used to market itself to advertisers with the observation that "our readers have the highest disposable income of any paper in the US".
It gives an interesting insight into politics and the modern Democrat party that the newspaper of the wealthy leans so strongly left. This was even before Trump came to power.
Cynicism aside, this seems like an attempt to prune back a potentially excessive legal discovery demand by appealing to public opinion.
Yeah, I'm not sure why everyone feels the need to take a side here. Both of these organizations are ghoulish.
How is the NYT like OpenAI, or 'ghoulish'?
This is so transparently disingenuous and weird.
Dude, you stole all of their articles to train your AI. Of course they want discovery.
Man, the sooner this company goes bankrupt the better.
If there's one thing I've learned about Sam Altman it's that he's a shrewd political manipulator and every public move is in service of a hidden agenda[1]. What is it here?
- Is it part of a slow process of eroding public expectations of data privacy while blaming it on an external actor?
- Is it to undermine trust in traditional media, in an effort to increase dependence on AI companies as a source of truth?
- Is something else I'm not seeing?
I'm guessing it's all three of these?
[1] Those emails that came up in the suit with Elon Musk, followed by his eventual complete takeover of OpenAI, and the elaborate process of getting himself installed as chairman of the Reddit board to get the original founders back in control are prominent examples.
This is laughable
>They claim they might find examples of you using ChatGPT to try to get around their paywall.
Is this a joke? We all know people do this. There is no "might" in it. They WILL find it.
OpenAI is trying to make it look like this is a breach of user's privacy, when the reality is that it's operating like a pirate website and if it were investigated that would become proven.
I'm sorry, but we've made a lot of conversations illegal and pretended like that was all right. I'm sure we've made advising people how to dodge paywalls illegal as part of DMCA and/or some anti-hacking law, or some other garbage. I'm also sure that you run an automated service that will advise and has advised people on how to dodge paywalls. Even if there are exceptions for individuals giving advice to friends, or people giving advice for free, you are neither of those: you are a profit-making paid corporation that is automating this process which may be illegal. You may be a hacking endorser, a hacking advisor, and a hacking tool.
Under those circumstances, why wouldn't NYT have a case? I advise everybody who employs some sort of DRM or online system that limits access to ask for every chat that every one of these companies has ever had with anyone. Why are they the only people who get to break copyright and hacking laws? Why are they the only people who get to have private conversations?
I might also check if any LLMs have ever endorsed terrorist points of view (or banned political parties) during a chat, because even though those points of view may be correct (depending on the organization), endorsing them may be illegal and make you subject to sanctions or arrest. If people can't just speak, certainly corporate LLMs shouldn't be able to.
OpenAI is so full of shit, this is incredible. There is a protective order and the logs are anonymized. Yet they would happily give this all to the gov't under a warrant. Incredibly self serving bs from them. The court ordered the production, I'm not sure what OpenAI is even trying to sell people exactly.
If Donald Trump used this OpenAI product to-- who knows-- brainstorm Truth Social content, and his chats were produced to the NYT as well as its consultants and lawyers, who would believe Mr. Trump's content remained secure, confidential and protected from misuse against his wishes?
That's simply a function of the fact it's a controversial news organization running a dragnet on private communications to a technology platform.
"Great cases, like hard cases, make bad law."