Comment by jampa
1 day ago
The headline and article try to bias readers, framing the story to make people ask: "Is OpenAI snitching on me?"
In reality, Uber records and conflicting statements incriminated him. He seems to be the one who provided the ChatGPT record to try to prove that the fire was unintentional.[1]
> He was visibly anxious during that interview, according to the complaint. His efforts to call 911 and his question to ChatGPT about a cigarette lighting a fire indicated that he wanted to create a more innocent explanation for the fire's start and to show he tried to assist with suppression, the complaint said.
[1] https://apnews.com/article/california-wildfires-palisades-lo...
It looks like the headline may have changed as well since the HN submission, assuming that the title here was the original headline. Now the headline seems to be "Suspect in Palisades fire allegedly used ChatGPT to generate images of burning forests and cities".
Changing the headline post hoc without any indication of the change is a pet peeve of mine. Why is it not noted as errata in the article, the way other edits are when the body text is changed or factual information is confirmed?
Headlines are marketing and layout design, not journalism. Journalists have no role in title generation, and changes could be due to A/B testing. Seems relatively immaterial to me.
I had the same assumption, but it does not appear to have changed since publication. [1]
[1] - https://web.archive.org/web/20251008204636/https://www.rolli...
Also, why the sudden interest? Amazon Alexa snippets have been used in court and in investigations before; this is not new. But it makes me wonder what happens when you are dealing with summaries of summaries of long-gone tokens. Is that evidence?
> But it makes me wonder what happens when you are dealing with summaries of summaries of long-gone tokens. Is that evidence?
There is text input and text output; it's really not that complicated.
If used in court, the jury would be given access to the full conversation, just as if it were an email thread.
I suppose it's a good reminder to people that every cloud service they interact with is collecting data that can be used against them, in court or in any number of other ways, at any point in the future, and that chatbots are no exception.
I'm sure there are many people who thoughtlessly type very personal things into ChatGPT, including things that might not look so good for them if they came out at trial.
> Also, why the sudden interest? Amazon Alexa snippets have been used in court and in investigations before; this is not new.
As I understand it, some people treat ChatGPT like a close personal friend and therapist, confiding their deepest secrets and things like that.
Is this any different from people asking Google their deepest, darkest questions?
You have your full history in ChatGPT, not just summaries, and I doubt they permanently delete chats you specifically choose to delete.
For ChatGPT, they're under a legal obligation not to delete chats for a period of time.
https://openai.com/index/response-to-nyt-data-demands/ (yes, that's written 100% from OpenAI's perspective)
In particular:
> The New York Times is demanding that we retain even deleted ChatGPT chats and API content that would typically be automatically removed from our systems within 30 days.
> ...
> This data is not automatically shared with The New York Times or anyone else. It’s locked under a separate legal hold, meaning it’s securely stored and can only be accessed under strict legal protocols.
> ...
> Right now, the court order forces us to retain consumer ChatGPT and API content going forward. That said, we are actively challenging the order, and if we are successful, we’ll resume our standard data retention practices.
I think they were referring to intermediate tokens in "Thinking" models, which are summarized in the interface but ultimately discarded (and may themselves be summaries of sources, other chats, or earlier intermediate states).
Presumably, though, what's of evidentiary value are the tokens you type.
Your honor, the defendant is semantically guilty!
It would help indicate intent.
Hmm. The Rolling Stone article (and linked press conference) has the police giving a vastly different account of the ChatGPT logs they're complaining about:
> Investigators, he noted, allege that some months prior to the burning of the Pacific Palisades, Rinderknecht had prompted ChatGPT to generate “a dystopian painting showing, in part, a burning forest and a crowd fleeing from it.” A screen at the press conference showed several iterations on such a concept...
Video here, including the ChatGPT "painting" images circa 1m45s: https://xcancel.com/acyn/status/1975956240489652227
(Although, to be clear, it's not like the logs are the only evidence against him; it doesn't even look like parallel construction. So if one assumes "as evidence" usually implies "as sole evidence," I can see how the headline could be seen as sensationalizing/misleading.)
[flagged]
I wonder why he contributed $2 ($1 on two separate occasions). Did $1 get you access to a political blog or something back in 2020?
He donated to Biden, but had no registered party. Congrats, you're part of the insanity.
I'm sort of grossed out by people trying to blame a party for this in general, though. It's weird.
This may be an unpopular opinion, but I'm more or less okay with things like search records and Uber receipts being included as evidence when there's probable cause.
It's no different from the contents of your home. Obviously we don't want police busting into random homes to search, but if you're the suspect in a crime and police have a warrant, it's entirely reasonable for them to enter the home and search it. I guess it can't necessarily help clear you the way an alibi would, but if the party is guilty it could provide things like more certainty, motivation, a timeline of events, etc.
I think people conflate the two. They hold that certain things should remain private under all circumstances, whereas I believe the real risk is a large dragnet of surveillance that affects everyone, as opposed to targeted tools to determine guilt or innocence.
Am I wrong?
I don't think you hold an unreasonable position on that issue. If everything is operating as it should, then many would agree.
We long ago entered a reality where almost everyone carries a device that can track their exact location at all times and keeps a log of all their connections, interests, and experiences. If a crime occurs at a location, police can now theoretically see everyone who was in the vicinity, or who researched methods of committing such a crime, etc. It's hard to balance personal freedoms with justice, especially when those who execute on that balance have a monopoly on violence and can at times operate without public review. I think it's the power differential that makes debate and advocacy for clearer privacy protections more practical.
I shouldn't have to remind everyone that cops already can skip getting a warrant for things like phone location data.
Plenty of big services will just give cops info if they ask for it. It's legal. Any company or individual can just offer up evidence against you and that's fine, but big companies will have policies that do not require warrants.
Despite this atrocious anti-privacy stance, cops STILL clear only around half of violent crimes, and that's only in states with rather good police forces, usually ones with higher requirements than "a pulse" and long training in a police academy. Other states get as low as 10% of crimes actually solved.
When you've built a panopticon and cops STILL can't solve cases, it's time to stop giving up rights and fix the cops.
There are two questions that come up.
1. How wide a net is dragged?
2. Who can ask for access?
The first shows up in court cases about things like "which phones were near the crime" or "who in the area was talking to ChatGPT about forest fires?" If you drag the net wide enough, everyone can be put under suspicion for something.
A fun example of the second from a few years ago in the New York area was toll records being accessed to prove affairs. While most of us are OK with detectives investigating murders getting access to private information, having to turn it over to our exes is more questionable. (And the more personal the information, the less we are OK with it.)
Sure, warrants and subpoenas need to exist in order for the legal system to function. However, they have limits.
The modern abuse of the third-party doctrine is a different topic. Modern usage of the doctrine claims (for instance) that emails sent and received via Gmail are actually Google's property, and thus the government can serve Google a warrant in order to access anyone's emails. The old-timey equivalent would be the police subpoenaing the post office for the contents of my (past) letters -- something that would've been considered inconceivably illegal a few decades ago, but because of technical details of the design of the internet, we have ended up in this situation. Of course, the fact that there are these choke points you can subpoena is very useful to the mass-surveillance crowd (which is why these topics get linked -- people forget that many of these mass surveillance programs do have rubber-stamped court orders to claim that there is some legal basis for wiretapping hundreds of millions of people without probable cause).
In addition (in the US), the Fifth Amendment gives you the right not to be a witness against yourself, and this has been found to apply to certain kinds of requests for documents. However, because of the third-party doctrine you cannot exercise those rights, because you are not the one being asked to produce the documents.
> Am I wrong?
As a naturally curious person who reads a lot and looks up a lot of things, I've learned to be cautious when talking to regular people.
While considering buying a house, I did extensive research about fires. To do my job, I often read about computer security, data exfiltration, hackers, and ransomware.
If I watch a WWI documentary, I'll end up reading about mustard gas and trench foot and how to aim artillery afterwards. If I read a sci-fi novel about a lab leak virus, I'll end up researching how real virus safety works and about bioterrorism. If I listen to a podcast about psychedelic-assisted therapy, I'll end up researching how drugs work and how they were discovered.
If I'm ever accused of a crime, of almost any variety or circumstance, I'm sure prosecutors would be able to find suspicious searches related to it in my history, which could then be leaked to the press or mentioned to the jury as just a vague "the suspect had searches related to..."
The average juror, or the average person who's just scrolling past a headline, could pretty trivially be convinced that my search history is nefarious for almost any accusation.
Notorious hacker floor2 openly published comments online about misusing judicial process and the difficulty of covering his tracks.
Sometimes you are better off not invoking your right to a jury trial, because when there is straight-up evidence in your favor, it's easier to get a jury than a judge to ignore it in favor of emotional bullshit.
DAs for bigger departments are likely well equipped, well trained, and well practiced at tugging on the heartstrings of average juries (which are not made up of average people, because jury selection is often a bad system).
I think you're right, but the two collide over the question of whether police have the right to be able to access your stuff, or merely the right to try to access it.
In the past, if you put evidence in a safe and refused to open it, the police could crack it, drill it, cut it open, etc. if all else failed.
Modern technology allows wide access to the equivalent of a perfectly impregnable safe. If the police get a warrant for your files, but your files fundamentally cannot be read without your cooperation, what then?
It comes down to three options: accept this possibility and do without the evidence; make it legally required to unlock the files, with a punishment at least as severe as you're facing for the actual crime; or outlaw impregnable safes.
There doesn't seem to be any consensus yet about which approach is correct. We see all three in action in various places.
> The headline and article try to bias readers, framing the story to make people ask: "Is OpenAI snitching on me?"
And very rightly so, regardless of whether Uber records incriminated this person.
Ok. But this serves as a reminder not to expect privacy when sending messages back and forth to some software company.
Nothing new here. Somehow people are surprised when the evidence against them includes "my Google searches" or "my ChatGPT logs" or ...
Rolling Stone is a general audience publication so it is fair enough for some of their readers to be surprised.
In my opinion, people should be constantly reminded of this.
> In reality, Uber records and conflicting statements incriminated him. He seems to be the one who provided the ChatGPT record to try to prove that the fire was unintentional.[1]
Do you think OpenAI won't produce responsive records when it receives a lawful subpoena?
In this age I'd assume the NSA already has such records.
Not sure what that would have to do with a subpoena to OpenAI.
It's better to keep a level head about such things. It's quite obvious that the NSA does not have the facilities to simply intercept and store everything.
OpenAI also literally announced that they send data to law enforcement after a judge told them they had to do so.
Every company must comply with lawful warrants and subpoenas.
EDIT: Original parent was "Every company does this."
Not Mullvad. Swedish police showed up looking for some data; Mullvad didn't even collect what they wanted, and the police left empty-handed.
They HAD to? Didn't Apple refuse to do this exact thing?
Apple refused to create new software to allow the FBI to brute-force an encrypted device. OpenAI just had this info floating around on hard drives.
If you are referring to the incident below, it is different because the government asked Apple to write software to allow access to the device:
https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
If Apple had simply had the text records, they would have had to comply with the government order to provide them.