Palisades Fire suspect's ChatGPT history to be used as evidence

6 days ago (rollingstone.com)

All I can say is GOOD.

If a person is suspected of committing a crime, and police obtain a specific, pointed warrant for information pertaining to that individual, tech companies have a moral obligation to comply, in the best interests of humanity.

If law enforcement or a spy agency asks for a dragnet warrant like "find me all of the people that might be guilty of XYZ" or "find me something this individual might be guilty of," tech companies have a moral obligation to resist, in the best interests of humanity.

The first is an example of the justice system working correctly in a free society; the second is an example of totalitarian government seeking to frame individuals.

  • Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them. I already think too much about everything I put into ChatGPT, since my default assumption is it will all be made public. Now I also have to consider the possibility that random discussions will be used against me and taken out of context if I'm ever accused of committing a crime. (Like all the weird questions I ask about anonymous communications and encryption!) So everything I do with these tools will be with an eye towards the fact that it's all preserved and I'll have to explain it, which has a huge chilling effect on using the system. Just make it easy for me not to log history.

    • > These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.

      But you do, just like you have confidentiality in what you write in your diary.

    • > Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.

      Don't expect that from products with advertising business models

    • I think there is a non-zero chance they had no idea about this guy until OpenAI employees uncovered this, reported it, and additional cell phone data backed up the entire thing.

  • How do you square this with Apple's pushback a few years back against the FBI, which asked for a specific individual's details?

    I'm not taking sides, but it sounds like if ChatGPT cooperating with LE is a Good Thing (TM), then Apple making a public spectacle of how they were not going to cooperate is... bad?

    I'm fully aware that Apple might not even be able to provide them the information, which is a separate conversation.

    • >How do you square this with Apple's pushback few years back against FBI who asked for a specific individual's details.

      See: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...

      >Most of these seek to compel Apple "to use its existing capabilities to extract data like contacts, photos and calls from locked iPhones running on operating systems iOS 7 and older" in order to assist in criminal investigations and prosecutions. A few requests, however, involve phones with more extensive security protections, which Apple has no current ability to break. These orders would compel Apple to write new software that would let the government bypass these devices' security and unlock the phones.[3]

      That's quite different from OpenAI dumping some rows from their database. If ChatGPT were end-to-end encrypted and they wanted OpenAI to backdoor their app, I would be equally opposed.

    • Yes. I'm glad the FBI was able to crack the phone without Apple's help in that San Bernardino case, which humiliated Apple as a little bonus.

    • With my current knowledge of the case, I'd say Apple was clearly in the moral wrong and it's a pretty dark mark in their past.

      My understanding is that the suspect was detained and law enforcement was not asking for a dragnet (at least that's what they stated publicly); they were asking for a tool for a specific phone. Apple stated the FBI was asking them to backdoor all iPhones; the FBI countered and said that's not what they were asking for. Apple then marched triumphantly into the moral sunset over the innocent victims; meanwhile, the FBI sent funds to a dubious group with questionable ethics and ties to authoritarian regimes.

      In my opinion, Apple should have expediently helped here, if for no other reason than to prevent the funding of groups that support dragnets, but also out of moral obligation to the victims.

  • I don't think anyone has a moral obligation to do the state's bidding, and if you think these tools will only be used morally against "bad guys", you have not been paying attention to recent events.

    I also don't think the interests of the state are "in the best interests of humanity".

    Sometimes the price of having nice things and them remaining nice means that people you don't like can use them, too.

  • > If law enforcement or spy agency asked for a dragnet warrant like "find me all of the people that might be guilty of XYZ" or "find me something this individual might be guilty of"; tech companies have a moral obligation to resist, in the best interest of humanity.

    There is more evidence that they will do this than that they won't. ChatGPT is a giant dragnet, and 15 years ago I would've argued it's probably entirely operated and funded by the NSA. The police can already obtain a "geofence warrant" today. We're no more than one senator up for re-election away from having a new law forced down our throats "for the children" that enables them to mine OpenAI data. That is, if they don't already have a Room 641A located in their HQ.

    People pour their lives out into these fuzzy word predictors. OpenAI is holding a treasure trove of personal data, personality data, and other data that could be used for all kinds of intelligence work.

    This is objectively bad regardless of how bad the criminal is. The last nearly 40 years of history, and especially the post-9/11 world, show that if we don't stand up for these people, the government will tread all over our most fundamental rights in the name of children/security/etc.

    Basic rights aren't determined by how "good people" use them. They are entirely determined by how we treat "bad people" under them.

    • Just wait until AI is advanced enough that you can buy an AI best friend who will be with you all your life. I'm reminded of K's AI hologram companion, Joi, in Blade Runner 2049. The only thing they got wrong was that she was not collecting data for the megacorp.

      Thinking again, the AI will certainly be "free".

  • Does this imply that the tech company has the moral obligation to evaluate the merits of each warrant on a case-by-case basis?

  • > the Justice Department’s allegations against Rinderknecht are supported by evidence found on his phone

    Sounds like they got the info from his phone, not taken from any servers, so this is likely not an example of a tech company "complying".

  • There are many routes that the government has to court order/warrant/subpoena information from tech companies.

    The tech companies have just about zero ability to resist.

    There should likely be legislation enacted that raises chat logs to the level of psychotherapist-patient privilege.

The headline and article are framed to make people ask: "Is OpenAI snitching on me?"

In reality, Uber records and conflicting statements incriminated him. He seems to be the one who provided the ChatGPT record to try to prove that the fire was unintentional.[1]

> He was visibly anxious during that interview, according to the complaint. His efforts to call 911 and his question to ChatGPT about a cigarette lighting a fire indicated that he wanted to create a more innocent explanation for the fire's start and to show he tried to assist with suppression, the complaint said.

[1] https://apnews.com/article/california-wildfires-palisades-lo...

  • It looks like the headline may have changed as well since the HN submission, assuming that the title here was the original headline. Now the headline seems to be "Suspect in Palisades fire allegedly used ChatGPT to generate images of burning forests and cities".

  • Also, why the sudden interest? Amazon Alexa snippets have been used before in court/investigations, and this is not new. But it makes me wonder what happens when you are dealing with summaries of summaries of long-gone tokens. Is that evidence?

    • I suppose it's a good reminder that every cloud service people interact with is collecting data which can be used against them in court, or in any number of other ways, at any point in the future, and that chatbots are no exception.

      I'm sure there are many people who thoughtlessly type very personal things into ChatGPT, including things that might not look so good for them if they came out at trial.

    • > But makes me wonder about what happens when you are dealing with summaries of summaries of long gone tokens. Is that evidence?

      There is text input and text output; it's really not that complicated.

      If used in court, the jury would be given access to the full conversation, just as if it were an email thread.

    • > Also why the sudden interest? Amazon Alexa snips have been used before in court/investigation and this is not new.

      As I understand it, some people treat chatgpt like a close personal friend and therapist. Confiding their deepest secrets and things like that.

  • This may be an unpopular opinion, but I'm more or less okay with things like search records and Uber receipts being included as evidence when there's probable cause.

    It's no different than the contents of your home. Obviously we don't want police busting into random homes to search, but if you're the suspect of a crime and police have a warrant, it's entirely reasonable to enter the home and search. I guess it can't necessarily help clear you the way an alibi would, but if the party is guilty it could provide things like more certainty, motivation, a timeline of events, etc.

    I think people conflate the two. They hold that certain things should remain private under all circumstances, where I believe the risk is a large dragnet of surveillance that affects everyone as opposed to targeted tools to determine guilt or innocence.

    Am I wrong?

    • I don’t think you hold an unreasonable position on that issue. If everything is operating as it should then many would agree.

      We’ve long ago entered a reality where almost everyone has a device on them that can track their exact location all the time and keeps a log of all their connections, interests and experiences. If a crime occurs at a location police can now theoretically see everyone who was in the vicinity, or who researched methods of committing a crime, etc. It’s hard to balance personal freedoms with justice, especially when those who execute on that balance have a monopoly on violence and can at times operate without public review. I think it’s the power differential that makes the debate and advocacy for clearer privacy protection more practical.

    • There are two questions that come up.

      1. How wide is the search net dragged?

      2. Who can ask for access?

      The first shows up in court cases about things like "which phones were near the crime" or "who in the area was talking about forest fires to ChatGPT?" If you sweep the net far enough, everyone can be put under suspicion for something.

      A fun example of the second from a few years ago in the New York area was toll records being accessed to prove affairs. While most of us are OK with detectives investigating murders getting access to private information, having to turn it over to our exes is more questionable. (And the more personal the information, the less we are OK with it.)

    • Sure, warrants and subpoenas need to exist in order for the legal system to function. However, they have limits.

      The modern abuse of the third-party doctrine is a different topic. Modern usage of the third-party doctrine claims (for instance) that emails sent and received via Gmail are actually Google's property and thus they can serve Google a warrant in order to access anyone's emails. The old-timey equivalent would be that the police could subpoena the post office to get the contents of my (past) letters -- this is something that would've been considered inconceivably illegal a few decades ago, but because of technical details of the design of the internet, we have ended up in this situation. Of course, the fact there are these choke points you can subpoena is very useful to the mass surveillance crowd (which is why these topics get linked -- people forget that many of these mass surveillance programs do have rubber-stamped court orders to claim that there is some legal basis for wiretapping hundreds of millions of people without probable cause).

      In addition (in the US) the 5th amendment allows you the right to not be witness against yourself, and this has been found to apply to certain kinds of requests for documents. However, because of the third-party doctrine you cannot exercise those rights because you are not being asked to produce those documents.

    • > Am I wrong?

      As a naturally curious person, who reads a lot and looks up a lot of things, I've learned to be cautious when talking to regular people.

      While considering buying a house I did extensive research about fires. To do my job, I often read about computer security, data exfiltration, hackers and ransomware.

      If I watch a WWI documentary, I'll end up reading about mustard gas and trench foot and how to aim artillery afterwards. If I read a sci-fi novel about a lab leak virus, I'll end up researching how real virus safety works and about bioterrorism. If I listen to a podcast about psychedelic-assisted therapy, I'll end up researching how drugs work and how they were discovered.

      If I'm ever accused of a crime, of almost any variety or circumstance, I'm sure that prosecutors would be able to find suspicious searches related to it in my history. And then leaked out to the press or mentioned to the jury as just a vague "suspect had searches related to..."

      The average juror, or the average person who's just scrolling past a headline, could pretty trivially be convinced that my search history is nefarious for almost any accusation.

    • I think you're right, but the two collide over the question of whether police have the right to be able to access your stuff, or merely the right to try to access it.

      In the past, if you put evidence in a safe and refused to open it, the police could crack it, drill it, cut it open, etc. if all else failed.

      Modern technology allows wide access to the equivalent of a perfectly impregnable safe. If the police get a warrant for your files, but your files fundamentally cannot be read without your cooperation, what then?

      It comes down to three options: accept this possibility and do without the evidence; make it legally required to unlock the files, with a punishment at least as severe as you're facing for the actual crime; or outlaw impregnable safes.

      There doesn't seem to be any consensus yet about which approach is correct. We see all three in action in various places.

  • >The headline and article try to bias and frame the story to make people question: "Is OpenAI snitching on me?"

    And very rightly so, regardless if Uber records incriminated this person.

  • Hmm. The Rolling Stone article (and linked press conference) has the police giving a vastly different account of the ChatGPT logs they're complaining about:

    > Investigators, he noted, allege that some months prior to the burning of the Pacific Palisades, Rinderknecht had prompted ChatGPT to generate “a dystopian painting showing, in part, a burning forest and a crowd fleeing from it.” A screen at the press conference showed several iterations on such a concept...

    Video here, including the ChatGPT "painting" images circa 1m45s: https://xcancel.com/acyn/status/1975956240489652227

    (Although, to be clear, it's not like the logs are the only evidence against him; it doesn't even look like parallel construction. So if one assumes "as evidence" usually implies "as sole evidence," I can see how the headline could be seen as sensationalizing/misleading.)

  • Ok. But this serves as a reminder not to expect privacy when sending messages back and forth to some software company.

  • >In reality, Uber records and conflicting statements incriminated him. He seems to be the one who provided the ChatGPT record to try to prove that the fire was unintentional.[1]

    Do you think OpenAI won't produce responsive records when it receives a lawful subpoena?

Strange. The article says that he started the fire and called first responders, who put it out. The fire continued to smolder before reigniting in later winds.

If you cause a problem, report it, then the authorities responsible for dealing with those problems take care of it and go home, what does it mean?

Are the authorities then partially responsible for not ensuring the fire was put out properly before leaving the area?

Is he even guilty at all, given that he fulfilled his duty and reported the problem after unintentionally causing it?

  • This all hinges on the word 'unintentionally', which is not at all how the law sees it. Arson has a forty year maximum for a good reason, because fire tends to spread and cause a lot more damage than anyone predicted. You are not exonerated of responsibility just because emergency services showed up. You are, to a first approximation, responsible for all damage done.

  • To add to this, even the government isn't sure they can make the charge that he intentionally/maliciously started the fire stick, which is why their official complaint goes for recklessness/negligence.

    The case for malicious intent is extremely flimsy and based entirely on circumstantial evidence. The strongest piece of evidence they have for arson is that he threatened to burn down his sister's house, but here's the thing: it would be extremely unusual for an arsonist to switch from targeted arson based on anger or revenge to thrill-seeking arson setting unmotivated fires.

    • >it would be extremely unusual for an arsonist to switch from targeted arson based on anger or revenge to thrill seeking arson setting unmotivated fires

      This is all pet theories and silliness for purposes of discussion. I freely admit that I haven't built a case here that's strong enough to withstand even a gentle poking by an opponent.

      I don't know as much about arson, but I did go through the same serial killer phase as every morose teen, and one of the things that stuck with me is the way some offenders escalate from simple peeping and stalking all the way up to murder. Another thing that stuck with me is how, in some cases where there is an intended victim, especially for revenge, an obsessed mind will home in on a single characteristic of the intended victim and then transfer victimhood to strangers based on that characteristic. The woman who "wronged" you is a skinny blonde who smokes cigarettes, so you go out looking for skinny blondes who smoke cigarettes to victimize in her stead, because in your unconscious brain that matches the pattern of behavior that would soothe the offender's wounded entitlement.

      Given these facts about the nature of obsessive, vengeance-oriented crime, and the fact that the serial killer/arsonist crossover is so common that arson is one of the Macdonald triad of behaviors common to serial killers, there's a non-zero possibility that we're seeing a revenge fantasy transferred to another victim. There's also the fact that obsessed criminals tend to want to roleplay or practice, and a lot of the time their first "serious" crime is one of these roleplay/practice sessions getting out of control.

      This feels like that to me, though I can't prove it. It's like he wanted to see what starting a fire would be like, assumed the local VFD would get it under control (and in doing so give him an idea of what the response looked like so he could optimize for escape), and then either it got out of control or he tried to inject himself into the emergency response. That last part is another common thing among obsessed criminals: many like to relive the crime by being part of the investigation, like to tease investigators by being right under their nose, or believe that by injecting themselves into the investigation they can steer it away from themselves.

      Again, does any of this hold up in a court of law? Of course not. Does it hold up in the court of a thread on a post on HN? Maybe. We're here to talk, and I'm of a mind that we didn't do anything to fix whatever it was that made people serial killers, yet there aren't really any serial killers anymore, so something must have happened to that behavior. Perhaps stranger arson is a way that the same drivers that led to serial murder before the ~~panopticon~~internet are driving new behaviors now.

      Intuitively I'm highly confident that the stranger spree killings we see now are driven by those same pressures in a lot of perpetrators, and that the change in MO is about taking advantage of lag time in law enforcement's ability to correlate facts. Before the internet you could drive a few hours down the road and start using a new name, and unless your old name was already in the system there was basically no way for anyone to know. Obsessed criminals could offend, disappear, and wait it out. Nowadays we're really good at ID'ing an offender, so obsessive murderers have to be one-and-done, but another strategy could be crimes small enough that they don't trigger the kind of dragnet response that involves checking all the CCTV cameras in a ten-mile circle around the crime.

      edit: everyone seems focused on the "serial killer phase" line that was really intended to be a throwaway. I just mean that I read a lot about them and thought it was shocking and cool to have a "favorite". Gross shit, but I assure you no one was ever in any amount of physical or psychic danger beyond declaring me a pizza cutter (all edge and no real point).

  • two things:

    1) This whole case hinges on intentionality, and the gov't intends to prove that he set the fire intentionally. Part of the ChatGPT history is images he generated of fires and people running from fires. If he intentionally set a fire in a wildfire-prone area, it doesn't matter that he didn't intend it to become a wildfire, or what he did after setting it.

    2) If you'd like to have emergency services that are either prohibitively expensive or simply nonexistent, one great way to do that is to make first responders responsible for not doing a good enough job in their responses. I'm honestly not sure what we'd do in cases of blatantly neglectful behavior by a first responder during an emergency response, but beyond intentional malpractice we generally extend an assumption of good faith to anyone who bothers to show up and help during an emergency like this. The first time I get sued for not putting a fire out fast enough or completely enough is the last time I put out a fire.

  • [flagged]

    • Intent here will matter.

      If he's got a gun fetish and accidentally set it off, killing someone, that's different than shooting it at someone.

      He might have had a fire fetish, set one, extinguished it, and despite his intent it got out of control.

      Hard to say though. Either way, I can see a stiff penalty to prevent future use of the "oops I just like fire" defense.

  • I mean everyone sees this stuff differently. In my opinion everyone is allowed to carry a gun (above 18, not crazy, etc..). If you take a loaded gun and aim it at someones head and force them to empty a cash register into a bag, I personally believe that person should NEVER be allowed in society ever again in their lifetime. (Yeah that's not how it all works). But you were willing to let that person be within strands of their life not existing. If they reacted in the wrong way - not even intentionally, the gunman will shoot. If they try to fight back because they didn't agree to empty the cash register, the gunman shoots.

    That's an extreme situation that the gunman put someone in. Imagine it being YOU. Now if you could be the LAST person that gunman ever put in that situation, would you allow them to go to jail forever? Because if that's the case, the number of people in that situation ever again goes from millions to a few thousand over the next 1000 years. And many of those people will REACT and die.

    So when someone starts a fire, they are like the gunman. They were willing to let a lot of people die. Then realizing they were wrong, calling the cops, and having them put the fire out is the same as going into a 7-11 and aiming the gun, but then putting it down and walking out. But they still risked someone else's life! What if their finger had accidentally slipped? Employee DEAD.

    So it's really the same thing. All that being said, I do grant that the waters are muddied at this point with the legal system. The person still deserves to be separated from civil society. He is not CIVIL!

    And even though the legal system's waters are muddied, his original actions resulted in 12 people dying. The firefighters that were incompetent are not originally responsible for those 12 deaths.

    The reason I want maximum punishment is that it works, it does deter. In this legal system of course there's a 50/50 those 12 people will have died without being avenged at all (and their families - all that are affected), and a 90% chance (if he is found responsible) those 12 people will get this guy in jail for 10 years. And because of those chances, people decide, that fuck even if I'm caught, it seems like in the last 10 years there is a VERY low chance of punishment. Punishment is very important in this world and life. I'm not talking about capital punishment.

    A lot of people disagree with all of this; I personally think they have suicidal empathy. They have no empathy for the thousands of people that died from other people's intentional actions - actions those people KNEW might end up killing. They have too much empathy for the attacker. It's massive victim blaming.

    • People aren't robots that think through every single decision. Arson happens frequently and nobody dies. Death is a rare consequence and the arsonist didn't intend to kill someone, it feels like an accident, not murder.

      This is how humans work. We work on probability and approximation. We often act based on the consequences of our intentions, not the consequences of our actions.

      Someone that learns the consequences of their actions, regrets the harm they inflicted, and changes their behavior as a result, is not the same danger to society they were before. In fact society would be better off reintegrating them because they'll tell others not to do the same thing.

      I'm not exactly sure where to fit this in, but people change. A society that makes vengeance the only rule, where death is punished with death, regardless of a person's intentions, is an authoritarian nightmare.

> But more curious than the allegation that a Florida man was responsible for setting a small brush fire on the other side of the country

As far as I’ve heard from other articles, he lived in the Palisades at the time and worked as an Uber driver there. He moved to Florida after the fire. This is not very well researched.

This title is misleading. The article doesn't say that the chat history will be used as evidence, only that it exists. Whether it can be used in court is an unsettled question, as explained in the last few paragraphs.

  • How is it unsettled? If they got a warrant for it, what would prevent them from using it as evidence?

    • Judges can refuse to admit anything they want, or give jurors instructions of any kind about how to consider evidence in relation to a crime. About the only thing a judge can't do is fabricate evidence themselves.

    • Another thread says they tried to use his past generation of a fire-related image to paint him as some kind of pyromaniac. These clods just can't help themselves but prove, AT THE FIRST CHANCE, that they will twist and abuse anything they can get their hands on to paint some kind of picture. It's hilarious that they can't even keep this in their back pockets to wait for a real, hard-to-prosecute criminal to use it on.

Not surprising. Search and browsing history has been used as evidence for some time.

  • Nearly anything that isn't end-to-end encrypted is fair game, assuming there is probable cause. Access to your physical location history (even if you weren't suspected of a crime) wasn't off limits until 2024 [1]. (It still isn't off limits if you are suspected of a crime, but is no longer collected at the scale of "most Android users" [2].)

    [1] https://www.eff.org/deeplinks/2024/08/federal-appeals-court-...

    [2] https://techcrunch.com/2023/12/16/google-geofence-warrants-l...

  • It still boggles my mind that in this day and age most people use the one search engine that keeps the most copious records of everything that is entered and that ties that to the most information any corporation probably has about any random person. I wouldn't be surprised if everyone moved to using the NSA search engine if they ever came out with one.

    Just for general peace of mind, use a privacy-oriented search engine. I use leta.mullvad.net or search.brave.com usually. I haven't used Google in years. And if you just happen to have a curiosity about something fringe that might be misinterpreted in the wrong circumstances, download an LLM and use it locally.

    • Not using Google and instead using a Mullvad or Brave search engine isn't solving any problem: if you cannot trust company A, you also shouldn't trust company B.

      If you want real and total anonymous search, use a public computer.

  • Coming into the thread (and the general discussion about ChatGPT being used as evidence) with this context, I'm confused about the reactions to this. Online activity has been used as evidence for as long as I can remember. OpenAI also has a couple of high-profile cases against them with ChatGPT history used as the primary evidence.

"This felony charge, he added, carried a mandatory minimum prison sentence of five years in federal prison but is punishable by up to 20 years in prison."

Are the 12 deaths separate charges? A sentence of 5-20 years seems very light for 12 deaths. This article is clearly focused on the AI aspect of it, so it doesn't cover the charges at all really.

  • If they get a conviction in this case they will use that to support murder charges. The arson charges are easier to prosecute and try and if he’s convicted it will make the other case simpler.

  • Please correct me if I'm wrong, but it's my understanding he didn't start the fire that burned much of the Palisades; he started a fire that was put out (or at least claimed to be), which rekindled later, and the rest is history.

    • He started a continuous combustion reaction, and Malibu was destroyed by the continuance of that combustion reaction. Whether at some point the orange light it was giving off dimmed a bit is not very interesting. He committed the crime; then emergency services tried to mitigate the damage but failed. These are two fully separate things.

    • Did he start the fire knowing it could kill people? Did his actions lead to the death of people?

      That seems like clear-cut first-degree murder to me, as I understand it (I'm not sure it requires a specific person to be targeted, but a premeditated act that kills people seems like it would qualify).

      4 replies →

If anyone's wondering whether this guy was really the cause of the Palisades fire: no, probably not. His reportedly erratic and eccentric behavior tells me he's probably not mentally capable of standing trial. Normally the government would just shrug, call it an act of nature, and move on. But for some reason they're going out of their way to pin this fire on someone.

  • I don't know how they would establish causality (for the major fire and deaths) beyond a reasonable doubt when the fire he set was seemingly put out and the entire region is a tinderbox. Whatever his crimes, he seems like a convenient scapegoat for larger systemic failures.

  • If I can tin-foil-hat a bit, wouldn't it make sense if ruling the fire an act of arson were favorable to someone with power, such as the Malibu landowners or the insurance companies? If anyone wants to "cui bono" their way to an article about it, I'd be interested in reading that.

I would like to know if OpenAI is able to supply this information to law enforcement even if their user's history has been cleared.

  • Is there any reason to believe that deleting a ChatGPT conversation is anything more than "UPDATE conversations SET status='user_deleted' WHERE conversation_id=..."?
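
    A minimal sketch of what that soft-delete pattern looks like, using SQLite and a hypothetical schema (nothing here is OpenAI's actual implementation): the row is only flagged as deleted, so it disappears from the UI while the content remains available to anyone with database access.

```python
import sqlite3

# Hypothetical conversations table; a stand-in for a real chat-history store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE conversations ("
           "conversation_id INTEGER PRIMARY KEY, status TEXT, content TEXT)")
db.execute("INSERT INTO conversations VALUES (1, 'active', 'chat history...')")

# "Soft delete": only a status flag changes; the content itself survives
# and can still be produced in response to a warrant or internal query.
db.execute("UPDATE conversations SET status='user_deleted' "
           "WHERE conversation_id=1")
remaining = db.execute(
    "SELECT content FROM conversations WHERE conversation_id=1").fetchone()
print(remaining)  # ('chat history...',)

# A hard delete actually removes the row.
db.execute("DELETE FROM conversations WHERE conversation_id=1")
count = db.execute("SELECT COUNT(*) FROM conversations").fetchone()[0]
print(count)  # 0
```

    Whether a provider runs the UPDATE or the DELETE (and whether backups are purged too) is exactly the open question.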

    • Don’t get me wrong, I am highly skeptical. I also am genuinely curious because it seems to be in their best interest to delete these records for a few reasons:

      1. Adherence to their own customer-facing policy.

      2. Corporate or government customers would CERTAINLY want their data-handling requirements respected.

      3. In my experience at $MEGA_CORP, we absolutely delete customer data, or never keep logs at all, for ML inference products.

      4. They're a corporation with the explicit goal of making money; they're not interested in assisting LE beyond the minimum legal requirements.

      But still I wonder what the reality is at OpenAI.

      1 reply →

Can't wait for the "Did ChatGPT Burn Down Palisades?" headline.

  • “He’s called the AOL killer and he’s using something called ‘chatrooms’ to lure people in. Tonight, the dark side of cyberspace.”

Hmmm.

I have a "saved" history in Google Gemini. The reason I put "saved" in scare quotes is that Google feels free to change the parts of that history that were supplied by Gemini. They no longer match my external records of what was said.

Does ChatGPT do the same thing? I'd be queasy about relying on this as evidence.

  • Could you post some details about this or make a write-up? I'd be interested in reading more about this.

    • I'm not sure what details would add. What happened:

      1. I engaged with Gemini.

      2. I found the results wanting, and pasted them into comment threads elsewhere on the internet, observing that they tended to support the common criticism of LLMs as being "meaning-blind".

      3. Later, I went back and viewed the "history" of my "saved" session.

      4. My prompts were not changed, but the responses from Gemini were different. Because of the comment threads, it was easy for me to verify that I was remembering the original exchange correctly and Google was indulging in some revision of history.

      7 replies →

I don't get these people. I get nervous typing even something like "why in movies people throw up after killing someone" into Google, even in incognito mode. Why would anyone put something even remotely incriminating into the hands of another company?

I sure hope that cats in military uniforms don't invade NYC because they're going to find the evidence on my ChatGPT account.

  • Are we talking house cats here, or full-grown lions, or godzilla-cats?

    Godzilla cats really seems like it needs a movie.

ChatGPT and Google are different types of engines. I wonder if they will make ChatGPT submit flagged questions to authorities automatically. Since the questions are more like conversations with clear intentions, they can get very clear signals.

  • They can do whatever they want. It's a dead end.

    End of the day, a chimp with a 3 inch brain has to digest the info tsunami of flagged content. That's why even the Israelis didn't see Oct 7th coming.

    Once upon a time I worked on a project for banks to flag complaints about fraud in customer calls. Guess what happened? The system registered a zillion calls worldwide where people talked about fraud. The manager in charge was assigned 20 people to deal with it; after naturally getting overwhelmed and scapegoated for all kinds of shit, he put in a request for a few hundred more, saying he really needed thousands of people. Corporate wonderland gave him another 20 and wrote a paragraph in its annual report about being at the forefront of combating fraud, etc.

    This is how the world works. The chimp troupe hallucinates across the board, at the top and at the bottom about what is really going on. Why?

    Because that 3 inch chimp brain has hard limits to how much info, complexity and unpredictability it can handle.

    Anything beyond that, the reaction is similar to ants running around pretending they are doing something useful anytime the universe pokes the ant hill.

    Herbert Simon won a Nobel Prize for telling us we don't have to run around like ants and bite everything every time we're faced with things we can't control.

    • That's why companies usually use an AI to automatically ban your account. That's why there are currently tricks floating around to get anyone you don't like banned from Discord, by editing your half of an innocuous conversation to make it about child porn and trafficking. The AI reads the edited conversation, decides it's about bad stuff, and bans both accounts involved.

  • > they can get very clear signals.

    No they can't. People write a lot of fiction. I'm willing to bet that fiction-related "incriminating" questions to ChatGPT greatly outnumber the "I'm actually a criminal" questions.

    People also ask about hypotheticals, make dumb bets, etc.

    • You don't even need to make bets. Encoded within the answer of "what is the best way to prevent fires" is the obvious data on the best way to start them.

  • To be clear there is exactly nothing you're required to submit to the government as a US service provider, if that's what you mean by authorities.

    If you see CSAM posted on the service then you're required to report it to NCMEC, which is intentionally designed as a private entity so that it has 4th amendment protections. But you're not required to proactively go looking for even that.

  • I recall Anthropic publicly admitting that, at least in some of their test environments, Claude will inform authorities on its own initiative if it thinks you’re using it for illicit purposes. They tried to spin it as a good thing for alignment.

[flagged]

  • > the cause wasn't climate change, like all the media worldwide boasted.

    That was not the typical claim of news outlets.

    The actual headlines were more like "How climate change affected the LA fires".

    Media outlets linking small regional problems to big global issues is to be expected because that drives engagement.

    There is no need for retractions/corrections because what those articles typically said was something like "climate change can facilitate extended droughts making such fires more likely", not that climate change was the cause for the fire.

    You could argue that articles like that are trying to mislead the reader, and you would not be wrong. The main purpose of a lot of modern reporting is not to inform, but to bait for clicks/ads.

    edit: You can see the exact same issue in this article: It kinda baits the reader into thinking that ChatGPT is monitoring chats and snitching on you to the police preemptively, but doesn't actually say that.

  • Forests burn more easily when they are dry. Climate change in sun-bathed regions is increasing this dryness, which in turn raises the risk, speed, and spread of wildfires.

    The initial fire was intentional, but as stated in the article it spread silently and then exploded because of extreme weather conditions.

  • Climate change can create factors that make wildfires much larger or more dangerous.

    I'm dubious of your claim that climate change was cited as a cause in any serious publication. Can you provide any sources?

    Every fire has an ignition, and climate change is not an ignition like a lit cigarette or a lightning strike. It merely creates conditions.

    • > I'm dubious of your claim that climate change was cited as a cause in any serious publication. Can you provide any sources?

      I didn't claim anything about publications; I'm complaining about the media coverage. If you want a source, go to YouTube and watch the German Tagesschau, a central news outlet. Every report on the fire was followed by brainwashing about climate change. It was similar across most major Western European news coverage.

  • If you have a server that's overloaded and a request comes in that requires fractionally more memory than the average request, do you blame that single request for crashing your servers, or do you acknowledge that the load on the server is probably contributing to the issue?

  • you will get a million other reasons why you are wrong, but you will never get "you're right". people here literally cannot have a conversation about themselves being on the receiving end of a scam, they are much too high-brow for that to happen.

[flagged]

  • I was gearing up to smugly quote the HN guidelines to you but I find myself surprised to see that there is no clear messaging against supporting acts of violence.

    Allow me to violate the guidelines and note how this is in stark contrast to Reddit. For better or worse.

Pretty interesting that cloud data is not covered by the 4th Amendment. I wonder if we'll push for on-prem storage of context and memories as our relationship with AI gets more personal and intertwined.

  • The article states that OpenAI only discloses user content with a search warrant. How did that lead you to believe that it's not subject to the fourth amendment?

  • I don't have a personal relationship with AI, and strongly suggest that people stay away from AI for personal matters.

    • I still haven't once talked to an LLM for personal reasons. It's always been to get information.

      Talking to an LLM like a human is like talking to a mirror: you're just shaping its responses based on what you say. Quite sad to see stuff like the "myboyfriendisai" subreddit.

      1 reply →

Stop blaming fires only on the people who start them; it's inevitable that someone will be the root cause. Meanwhile the people whose job it is to manage the forest or wilderness areas get off scot-free. At least send them to prison too.