Comment by exabrial

19 hours ago

All I can say is GOOD.

If a person is suspected of committing a crime, and police obtain a specific, pointed warrant for information pertaining to that individual, tech companies have a moral obligation to comply, in the best interests of humanity.

If a law enforcement or spy agency asks for a dragnet warrant like "find me all of the people that might be guilty of XYZ" or "find me something this individual might be guilty of," tech companies have a moral obligation to resist, in the best interests of humanity.

The first is an example of the justice system working correctly in a free society; the second is an example of a totalitarian government seeking to frame individuals.

Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them. I already think too much about everything I put into ChatGPT, since my default assumption is it will all be made public. Now I also have to consider the possibility that random discussions will be used against me and taken out of context if I'm ever accused of committing a crime. (Like all the weird questions I ask about anonymous communications and encryption!) So everything I do with these tools will be with an eye towards the fact that it's all preserved and I'll have to explain it, which has a huge chilling effect on using the system. Just make it easy for me not to log history.

  • > These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.

    But you do, just like you have confidentiality in what you write in your diary.

  • > Not good. These tools (from search engines to AI) are increasingly part of our brains, and we should have confidentiality in using them.

    Don't expect that from products with advertising business models.

  • Serious question. Why should someone have more privacy in a software system than they do within their home?

    • I have enormous privacy in my home. I can open up any book and read it with nobody logging what I read. I can destroy any notes I take and know they'll stay destroyed. I can even visit the library and do all these things in an environment with massive information access; only the card catalog usage might get logged, and I probably still don't have to tie usage to my identity because once upon a time it was totally normal to make knowledge tools publicly-accessible without the need for authentication credentials.

    • Maybe they shouldn't (not taking a stance), but I don't think this argument is as simple as one might think. Doing surveillance on someone's home generally requires a court order beforehand. And depending on the country (I don't believe this applies to the US), words spoken at home also enjoy extended legal protection, i.e. they can't subpoena a friend you had a discussion with.

      Now the real question is, do you consider it a conversation or a letter. Any opened¹ letters you have lying around at home can be grabbed with a court-ordered search warrant. But a conversation—you might need the warrant beforehand? It's tricky.

      (Again, exact legal situation depends on the country.)

      ¹ Secrecy of correspondence frequently only applies to letters in sealed envelopes. But then you can get another warrant for the correspondence…

  • I think there is a non-zero chance they had no idea about this guy until OpenAI employees uncovered this, reported it, and additional cell phone data backed up the entire thing.

    • Why do employees need to be involved? It's AI. It is entirely capable of doing the surveillance, monitoring, and reporting by itself. If not now, then in the near future.

  • Just give the AI-to-user relationship a protection like attorney-client privilege.

    Edit: AI has already passed the bar exam.

How do you square this with Apple's pushback a few years back against the FBI, which asked for a specific individual's details?

I'm not taking sides, but it sounds like if ChatGPT cooperating with LE is a Good Thing (TM), then Apple making a public spectacle of how they were not going to cooperate is... bad?

I'm fully aware that Apple might not even be able to provide them the information, which is a separate conversation.

  • > How do you square this with Apple's pushback a few years back against the FBI, which asked for a specific individual's details?

    See: https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...

    >Most of these seek to compel Apple "to use its existing capabilities to extract data like contacts, photos and calls from locked iPhones running on operating systems iOS 7 and older" in order to assist in criminal investigations and prosecutions. A few requests, however, involve phones with more extensive security protections, which Apple has no current ability to break. These orders would compel Apple to write new software that would let the government bypass these devices' security and unlock the phones.[3]

    That's very different from OpenAI dumping some rows from their database. If ChatGPT were end-to-end encrypted and they wanted OpenAI to backdoor their app, I would be equally opposed.

    • Interesting that it wound up not being Cellebrite; I thought for years it was. I wonder if Cellebrite had people lie to the press that it was them. Really effective marketing.

      I agree, the line is at messing with end-to-end encryption. If your E2EE has a backdoor, IT'S NOT END-TO-END ENCRYPTION. Thanks.
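
      To make that property concrete, here's a minimal sketch of end-to-end encryption using the PyNaCl library. The scenario and names are purely illustrative (this is not how ChatGPT or any particular messenger works); the point is structural: the relay in the middle holds no key material, so there is nothing for it to hand over.

          from nacl.public import PrivateKey, Box

          # Each endpoint generates its own keypair; private keys never leave the device.
          alice_key = PrivateKey.generate()
          bob_key = PrivateKey.generate()

          # Alice encrypts directly to Bob's public key.
          ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

          # A relay server passes `ciphertext` along but cannot decrypt it,
          # so a warrant served on the relay yields only ciphertext.
          plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
          assert plaintext == b"meet at noon"

      A backdoor means adding a third key (or exfiltrating one of these two) to that flow, at which point the system no longer meets the definition.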

    • With CALEA and related laws, companies that don't keep logs can be compelled to surveil certain users from that point forward, even if that means installing hardware/software that keeps logs on them.

  • The difference is that in this case OpenAI was able to produce the requested information without compromising security for their other customers.

    • Right, for the OpenAI case to be analogous, they would have to switch to a system where your chats are homomorphically encrypted -- i.e. OpenAI does all its operations without knowing either the input or output plaintext. In that case, they'd only have encrypted chats to begin with, and would have to somehow get your key to comply with a warrant for the plaintext.

      And note: the above scenario is not likely anywhere in the near future, because homomorphic encryption has something like a million times overhead, and requires you to hit the entire database on every request, when state-of-the-art LLM systems are already pushing the limits of computation.
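
      For intuition, here's a toy sketch of the homomorphic idea using the python-paillier (phe) library; Paillier is only additively homomorphic arithmetic on numbers, nowhere near what serving an LLM would require, and the values are purely illustrative. The server computes on ciphertexts it cannot read, and only the user's private key recovers the result, which is why a warrant for plaintext would have to go through the user.

          from phe import paillier

          # The user generates the keypair; the server only ever sees the public key.
          public_key, private_key = paillier.generate_paillier_keypair()

          # The user encrypts their input before sending it to the server.
          ciphertext = public_key.encrypt(42)

          # The server computes on the ciphertext without being able to decrypt it:
          # Paillier supports adding constants or ciphertexts and scaling by constants.
          result = (ciphertext + 100) * 2   # still encrypted

          # Only the user, holding the private key, can recover the plaintext.
          print(private_key.decrypt(result))  # 284

      Even this toy arithmetic runs orders of magnitude slower than the plaintext equivalent, which is the overhead the comment above is pointing at.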

  • Yes. I'm glad the FBI was able to crack the phone without Apple's help in that San Bernardino case, which humiliated Apple as a little bonus.

    Apple also tried to freak the public out by saying the FBI wanted a backdoor added, which was inaccurate. You can't retroactively add a backdoor; that's the whole point of it. The FBI wanted Apple to unlock a specific phone, which Apple said it was already capable of doing.

  • With my current knowledge of the case, I'd say Apple was clearly in the moral wrong and it's a pretty dark mark in their past.

    My understanding is that the suspect was detained and law enforcement was not asking for a dragnet (at least that's what they stated publicly); they were asking for a tool for a specific phone. Apple stated the FBI was asking them to backdoor all iPhones; the FBI countered and said that's not what they were asking for. Apple then marched triumphantly into the moral sunset over the innocent victims; meanwhile, the FBI sent funds to a dubious group with questionable ethics and ties to authoritarian regimes.

    In my opinion, Apple should have expediently helped here, if for no other reason than to prevent the funding of groups that support dragnets, but also out of moral obligation to the victims.

    • Seeing how strained your good-faith interpretation is has further entrenched my belief that San Bernardino was a false flag operation by the FBI.

      There is no world in which a post-PRISM compliant Apple cannot be coerced by the feds for an investigation. It's just a matter of how much pressure the FBI wanted to apply; Apple's colossal marketing win is the sort of thing that you would invent if you wanted to manufacture consumer trust, not "prove" anything to cryptographers. Playing devil's advocate, "authoritarian regimes" are exactly the sort of place you would send the iPhone to if you already had the information and wanted to pretend like it was hard to access.

      If we assume a worst-case-scenario where Apple was already under coercion by the FBI, everything they did covers up any potential wrongdoing. It was all talk, no walk. Neither side had to show any accountability, and everyone can go on happily using their devices for private purposes.

> If a law enforcement or spy agency asks for a dragnet warrant like "find me all of the people that might be guilty of XYZ" or "find me something this individual might be guilty of," tech companies have a moral obligation to resist, in the best interests of humanity.

There is more evidence that they will do this than that they won't. ChatGPT is a giant dragnet, and 15 years ago I would've argued it's probably entirely operated and funded by the NSA. The police can already obtain a "geofenced warrant" today. We're no more than one senator up for re-election away from having a new law forced down our throats "for the children" that enables them to mine OpenAI data. That is, if they don't already have a Room 641A located in their HQ.

People pour their lives out into these fuzzy word predictors. OpenAI is holding a treasure trove of personal data, personality data, and other data that could be used for all kinds of intelligence work.

This is objectively bad regardless of how bad the criminal is. The last nearly 40 years of history, and especially the post-9/11 world, show that if we don't stand up for these people, the government will tread all over our most fundamental rights in the name of children/security/etc.

Basic rights aren't determined by how "good people" use them. They are entirely determined by how we treat "bad people" under them.

  • Just wait until AI is advanced enough that you can buy an AI best friend who will be with you all your life. I'm reminded of K's AI hologram companion Joi in Blade Runner 2049. The only thing they got wrong was that she was not collecting data for the megacorp.

    Thinking again, the AI will certainly be "free".

I don't think anyone has a moral obligation to do the state's bidding, and if you think these tools will only be used morally against "bad guys", you have not been paying attention to recent events.

I also don't think the interests of the state are "in the best interests of humanity".

Sometimes the price of having nice things and them remaining nice means that people you don't like can use them, too.

Does this imply that the tech company has the moral obligation to evaluate the merits of each warrant on a case-by-case basis?

  • They should resist fishing expeditions. I don't think that's that hard.

    • Is that only a function of the number of individuals targeted by a group of warrants? What determines "group membership" of a warrant? It seems like it actually is hard to determine, both for the legal system (there are many controversies in the U.S. about whether dragnet warrants are constitutional and what constitutes a dragnet warrant) and for a company receiving these warrants.

Not fair. It means idiots who type "how do I hide a body" get caught, while smart types from HN can hide their traces. In a fair society, both dumb and smart criminals should have an equal chance of getting caught. Imagine, for example, if you could use the Internet only after identification. And maybe there should be reduced punishment for the dumb types, for humanitarian reasons.

  • Absurd. Unrealistic.

    Does everyone have the same earning potential, regardless of their skill? Same with stealing potential.

    Edit: on the flip side, white-collar crimes leave a paper trail that traditional smash-and-grab crimes do not, so more white-collar criminals should be getting caught and convicted now.

> the Justice Department’s allegations against Rinderknecht are supported by evidence found on his phone

Sounds like they got the info from his phone, not from any servers, so this is likely not an example of a tech company "complying".

There are many routes that the government has to court order/warrant/subpoena information from tech companies.

The tech companies have just about zero ability to resist.

There should likely be legislation enacted that raises chat logs to the level of psychotherapist-patient privilege.

  • > There should likely be legislation enacted that raises chat logs to the level of psychotherapist-patient privilege.

    Medical records are up for grabs when it comes to investigations/discovery/subpoenas/warrants/etc.; it's one of the privacy exceptions in HIPAA.

    They can also be used to try to identify alleged criminals, missing people, fugitives, etc. For example, if they have DNA samples or bite impressions (even though bite-mark matching is BS), law enforcement can demand matching medical records without warrants.

The issue is that law enforcement personnel are going to do combinations of both. Corruption is real and companies are just made of people. Things can be done in relative secret without arousing controversy. This is the same logic libertarian shitheads use for why we shouldn't provide kids a school lunch.

Human nature doesn't follow your shitfuck ideal-driven rules, friend. I guarantee some day you'll find that out the hard way.