Comment by aylmao

3 days ago

I'll note that Persona's CEO responded on LinkedIn [1] pointing out that:

  - No personal data processed is used for AI/model training. Data is exclusively used to confirm your identity.
  - All biometric personal data is deleted immediately after processing.
  - All other personal data processed is automatically deleted within 30 days. Data is retained during this period to help users troubleshoot.
  - The only subprocessors (8) used to verify your identity are: AWS, Confluent, DBT, ElasticSearch, Google Cloud Platform, MongoDB, Sigma Computing, Snowflake

The full list of sub-processors seems to be a catch-all for all the services they provide, which include background checks, document processing, etc., with identity verification being just one of them.

I've worked on projects that required legal to get involved, and you do end up with documents that sound excessively broad. I can see how one could paint a much grimmer picture from the documents than what's happening in reality. It's good to point it out and force clarity out of these types of services.

[1]: https://www.linkedin.com/feed/update/urn:li:activity:7430615...

Persona Identity, Inc. is a Peter Thiel-backed venture offering Know Your Customer (KYC) and Anti-Money Laundering (AML) solutions that leverage biometric identity checks to estimate a user’s age, including a proprietary “liveliness check” meant to distinguish real people from AI-generated identities.

Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face to politically exposed persons (PEPs), and generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

There are so many keywords in there that should raise a red flag, but funded by Peter Thiel should probably be enough.

https://www.therage.co/persona-age-verification/

All of which is meaningless if it's not reflected properly in their legal documents/terms. I've had interactions with the Flock CEO here on Hacker News and he also tried to reassure us that nothing fishy is/was going on. Take it with a grain of salt.

  • Why anyone would trust the executives at any company, when they are only incentivized to lie, cheat, and steal, is beyond me. It's a lesson every generation is hellbent on learning again and again and again.

    It used to be the default belief, throughout all of humanity, that greed is bad and dangerous; yet for the last 100 years you'd think the complete opposite was the norm.

    •   > when they are only incentivized to lie, cheat, and steal
      

      The fact that they are allowed to do this is beyond me.

      The fact that they do this is destructive to innovation, and I'm not sure why we pretend it enables innovation. There are thousands of multi-million-dollar companies that I'm confident most users here could implement, but the major reason many don't is that doing it properly is far harder than what those companies actually built. People who understand that an unlisted link is not an actual security measure know that things need to actually be under lock and key.

      I'm not saying we should go so far as to make mistakes so punishable that no one can do anything, but there needs to be some bar. There's so much gross incompetence out there that we're not even talking about ordinary incompetence, let alone mistakes by competent people.

      We are filtering out those with basic ethics. That's not a system we should be encouraging.

      11 replies →

    • > It used to be the default belief, throughout all of humanity, that greed is bad and dangerous

      And what used to be the default beliefs on rape and slavery?

  • Yup, exactly: if this is the truth, then put it in the terms/privacy policy, etc. Execs say anything these days, with zero consequences for lying in a public forum.

  • Can a CEO's word on LinkedIn and X be used to make claims against them?

    • Anything a publicly traded company states that would lead a person to decide to buy or sell stock is subject to SEC regulation.

      1 reply →

    • Absolutely. I don't know what legal jurisdiction they are subject to, but I could imagine that if someone sued an EU division/outpost in an EU court under a GDPR-type petition, these posts would be submitted as evidence. One could easily argue the CEO is acting on behalf of the company by posting under their real name. (Let's presume there is no identity fraud in these posts.)

      And don't forget that Elon Musk was tried in the US for defamation after making a bunch of posts on Twitter against some UK citizens. Assuming that you are posting under your real name, you are definitely legally responsible for those words.

But why believe that when their policy says any of it may not be true, or could change at any time?

Even if the CEO believes it right now, what if the team responsible for automatic deletion merely did a soft delete instead of a hard delete, "just in case we want to use it for something else one day"?

  • I don't believe that for one second. I can think of many examples of CEOs saying things publicly that were not, or turned out not to be, true!

My favourite 'thing' in the modern world is that 'we don't process and store your data' has come to mean 'we don't process and store your data - our partner does'.

Which might not even be stated explicitly; it might be that they just move it somewhere and then pass it on again, at which point it's outside your country's jurisdiction to enforce data protection measures.

Even if such a scheme is not legal, the fact that your data moves through multiple countries with different data protection regimes makes enforcing your rights basically impossible.

  • "We don't sell your data" translates to "we sell OUR data about you".

    They would never admit the data belongs to you while selling it. When they sell it, they declare themselves the owners of that data, which they derived from things you uploaded or told them, so they're never selling your data according to their lawyers.

    Another thing they like to do is sell the use or access to this data, without transferring the legal rights to the data, so they can say with a straight face they never sold the data. Google loves this loophole and people here even defend it.

> that require legal to get involved and you do end up with documents that sound excessively broad

If you let your legal team use such broad CYA language, it is usually because you are not sure what's going on and want cover, or because you actually want to keep the door open for broader use under those permissive legal terms. On the other hand, if you are sure that you will preserve users' privacy as you state in your marketing materials, then you should put it in legal writing explicitly.

A KYC provider is a company that doesn't start with neutral trust. It starts with a huge negative trust.

Thus it is impossible to believe his words.

  • Can you say more? Why isn't it neutral or slightly positive? I would assume that a KYC provider would want to protect their reputation more than the average company. If I were choosing a KYC provider I would definitely want to choose the one that had not been subject to any privacy scandals, and there are no network effects or monopoly power to protect them.

    • > Why isn't it neutral or slightly positive?

      Because KYC is evil in itself, and if the linked article does not explain to you why that is, then I certainly cannot.

      > KYC provider would want to protect their reputation more than the average company

      False. It is exactly the opposite. See, there are no repercussions for leaking customers' data, while properly securing said data is expensive and creates operational friction. Thus, there are NO incentives to protect data while there ARE incentives to care as little as possible.

      Bear in mind that KYC is a service no one wants; all customers are forced into it, and everybody hates it: customers, users, companies.

      2 replies →

  > - All biometric personal data is deleted immediately after processing.

The implication is that biometric data leaves the device. Is that even a requirement? Shouldn't that be processed on device, in memory, and only some hash + salt leave? Isn't this how passwords work?

I'm not a security expert so please correct me. Or if I'm on the right track please add more nuance because I'd like to know more and I'm sure others are interested

  • I'm not an expert, but I imagine biometric data is much less exact than a password. Hashes work on passwords because you can be sure that only the exact data would allow entry, but something like a face scan or fingerprint is never _exactly_ the same. One major tenet that makes hashes secure is that changing any single bit of input changes the entirety of the output. So hashes will, by definition, never allow the fuzzy authentication that's required with biometric data. Maybe there's a different way to keep it secure? I'm not sure, but you'd never be able to open your phone again if it required a 100% match against your original data.
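The avalanche property described above is easy to see with an ordinary cryptographic hash: flipping a single input bit changes roughly half the output bits, which is exactly why exact-match hashing can't tolerate the natural variation in a face scan. A minimal sketch using only the standard library (the input strings are made up for illustration):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/bits"
)

// hammingDistance counts the differing bits between two equal-length byte slices.
func hammingDistance(a, b []byte) int {
	d := 0
	for i := range a {
		d += bits.OnesCount8(a[i] ^ b[i])
	}
	return d
}

func main() {
	msg1 := []byte("biometric sample")
	msg2 := make([]byte, len(msg1))
	copy(msg2, msg1)
	msg2[len(msg2)-1] ^= 1 // flip exactly one bit of the input

	h1 := sha256.Sum256(msg1)
	h2 := sha256.Sum256(msg2)
	// Typically close to half of the 256 output bits differ.
	fmt.Printf("bits changed: %d of 256\n", hammingDistance(h1[:], h2[:]))
}
```

A fingerprint or face scan differs from its enrollment sample by far more than one bit every time, so a conventional hash of it would never match twice.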

    • I'd assume they'd use something akin to a perceptual hash.

      Btw, hashes aren't unique. I really do mean that an output doesn't come from a unique input. If f(x)=y then there is some z≠x such that f(z)=y.

      Remember, a hash is a "one way function". It isn't invertible (that would defeat the purpose!). It is a many-to-one function, meaning that "reversing" it yields a non-unique preimage. In the hash style you're thinking of, you try to make the output range so large that the likelihood of a collision is low (a salt making it even harder to exploit), but in a perceptual hash you want collisions, though only from certain subsets of the input.

      In a typical hash, a colliding input should be in a random location (knowing x doesn't inform us about z); knowledge of the input shouldn't give you knowledge of a valid collision. But in a perceptual hash you want collisions to be predictable: they exist in a localized region of the input space (all z near x, i.e., perturbations of x).

      https://en.wikipedia.org/wiki/Perceptual_hashing
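The simplest perceptual hash, the "average hash," shows the localized-collision property concretely: threshold each pixel against the image's mean brightness, pack the bits into a word, and compare hashes by Hamming distance rather than equality. A toy sketch, assuming the image has already been downscaled to an 8x8 grayscale grid (real implementations do the resizing too):

```go
package main

import (
	"fmt"
	"math/bits"
)

// aHash computes a 64-bit average hash from an 8x8 grayscale grid:
// each bit is 1 iff that pixel is brighter than the grid's mean.
func aHash(grid [64]uint8) uint64 {
	sum := 0
	for _, p := range grid {
		sum += int(p)
	}
	mean := sum / 64
	var h uint64
	for i, p := range grid {
		if int(p) > mean {
			h |= 1 << uint(i)
		}
	}
	return h
}

// distance counts differing bits; a small distance means "perceptually similar".
func distance(a, b uint64) int {
	return bits.OnesCount64(a ^ b)
}

func main() {
	var img [64]uint8
	for i := range img {
		img[i] = uint8(i * 4) // a smooth brightness gradient
	}
	noisy := img
	noisy[10] += 3 // a tiny perturbation, as from sensor noise

	fmt.Println(distance(aHash(img), aHash(noisy))) // prints 0: the change is absorbed
}
```

Unlike SHA-256 in the parent example, nearby inputs land on nearby (often identical) hashes, which is what makes fuzzy matching possible at the cost of deliberate collisions.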

      2 replies →

I'm not convinced there's any significant overlap between "people who are worried about which subprocessors have their data" and "people who don't think that eight subprocessors is a lot".

  • I mean, two of them are cloud vendors. The rest just seem like very boring components of a (somewhat) modern data pipeline.

    • The issue isn't the vendors themselves necessarily but the quantity of them. Plenty of boring things over the years have had security vulnerabilities that end up with data getting leaked, so each additional one is just more risk even if you trust them not to be actively malicious. All it takes is one well-meaning but careless vendor to make the whole house of cards collapse.

As an industry we really need a better way to tell what’s going where than:

- someone finally reading the T&Cs

- legal drafting the T&Cs as broadly as possible

- the actual systems running at the time matching what’s in the T&Cs when legal last checked in

Maybe this is a point to make to the Persona CEO. If he wants to avoid a public issue like this then maybe some engineering effort and investment in this direction would be in his best interest.

This is not the concern for me; I thought the risk was obvious to everyone. Though I've been tempted, because it means I'll "have more interactions" or whatever LinkedIn pitches, I didn't want to put a public signal out there saying "This is my real name, real job, real city" - to me it's like a pre-vetted database of marks for identity-theft criminals. You know?

I thought everyone, at least in security, would be somewhat concerned about this, but they're not. I get the benefits, and I want to enjoy those benefits too. I'd much prefer to privately confirm my name using IDs (zero problem with that) but then not have to show it or an exact profile photo. I'm sure there's a cryptographic way for my identity to be proven to anyone I choose to prove it to, whoever requires such bona fides. I dislike the surface of "proven identity for everyone". You know?

This to me is far more important than "a security-focused biometric company processed my data, therefore, being rational and modern, I will now have a meltdown." Every time you drive, use a payment method linked to your name, use a phone on a plan, use your laptop, go to a venue that scans IDs, rent something, catch a flight, or cross a border, your ID (or telemetric equivalents sufficient to identify you) is processed by some digital entity. Revolting against the principle of "my government-issued, not-truly-mine-anyway ID documents, or other provided bona fides, are being read by digital entities contracted to do that" seems nonsensical.

I think the bigger risk is always taking a photo of your passport and putting it on the internet, which is basically what the current LI verification means. Casual OSINT on a verified profile likely reveals the exact birthday via "happy birthday"-type posts (or cross-referencing other platforms), and "how old am I"-type image AI can give you rough years.

  • > I'm sure there's a cryptographic way for my identity to be proven to any who I chose to prove it to

    There is. The pattern is: generate a keypair locally, derive a DID (decentralized identifier) from the public key, and then selectively prove your identity to specific verifiers using digital signatures. No central authority ever holds your private key.

    The key difference from the LinkedIn model: you never hand biometric data to a third party. Instead, you hold a cryptographic identity that you control. If someone needs to verify you, they check a signature — not a database. You can prove you're the same entity across interactions without revealing anything about who you are in the physical world.

    This is exactly the approach behind things like W3C DIDs and Verifiable Credentials. The crypto has been solved for years; the adoption problem is that platforms like LinkedIn have no incentive to give users self-sovereign identity when the current model lets them be the middleman.

    I've been building an open implementation of this for AI agents (where the identity problem is arguably even worse — there's no passport to scan): https://github.com/The-Nexus-Guard/aip. But the same cryptographic primitives apply to human identity too.

    • I like this but want to marry it with the real world, too. How would you do that? LinkedIn would verify biometrics and then sign your DID? And you'd use that biometric-attested ID to prove yourself to whoever you want?

      I guess from a psychological and UX point of view, though, large platforms like LI have lots of "trust" in people's eyes (accurate or not), so if LI says "verified" we can trust that. It's not just a conspiracy for LinkedIn to intermediate themselves; it's human sociology. I would just like LI to remove the "self-dox pwn" from verified badges: attest, but let me redact.

Facebook at some period was pushing users to enable 2fa for security reasons, and guess what they did with the phone numbers they collected.

All of those statements require trust and/or the credible threat of a big stick.

Trust needs to earned. It hasn't been.

The big stick doesn't really exist.

Whelp, so long as the CEO says it's fine, we've no reason to worry about what's in the legal verbiage.

I am wondering what 'sub-processor' means here. Am I right in assuming that the Persona architecture uses Kafka, an S3 data lake in AWS and GCP, ElasticSearch, MongoDB for configuration or user metadata, and Snowflake for analytics, and thus all of these end up on the sub-processor list because the data physically touches these companies' products or infra hosted outside Persona? I hope they aren't each providing their own identity services, and that none of them are seeing my passport for further validation.

"The only subprocessors used to verify your identity are"... some of the biggest data mining companies on the planet. Excellent.

Right, because as seen over the last several years, Big Tech CEOs should totally be trusted on their promises, especially when it comes to how our sensitive personal data is stored and processed. And this goes even without knowing who is one of the better-known "personas" investing in Persona.

> what's happening in reality

that's the thing... excessively broad might not reflect reality TODAY, but it can be an opportunity in the future.

Why would we believe they are deleted after processing and not shared with the government?

  • What's the government going to do with a picture of the ID they, themselves issued to you?

    • Associate it with the specific service they don't want you using, or transactions they don't want you making, or conversations and connections they don't want you having.

    • As an example, the state government may issue a particular ID that I use in several different places. But the federal government did not issue that ID to me.

    • Keep in mind for most users of the service, the ID was not issued by the US government.

    • it's one service collecting ID's issued by dozens of governments.

      the already too centralized is being made even more centralized here.

This reads like their entire software stack. I don’t understand the role ElasticSearch plays; are people still using it for search?

Infrastructure: AWS and Google Cloud Platform

Database: MongoDB

ETL/ELT: Confluent and DBT

Data Warehouse and Reporting: Sigma Computing and Snowflake

If he's really so confident these assurances will stand scrutiny then why doesn't he put them in the agreement and provide legal assurance to that effect?

This is just "trust me bro" with more words. Even if true, the point is not what they do right now; the point is what they CAN do, which, as the terms make clear, is a lot more than that.

What possible legitimate use is Snowflake in verifying your identity? ES?

  • It's probably used to aggregate all their data sources to compile profiles. They then match the passport against their database of profiles to say: yup, this passport is for a real person, not a deceased person whose identity was stolen, for example.

I mean...

1) This is 'trust me bro' with more details

2) 'After processing' is wide enough to drive a truck through. What if processing takes a year? What if processing is defined as something involving recurring checks?

3) You have no contract with Persona or even LinkedIn beyond the fact that you agreed to LinkedIn's TOS (which you probably didn't even read).

4) The company that acquires or takes Persona private might have a very different idea of how it handles this.

5) What does verifying do for you, the user? I understand its value to LinkedIn and their ability to sell your attention to advertisers, but what do YOU gain?

Man, a top-voted, white-knight comment on each post involving FAANG gets really tiring