Comment by BatteryMountain

4 hours ago

YES. They all do. Everyone is dying to get their hands on your biometric data and medical info.

I have worked in this space, and in my experience age / identity verification is usually driven by regulatory or fraud requirements, typically imposed from outside.

Product managers hate this: they want the _minimum_ number of clicks between onboarding and the user getting value. Any benefit that could be derived from the data is minuscule compared to the detrimental effect on signups and retention when this stuff is put in place. It's also surprisingly expensive per verification and wastes a lot of development and support bandwidth. Unless you successfully outsource the risk, you end up with additional audit and security requirements from handling radioactive data. The whole thing is usually an unwanted tarpit.

  • > Product managers hate this

    Depends on what product they manage, at least if they're good at their job. A product manager at a social media company knows it's not just about "least clicks to X", but about a lot of other things along the way.

    Surely the product managers at OpenAI are briefed on the potential upsides of having a concrete ID for every user.

    • Making someone produce an identity document or turn on their camera for a selfie absolutely tanks your funnel. It's dire.

      The effect is strong enough that a service which doesn't require that will outcompete a service which does. Which leads to nobody doing it in competitive industries unless a regulator forces it for everybody.

      Companies that must verify will resort to every possible dark pattern to get you over this massive "hump" in their funnel: making you complete all the other signup steps before demanding the docs, promising you free stuff or credit on successful completion, etc. There is a lot of alpha in figuring out ways to defer it, reduce its impact, or make the process simpler (see the sketch at the end of this comment).

      There is usually a fair bit of ceremony and regulation around how verification data is used, and audits of what happens to it are always a possibility. Sensible companies keep IDV data segregated from product data.
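
      To make the deferral idea concrete, here is a minimal sketch (the step names and the `needsIdv` flag are hypothetical, not any real product's flow) of a funnel that only surfaces IDV once everything else is done:

      ```typescript
      // Hypothetical deferred-IDV funnel: collect every other signup step
      // first, so the user is already invested before hitting the "hump".
      type SignupStep = "email" | "profile" | "payment" | "idv";

      interface SignupState {
        completed: Set<SignupStep>;
        needsIdv: boolean; // set by regulatory / fraud rules, not product choice
      }

      // IDV is deferred: it is only surfaced after all other steps are done.
      function nextStep(state: SignupState): SignupStep | "done" {
        for (const step of ["email", "profile", "payment"] as const) {
          if (!state.completed.has(step)) return step;
        }
        if (state.needsIdv && !state.completed.has("idv")) return "idv";
        return "done";
      }
      ```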

  • There is no way the likes of OpenAI can make a credible case for this. What fraud angle would there be? If they were a bank I could see the point.

    • Regulatory risk around child safety: DSA Article 28 and the like. Age prediction is actually the "soft" version, i.e., try not to bother most users with verification, but do enough to reasonably claim you meet the requirements. They also get to control how sensitive it is in response to the political / regulatory environment.
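
      As a rough sketch of what that "soft" version could look like in practice (the signals, the `sensitivity` knob, and the decision names here are all hypothetical, not OpenAI's actual system):

      ```typescript
      // Hypothetical soft age gate: act on a model's age estimate when it is
      // confident, and only escalate a small slice of users to hard IDV.
      interface AgePrediction {
        estimatedAge: number;
        confidence: number; // 0..1, from whatever signals the model uses
      }

      type GateDecision = "allow" | "restrict" | "verify";

      // `sensitivity` is the tunable parameter: raising it pushes more users
      // below the confidence bar and into hard verification, which is how the
      // operator responds to a stricter regulatory climate.
      function gate(p: AgePrediction, minAge: number, sensitivity: number): GateDecision {
        if (p.confidence >= sensitivity) {
          return p.estimatedAge >= minAge ? "allow" : "restrict";
        }
        return "verify"; // low confidence: this user gets real verification
      }
      ```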