Comment by samename

12 hours ago

Does OpenAI have an incentive to get age prediction "wrong" so that more people "verify" their ages by uploading an ID or scanning their face, allowing "OpenAI" to collect more demographic data just in time to enable ads?

YES. They all do. Everyone is drooling to get their hands on your biometric data and medical info.

  • I have worked in this space, and in my experience age/identity verification is usually driven by regulatory or fraud requirements — typically externally imposed.

    Product managers hate this: they want _minimum_ clicks for onboarding and time-to-value, and any benefit that could be derived from the data is minuscule compared to the detrimental effect on signups and retention when this stuff is put in place. It's also surprisingly expensive per verification and wastes a lot of development and support bandwidth. Unless you successfully outsource the risk, you end up with additional audit and security requirements from handling radioactive data. The whole thing is usually an unwanted tarpit.

    • > Product managers hate this

      Depends on what product they manage, at least if they're good at their job. A product manager at a social media company knows it's not just about "least clicks to X", but about a lot of other things along the way.

      Surely the product managers at OpenAI are briefed on the potential upsides of having a concrete ID for every user.


    • There is no way that the likes of OpenAI can make a credible case for this. What fraud angle would there be? If they were a bank, I could see the point.
