For some reason ChatGPT has suddenly started thinking I'm a teen. Every answer starts out "Since you are a teen I will..." and it prompts me to upload an ID to show my age. I'm 35.
> simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service
The normalization of identity verification to use internet services is itself a problem. It's described much better than I could by EFF here:
https://www.eff.org/deeplinks/2026/01/so-youve-hit-age-gate-...
OpenAI demanded I prove my age in November 2025. I'm an educated 50-year-old and had been paying for the service for over a year. When they insisted I prove my age, I went through two layers of support but got nowhere. They insisted I go through their verification process. I refused and cancelled my subscription. This may be a losing battle, but I'm not going to upload a photo to these services.
Does OpenAI have an incentive to get age prediction "wrong" so that more people "verify" their ages by uploading an ID or scanning their face, allowing "OpenAI" to collect more demographic data just in time to enable ads?
Maybe they should start scanning users' irises.
Oh yes. In fact, I read on Reddit that they have secret project to use Worldcoin.
"How do you do, fellow kids? Err...skibidi toilet?" That should work, it's at least three years old now I think?
Forever young, my dude. Forever young.
Haha ya I asked my younger brother if he's gotten it and he said he didn't. I'm like alright, I must give off youthful vibes ;)
> behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.
Surely they're using "history of the user-inputted chat" as a signal and just choosing not to highlight that? Because that would make it so much easier to predict age.
I also saw a Reddit post once which showed that even if you type something and don't press enter, it's still logged in ChatGPT.
So I am pretty sure they might be using that information as well. I don't see any reason for them not to.
So if you wrote something to ChatGPT and then removed it / didn't send it? Yeah, they might be using that history too.
Chat history would be a good signal to predict age until you give kids a reason to try to confound it.
I, for one, would love to see the gen alphas tiktoking about what 401k questions to type into chatgpt
Even more adults would be flagged as children.
Everyone's saying this is for advertising, but I don't think it is. It's so they can let ChatGPT sext with adults.
It’s worse than that.
They want to build “the scream room” from Frank Herbert’s _The Jesus Incident_.
There is immense power to wield over someone when you know what they did in the scream room.
It's for both; the loosened guardrails around sex and the advertising roll-out are both explicitly for logged-in adults, and the age prediction is how they determine the logged-in user is an “adult”.
Do you expect the data collected for age verification will be completely separate from the advertising apparatus? I would expect the incentives would align for this to enhance their advertising options.
It'll almost certainly get used for both, but I believe "adult content" is the primary motivator. If it was just for ads they wouldn't even bother announcing it as a "feature", they'd just do it.
Also:
"Users can check if safeguards have been added to their account and start this process at any time by going to Settings > Account."
Agreed, see https://www.reuters.com/business/openai-allow-mature-content...
Discussed here at the time,
https://news.ycombinator.com/item?id=45604313 ("Chat-GPT becomes Sex-GPT for verified adults (twitter.com/sama)")
Exactly. Advertising will drive away users unless they are slowly trained into it, which will take time - time that OpenAI doesn't have.
Meanwhile, the adult market is huge and a sure-shot source of revenue from a user base that is less likely to mind the ads.
Hard no. It's so easy to get "flagged" by opaque systems for "Age verification" processes or account lockouts that require giving far too much PII to a company like this for my liking.
> Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Yeah, my LinkedIn account, which was 15 years old and a paid Pro account for several years, got flagged for verification (no reason ever given; I rarely used it for anything other than interacting with recruiters), with this same company as the backend provider. They wouldn't accept a (super invasive-feeling) full facial scan plus a REAL ID; they also wanted a passport. So I opted out of the platform. There was no one to contact - it wasn't "fast" or "easy" at all. This kind of behavior feels like a data grab for more nefarious actors and data brokers further downstream of these kinds of services.
The unfortunate reality is this isn't just corporations acting against users' interests; governments around the world are pushing for these surveillance systems as well. It's all about centralizing power and control.
Don't forget the journalists.
Facebook made journalists a lot less relevant, so anything that hurts Meta (and hence anything that hurts tech in general) is a good story that helps journalists get revenge and revenue.
"Think of the children", as much as it is hated on HN, is a great way to get the population riled up. If there's something that happens which involves a tech company and a child, even if this is an anecdote that should have no bearing on policy, the media goes into a frenzy. As we all know, the media (and their consumers) love anecdotes and hate statistics, and because of how many users most tech products have, there are plenty of anecdotes to go around, no matter how good the company's intentions.
Politicians still read the NY Times, who had reporters admit on record that they were explicitly asked to put tech in an unfavorable light[1], so if the NYT says something is a problem, legislators will try to legislate that problem away, no matter how harebrained, ineffective and ultimately harmful the solution is.
[1] https://x.com/KelseyTuoc/status/1588231892792328192?lang=en
It cracks me up that Persona is the vendor OpenAI will use to do manual verifications (as someone who works on integrations with Persona).
I'm glad ChatGPT will get a taste of VC startup tech quality ;)
Yep. Whenever platforms opt for more data I opt out. And like clockwork they let loose all that PII to hackers within months.
Yeah, this is all far far too invasive. The goal is obviously to gather as much data on you as possible under whatever pretense users are most likely to accept. "Think of the children", as always. This will then be used to sell advertising to you, or outright sell it to data brokers.
New boss, same as the old boss.
Pretty cool. Evidence that you can do whatever you want under the banner of 'protecting the kids.'
Protecting the kids and fighting terror. Anything that can't be argued against is always used as a justification by people in power who don't want to incite a riot.
Politicians, CEOs, lawyers - it's standard practice because it's so effective.
Do regulators really care about a predicted age? I feel like they require hard proof of being of age to show explicit content. The only ones that care about a predicted age are advertisers.
In the UK, age verification just has to be "robust" (not further specified). Face scanning, for example, is explicitly designated as an allowed heuristic.
It's not so much for the regulators as it is for the advertisers.
At this point, just use Gemini if you need SOTA (yes, it's Google, and it has its issues), or try chat.z.ai, which I have been using more and more for simple text tasks (like "hey, can you fix this Docker issue?"). I feel like chat.z.ai is pretty good, plus it runs open-source models (honestly, chat.z.ai feels pretty SOTA to me).
Kagi's Assistant is the most useful tool I've found as far as searching goes, and occasionally simple codegen. Lets you use a wide variety of models and isn't tracking me.
Any proof that you are above a certain age will also expose your identity. That is the only reason regulators care about children's safety online: because they care about ID. LLMs are very good at profiling users on Hacker News and finding alt accounts, for example; profiling is the best use case for LLMs.
So there you go: maybe it won't give exactly what regulators say they want, but it will give exactly what they truly want.
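To make the profiling point concrete: it takes only a few lines against any LLM API. What follows is a hedged sketch, not anything OpenAI or a regulator has published - the model name and the prompt are my own assumptions about a plausible setup, using the standard openai Python client:

    # Sketch only: model choice and prompt are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    comments = [
        "example public comment #1",
        "example public comment #2",
    ]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Guess the likely age range, region, and profession "
                       "of the author of these comments:\n" + "\n".join(comments),
        }],
    )
    print(resp.choices[0].message.content)  # the model's demographic guesses

And if a random user can do this against public comments, the platform can certainly do it against your private chat history.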
>young people deserve technology that both expands opportunity and protects their well-being.
Then maybe OpenAI should just close shop, since (SaaS) LLMs do neither in the mid to long term.
> Viral challenges that could encourage risky or harmful behavior in minors
Why would it encourage this for anyone?
I was depressed but not surprised at how easy it turned out to be to get people to take horse-dewormer and wash it down with unpasteurized milk
Hey Al, they might be implying that non-minors would be impervious to the viral challenges based on some sort of well-developed critical thinking faculties. I am not so optimistic.
A lot of people are using AI as a trusted friend, because they don't really have anyone else. A good friend would, I hope, talk someone out of doing something dangerous just to get a silly viral video. With AI being trained on the internet, that's going to have a very different take on things, as the internet only cares about the spectacle, not the person performing it.
After watching politics over the past decade it seems people of all ages have no critical thinking skills.
This could have the unintended consequence of encouraging under-age users to ask more 'adult' questions in order to try to trick it into thinking they're an adult. Analogous to the city that wanted to get rid of rats, so it offered a bounty for every dead rat, and to the surprise of nobody except the policy makers, the city ended up with more rats, not fewer. (Lesson: they thought they were incentivising fewer rats, but unintentionally incentivised more.)
The padding in OpenAI's statement is easy to see through:
> The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.
(The only real signal here is 'usage patterns' - AKA the content of conversations. The other variables are obfuscation to soften the idea that OpenAI will be poring over users' private conversations to figure out whether they're over/under age.)
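For what it's worth, here is a toy sketch of what a classifier over exactly the signals they list might look like. Everything in it - the feature names, the data, the model choice - is a made-up assumption for illustration; nobody outside OpenAI knows what they actually run:

    # Toy sketch: features, data, and model are assumptions, not OpenAI's system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # One row per account:
    # [account_age_days, share_of_activity_after_10pm, msgs_per_day, stated_age]
    X = np.array([
        [1200, 0.10,  4.0, 41],   # old account, daytime use, adult stated age
        [  60, 0.55, 22.0, 19],   # new account, late-night, heavy use
        [ 900, 0.20,  6.0, 35],
        [  30, 0.60, 30.0, 18],
    ])
    y = np.array([0, 1, 0, 1])    # 1 = minor, 0 = adult (toy labels)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[45, 0.5, 25.0, 20]])[:, 1])  # P(minor)

Even in toy form you can see how weak these features are on their own, which is why the suspicion that conversation content is the real signal seems right.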
Worth also noting 'neutered' AI models tend to be less useful. Example: Older Stable Diffusion models were preferred over newer, neutered models: https://www.youtube.com/watch?v=oFtjKbXKqbg&t=1h16m
They're trying to make ChatGPT more attractive to advertisers.
I hope Anthropic or someone pulls an Apple and has the taste to say no to ads. Maybe it will just be Apple. (Even their resistance is slowly fading...)
We don't need to jam ads into every orifice.
I hope there's more value to be had not doing ads than there is to include them. I'd cancel my codex/chatgpt/claude if someone planted the flag.
OpenAI seems to think it has infinite trust to burn.
If you don’t want ads, pay for ChatGPT. If you aren’t making them money via ads or subscriptions, why should they care about you?
Apple makes tens of billions of dollars per year on ads. Your conception of Apple and who they are have diverged.
You give too much credit to Apple in the AI age. Especially since they've already partnered with Google to power Siri with Gemini now.
Apple is a has-been. Anthropic is best positioned to take up the privacy mantle, and may even be forced to, given that most of their revenue is enterprise and B2B.
Exactly. The more data they can collect, the better.
Is it not in OpenAI's best interest to accidentally flag adults as teens so they have to "verify" their age by handing over their biometric data (a Persona face scan) or their government ID? Certainly that level of granularity will enhance the product they offer to their advertisers.
> We’re learning from the initial rollout and continuing to improve the accuracy of age prediction over time
> While this is an important milestone, our work to support teen safety is ongoing.
Agreed. I guess we'll see some pushback similar to what Apple got over CSAM scanning, but overall it's about getting better demographics on their consumers for better advertising, especially when you have one-click actions combined with it. We'll see a handful of middleware plugins (like Honey) popping up, which I think is the intended use case for something like chat-based apps.
Random reply: 20 days ago you asked for my ChatGPT custom instructions to be more skeptical. They now read:
Use an encouraging tone. Adopt a skeptical, questioning approach. Call me on things which don't seem right. List possible assumptions I'm making if any.
Whenever anything involving advertisers and AI comes up I wonder, are we supposed to believe both that they are on the cusp of creating God and that advertising revenue is a meaningful second goal?
When we look at how fast and coordinated the rollout of age verification has been around the globe, it's hard not to wonder if there was some impetus behind it.
There are dark sides to the rollout that EFF details in their resource hub: https://www.eff.org/issues/age-verification
There is a confluence of surveillance capitalism and a global shift towards authoritarianism that makes it particularly alarming right now.
Agreed - Interesting that these systems inevitably involve proving you're a citizen in some way, which seems unnecessary if your goal is to try to figure out someone's age.
Considering that OpenAI is having trouble getting its models to avoid recommending suicide (something it probably does not want for ANY user), I rather doubt this age prediction is going to be that helpful for curbing the tool's behavior.
How long before a phrase is found that causes a predicted birthdate of 1970/01/01 ?
More bullshit from the masters of the bullshit generator.
It's nonsense and doesn't work. They "age predicted" my account a couple of months back, saying I'm under 18, while I'm a man in my 40s who uses ChatGPT mostly for work-related stuff - nothing that would indicate someone under 18. So now they are asking for a government ID to prove it. Yeah, no thanks.
That's because it was never really about protecting the children.
No more typing "how is babby formed" I guess.
"Am I pregante"
Creepy people doing creepy things.
"it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."
See, it starts with gender, and if (user.gender === "Female") user.age = 29.
After that, the algorithm gets very complex and becomes a black box. I'm sure they spent billions training it.
Looks like an elegant solution. And yes, demographics are useful for advertising.
This title gave me a weird feeling as if they were going to predict my own age.
Maybe at some point they will graduate to being able to predict who I am from "Friends"
Exactly, I'm genuinely impressed by how few people here have figured this out.
That’s what they’re doing
I think this is good.
I've been very aggressive toward OpenAI on here about parental controls and youth protection, and I have to say the recent work is definitely more than I expected out of them.
Interesting. Do you believe OpenAI has earned user trust and will be good stewards of the enhanced data (biometric, demographic, etc) they are collecting?
To me, this feels nefarious with the recent push into advertising. Not only are people dating these chat bots, they are also more trusting of these AI systems than of people in their own lives. Now OpenAI is using this "relationship" to influence users' buying behavior.
This is a thoughtful response and deserves discussion. Yes, certainly, OpenAI might get your age wrong. Yes, certainly, they’re signaling to advertisers.
But consider OP's point — ChatGPT has become a safety-critical system. It is a tool capable of pushing a human towards terrible actions, and there are documented cases of it doing this.
In that context, what is the responsibility of OpenAI to keep their product away from the most vulnerable, and the most easily influenced? More than zero, I believe.
> It is a tool capable of pushing a human towards terrible actions
So is Catcher In The Rye and Birth of a Nation.
> the most vulnerable, and the most easily influenced
How exactly is age an indicator of vulnerability or subject-to-influence?
> ChatGPT has become a safety-critical system.
It's really really not. "Safety-critical system" has a meaning, and a chat bot doesn't qualify. Treating the whole world as if it needs to be wrapped in bubble wrap is extremely unhealthy, and it's generally just used as an excuse for creeping authoritarianism.
"I'm sorry Dave, I'm afraid I can't do that"
I'm an engineer working on safety-critical systems and have to live with that responsibility every day.
When I read the chat logs of the first teenager who committed suicide with the help and encouragement of ChatGPT, I immediately thought about ways that could be avoided that make sense in the product. I want companies like OpenAI to have the same reaction and try things. I'm just glad they are.
I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you truly are responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".
There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification, etc.) and plenty of nuance to implementations, but the whole censorship angle is lame.
It feels like OpenAI is moving into the extraction phase far too soon. They are making their product less appealing to end users with ads and aggressive user-data gathering (which is what this really is). Usually you have to be very secure in your position as a market segment owner before you start with the anti-consumer moves, but they are rapidly losing market share, and they have essentially no moat. Is the goal just to speed-run an IPO before they lose their position?
The Minority Report vibe is getting stronger by the minute.
I imagine they're building this system with the goal of extracting user demographics (age, sex, income) from chat conversations to improve advertising monetization.
This seems to be a side project of their goal and a good way to calibrate the future ad system predictions.
Exactly, all under the guise of "protect the children" - a tried and true surveillance and control trope.
For context though, people have been screaming lately at OpenAI and other AI companies about not doing enough to protect the children. Almost like there is no winning, and one should just make everything 18+ to actually make people happy.
Every "protect the children" measure that involves increased surveillance is balanced by an equal and opposing "now the criminal pedophiles in positions of power have more information on targets".
Tbf, they already had all the data they could use as they liked as per their EULA, I don't think they need a cover story.
And also to reduce account sharing. How will a family share an account when they simultaneously make the adult account more “adult” and the kids' account more annoying for adults?
Lots of upsides for them.
I think they're quite explicitly saying they already can determine your demographics from your chats. Which is almost certainly true for most users.
I don't chat - I prompt. And usually zero-shot in an incognito window. I doubt it'll determine much of anything about my demographics.
They definitely can do that already, and ChatGPT will happily tell you all the demographics it thinks it knows about you if you ask it.
Chatgpt has ads now?
https://openai.com/index/our-approach-to-advertising-and-exp...
Safe to say everything in existence is created to extract user demographics. It was never about you finding information; this trillion-dollar market exists to find you. It was never about you summarizing information to get around advertisements; it is about profiling you.
This trillion-dollar market is not about empowering users to style their pages more quickly, heh.
Age detection was already very effective with Leisure Suit Larry 3 age questions.
https://allowe.com/games/larry/tips-manuals/lsl3-age-quiz.ht...
That's how I learned the meaning of "prophylactic" at age 10, in front of a neighborhood friend's 286.
Some of those questions are highly regional. Like the W-4 being a tax form... Not where I'm from it's not!
That's really funny and clever that this was a thing, thanks for sharing. I never had the fortune of trying to guess my way through these.
> typical times of day when someone is active, usage patterns over time,
> Users [...] will always have a [...] simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Nice, so now your most secret inner talk with the LLM can be directly associated with your face and ID. Get ready for the fun moment when Trump decides he needs to see your discussions with the AI when you cross the border or piss him off...
20 years of Google searches, iCloud calendar events, Microsoft emails, Netflix viewing and Amazon purchases have doomed you already.
Wait, I don't understand this. Does it mean that they can erroneously predict I'm a minor, covertly restrict my account without me knowing? I guess it's time to cancel my subscription.
Yes, but it's to protect the children! ;) Just scan your face in the app and your restrictions will be lifted.
I just asked ChatGPT. "Based on everything I've asked, how old do you think I am?" It was dead-on with its answer. It guessed 30-35. I'm 32.
That was just a spur-of-the-moment question. I've been using ChatGPT for over six months now.
>It was dead-on with its answer. It guessed 30-35. I'm 32.
Horoscopes must feel like literal magic to you.
But did it tell you on what basis it made that assumption?
Yes, it listed a few of my past questions as reference points. I had asked some questions about Nintendo 64/Dreamcast and Gamecube games. It also used information I had asked it about programming languages and some work-related questions to guess my age.
I don't know how OpenAI plans to do this going forward, just quickly read the article and figured that might be a good question to ask ChatGPT.
Edit: I just followed that up with, "Based on everything I've asked, what gender am I?" It refused to answer, stating it wouldn't assume my gender and treats me as gender neutral.
So I guess it's ok for an AI agent to assume your age, but not your gender... ?
I don't really feel like diving into the ethics of OpenAI at the moment lol.
In case it wasn't clear, LLM conversations are being analyzed in much the same way as social media advertising profiles...
"Q: ipsum lorem
ChatGPT: response
Q: ipsum lorem"
OpenAI: take this user's conversation history and insert the optimal ad that they're susceptible to, to make us the most $
OpenAI are liars. I have all the privacy settings on, and it still assumes things about me that it would only do if it knew all my previous conversations.
Sounds like a good reason to stop using it, no?
Their teen content restrictions
> ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content, such as:
* Graphic violence or gory content
* Viral challenges that could encourage risky or harmful behavior in minors
* Sexual, romantic, or violent role play
* Depictions of self-harm
* Content that promotes extreme beauty standards, unhealthy dieting, or body shaming
That wording implies that's not comprehensive, but hate and disinformation are omitted.
Let's be honest - to protect the children, big tech will put everyone under the suspicion of being one. And the issue is not how they use the technologies they have, because they have a moral responsibility to use them safely, but that we don't have technologies of ours.
What I wonder lately is how an adult person can be empowered by tech to bear the consequences of their actions, and the answer usually is that we cannot. We don't own the means of production, in the literal Marxist sense of the phrase, and we are being shaped by outside forces that define what we can do with ourselves. And it does not matter whether those forces are benevolent or not; it matters that they are not us.
Winter is coming and we are short on thermal underwear.
The Chinese open models being a reason for hope is just a very sad joke.
Well, I hope it's better than Spotify's age prediction, which came to the conclusion that I'm 87 years old.
Seriously though, this is the most easily gameable thing imaginable; teens are surely clever enough to figure out how to pretend to be an adult. If you've concluded that your product is unsuited for kids, implement actual age verification instead of these shoddy stochastic surveillance systems. There's a reason "his voice sounded deep" isn't going to work for the cashier who sold kids booze.
This is 100% for advertising, not user safety.
It's absolutely crucial for effective ad monetization to know the user's age - significant avenues are closed down by legislation like COPPA and similar laws around the world. Age severely limits which users can even be shown ads, the kinds of ads, and whether data can be collected for profiling and targeting.
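To illustrate the shape of the constraint: ad systems end up gating personalization on age brackets. The tiers below are a simplified sketch of COPPA-style rules under my own assumptions, not a statement of what any law actually requires:

    # Simplified sketch; tiers and cutoffs are illustrative assumptions.
    def ad_policy(age):
        if age is None:
            return "no_targeting"     # unknown age: safest default
        if age < 13:
            return "no_targeting"     # COPPA territory: no behavioral profiling
        if age < 18:
            return "contextual_only"  # ads keyed to content, not to a profile
        return "full_targeting"       # profiling and targeted ads allowed

    print(ad_policy(12), ad_policy(16), ad_policy(35))

Which is exactly why a confident age estimate for every account is worth so much to an ad business.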
Are you suggesting regulations designed to protect children actually require more data collection and enable the surveillance economy?
Yes…? Even the EFF thinks so - https://www.eff.org/issues/age-verification
If this was for safety, they could’ve done it literally any time in the past few years - instead of within days of announcing their advertising plans!
How do I block these ads?
Stop using ChatGPT or pay $20/month (for now). Alternatively use the APIs instead
I meant the ads submitted to HN