Comment by sho_hn
15 hours ago
I think this is good.
I've been very aggressive toward OpenAI on here about parental controls and youth protection, and I have to say the recent work is definitely more than I expected out of them.
Interesting. Do you believe OpenAI has earned user trust and will be a good steward of the enhanced data (biometric, demographic, etc.) they are collecting?
To me, this feels nefarious with the recent push into advertising. Not only are people dating these chat bots, but they are more trusting of these AI systems than of people in their own lives. Now OpenAI is using this "relationship" to influence users' buying behavior.
This is a thoughtful response and deserves discussion. Yes, certainly, OpenAI might get your age wrong. Yes, certainly, they’re signaling to advertisers.
But consider the OP's point: ChatGPT has become a safety-critical system. It is a tool capable of pushing a human towards terrible actions, and there are documented cases of it doing this.
In that context, what is the responsibility of OpenAI to keep their product away from the most vulnerable, and the most easily influenced? More than zero, I believe.
> It is a tool capable of pushing a human towards terrible actions
So are The Catcher in the Rye and The Birth of a Nation.
> the most vulnerable, and the most easily influenced
How exactly is age an indicator of vulnerability or susceptibility to influence?
> So are The Catcher in the Rye and The Birth of a Nation.
No, those are books. Tools are different, particularly tools that talk back to you. Your analogy makes no sense.
> How exactly is age …
In my experience, 12-year-old humans are much easier to sway with pleasant-sounding bullshit than 24-year-old humans. Is your experience different?
> ChatGPT has become a safety-critical system.
It's really, really not. "Safety-critical system" has a meaning, and a chat bot doesn't qualify. Treating the whole world as if it needs to be wrapped in bubble wrap is extremely unhealthy, and it's generally just used as an excuse for creeping authoritarianism.
"I'm sorry Dave, I'm afraid I can't do that"
I'm an engineer working on safety-critical systems and have to live with that responsibility every day.
When I read the chat logs of the first teenager who committed suicide with the help and encouragement of ChatGPT, I immediately started thinking about ways it could be avoided that would make sense in the product. I want companies like OpenAI to have the same reaction and try things. I'm just glad they are.
I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you truly were responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".
There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification, etc.) and plenty of nuance to implementations, but the whole censorship angle is lame.
I am against companies doing age verification like this due to the surveillance effects, but I agree with you that the censorship angle is not a good one.
I suppose mainly because I don't think a non-minor committing suicide with ChatGPT's help and encouragement matters less than a minor doing so. I honestly think the problem is GPT's user interface being a chat. It has the psychological effect that you can talk to ChatGPT the same way you can talk to Emily from school. I don't think this is a solvable problem if OpenAI wants this to be their main product (and obviously they do).
Maybe we don’t all need saving from ourselves. Maybe we need to grow up and have some personal responsibility. As someone who is happy to do that, seeing personal freedom endlessly slashed in the name of safety is tiresome.
My feelings have absolutely nothing to do with censorship. That's just an easy straw man for you to try to dismiss my point of view, because you're scared of not feeling safe.
A society which took psychological safety seriously would never have created ChatGPT in the first place. But of course seriously advocating for safety would cost one their toys, and for one unwilling to pay that cost, empowering the surveillance apparatus seems very reasonable and easily confused for safe. When one’s children or friends’ children can no longer enter an airport because some vibe-coded slop leaked their biometrics, we’ll see if that holds true.
Sorry, but for every chat log of a teenager who committed suicide due to AI, I'm sure you can find many more people/teens with suicidal thoughts or intent who are explicitly NOT acting on them because of advice from AI systems.
I'm pretty sure AI has saved more lives than it has taken, and there are pretty strong arguments that someone who's thinking of committing suicide will likely be thinking about it with or without AI systems.
Yes, sometimes you really do "have to take one for the team" when it comes to tragedy. Indeed, Charlie Kirk was literally talking about this the EXACT moment he took one for the team. It is a very good thing that this website is primarily not parents, as they cannot reason with a clear, unbiased mind. This is why we have dispassionate lawyers to try to find justice, and why non-parents should primarily be making policy for systems like this.
Also, parents today already go WAY too far with non-consensual actions taken toward children. If you circumcised your male child, you have already done something very evil that might make them consider suicide later. Such actions are so normalized in the USA that not doing it makes you seem weird.