I'm an engineer working on safety-critical systems and have to live with that responsibility every day.
When I read the chat logs of the first teenager who committed suicide with the help and encouragement of ChatGPT, my immediate reaction was to think about ways it could have been avoided that still make sense in the product. I want companies like OpenAI to have the same reaction and to try things. I'm just glad they are.
I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you truly were responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".
There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification, etc.) and plenty of nuance to implementations, but the whole censorship angle is lame.
We shouldn't build our products and policies around one-off, darwin-award-level people like that teenager. It reduces the product's quality and increases the burden on every user.
I wholeheartedly reject the fully sanitized "good vibes only" nanny world some people desire.
I am against companies doing age verification like this due to the surveillance effects, but I agree with you that the censorship angle is not a good one.
I suppose mainly because I don't think a non-minor committing suicide with ChatGPT's help and encouragement matters less than a minor doing so. I honestly think the problem is that GPT's user interface is a chat. I think it has a psychological effect: you can talk to ChatGPT the same way you can talk to Emily from school. I don't think this is a solvable problem if OpenAI wants this to be their main product (and obviously they do).
Maybe we don’t all need saving from ourselves. Maybe we need to grow up and have some personal responsibility. As someone who is happy to do that, seeing personal freedom endlessly slashed in the name of safety is tiresome.
My feelings have absolutely nothing to do with censorship. That’s just an easy straw man for you to try and dismiss my point of view, because you’re scared of not feeling safe.
Cool, I'd like you to make a commercial system you sell access to and ensure that it is unsafe. I'll represent the injured, and we'll own all your corporate assets, and we'll likely pierce the corporate veil due to your wanton behavior.
I'm not under 18. I assume you aren't either.
A society which took psychological safety seriously would never have created ChatGPT in the first place. But of course seriously advocating for safety would cost one their toys, and for someone unwilling to pay that cost, empowering the surveillance apparatus seems very reasonable and is easily confused with safety. When one's children or friends' children can no longer enter an airport because some vibe-coded slop leaked their biometrics, we'll see if that holds true.
Sorry, but for every chat log of a teenager who committed suicide due to AI, I'm sure you can find many more people and teens with suicidal thoughts or intent who are explicitly NOT acting on them because of advice from AI systems.
I'm pretty sure AI has saved more lives than it has taken, and there are pretty strong arguments that someone who's thinking of committing suicide will likely be thinking about it with or without AI systems.
Yes, sometimes you really do "have to take one for the team" when it comes to tragedy. Indeed, Charlie Kirk was literally talking about this at the EXACT moment he took one for the team. It is a very good thing that this website is primarily not parents, as they cannot reason with a clear, unbiased opinion. This is why we have dispassionate lawyers to try to find justice, and why we should have non-parents primarily making policy involving systems like this.
Also, parents today are already going WAY too far with non-consensual actions taken towards children. If you circumcised your male child, you have already done something very evil that might make them consider suicide later. Such actions are so normalized in the USA that not doing it will get you seen as weird.
The relatively arbitrary cutoff at 18 is also an indication that this is a blunt tool, intended to address some low-hanging fruit of potential misuse but which will clearly miss the larger mark, since there will be plenty of false positives (not to mention false negatives).
Some kids are mature enough from day one to never need tech overlords to babysit them, while others will need to be hand-held through adulthood. (I've been online since I was 12, during the wild and woolly Usenet and BBS days, and was always smart enough not to give personal info to strangers; I also saw plenty of pornographic images [on paper] from an even younger age and turned out just fine, thank you.)
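To make the false-positive/false-negative point above concrete, here's a toy sketch, assuming (purely for illustration, nothing to do with how any real verifier works) that whatever an age-estimation system measures is just the true age plus noise. Threshold that noisy estimate at 18 and you get both kinds of errors, in proportions set entirely by how noisy the signal is:

    import random

    # Toy model: the verifier sees the true age plus noise, and anything
    # estimated under 18 gets flagged as a minor.
    random.seed(0)

    def noisy_estimate(true_age: float, noise_sd: float = 3.0) -> float:
        """Stand-in for whatever signal an age-prediction system actually uses."""
        return true_age + random.gauss(0, noise_sd)

    n = 100_000
    false_positives = 0  # adults flagged as minors
    false_negatives = 0  # minors treated as adults
    for _ in range(n):
        true_age = random.uniform(13, 30)
        flagged_minor = noisy_estimate(true_age) < 18
        if flagged_minor and true_age >= 18:
            false_positives += 1
        elif not flagged_minor and true_age < 18:
            false_negatives += 1

    print(f"false positive rate: {false_positives / n:.1%}")
    print(f"false negative rate: {false_negatives / n:.1%}")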
Maybe instead of making guesses about people's ages, when ChatGPT detects potentially abusive behavior, it should walk the user through a series of questions to ensure the user knows and understands the risks.
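A minimal sketch of what that could look like; the trigger condition and the questions are made up here for illustration, not anything OpenAI has actually described:

    # Hypothetical flow: instead of guessing ages, gate the conversation on an
    # explicit risk-acknowledgement step when a message trips a safety check.
    RISK_QUESTIONS = [
        "Do you understand that I'm an AI, not a counsellor or a friend?",
        "Do you know how to reach a crisis line or someone you trust right now?",
        "Would you like to see support resources before we continue?",
    ]

    def looks_risky(message: str) -> bool:
        # Placeholder for whatever classifier actually flags risky conversations.
        return "made-up risk signal" in message.lower()

    def acknowledges_risks(ask) -> bool:
        # Walk the user through the questions; `ask` returns a yes/no answer.
        return all(ask(question) for question in RISK_QUESTIONS)

    def handle(message: str, ask) -> str:
        if looks_risky(message) and not acknowledges_risks(ask):
            return "Pausing here and showing support resources instead."
        return "...normal reply..."

    # Example: a user who trips the check but declines the questions.
    print(handle("made-up risk signal", ask=lambda q: False))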