Comment by ncouture
7 hours ago
I would absolutely not consider this overreaching if the statement within this thread that "it had referred the user to mental help hotlines multiple times in the past" is true.
This speaks to the broader point that much of today's AI is not ready for the enterprise, especially when interconnected with other AI agents, since it lacks identity and privileged access management.
Perhaps one could establish rules for "using AI for what it is," at least within the boundaries of the general public's web interface: not just relying on the cases where the system advertises itself as "unable to provide medical advice" or "prone to making mistakes," but also validating that the person actually understands — asking them directly (or perhaps somewhat obviously indirectly) and judging whether they are aware that they are talking to a computer.