Comment by labrador

20 hours ago

As with cigarettes, we may see requirements for stronger warnings on AI output. The standard "ChatGPT can make mistakes" disclaimer seems rather weak.

For example, something like the "black box warning" on a prescription drug, or the warning on a pack of cigarettes?

Like:

Use of this product may result in unfavorable outcomes including self-harm, misguided decisions, delusion, addiction, detection of plagiarism, and other unintended consequences.