Comment by a4isms
1 day ago
That is a true and useful component of analyzing risk, but the point is that human behaviour isn't a simple risk calculation. We tend to over-guard against things that subjectively seem dangerous, and under-guard against things that subjectively feel safe.
This isn't about whether AI is statistically safer; it's about the user experience of AI: if we can provide the same guidance without lulling the human backup into complacency, we'll have an excellent augmented capability.