Comment by a4isms

3 days ago

If you show me a tool that does a thing perfectly 99% of the time, I will eventually stop checking it. Now let me ask you: how do you feel about the people who manage the security for your bank using that tool? And about them eventually overlooking a security exploit?

I agree that there are domains for which 90% good is very, very useful. But 99% isn't always better. In some limited domains, it's actually worse.

Counterpoint.

Humans don't get it right 100% of the time.

  • That is a true and useful component of analyzing risk, but the point is that human behaviour isn't a simple risk calculation. We tend to over-guard against things that subjectively seem dangerous, and under-guard against things that subjectively feel safe.

    This isn't about whether AI is statistically safer, it's actually about the user experience of AI: If we can provide the same guidance without lulling a human backup into complacency, we will have an excellent augmented capability.