Comment by joe_the_user

2 years ago

I think the debate between "does amazing on metric X" versus "doesn't really understand the problem" reappears in many places and has no direct way to be resolved.

That's more or less because "really understands the problem" generally winds up being a placeholder for things the system can't do. Which isn't to say it's not important. One thing often included in "understanding" is the system knowing the limits of its own approach - current AI systems have a harder time giving a certainty value for a prediction than giving the prediction itself. But you could have a system that satisfied a metric for this, and other things would pop up - for example, what kind of certainty or uncertainty are we talking about (crucial for decision making under uncertainty).
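To make the "what kind of uncertainty" point concrete, here is a minimal, hypothetical sketch (plain Python, toy ensemble; the `ensemble_predict` function and the lambda "models" are illustrative, not from any real system). It shows that a model's averaged predictive probability and the disagreement across an ensemble are different quantities, and a decision-maker may need one far more than the other.

```python
from statistics import mean, pstdev

def ensemble_predict(models, x):
    """Each 'model' is just a function returning P(label=1 | x)."""
    probs = [m(x) for m in models]
    confidence = mean(probs)       # averaged predictive probability
    disagreement = pstdev(probs)   # spread across ensemble members (epistemic-ish)
    return confidence, disagreement

# Three toy "models" that agree on a familiar input and diverge on a novel one.
models = [
    lambda x: 0.90 if x == "seen-before" else 0.20,
    lambda x: 0.88 if x == "seen-before" else 0.95,
    lambda x: 0.92 if x == "seen-before" else 0.50,
]

for x in ["seen-before", "novel"]:
    conf, spread = ensemble_predict(models, x)
    print(f"{x:>12}: confidence={conf:.2f}, ensemble spread={spread:.2f}")

# A confident prediction with low spread and a "confident" prediction with
# high spread call for very different decisions under uncertainty.
```

So even a system that reports a single well-calibrated number can leave the decision-relevant question ("is this input something the model has effectively never seen?") unanswered.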