Comment by ffsm8
4 hours ago
But how specifically in Chess?
These human-in-the-loop systems basically list possible moves with a likelihood of winning for each, no?
So how would the human be a demerit? It'd mean the human for some reason decided to always pick an option the AI wouldn't take, but how would that make sense? The AI would already list the "correct" move with a higher likelihood of winning.
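For concreteness, that kind of interface might look roughly like the sketch below: ask the engine for its top few lines and convert its evaluations into rough win probabilities for the human to inspect. This assumes python-chess and a local Stockfish binary; the path, depth, and the logistic centipawn-to-probability conversion are all placeholder choices for illustration, not any particular Centaur tool's actual implementation.

```python
import chess
import chess.engine

STOCKFISH_PATH = "/usr/bin/stockfish"  # assumption: point this at your own binary

def list_candidate_moves(board: chess.Board, n: int = 5):
    """Return the engine's top-n candidate moves with rough win probabilities."""
    with chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH) as engine:
        # multipv=n asks the engine to report its n best lines, not just one.
        infos = engine.analyse(board, chess.engine.Limit(depth=18), multipv=n)
        candidates = []
        for info in infos:
            move = info["pv"][0]  # first move of each principal variation
            cp = info["score"].pov(board.turn).score(mate_score=100_000)
            # Illustrative logistic conversion from centipawns to win probability;
            # real engines use tuned WDL models instead.
            win_prob = 1 / (1 + 10 ** (-cp / 400))
            candidates.append((board.san(move), win_prob))
        return candidates

if __name__ == "__main__":
    board = chess.Board()  # starting position
    for san, p in list_candidate_moves(board):
        print(f"{san}: {p:.1%}")
```

The human's whole job in this setup is deciding whether to play the top entry or one of the lower-ranked ones.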
The point of this strategy was to mitigate traps, but that would now have to be inverted: the opponent AI would have to gaslight the human into thinking he's stopping his own AI from falling into a trap. While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, reverting things to the baseline where the human is essentially a non-factor rather than a demerit.
>So how would the human be a demerit? It'd mean the human for some reason decided to always pick an option the AI wouldn't take, but how would that make sense? The AI would already list the "correct" move with a higher likelihood of winning.
The human will be a demerit any time they pick a choice other than the one the model would have made.
>While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, reverting things to the baseline where the human is essentially a non-factor rather than a demerit.
Sure, but it's not a Centaur game if the human does literally nothing every time. It doesn't make much sense to say human + AI can't be weaker than AI alone just because the human will always do what the AI says. That's not a team; you've just delayed the computer's response for no good reason. The point is that humans no longer have insight to offer computers on chess. The only way for a human+AI team not to be outright worse than AI alone is for the human to do nothing at all, and that's not a team.
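A toy per-decision expected-score calculation makes both halves of this concrete (every number below is made up purely for illustration, and the simple weighted average ignores how errors compound over a game):

```python
# Hypothetical win probabilities, as the engine itself rates the moves.
engine_choice = 0.58   # engine's top move
human_override = 0.55  # the move the human substitutes, rated lower
override_rate = 0.10   # fraction of decisions where the human overrules

# Weighted average of the two behaviours across the team's decisions.
team = (1 - override_rate) * engine_choice + override_rate * human_override
print(f"engine alone: {engine_choice:.3f}, team: {team:.3f}")
# Whenever human_override < engine_choice and override_rate > 0, team < engine alone;
# at override_rate == 0 the "team" is just the engine with extra latency.
```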