
Comment by Symmetry

7 hours ago

After Deep Blue, Garry Kasparov proposed "Centaur Chess"[1], where teams of humans and computers would compete with each other. For about a decade such a team was superior to either an unaided computer or an unaided human. These days pure AI teams tend to be much stronger.

[1] https://en.wikipedia.org/wiki/Advanced_chess

How would pure AI ever be "much stronger" in this scenario?

That doesn't make any sense to me whatsoever; it can only be "equally strong", making the approach non-viable because the human isn't providing any value... But for the human in the loop to be an actual demerit, you'd have to include the time taken for each move in the final score, which isn't normal in chess.

But I'm not knowledgeable on the topic; I'm just expressing my surprise and my inability to reconcile this claim with my limited experience of the game.

  • You can be so far ahead of someone that their input (if you act on it) can only make things worse. That's it. If a human 'teams up' with a chess AI today and does anything other than agree with its moves, that will just drag things down.

    • But how, specifically, in chess?

      These human-in-the-loop systems basically list possible moves with their likelihood of winning, no? (See the sketch at the end of this comment.)

      So how would the human be a demerit? It'd mean that the human, for some reason, always chose an option the AI wouldn't take, but how would that make sense? The AI would already list the "correct" move with the higher likelihood of winning.

      The point of this strategy was to mitigate traps, but that would now have to be inverted: the opponent AI would have to be able to gaslight the human into thinking he's stopping his own AI from falling into a trap. While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, reverting things to a baseline where the human is essentially a non-factor rather than a demerit.
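
      For reference, engines expose exactly that kind of list through "MultiPV" analysis: they report several candidate moves, each with an evaluation (centipawns, which modern engines can also translate into approximate win/draw/loss probabilities). Here is a minimal sketch of what that looks like, assuming the python-chess library and a locally installed Stockfish binary (both are illustrative choices, not something from the thread):

          import chess
          import chess.engine

          board = chess.Board()  # starting position; any position works
          engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumes a Stockfish binary on PATH

          # MultiPV analysis: ask the engine for its top three candidate moves
          infos = engine.analyse(board, chess.engine.Limit(time=1.0), multipv=3)
          for info in infos:
              move = info["pv"][0]            # first move of this candidate line
              score = info["score"].white()   # evaluation from White's point of view
              print(board.san(move), score)

          engine.quit()

      A centaur player is essentially choosing among lines like these, which is why overruling the engine's top choice rarely helps once the engine is far stronger than the human.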

  • If you had a setup where the computer just did its thing and never waited for the human to provide input, but the human still had an (unused) button they could press to weigh in, that might technically count as "centaur", but it isn't really what people mean by the term. The delay from waiting for human input is the big disadvantage centaur setups have these days, when the human isn't really providing any value.

    • But why would that be a disadvantage large enough to cause the centaur player to lose, which would be necessary for:

      > pure AI teams tend to be much stronger.

      Maybe each turn has a time limit, and the human would need some amount of time to make the final judgement call, whereas the AI could keep analysing right up to the last moment? So the pure AI player essentially gets an additional 10-30 seconds per move to search the position, as sketched below?
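
      As a rough illustration of that asymmetry (all numbers hypothetical, and again using python-chess and Stockfish purely as stand-ins): an unaided engine can spend the entire per-move budget searching, while a centaur has to stop its engine early to leave the human time for the final call.

          import chess
          import chess.engine

          PER_MOVE_BUDGET = 30.0    # hypothetical seconds available per move
          HUMAN_REVIEW_TIME = 10.0  # hypothetical seconds reserved for the human's judgement call

          board = chess.Board()
          engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumes Stockfish on PATH

          # Pure engine: searches for the full per-move budget
          pure = engine.play(board, chess.engine.Limit(time=PER_MOVE_BUDGET))

          # Centaur: the engine must stop early so the human can review its suggestion,
          # leaving only two thirds of the budget for search in this example
          centaur = engine.play(board, chess.engine.Limit(time=PER_MOVE_BUDGET - HUMAN_REVIEW_TIME))

          print(pure.move, centaur.move)
          engine.quit()

      At engine level, extra search time translates fairly directly into playing strength, so giving up part of the clock is a real handicap even if the human never overrules anything.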

  • Why? If the human has the final say on which move to make, I can certainly see them thinking they are proposing a better strategy when they are actually hurting their chances.

With the intelligence of models seeming spiky/lumpy, I suspect we'll see tasks and domains fall to AI one at a time. Some will happen quickly and others may take far longer than we expect.