Comment by itishappy

1 year ago

> They had no prior knowledge of the course.

I don't think so. The article says the race track was controlled by the race organisers, not that it was unknown to the participants before the race.

Anyway, given the state of the art, flying autonomously at high speed and beating human champions without pre-training, i.e. on an unknown race track, would be a much bigger breakthrough than merely beating some human champions (which has already happened, albeit in a less official setting). You can rest assured that if that were what the team had achieved, the article would be telling us all about it.

  • Shoot, you're totally right. They had no prior knowledge before the event, but I don't know how they teach it the course. There's more than one gate visible at a time, so they must do something to fine-tune it.

    That being said, I'm sure they have a base model too, so I'm right back to wondering about the parent question: would it work if you set it down in front of a few fresh gates?

    • Probably not. RL is really bad at generalising to unseen environments. There was a paper about an ... otter?

      Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability

      https://arxiv.org/abs/2107.06277

      OK, it's a robotic zookeeper looking for the otter cage.

      Where does it say they had no prior knowledge before the event? I can't find that in the text. Is it in the video?

      I guess there's no paper yet.
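The memorisation failure being discussed can be seen even in a toy setting. Below is a minimal sketch (my own illustration, not from the paper): tabular Q-learning in a gridworld where the state is just the agent's position, so the goal location can only be memorised, never observed. The trained greedy policy solves its training maze, but it traces one fixed route, and any goal placed off that route is never reached.

```python
import numpy as np

SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(pos, a):
    # Deterministic grid dynamics, clipped at the walls.
    r = min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1)
    return (r, c)

def train(goal, episodes=3000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    # Tabular Q-learning. The state is position ONLY -- the goal is
    # baked into the reward during training, not into the observation.
    rng = np.random.default_rng(seed)
    Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(50):
            a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[pos]))
            nxt = step(pos, a)
            reward = 1.0 if nxt == goal else -0.01
            target = reward + (0.0 if nxt == goal else gamma * Q[nxt].max())
            Q[pos][a] += alpha * (target - Q[pos][a])
            pos = nxt
            if pos == goal:
                break
    return Q

def rollout(Q, goal, max_steps=50):
    # Follow the greedy policy; record every cell it visits.
    pos, visited = (0, 0), {(0, 0)}
    for _ in range(max_steps):
        pos = step(pos, int(np.argmax(Q[pos])))
        visited.add(pos)
        if pos == goal:
            return True, visited
    return False, visited

Q = train(goal=(4, 4))
ok_train, visited = rollout(Q, (4, 4))
# Move the goal to a cell the memorised trajectory never touches:
test_goal = next((r, c) for r in range(SIZE) for c in range(SIZE)
                 if (r, c) not in visited)
ok_test, _ = rollout(Q, test_goal)
print(ok_train, ok_test)  # expected: True False
```

The policy never fails for lack of capacity; it fails because nothing in its observation tells it where the goal is, which is roughly the implicit partial observability the paper formalises. A drone policy trained per-track would plausibly have the same character unless the gates themselves are in its observations.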
