Comment by dmurray

3 days ago

The only interesting part of the model's output was

{ "current_play": "ruck", }

So the vision model can correctly identify that there's a ruck going on and that the ball is most likely in the ruck.

Why not build on this? Which team is in possession? Who was the ball carrier at the start of the ruck, and who tackled him? Who joined the ruck, and how quickly did they get there? How quickly did the attacking team get the ball back in hand, or the defending team turn over possession? What would be a good option for the outhalf if he got the ball right now?

All of these except the last would be straightforward enough for a human observer with basic rugby knowledge going through the footage frame by frame, and I bet it would be really valuable to analysts. It seems like computer vision technology is at a stage where this could be automated too.
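To make that concrete, here's a rough sketch (Python, with entirely made-up field names, not any real provider's schema) of the kind of structured record you'd want the pipeline to emit for each ruck instead of a single label:

    from dataclasses import dataclass, field

    @dataclass
    class RuckEvent:
        # Illustrative fields only; names are hypothetical, not a real provider schema.
        match_clock_s: float                      # match clock (seconds) when the ruck formed
        possession_team: str                      # team in possession going into the ruck
        ball_carrier: str                         # shirt number of the player tackled into the ruck
        tackler: str                              # primary tackler's shirt number
        joined: dict[str, float] = field(default_factory=dict)  # player -> seconds after ruck formed
        outcome: str = "recycled"                 # "recycled" or "turnover"
        seconds_to_outcome: float | None = None   # how quickly the ball came back out

    # One ruck, as a vision/tracking model might fill it in frame by frame.
    ruck = RuckEvent(
        match_clock_s=312.4,
        possession_team="home",
        ball_carrier="12",
        tackler="7",
        joined={"1": 0.8, "6": 1.1, "4": 1.3},
        outcome="turnover",
        seconds_to_outcome=4.2,
    )

Aggregate a season of those and you have ruck-speed and turnover numbers an analyst can actually query.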

Multiple companies sell rugby data at various levels of granularity. I don't know if rugby has all the toys (i.e. full tracking beyond wearables) that soccer or American football have, because there's less money sloshing around.

  • Most pros now have the vests, but they also tend to have additional tech in their mouth guards. This is mostly for CTE monitoring, but I imagine there's other data that could be extracted.

ESPN has free play-by-play data like this on their website for some other sports.

Not sure if it's done by a human or not.

Curious how “an AI can do it” yields much difference in terms of results for the casual watcher.

  • > Curious how “an AI can do it” yields much difference in terms of results for the casual watcher.

    An AI can do it at volume, and therefore more cheaply. I don't think a human could do everything I said in real time, except maybe with a lot of training and custom software.

    A human could transcribe the scoreboard, but the article still thinks that's an interesting application of cutting-edge machine vision.

    • Humans can do _most_ of what you said in real time, both providers using bespoke software and club analysts using off-the-shelf stuff like Sportscode. For full positional data on every player in every frame, yes, computer vision is doing most of the work, but the quality isn't always great. Providers with in-stadium multi-camera systems produce great data, but you don't necessarily have access to the size of dataset you'd want for recruitment, and so lower-quality broadcast tracking exists (with all the problems you can imagine: missing players, occlusions, crazy camerawork, etc.). Most clubs also have wearables for their own analysis. Almost every fully automated broadcast tracking solution has hit a wall (sometimes on the first day of a season) in terms of quality, one that is often only solved by human QA or by just discarding some games, so this is far from a completely solved problem. Fun domain to work in, but lots of horrible edge cases.
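      As a toy illustration of why the QA never goes away: even a trivial automated check like the one below (hypothetical code, assuming you get a set of tracked player IDs per frame) fires constantly on broadcast footage, and a human still has to decide whether each flag is a real occlusion, a substitution or card, or the model simply losing a player:

          # Toy sanity check on broadcast tracking output: flag frames where players go missing.
          # Assumes `frames` is a list of sets of tracked player IDs, one set per video frame.
          EXPECTED_PLAYERS = 30  # 15 a side; ignores cards and substitutions for simplicity

          def flag_suspect_frames(frames: list[set[str]]) -> list[int]:
              suspect = []
              for i, tracked in enumerate(frames):
                  if len(tracked) < EXPECTED_PLAYERS:
                      suspect.append(i)  # could be occlusion, a camera cut, or a dropped track
              return suspect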

  • If this is the final product, not much difference at all.

    But where the human version is pretty much as far as it’s going to go, this is v0.01 of the AI version. Pretty soon the AI will be predicting what will happen next, commenting on whether this was a good idea (based on statistics), and letting the viewer ask questions about what exactly happened and why.