Comment by dimatura
1 day ago
The few examples they show do look pretty good for a WiFi-based method, although who knows how cherry-picked they are. I wonder how much the "SLAM" part is contributing and how sensitive that is to the sensor quality on the phone. I would've assumed that they'd be using vision, which seems to be the method of choice for other companies like Niantic. The ground-truth data part for vision would certainly be more onerous, though.
He explains it fairly well if you understand how you'd go from WiFi accuracy to SLAM accuracy. The WiFi was providing ~3 m accuracy and the SLAM brought it down to ~1 m, so how much the SLAM part contributes is just the gap between those two numbers. I'm sure the algorithms are complex, but he points out that SLAM is corrected by the actual maps made with the self-service app. So it's fairly easy to understand: the map provides a probability space, the WiFi puts you within ~3 m, and SLAM is used to fill in the blanks with help from that probability space.
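To make the "two accuracies combining" intuition concrete, here's a toy sketch of inverse-variance fusion (a 1-D Kalman-style update). This is my own illustration, not the actual system described in the article: all the numbers are hypothetical, and the map-as-prior part would realistically need something like a particle filter rather than a single Gaussian update.

```python
import random

def fuse(est, est_var, meas, meas_var):
    # Standard 1-D Kalman update: weight the two estimates
    # by their inverse variances (the more certain one wins).
    k = est_var / (est_var + meas_var)
    new_est = est + k * (meas - est)
    new_var = (1 - k) * est_var
    return new_est, new_var

# Hypothetical scenario: true position 10.0 m along a corridor.
true_pos = 10.0
wifi_fix = random.gauss(true_pos, 3.0)   # WiFi fix: ~3 m std dev
slam_fix = random.gauss(true_pos, 1.0)   # SLAM fix: ~1 m std dev

est, var = fuse(wifi_fix, 3.0**2, slam_fix, 1.0**2)
# Fused variance: 1 / (1/9 + 1/1) = 0.9 m^2, i.e. ~0.95 m std dev,
# always tighter than the better of the two inputs.
```

The point of the sketch is just that the coarse WiFi fix and the finer SLAM estimate aren't alternatives; fused, the result is strictly more certain than either alone.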