Comment by shadowgovt
6 months ago
I also worked at Google, and this kind of telemetry collection doesn't seem surprising to me at all. I don't know if you are / were familiar with the huge pile of metrics the UIs collect in general (via Analytics). I never worked on anything CPU-intensive enough to justify this kind of back-channel, but I don't doubt we'd have asked for it if we thought we needed it... And you'd rather have this as an internal Google-to-Google monitor than punch a big security hole open for any arbitrary domain to query.
JS is easier to debug (even with Google's infrastructure), and they have no need of everyone else's videoconference telemetry (which, when this was added, would have been, IIRC, Flash-based).
I believe what Google learned via this closed loop informed the WebRTC standard, hence my contention that it got us there faster. Unless I've missed something, this API was collecting data starting in 2008; WebRTC came 3 years later.
I think you've misunderstood my question "What would the alternative be?" I meant: what would the alternative be to collecting stats via a private API only on Google domains, when there was no standard for performance collection in browsers? We certainly don't want Google railroading one into the public (with all the security concerns that would entail). And I guess I'm just flat out not surprised that they would have dropped one into their browser to simplify debugging a very performance-intensive service that hadn't been supported in the browser outside plugins before. Is your contention that they should have gone the Flash route and shipped a binary plug-in, then put telemetry in the binary? Google had (and mostly still has) a very web-centric approach; doing it as a binary wouldn't be in their DNA.