Comment by thaumasiotes
2 years ago
> It was especially egregious when MSFT was trying their own EdgeHTML/Trident-based Edge. Issues would go away by faking user-agent.
Why is there more than one user-agent? Does somebody still expect to receive different content based on the user-agent, and furthermore expect that the difference will be beneficial to them?
What was Microsoft trying to achieve by sending a non-Chrome user-agent?
User agents are useful. However, they tend to be abused much more often than they are used effectively:
1. Working around bugs. You can match the user agent to apply workarounds on known-buggy browser versions. Ideally this is a handful of specific matches (like Firefox versions 12-14); you can't do feature detection for many bugs because they may only trigger in very specific situations. Ideally the blacklist would contain only confirmed entries, with new versions manually tested to see whether they have the same problem (see the sketch after this list). Unfortunately these lists often end up open-ended, because testing each new release for a bug that isn't on the priority list is tedious.
2. Diagnosing problems. Often you see that some specific group of user agents is hammering an API or failing to load a page. That is much easier to track down when the user agent is a precise identifier of the client your site isn't working for (a rough log-tallying sketch also follows the list).
3. Understanding users. For example, if you see that a browser you have never heard of accounts for a significant amount of traffic, you may want to add it to your testing routine.
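A minimal sketch of what point 1 describes, assuming a browser context. The version range (Firefox 12-14) is the example from above; the regex and applyScrollWorkaround() are hypothetical placeholders for a real, confirmed bug entry:

```js
// Narrow, closed-ended blacklist: only confirmed-buggy versions get the patch.
function needsFirefoxScrollWorkaround(ua) {
  const match = /Firefox\/(\d+)/.exec(ua);
  if (!match) return false;            // not Firefox at all
  const major = parseInt(match[1], 10);
  return major >= 12 && major <= 14;   // only the versions where the bug was confirmed
}

if (needsFirefoxScrollWorkaround(navigator.userAgent)) {
  applyScrollWorkaround();             // hypothetical bug-specific patch
}
// Every other browser takes the normal code path; nothing is gated on "is this Chrome?".
```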
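For point 2, a rough sketch of tallying failures per user agent from a server log, assuming Node.js and a combined-format access log at ./access.log (both the path and the log format are assumptions):

```js
// Count failing requests per user agent to spot the client that can't load the page.
const fs = require('fs');

const failuresByUA = new Map();
for (const line of fs.readFileSync('./access.log', 'utf8').split('\n')) {
  // combined log format ends with: status size "referer" "user-agent"
  const m = /" (\d{3}) \S+ "[^"]*" "([^"]*)"$/.exec(line);
  if (!m) continue;
  const [, status, ua] = m;
  if (status[0] === '4' || status[0] === '5') {
    failuresByUA.set(ua, (failuresByUA.get(ua) || 0) + 1);
  }
}

// Print the ten worst offenders; a precise user agent makes the culprit obvious.
for (const [ua, count] of [...failuresByUA].sort((a, b) => b[1] - a[1]).slice(0, 10)) {
  console.log(count, ua);
}
```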
But yes, the abuse of `if (/Chrome/.test(navigator.userAgent)) { mainCode() } else { untestedFallback() }` is a major issue.
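For contrast, the non-abusive version of that snippet gates on the capability itself rather than the brand string. mainCode() and untestedFallback() are the placeholder names from the snippet above, and IntersectionObserver is just a stand-in for whatever feature the main path actually needs:

```js
// Feature detection: every browser that supports the capability gets the main path,
// and only genuinely missing support falls back.
if ('IntersectionObserver' in window) {
  mainCode();
} else {
  untestedFallback();   // and ideally this path gets tested too
}
```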
Only option 1 is something that users, who are the people who decide what user-agent to send, might care about. And as you yourself point out, it doesn't happen.
I'm pretty sure that users care that websites can fix bugs affecting their browser. In fact, option 1 is very difficult to implement when you can't figure out which browser is having problems in the first place.
Why do you think users wouldn't care about sites diagnosing problems that make pages fail to load (#2), or about sites testing on the browsers their users actually use (#3)?
It is normal practice for each browser to have its own user agent, no? But the fact that Google intentionally detected it and served polyfills or straight-up invalid JS at the time was insane. A similar spin today is the "Your browser is unsupported" message you see here and there. When a major platform such as YouTube does it, it is really impactful.
It would never do feature detection, would serve lower-quality h264 video, etc. Back then, there was a really nice third-party application, myTube, which made this less of an issue, but it was eventually killed through API changes.
It may have been intended as normal practice, but as far back as IE vs. Netscape everyone has been mucking with user agents for anti-competitive (and counter-anti-competitive) reasons.