Comment by crote
1 day ago
The problem is that group (1) results in a nightmarish race to the bottom. File creators have zero incentive to create spec-compliant files, because there's no penalty for creating corrupted files. In practice this means a large proportion of documents are going to end up corrupt. Does it open in Chrome? Great, ship it! The file format is no longer the specification; it has become a wild guess at whatever weird garbage the incumbent is still willing to accept. This makes it virtually impossible to write a new parser, because the file format suddenly has no specification.
On the other hand, imagine a world where Chrome slowly started to phase out its quirks modes. Something like a yellow address bar and a "Chrome cannot guarantee the safety of your data on this website, as the website is malformed" warning message. Turn it into a red bar and a "click to continue" after 10 years, then remove it altogether after 20. Suddenly it's no longer that one weird customer who is complaining, but everyone, including your manager. Your mistakes are painfully obvious during development, so you have a pretty good incentive to properly follow the spec. You make a mistake on a prominent page and the CTO sees it? Well, guess you'll be adding an XHTML validator to your CI pipeline next week!
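And that CI check really can be tiny. Here's a rough sketch (assuming Python with lxml installed; the script name and invocation are made up for illustration): parse each document with a strict, non-recovering XML parser and fail the build on the first syntax error.

```python
# validate_xhtml.py - fail the build if any document is not well-formed XHTML.
# Sketch only: assumes Python 3 and `pip install lxml`.
import sys
from lxml import etree

def is_valid(path: str) -> bool:
    parser = etree.XMLParser(recover=False)  # strict: no quirks-mode forgiveness
    try:
        etree.parse(path, parser)
        return True
    except etree.XMLSyntaxError as err:
        print(f"{path}: {err}", file=sys.stderr)
        return False

if __name__ == "__main__":
    # e.g. `python validate_xhtml.py build/**/*.xhtml` as a CI step;
    # all() short-circuits, so the job fails on the first malformed file.
    sys.exit(0 if all(is_valid(p) for p in sys.argv[1:]) else 1)
```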
It is very tempting to write a lenient parser when you are just one small fish in a big ecosystem, but over time it will inevitably lead to the degradation of that very ecosystem. You need some kind of standards body to publish a validating reference parser. And like it or not, Chrome is big enough that it can act as one for HTML.
>File creators have zero incentive to create spec-compliant files, because there's no penalty for creating corrupted files
This depends. If you are a small creator with a unique corruption, then you're likely out of luck. The problem with big creators is the "fuck you, I do what I want" attitude.
>"Chrome cannot guarantee the safety of your data on this website, as the website is malformed" warning message.
This would appear on pretty much every website. It would also appear on websites that are no longer updated, which would functionally disappear from any updated browser. In addition, the 10-20 year timeline just won't survive inside US companies: simply put, if it draws too much pressure next quarter, it's gone.
>Your mistakes are painfully obvious during development,
Except this isn't how a huge number of websites work. They assemble HTML from many sources and possibly third-party libraries. Simply put, no one is going to follow your insanity, which is why XHTML never worked in the first place. They'll drop Chrome before they drop the massive amount of existing and potential bugs out there.
>And like it or not, Chrome is big enough that it can act as one for HTML.
And hopefully in a few years between the EU and US someone will bust parts of them up.
We don't accept this from any other file format, so why is HTML different? For example, if I include random blocks of data in a JPEG file, either the picture comes out visibly broken or the parser gives up (often turned into a partial picture by some abstraction layer that ignores the error code); in both cases the end user treats it as completely broken. If I add random bytes to a Word or LibreOffice document, I expect it not to load at all.
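To make that concrete, here's a small self-contained sketch (assuming Python with Pillow installed): build a valid JPEG in memory, overwrite a block of it with junk, and try to decode it. Depending on where the corruption lands you get either a hard decoder error or visibly broken pixels, i.e. the two failure modes above; what you never get is a parser that silently pretends the file was fine.

```python
# Sketch only: assumes Python 3 and `pip install Pillow`.
import io
from PIL import Image

# Build a small valid JPEG entirely in memory.
buf = io.BytesIO()
Image.new("RGB", (64, 64), "red").save(buf, "JPEG")
data = bytearray(buf.getvalue())

# Stomp a block of junk bytes into the middle of the file.
mid = len(data) // 2
data[mid : mid + 16] = b"\x00" * 16

try:
    img = Image.open(io.BytesIO(bytes(data)))
    img.load()  # force a full decode; Pillow raises if the decoder reports an error
    print("decoded, but likely with visible damage")
except Exception as err:
    print(f"decoder gave up: {err}")
```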
That would break decades of the web, and Google has no incentive to do it. Plus, any change of that scale is going to draw antitrust scrutiny from _somebody_.
You’re right, but even standards bodies aren’t enough. At the end of the day, it’s always about what the dominant market leader will accept. The standard just gives your bitching about the corrupted files some abstract moral authority, but that’s about it.