Comment by vertex-four
5 years ago
Right, so the web was a wonderful place when IE6 was the only browser anyone developed for, and for the short period of time when Chrome was the only browser anyone developed for. This definitely didn't affect anyone's ability to choose a browser which met their needs, and definitely didn't result in half-baked and overly complex specifications being forced through the standards process by the only browser vendor with any power.
If Google got their way, we'd be shipping modified LLVM bitcode to clients ("PNaCl"), and every browser would be shipping some random fork of LLVM stuck in the past forever. If Microsoft got their way, Gmail would be an ActiveX plugin.
Gecko has massive improvements over WebKit/Blink, btw - WebRender is huge.
You're presenting a false dichotomy, and you're mistaking competition of code for competition of institutions.
Having different organisations with different goals is what prevents these scenarios.
Otherwise the WebKit/Blink fork wouldn't have been successful.
Mozilla could also have forked Blink and started replacing it with Rust, and you would've gotten the same improvements.
Mozilla could have taken SQLite as a foundation, started a living spec, and immediately begun translating the codebase to Rust. The effort would have been the same as for their half-assed IndexedDB stuff, but the result would have been much better.
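(Assuming "much better" refers at least partly to the developer-facing API, here's a rough TypeScript-ish sketch - the "notes" store and the record shape are just placeholders - of storing and reading one record with the IndexedDB we actually got; the hypothetical SQLite-backed equivalent would be a single INSERT and a single SELECT.)

    // Rough sketch (placeholder names) of storing and reading one record with
    // IndexedDB, i.e. what web developers got instead of a SQL-backed store.
    const open = indexedDB.open("notes", 1);

    open.onupgradeneeded = () => {
      open.result.createObjectStore("notes", { keyPath: "id", autoIncrement: true });
    };

    open.onsuccess = () => {
      const db = open.result;
      const tx = db.transaction("notes", "readwrite");
      const store = tx.objectStore("notes");

      store.add({ body: "hello" });

      // Reading everything back; with a SQL-backed store this would be one SELECT.
      const read = store.getAll();
      read.onsuccess = () => console.log(read.result);
    };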
It doesn't matter where a codebase comes from; it matters where it goes. And when it comes to diversity of implementation, the repelling forces of different ideas, viewpoints, and aesthetics - the same forces that normally end in dreaded project forks - work to everyone's advantage in browsers.
Conway's law: the software of a project reflects its social structure.
"Rewrite it in Rust" is not the only difference between Gecko and Webkit/Blink (which are still similar enough that they might as well be one codebase), and believing so is showing your bias. WebRender, for example, is not simply "rewriting part of a renderer in Rust". There's significant differences between how Gecko and Webkit handle media under the hood. And both have pushed various specifications that would be easy to implement in one but not the other. Google are, admittedly, much better at being incredibly loud about "standards" they try to force through.
In theory, the purpose of a standard is to allow other people to implement it from the spec. The spec cannot be "just use this existing codebase". Otherwise we'd have one HTML parser that sits entirely undocumented, and the HTML spec would be "do whatever libhtml does" - we've seen that in the form of OOXML. The media streaming spec would be "just use this binary blob from Adobe, or you can't do video at all". If I came along today and wanted to implement WebSQL, which is entirely specified as "do whatever SQLite does", from scratch... how exactly would I start?

In theory, right now, with enough time and money, I could implement a JavaScript interpreter or HTML renderer or whatever else without ever referring to any other browser's source code or depending on anything - a clean-room implementation. Some companies still actually do that, because WebKit, Blink and Gecko don't meet their needs and wouldn't without a complete rearchitecture. Imagine if the JavaScript spec was "just do whatever V8 does", and we could never get things like QuickJS or Duktape.
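To make the WebSQL question concrete, here's a minimal sketch of roughly what the shipped WebSQL surface looked like. The type declarations are simplified approximations (the API is long deprecated and no longer in standard TypeScript DOM definitions), and the database and table names are made up; the point is that nothing in the interface itself pins down the SQL dialect.

    // Sketch of the (long-deprecated) WebSQL surface, with simplified type
    // declarations since the API is no longer in standard TypeScript libs.
    declare function openDatabase(
      name: string,
      version: string,
      displayName: string,
      estimatedSize: number
    ): Database;

    interface Database {
      transaction(callback: (tx: SQLTransaction) => void): void;
    }

    interface SQLTransaction {
      executeSql(
        sql: string,
        args?: unknown[],
        onSuccess?: (tx: SQLTransaction, result: SQLResultSet) => void
      ): void;
    }

    interface SQLResultSet {
      rows: { length: number; item(i: number): any };
    }

    const db = openDatabase("notes", "1.0", "Notes", 2 * 1024 * 1024);

    db.transaction((tx) => {
      tx.executeSql(
        "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
      );
      tx.executeSql("INSERT INTO notes (body) VALUES (?)", ["hello"]);
      // Which SQL statements, types, collations and error codes are valid here?
      // The spec's answer was, in effect, "whatever SQLite accepts".
      tx.executeSql("SELECT body FROM notes", [], (_tx, result) => {
        console.log(result.rows.item(0).body);
      });
    });

That interface is essentially all a clean-room implementer would get; every question about accepted syntax, types, and error behaviour bottoms out in "read the SQLite source".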
When I, a web developer, come across something that looks like a bug in the One True Codebase, how do I know whether it's a bug or something someone forgot to document properly? What if that bug isn't present in another implementation? Do we have to be 100% bug compatible with some arbitrary version of SQLite/V8/Blink forever? Getting rid of most "quirks" was the best thing to happen to the web from a developer perspective in a very long time, IMO.
What about when someone comes along and suggests something that would work really well in the One True Codebase, GeBlinKit, but it turns out that nobody else with a different code design could reasonably implement it?
You keep bringing up false dichotomies.
Nobody argued that this is about programming languages or identical implementations; it's about project stewardship and diverging codebases. They influence each other - it's not only about one or the other.
I don't know where you got this weird 100% bug-compatibility idea from; that's not how anything is handled anywhere. It's also orthogonal to specs: you can have specs that dictate implementation behaviour completely (like CORBA) or specs that are super loose in what they allow (ANSI C).
There are not only reference documents but also reference implementations. As projects grow it's OK to diverge from them and find common ground in other documents, like specs. Sometimes a reference implementation covers reasonable behaviour so well that it can work as an alternative to a specification - see SQLite and https://sqljet.com/. That doesn't mean it will never change; SQLite regularly has bugs discovered and fixed. If the SQLite devs themselves don't adhere to your assumed "aLL bUgS aND BEhAViOUrs aRE SAcrEd AnD MUsT Be KEpT InDeFInitElY" philosophy, why would anybody else?
As if there were some weird, rigid, black-and-white process for these complex projects: either a good base implementation with no spec ever and 100% backwards compatibility forever, or waterfall spec development followed by implementations that asymptotically approach the spec.
Where there's a will there's a way. These projects and documents are all about people and the ways they collaborate and work. It's not as rigid as you make it out to be.
>When I come across a bug, how do I know whether it's a bug or something someone forgot to document properly? What if that bug isn't present in another implementation?
You do what you currently do: you go to the place where the people who steward the project reside, and you ask. Why and how do you think specs get revised? They contain ambiguities, bugs, and unspecified behaviour; somebody stumbles upon one and asks a question.
>What about when someone comes along and suggests something that would work really well but it turns out that nobody else with a different code design could reasonably implement it?
You'd do what you currently do. You talk about it, and in the end you might even write it down somewhere - in a spec, in an RFC, in a piece of documentation.
You seem to think that SQLite would stay the reference implementation forever, which is simply not true. It's a good starting point, yeah. But WebKit didn't stay the reference implementation either, nor did Netscape.
Don't be so rigid.