Comment by cheevly

3 days ago

2029? I have no idea why you would think this is so far off. More like Q2 2026.

You're either overestimating the capabilities of current AI models or underestimating the complexity of building a web browser. There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

  • Even if AI never achieves the ability to perform at this level on its own, it is clearly going to be an enormous force multiplier, allowing highly skilled devs to tackle huge projects more or less on their own.

    • Skilled devs compress, not generate (expand).

      https://www.youtube.com/watch?v=8kUQWuK1L4w

      The "discoverer" of APL tried to express as many problems as he could with his notation. First he found that notation expands and after some more expansion he found that it began shrinking.

      The same goes for Forth, which provides the means for a Sequitur-compressed [1] representation of a program (a toy sketch of this kind of grammar compression follows below).

      [1] https://en.wikipedia.org/wiki/Sequitur_algorithm

      Myself, I always strive to delete some code or replace it with a shorter version: first, to understand it better; second, so there is less to read when I come back to it.
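
      For the curious, here is a toy sketch of that style of grammar compression. It is a greedy, offline Re-Pair-style pass rather than the real online Sequitur algorithm (and the names are made up for illustration), but it shows the core move: keep folding the most frequent adjacent pair into a fresh rule until nothing repeats.

        // Greedy Re-Pair-style grammar compression: a simplified,
        // offline cousin of Sequitur, not the real online algorithm.
        function compress(input: string[]): { seq: string[]; rules: Map<string, [string, string]> } {
          const rules = new Map<string, [string, string]>();
          let seq = input.slice();
          let nextId = 0;
          for (;;) {
            // Count every adjacent pair (digram) in the current sequence.
            const counts = new Map<string, number>();
            for (let i = 0; i + 1 < seq.length; i++) {
              const key = `${seq[i]}\u0000${seq[i + 1]}`;
              counts.set(key, (counts.get(key) ?? 0) + 1);
            }
            // Pick the most frequent pair; per Sequitur's "rule utility"
            // constraint, a rule must be used at least twice to pay off.
            let best: string | undefined;
            let bestCount = 1;
            for (const [key, count] of counts) {
              if (count > bestCount) { best = key; bestCount = count; }
            }
            if (best === undefined) break;
            const [a, b] = best.split("\u0000");
            const rule = `R${nextId++}`;
            rules.set(rule, [a, b]);
            // Rewrite non-overlapping occurrences left to right.
            const out: string[] = [];
            for (let i = 0; i < seq.length; ) {
              if (i + 1 < seq.length && seq[i] === a && seq[i + 1] === b) {
                out.push(rule);
                i += 2;
              } else {
                out.push(seq[i]);
                i += 1;
              }
            }
            seq = out;
          }
          return { seq, rules };
        }

        // "abcabcabc" (9 symbols) shrinks to [R2, R1] plus three rules:
        // R0 -> a b, R1 -> R0 c, R2 -> R1 R1
        console.log(compress([..."abcabcabc"]));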

  • It's most likely both.

    > There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

    Firstly, the CI is completely broken on every commit and all the tests have failed; looking closely at the code, it is exactly what you would expect of unmaintainable slop.

    A high line count is not a good measure of robust software, especially if the code does not work.

  • Not only edge cases and standards, but also tons of performance optimizations.

Web browsers are insanely hard to get right; that’s why there are only ~3 decent implementations out there currently.

  • The one nice thing about web browsers is that they have a reasonably formalized specification set and a huge array of conformance tests to run against. That makes them a fairly unique proposition, ideally suited to AI construction (a sample test is sketched at the end of this subthread).

    • As far as I've read in Ladybird's blog updates, the issue is less the formalised specs and more that other browsers break the specs; websites then adjust to the breakage, so your design has to take that non-compliance into account as well.
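
    For a sense of what those tests look like: the web-platform-tests suite is presumably what the parent has in mind, and its cases are typically small HTML pages that load testharness.js and make spec-level assertions. Roughly like this (test() and assert_equals() are the real harness API; the specific assertion is illustrative, not copied from the suite):

        // In the real suite this sits in an .html file that loads
        // /resources/testharness.js and /resources/testharnessreport.js,
        // which define these globals:
        declare function test(fn: () => void, name: string): void;
        declare function assert_equals(actual: unknown, expected: unknown,
                                       description?: string): void;

        test(() => {
          const div = document.createElement("div");
          div.style.setProperty("width", "50%");
          assert_equals(div.style.getPropertyValue("width"), "50%",
                        "percentage width should round-trip through the CSSOM");
        }, "CSSOM round-trips a percentage width");

    Thousands of such checks give an unambiguous pass/fail signal, which is exactly the feedback loop automated code generation needs.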

You should make your own predictions, and then we can do a retrospective on who was right.

Yeah, if you let them index Chromium, I'm sure it could do it next week. It just won't be original or interesting.