Comment by jey

7 hours ago

I don't think the point was to say "look, AI can just take care of writing a browser now". I think it was to show just how far the tools have come. It's not meant to be production quality; it's meant to be an impressive demo of the state of AI coding, showing how far it can be taken without completely falling over.

EDIT: I retract my claim. I didn't realize this had Servo as a dependency.

This is entirely too charitable. Basically all this proves is that the agent could run in a loop for a week or so. Did anyone doubt that?

They marketed it as if we were really close to having agents that could build a browser on their own. They rightly deserve the blowback.

This issue is very important because of how much money is being thrown at it, and that affects everyone, not just the "stakeholders". At some point, if it does become true that you can ask an agent to build a browser and it actually does, that will be very significant.

At this point in time I personally can't predict whether that will happen or not, but the consequences of it happening seem pretty drastic.

  • > This is entirely too charitable. Basically all this proves is that the agent could run in a loop for a week or so. Did anyone doubt that?

    Yes, every AI skeptic publicly doubted that, right up until they started doing it.

  • I find it hard to believe that after running agents fully autonomously for a week you'd end up with something that actually compiles and at least somewhat functions.

    And I'm an optimist, not one of the AI skeptics heavily present on HN.

    From the post, it sounds like the author would also doubt this, given that he talks about "glorified autocomplete and refactoring assistants".

    • You don't run coding agents for a week and THEN compile their code. The best available models would have no chance of that working - you're effectively asking them to one-shot a million lines of code with not a single mistake.

      You have the agents compile the code every single step of the way, which is what this project did.
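
      A minimal sketch of what that inner loop might look like (hypothetical agent interface; `agent` and its methods propose_edit, apply, observe, and is_done are assumptions, not the actual tooling used by this project):

        # Hypothetical compile-every-step loop; every agent method here
        # is an assumed name, not a real API.
        import subprocess

        def run_loop(agent, task, max_steps=10_000):
            for _ in range(max_steps):
                edit = agent.propose_edit(task)   # small, incremental change
                agent.apply(edit)                 # write it to the working tree
                build = subprocess.run(["cargo", "check"],
                                       capture_output=True, text=True)
                if build.returncode != 0:
                    # Feed compiler errors straight back so the next step
                    # repairs the build instead of drifting further from it.
                    agent.observe(build.stderr)
                elif agent.is_done():
                    break
                else:
                    agent.observe("build ok, continue")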

    • That is a good point. It is impressive. LLMs from two years ago were impressive, LLMs from a year ago were impressive, and ones from a month ago even more so.

      Still, getting "something" to compile after a week of work is very different from getting the thing you wanted.

      What is being sold, and invested in, is the promise that LLMs can accomplish "large things" unaided.

      But as of yet they cannot, unless something is happening in one of the SOTA labs that we don't know about.

      They can, however, accomplish small things unaided, though there is an upper bound, at least functionally.

      I just wish everyone was on the same page about their abilities and their limitations.

      To me, they understand context well (e.g. the task "build a browser" doesn't need some huge specification, because specifications already exist).

      They can write code competently (this is my experience anyway).

      They can accomplish small tasks (my experience again; "small" is a really loose definition, I know).

      They cannot understand context that doesn't exist (they can't magically know what you mean, but they can bring to bear considerable knowledge of pre-existing work and conventions that helps them make good assumptions, and the agentic loop prompts them to ask for clarification when needed).

      They cannot accomplish large tasks (again, my experience).

      It seems to me there is something akin to a context window into which a task has to fit. Agents have this compaction feature, which I suspect is where the limitation lies. I.e. a person can't hold an entire browser codebase in their head, but they can build a general top-level mapping of the whole thing, so they know where to reach, where improvements are needed, how things fit together, and what has and hasn't been implemented. I suspect this compaction doesn't work super well for agents because it is a best-effort, tacked-on feature.
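
      To make that concrete, here is a purely speculative sketch of compaction (the summarize step stands in for an LLM call; every name and threshold below is invented):

        # Speculative sketch: keep a rolling summary plus recent messages,
        # folding older context into the summary when the window fills.
        def summarize(summary: str, messages: list[str]) -> str:
            # Stand-in for an LLM summarization call.
            return (summary + " " + " ".join(m[:80] for m in messages)).strip()

        def compact(history: list[str], summary: str, max_recent: int = 20):
            if len(history) <= max_recent:
                return history, summary
            older, recent = history[:-max_recent], history[-max_recent:]
            # Lossy step: the "top level mapping" replaces the raw transcript,
            # which is where detail about a large codebase can quietly get lost.
            return recent, summarize(summary, older)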

      I say all this speculatively, and I am genuinely interested in whether this next level of capability is possible. To me it could go either way.

Maybe so, but I don't think 3 million lines of code to ultimately call `servo.render()` is a great way to demonstrate how good AI coding is.

  • lmao okay, touché. I did not realize it had Servo as a dependency.

Yeah, but starting with a codebase that is (at least approaching) production quality and then mangling it into something that's very far from production quality... isn't very impressive.

It didn't have Servo as a dependency.

Take a look in the Cargo.toml: https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...

  • I haven't really looked at the fastrender project to say how much of a browser it implements itself, but it does depend on at least one Servo crate: cssparser (https://github.com/servo/rust-cssparser).

    Maybe there is a main Servo crate out there as well that fastrender doesn't depend on, but at least in my mind fastrender depends on some Servo browser functionality.

    EDIT: fastrender also includes the Servo HTML parser: html5ever (https://github.com/servo/html5ever).

    • Yes, it depends on cssparser and html5ever from Servo, and also uses Taffy, which is a dependency shared with Servo.

      I do not think that makes it a "Servo wrapper", because calling it that implies it has no rendering code of its own.

      It has plenty of rendering code of its own; that's why the rendered pages are slow and have visual glitches you wouldn't get with Servo!
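
      For reference, the relevant lines in a Cargo.toml would look something like this (the crate names are the ones named in this thread; the version numbers are illustrative, check the linked file for the real ones):

        [dependencies]
        cssparser = "0.34"   # CSS parser from the Servo project
        html5ever = "0.27"   # Servo's HTML parser
        taffy = "0.5"        # layout engine also used by Servo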

> I think it was to show just how far the tools have come.

In… terms of sheer volume of production of useless crap?