Cursor's latest “browser experiment” implied success without evidence

1 day ago (embedding-shapes.github.io)

Related: Scaling long-running autonomous coding - https://news.ycombinator.com/item?id=46624541 - Jan 2026 (174 comments)

The comment that points out that this week-long experiment produced nothing more than a non-functional wrapper for Servo (an existing Rust browser) should be at the top:

https://news.ycombinator.com/item?id=46649046

  • It’s not just a wrapper for Servo, the linked poster just checked the dependencies in the Cargo file and proclaimed that without checking anything further.

    In reality this project does indeed implement a functioning custom JS engine, layout engine, painting, etc. It does borrow the CSS selectors package from Servo, but that's about it.

  • I've responded to this claim in more detail at [0], with additional context at [1].

    Briefly, the project implemented substantial components, including a JS VM, DOM, CSS cascade, inline/block/table layout, paint systems, text pipeline, and chrome, and is not merely a Servo wrapper.

    [0] https://news.ycombinator.com/item?id=46655608

    • Just for context, this was the original claim by Cursor's CEO on Twitter:

      > We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week.

      > It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.

      > It kind of works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.

      https://xcancel.com/mntruell/status/2011562190286045552#m

    • Could you somewhere make clear exactly how much of the code was "autonomously" built vs how much was steered by humans? Because at this point it's clear that it wasn't 100% autonomous as originally claimed, but right now it's not clear if this was just the work of an engineer running Cursor vs "autonomously organised a fleet of agents".

    • You're claiming that the JS VM was implemented. Is it actually running? Because this screenshot shows that the ACID3 benchmark is requesting that you enable JavaScript (https://imgur.com/fqGLjSA). Why don't you upload a video of you loading this page?

      Your slop is worthless except to convince gullible investors to give you more money.

    • Does any of it actually work? Can you build that JS VM separately and run serious JS on it? That would be an accomplishment.

      Looking at the comments and claims (I don't have the time to review a large code base just to check this), I get the impression _something_ was created, but none of it actually builds and no one knows what the actual plan is.

      Did your process not involve recursive planning stages? These ALWAYS produce big architectural errors and gotchas in my experience, unless you're doing a small toy project or something the AI has seen thousands of copies of already.

      I find agents do pretty well once a human corrects their bad assumptions and architectural errors. But this assumes the human understands what is being built, down to the tiniest component. Agents left to their own devices will discover errors only at the very end, after spending dozens of millions of tokens; then they will try the next idea they hallucinated, spend another few dozen million tokens, and so on. Perhaps after 10 iterations like this they may arrive at something fine, but more likely they will descend into hallucination hell.

      This is what happens when the complexity, the size, or the novelty of the task (often a mix of all three) exceeds the capability of the agents.

      The true way to success is a human-AI hybrid, but you absolutely need a human who knows their stuff.

      Let me give you a small example from the systems field. The other day I wanted to design an AI observability system with the following spec:

      - use existing OSS components, with none or as little custom code as possible
      - ideally runs on stateless pods on an air-gapped k3s cluster (preferably reusing one of the existing DBs, though ClickHouse is acceptable)
      - able to proxy OpenAI, Anthropic (both the API and Claude Max), Google (Vercel + Gemini), DeepInfra, and OpenRouter, including client auth (so it is completely transparent to the client)
      - reconstructs streaming responses and recognises tool calls and reasoning content; nice to have: the ability to define your own session/conversation recognition rules

      I used Gemini 3 and Opus 4.5 for the initial planning and comparison of OSS projects that could be useful. Both converged on Helicone as supposedly the best. Then, towards the very end of implementation, it turned out Helicone has pretty much zero docs for properly setting up the self-hosted platform and tries redirecting to their web page for auth, and the agents immediately went into rewriting parts of the source, attempting to write their own auth and to fix imaginary bugs that were really misconfiguration.

      Then another product was recommended (I forget which); there, upon very detailed questioning and requests to re-confirm the actual configs for multiple supposedly supported features, it turned out it didn't pass through auth for Claude Max.

      Eventually I chose LiteLLM + Langfuse (which had been turned down initially in favour of Helicone), and I needed to make a few small code changes so that Claude Max auth could be read, additional headers could be passed through, and a single endpoint could both send Claude telemetry as a pure pass-through and route real LLM API traffic through its "models" engine (so it recognised tool calls and so on).
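
      For reference, the kind of LiteLLM proxy config this setup converges on can be sketched as follows. This is illustrative only: the model alias, model id, and callback wiring are my assumptions, not the poster's actual configuration.

```yaml
# Illustrative LiteLLM proxy config: proxy an Anthropic model and
# ship request/response telemetry to Langfuse.
model_list:
  - model_name: claude-proxy                      # hypothetical alias
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514   # assumed model id
      api_key: os.environ/ANTHROPIC_API_KEY
litellm_settings:
  success_callback: ["langfuse"]                  # requires LANGFUSE_* env vars
```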

    • I cannot make these two statements true at the same time in my head:

      > Briefly, the project implemented substantial components, including a JS VM

      and from the linked reply:

      > vendor/ecma-rs as part of the browser, which is a copy of my personal JS parser project vendored to make it easier to commit to.

      If it's using a copy of your personal JS parser that you decided it should use, then it didn't implement it "autonomously". The references you're linking don't summarize to the brief you've provided.

      What the fuck is going on?


    • Did you actually review these implementations and compare them to Servo (and WebKit)? Can you point to a specific part or component that was fully created by the LLM but doesn't clearly resemble anything in existing browser engines?

  • Has anyone tried to rewrite some popular open source project with AI? I imagine modern LLMs can be very effective at license-washing/plagiarizing dependencies; it could make for an interesting new benchmark too.

  • Apparently this person actually got it to compile: https://xcancel.com/CanadaHonk/status/2011612084719796272#m

    • https://x.com/CanadaHonk/status/2011612084719796272 as well.

      I went through the motions. There are various points in the repo history where compilation is possible, but it's obscure. They got it to compile and operate prior to the article, but several of the PRs since that point broke everything, and this guy went through the effort of fixing it. I'm pretty sure you can just identify the last working commit and pull the version from there, but working out when looks like a big pain in the butt for a proof of concept.


  • Negative results are great. When you publish them on purpose, it's honorable. When you reveal them by accident, it's hilarious. Cheers to Cursor for today's entertainment.

  • A bit off topic, but fun for people with lots of Claude credits: Auto Claude is a nice open-source repo that lets Claude generate an entire application from just one prompt. Lots of YOLO vibing here, but impressive nevertheless. Last week I asked it, in one sentence, to create a full-blown hotel website including all the back-office software tools. It took almost 4 days with 4 Claude accounts. It actually created a working thing.

  • What the hell?

    I was seeing screenshots and actually getting scared for my job for a second.

    It’s broken and there’s no browser engine? Cursor should be tarred and feathered.

  • Why is the top comment on this item just a link to another comment on this same story?

The blog [0] is worded rather conservatively, but on Twitter [2] the claim is unambiguous and the hype effect was achieved [1][3].

CEO stated "We built a browser with GPT-5.2 in Cursor"

instead of

"by dividing agents into planners and workers we managed to get them busy for weeks creating thousands of commits to the main branch, resolving merge conflicts along the way. The repo is 1M+ lines of code but the code does not work (yet)"

[0] https://cursor.com/blog/scaling-agents

[1] https://x.com/kimmonismus/status/2011776630440558799

[2] https://x.com/mntruell/status/2011562190286045552

[3] https://www.reddit.com/r/singularity/comments/1qd541a/ceo_of...

  • Even then, "resolving merge conflicts along the way" doesn't mean anything, as there are two trivial merge strategies that are always guaranteed to work ('ours' and 'theirs').
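
    A minimal demonstration of why the 'ours' strategy can never fail (note that modern git ships `-s ours` as a full strategy, while 'theirs' only exists as `-X theirs`, a conflict-resolution option):

```shell
set -e
# Toy repo with a genuine conflict between two branches.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email demo@example.com && git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm base
git checkout -qb feature
echo their-change > file.txt && git commit -qam feature
git checkout -q -
echo our-change > file.txt && git commit -qam ours
# 'ours' discards the other side wholesale, so the merge always succeeds.
git merge -q -s ours -m merged feature
cat file.txt   # prints "our-change": their conflicting edit is simply gone
```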

    • Haha. True, CI success was not part of the PR acceptance criteria at any point.

      If you view the PRs, they bundle multiple fixes together, at least according to the commit messages. The next hurdle will be to guardrail agents so that they only implement one task and don't cheat by modifying the CI pipeline.
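
      For what it's worth, one low-tech guardrail against CI tampering is to gate the workflow definitions behind human review. An illustrative CODEOWNERS entry (team name hypothetical), combined with branch protection that requires code-owner approval:

```
# .github/CODEOWNERS (illustrative)
# Any PR touching CI definitions needs sign-off from a human team.
/.github/workflows/ @example-org/human-reviewers
```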


    • We use Claude Code a lot for updating systems to a newer minor/major version. We have our own 'base' framework for clients, which is by now a very large codebase that does 'everything you can possibly need': not only auth, but payments, billing, support tickets, email workflows, email WYSIWYG editing, a landing page editor, blogging, a CMS, AI/agent workflows, etc. (across our client base, we collect features that are 'generic' enough and build them into the base).

      The framework gets many updates from the product lead working on it (a senior using Claude Code), but we cannot update our clients (whose versions are sometimes extremely customised/divergent) at the same pace; some do not want updates outside security fixes, some want them once a year, etc. Here AI has really been a productivity booster. Our framework always moved quite fast, even before AI, back when we had 3.5 FTE on it (client teams are generally much larger, especially in the first years). But merging, by which I mean bringing the new framework version's features and improvements into the client version without breaking or removing changes on the client side, was a very painful process that took a lot of time and at least two people for an extended period: one from the client team, one from the framework team.

      With CC it is much less painful: it will do the merge (it is not allowed, by hooks, to touch the tests), run the client tests and the new framework tests, and report the difference. That difference is usually evaluated by someone from the client team, who will then merge, fix the tests (mostly manually) to reflect the new reality, and test the system manually. Claude misses things (especially if functionalities are very similar but not exactly the same; it cannot really pick which to take, so it usually does nothing), but the biggest bulk of the work is done quickly and usually without causing issues.

  • The link [0] implies that the browser worked. Can you help me understand what's "conservative" about that?

I'm eager to find out if this was actually successfully compiled at one point (otherwise how did they get the screenshots?), so I'm running `cargo check` for each of the last 100 commits to see if anything works. Will update here with the results once it's ready.

Edit: As mentioned, I ran `cargo check` on the last 100 commits, and it seems every single one of them failed in some way: https://gist.github.com/embedding-shapes/f5d096dd10be44ff82b...
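
The loop itself is trivial to reproduce; a sketch (here as a reusable function, with the command parameterised so you can swap `cargo check` for anything):

```shell
# check_commits CMD N: run CMD at each of the last N commits of the current
# repo, print PASS/FAIL per sha, then return to the starting branch.
check_commits() {
  local cmd="$1" n="$2" sha start
  start=$(git rev-parse --abbrev-ref HEAD)
  for sha in $(git rev-list -n "$n" HEAD); do
    if git checkout -q "$sha" && eval "$cmd" >/dev/null 2>&1; then
      echo "$sha PASS"
    else
      echo "$sha FAIL"
    fi
  done
  git checkout -q "$start"
}

# e.g., inside a clone of the repo:
#   check_commits 'cargo check' 100
```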

  • > otherwise how did they get the screenshots

    Their AI is probably better at producing images than writing code

  • Should compile now: https://news.ycombinator.com/item?id=46650998

    • > Yeah, seems latest commit does let `cargo check` successfully run. I'm gonna write an update blog post once they've made their statement, because I'm guessing they're about to say something.

      > Something fishy is happening in their `git log`; it doesn't seem like it was the agents who "autonomously" actually made things compile in the end. Notice the git usernames and email addresses switching around; even a commit made inside an EC2 instance managed to get in there: https://gist.github.com/embedding-shapes/d09225180ea3236f180...

      Gonna need to look into it closer when I have time, but it seems they manually patched it up in the end, so the original claim still doesn't stand :/

  • I wouldn't be surprised if every screenshot is fake (as in, not made the way it is claimed to have been); in my experience Occam's razor tends to lead that way when extraordinary claims are made about LLMs.

Like it or not, it's a fundraising strategy. They have followed it multiple times (e.g. earlier vague posts about how much code their in-house model is writing, online RL, lines of code, etc.) and it was less vague before. They released a model and did not give us exact benchmarks or even tell us its base model. This is not to imply there is no substance behind it, but they are not as public about their findings as one would like them to be. Not a criticism, just an observation.

  • Never releasing benchmarks or submitting to open benchmarking, unlike literally every other model provider, has always irked me.

    I think they know they're on the back foot at the moment. Cursor was hot news for a long time, but now terminal-based agents seem to be the hot commodity and I rarely see Cursor mentioned. Sure, they already have enterprise contracts signed, but even at my company we're about to switch from a Cursor contract to Claude Code, because everyone wants to use that instead now, especially since it doesn't tie you to one editor.

    So I think they're really trying to get "something" out there that sticks and puts them in the limelight. Long contexts/sessions are one of the hot topics, especially with Ralph in the spotlight, so this lines up with that.

    Also I know cursor has its own cli but I rarely see mention of it.

  • Unfortunately all the major LLM companies have realized the truth doesn't really matter anymore. We even saw this with the GPT-5 launch, with its obviously vibe-coded charts and nebulous metrics.

    Diminishing returns are really starting to set in, and companies are desperate for any illusion to the contrary.

  • I used to hate this. I've seen Apple do it with claims of security and privacy; I've seen populist demagogues do it with every proposal they make. Now I realize this is just the reality of the world.

    It's just a reminder not to trust but to verify. It's more expensive, but trust only leads to pain.

    • “Lying is just the reality of the world” is a cop-out.

      Don’t give them, or anyone, a free pass for bad behavior.


    • Fraud, lies, and corruption are so often the reality of the world right now because people keep getting away with it. The moment they're commonly and meaningfully held accountable for lying to the public we'll start seeing it happen less often. This isn't something that can't be improved, it just takes enough people willing to work together to do something about it.


Hey, Wilson here, author of the blog post and the engineer working on this project. I've been reading the responses here and appreciate the feedback. I've posted some follow-up context on Twitter/X [0], which I'll also write here:

The repo is a live incubator for the harness. We are actively researching the behavior of collaborative long-running agents, and may in the future make the browser and the other products this research yields more consumable by end users and developers, but that's not the goal for now. We made it public because we were excited by the early results and wanted to share; while far from feature parity with today's most popular production browsers, we think it has made impressive progress in less than one week of wall time.

Given the interest in trying out the current state of the project, I've merged a more up-to-date snapshot of the system's progress that resolves issues with builds and CI. The experimental harness can occasionally leave the repo in an incomplete state but does converge, which was the case at the time of the post.

I'm here to answer any further questions you have.

[0] https://x.com/wilsonzlin/status/2012398625394221537?s=20

  • That doesn’t really address much of the criticism in this thread. No one is shocked that it’s not as good as production web browsers. It’s that it was billed as “from scratch” but upon deeper inspection it looks like it’s just gluing together Servo and some other dependencies, so it’s not really as impressive or interesting because the “agents” didn’t really create a browser engine.

    • Upon deeper inspection? Someone checked the Cargo file and proclaimed it was just Servo and QuickJS glued together without actually bothering to look at whether these dependencies are even being used.

      In reality, while the project does indeed have Servo in its dependencies, it only uses it for HTML tokenization, CSS selector matching, and some low-level structures. JavaScript parsing and execution, the DOM implementation, and the layout engine were written from scratch, with one exception: Flexbox and Grid layouts are implemented using Taffy, a Rust layout library.

      So while “from scratch” is debatable, it is still immensely impressive to me that AI was able to produce something that even just “kinda works” at this scale.


    • Thanks for the feedback. I agree that for some parts that use dependencies, the agent could have implemented them itself. I've begun the process of removing many of these and developing them within the project alongside the browser. A reasonable goal for "from scratch" may be "if other major browsers use a dependency, it's fine to do so too". For example: OpenSSL, libpng, HarfBuzz, Skia.

      I'd push back on the idea that all the agents did was glue dependencies together — the JS VM, DOM, CSS cascade, inline/block/table layouts, paint systems, text pipeline, chrome, and more are all being developed by agents as part of this project. There are real complex systems being engineered towards the goal of a browser engine, even if not fully there yet.

  • Make it port Firefox's engine to iOS, that's something people would actually use (in countries where Apple is forced to allow other browser engines).

If you look at the original Cursor post, they say they are currently running similar experiments, for instance, this Excel clone:

https://github.com/wilson-anysphere/formula

The Actions overview is impressive: there have been 160,469 workflow runs, of which 247 succeeded. The reason the workflows are failing is that they have exceeded their spending limit. Of course, the agents couldn't care less.

  • I actually ran this one. It measures some 700k lines of code and seems to contain things like a full VBA implementation, complex currency and date parsing, etc. But the UI is extremely basic, doesn't seem to expose any of this advanced functionality, and is buggy to the point of being unusable. Focus will jump around as you type, cells will reset to old values, it will stop responding to keyboard events, etc.

  • IMHO people are missing the forest for the trees. The point of this experiment is not to build a functional browser but to develop ways to make agents create large codebases from scratch over a very long time span. A web browser is just a convenient target because lots of documentation, specs, and tests are available.

    • The point is to learn how to make very large codebases that don't compile? Why do you need tests and specs if it's not going to even run, much less run correctly?


    • ...but it didn't develop ways of doing that did it?

      Any idiot can have Cursor run for 2 weeks and produce a pile of crap that doesn't compile.

      You know the brilliant insight they came out with?

      > A surprising amount of the system's behavior comes down to how we prompt the agents. Getting them to coordinate well, avoid pathological behaviors, and maintain focus over long periods required extensive experimentation. The harness and models matter, but the prompts matter more.

      i.e. It's kind of hard and we didn't really come up with a better solution than 'make sure you write good prompts'.

      Wellll, geeeeeeeee! Thanks for that insight guys!

      Come on. This was complete BS. Planners and workers. Cool. Details? Any details? Annnnnnnyyyyy way to replicate it? What sort of prompts did you use? How did you solve the pathological behaviours?

      Nope. The vagueness in this post... it's not an experiment. It's just fundraising hype.


For my 11th or 12th birthday, I got a pet porcupine and I was ecstatic. It was my first pet, and I spent hours researching what they eat, what habitats they like, etc. I carefully curated my room to accommodate him (him being 'Sonic'), even keeping it clean for the first time in forever so I wouldn't lose him amidst the mess of soiled undergarments and such. He loved it, and I loved him. Of course, it made no difference when my uncle sat on him on Christmas morning. We rushed him to the vet, but they told us his scans showed fractures on several vertebrae or something like that. We took him home, and waited for him to die, but the waiting was too painful. I'll spare the details, but what transpired next involved my dad, his shovel, and a lot of tears.

About an hour later, we got a call from the vet - they'd misread the scan, and Sonic was gonna be fine. I think I was traumatized at the time, but the whole thing later became an inside joke (?) for my family - "Don't kill your porcupine before the vet calls" (a la "Don't count your chickens before they hatch").

I guess my point, as it pertains to Cursor, its AI offerings, and other corporations in the space is that we shouldn't jump the gun before a reasonable framework exists to evaluate such open-ended technologies. Of course Cursor reported this as a success, the incentive structure demands they do so. So remember - don't kill your porcupine before the vet calls.

  • Welcome to HN, thanks for sharing. That’s a very sad story, I hope you aren’t traumatized still.

    A reasonable framework does exist. Since the claim is “we made a web browser from scratch” the framework is:

    1. Does it actually f*** work?

    2. Is it actually from scratch?

    It fails on both counts. Further, even when compiled successfully, as others have pointed out, it takes more than a minute to load some pages, which is a fail for #1.

  • > other corporations in the space is that we shouldn't jump the gun before a reasonable framework exists to evaluate such open-ended technologies

    How else will they raise a Bajillion $ for the next model?

The latest commit now builds and runs (at least on my Mac). It’s tragically broken and the code is…dunno…something. 3m lines of something.

I couldn't make it render the Apple page that was in the Cursor promo. Maybe they used some other build.

  • Yeah, seems latest commit does let `cargo check` successfully run. I'm gonna write an update blog post once they've made their statement, because I'm guessing they're about to say something.

    Something fishy is happening in their `git log`; it doesn't seem like it was the agents who "autonomously" actually made things compile in the end. Notice the git usernames and email addresses switching around; even some commits made inside an EC2 instance managed to get in there: https://gist.github.com/embedding-shapes/d09225180ea3236f180...

  • I am not an expert AI user, but one typical 'failure mode' I see constantly is the AI reimplementing features that already exist in the codebase, or breaking existing ones.

I think the original post was just headline bait. There is such a fast news cycle around AI that many people would take "Thousands of AI agents collaborate to make a web browser" at face value.

  • At least I now have something to link to when this inevitably gets mentioned in some off-hand HN comment about how "now AI agents can build whole browsers from scratch".

    • Literally happened at work. Breathless thread of people saying how insane it was and then we got to link this and it immediately 180-ed and everyone was like “holy shit that’s messed up”


  • A fast news cycle around projects that don't actually work. It's a real bummer that "fake news" became politically charged because it's a perfect description of this segment.

The CEO said

> It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.

"From scratch" sounds very impressive. "custom JS VM" is as well. So let's take a look at the dependencies [1], where we find

- html5ever

- cssparser

- rquickjs

That's just Servo [2], a Rust-based browser initially built by Mozilla (and now maintained by Igalia [3]), with extra steps. So this supposed "from scratch" browser is just calling out to code written by humans. And after all that, it doesn't even compile! It's just plain slop.

[1] - https://github.com/wilsonzlin/fastrender/blob/main/Cargo.tom...

[2] - https://github.com/servo/servo

[3] - https://blogs.igalia.com/mrego/servo-2025-stats/
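
Worth noting: a crate listed in Cargo.toml isn't necessarily exercised by the code, which is the counterargument made elsewhere in the thread. A rough way to check actual usage (a sketch; run inside a clone of the repo, crate names from the list above):

```shell
# count_refs CRATE DIR: rough count of `CRATE::...` references in .rs files.
count_refs() {
  grep -rn --include='*.rs' "$1::" "$2" 2>/dev/null | wc -l | tr -d ' '
}

# Example, inside a clone of the repo:
for crate in html5ever cssparser rquickjs; do
  echo "$crate: $(count_refs "$crate" src) references"
done
```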

  • Why would they think it's a great idea to claim they implemented CSS and JS from scratch, when the first thing any programmer would do is look at the code and immediately find out that they're just using libraries for all of that?! They can't be as dumb as thinking no one would've noticed?!

    I guess the answer is that most people will see the claim, read a couple of comments about "how AI can now write browsers, and probably anything else" from people who are happy to take anything at face value if it supports their view (or business), and move on without seeing any of the later commotion. This happens all the time with the news. No one bothers to check later whether claims were true; people may live their whole lives believing things that were later disproved.

    • > They can't be as dumb as thinking no one would've noticed?!

      With over 20 years of experience as an adult, and more years of noticing adults' dumb mistakes when I was a teen, I can absolutely assure you that even before LLMs were blowing smoke up their users' backsides and flattering their intelligence, plenty of people were dumb enough to make mistakes like this without noticing anything was wrong.

      For example, I'm currently dealing with customer support people that can't seem to handle two simultaneous requests or read the documents they send me, even after being ordered to pay compensation by an Ombudsman. This kind of person can, of course, already be replaced by an LLM.

    • I mean... Cursor is the CEO's first non-internship job. And it was a VSCode Extension that caught fire atop the largest technological groundswell in a few decades.

      The default assumption should be that this is a moderately bright, very inexperienced person who has been put way out over his skis.


    • > Why would they think it's a great idea to claim they implemented CSS and JS from scratch when the first thing any programmer would do is to look at the code and immediately find out that they're just using libraries for all of that?!

      Programmers were not the target audience for this announcement. I don’t 100% know who was, but you can kind of guess that it was a mix of: VC types for funding, other CEOs for clout, AI influencers to hype Cursor.

      Over-hyping a broken demo for funding is a tale as old as time.

      That there’s a bit of a fuck-you to us pleb programmers is probably a bonus.

    • I don't think he intentionally lied. He just didn't know how to check it, and the AI wrote

         - [tick mark emoji] implemented CSS and JS rendering from scratch - **no dependencies**.

    • I'm actually impressed by their ignorance. I could never sleep at night knowing my product is built on such brazen lies.

      Bullshitting and fleecing investors is a skill that needs to be nurtured and perfected over the years.

      I wonder how long this can go on.

      Who is the dumb money here? Are VCs fleecing "stupid" pension funds until they go under?

      Or is it a symptom of a larger grifting economy in the US, where even the president sells vaporware and people are just emulating him, trying to get a piece of the cake?

    • > They can't be as dumb as thinking no one would've noticed?!

      Maybe they're just hoping that there's an investor out there who is exactly that dumb.

  • Yeah, it's

    - Servo's HTML parser

    - Servo's CSS parser

    - QuickJS for JS

    - selectors for CSS selector matching

    - resvg for SVG rendering

    - egui, wgpu, and tiny-skia for rendering

    - tungstenite for WebSocket support

    And all of that comes to 3M+ lines!

  • I'm reminded of the viral tweet along the lines of "Claude just one-shotted a 10k LOC web app from scratch, 10+ independent modules and full test coverage. None of it works, but it was beautiful nonetheless."

  • Thanks for the feedback. I've addressed similar feedback at [0] and provided some more context at [1].

    I do want to briefly note that the JS VM is custom and not QuickJS. It also implemented subsystems like the DOM, CSS cascade, inline/block/table layouts, paint systems, text pipeline, and chrome, and I'd push back against the assertion that it merely calls out to external code. I addressed these points in more detail at [0].

    [0] https://news.ycombinator.com/item?id=46650998

    [1] https://news.ycombinator.com/item?id=46655608

    • > I do want to briefly note that the JS VM is custom and not QuickJS

      It's hard to verify because your project didn't actually compile. But now that you've fixed the compilation manually, can you demonstrate the javascript actually executing? Some of the people who got the slop compiling claimed credibly that it isn't executing any JavaScript.

      You merely have to compile your code, run the binary and open this page - http://acid3.acidtests.org. Feel free to post a video of yourself doing this. Try to avoid the embellishment that has characterised this effort so far.


  • Also selectors and taffy.

    It's also using weirdly old versions of some dependencies (e.g. wgpu 0.17 from June 2023, when the latest is 28, released in December 2025).

    • That is because, I've noticed, the AI edits the version-management files (package.json, Cargo.toml, etc.) directly instead of using the build tool (npm add, cargo add), so it hallucinates a random old version found in its training set. I explicitly have to tell the AI to use the build tool whenever I use AI.
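
      One way to make that instruction stick is a standing rule in whatever agent-rules file the tool reads (file name and wording illustrative):

```markdown
## Dependency hygiene
- Never hand-edit version numbers in `Cargo.toml` or `package.json`.
- Add dependencies with the build tool (`cargo add <crate>`, `npm install <pkg>`)
  so the registry resolves a real, current version instead of a remembered one.
```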


  • Honestly as soon as I saw browser in rust I assumed it had just reproduced the servo source code in part, or utilised its libraries.

    • I thought they'd plagiarise, not import. Importing Servo's code would make it obvious, because it's so easy to look at their dependencies file. And yet... they did. I really think they thought no one would check?


    • You know, a good test would be to tell it to write a browser using a custom programming language, or at least some language for which there are no web browsers written.


  • Is it using Servo's layout code or did Cursor write its own layout? That's one of the hardest parts.

    • It's using layout code from my library (Taffy) for Flexbox and CSS Grid. Servo uses Taffy for CSS Grid, and another open source engine that I work on (Blitz) uses it for Flexbox, CSS Grid, Block and float layout.

      The older block/inline layout modes seem to be custom code that looks to me similar but not exactly the same as Servo code. But I haven't compared this closely.

      I would note that the AI does not seem to have matched either Servo or Blitz in terms of layout: both can lay out Google.com better than the posted screenshot.

    • It seemingly did, but after I saw it define VerticalAlign twice in different files [1][2][3], I concluded that it's probably not coherent enough to be worth checking for correctness.

      Would be interesting if someone who has managed to run it tries it on some actually complicated text layout edge cases (like RTL breaking that splits a ligature necessitating re-shaping, also add some right-padding in there to spice things up).

      [1] https://github.com/wilsonzlin/fastrender/blob/main/src/layou...

      [2] https://github.com/wilsonzlin/fastrender/blob/main/src/layou...

      [3] Neither being the right place for defining a struct that should go into computed style imo.

  • > The JS engine used a custom JS VM being developed in vendor/ecma-rs as part of the browser, which is a copy of my personal JS parser project vendored to make it easier to commit to.

    https://news.ycombinator.com/item?id=46650998

    • It looks like there are two JS backends: quickjs and vm-js (vendor/ecma-rs/vm-js), based on a brief skim of the code. There is some logic to select between the two. I have no idea if either or both of them work.

  • > is just calling out to code written by humans

    Well, at least it's not outright ripping them off like it usually does.

  • To be fair, even if "from scratch" means "download and build Chromium", that's still nontrivial to accomplish. And with how complicated a modern browser is, you can get into Ship of Theseus philosophy pretty fast.

    I wouldn't particularly care what code the agents copied, the bigger indictment is the code doesn't work.

    So really, they failed to meet the bar of "download and build Chromium", and there's no point in talking about the code at all.

I think that the companies with the mindset "let's give engineers tools that leverage their strengths and eliminate toil" have way more success than those scammy "get rich quick, let's automate software development and stop paying those SV salaries, invest in us!!!" gigs like Cursor and Devin.

Their whole attitude leads to them wasting time on these Wile E. Coyote plans instead of building good products like Amp.

  • Huge distinction between the two, one is about "Augmenting the human intellect" and the other is about "Get rich quick", but unfortunately it seems it's hard even for software developers to see which is which sometimes.

These are stories that exist solely to sell shovels, and that could lead some uninformed CEO to lay off actual humans.

I really doubt this marketing approach is effective. Isn't this just shooting themselves in the foot? My actual experience with Cursor has been: their design is excellent and the UX is great—it handles frontend work reasonably well. But as soon as you go deeper, it becomes very prone to serious bugs. While the addition of Claude's new models has helped somewhat, the results are still not as good as Google's Antigravity (despite its poor UX and numerous bugs). What's worse, even with this much-hyped Claude model, you can easily blow through the $20 subscription limit in just a few days. Maybe they're betting on models becoming 10x better and 10x cheaper, but that seems unlikely to happen anytime soon.

  • Hitting my head into buggy apps made by these AI companies and seeing them all be amazed in parallel that skills/MCP would be necessary for real work has me pretty relaxed about ‘our jobs’.

    OpenAI's business-model floundering, soon degenerating into inline ads (lol), shows what can be done with infini-LLM, infini-capital, and all the smarts & connections on Earth… broadly speaking, I think the geniuses at Google who invented a lot of this shizz understand it and were leveraging it appropriately before ChatGPT blew up.

    • We use MCP at work. Due to a typo, the model ran absolutely random queries on our database in most cases. We had initially kept it open-ended, but after that we wrote custom tools that took an input and gave an output, with that strictly specified in the prompt. Only then did it work fine.
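      A sketch of that shift (the function, schema, and names are hypothetical, not the actual MCP SDK): instead of an open-ended run-any-SQL tool, expose one narrow tool per question, with a typed argument, a parameterized query, and a fixed output shape:

```python
import sqlite3

# Hypothetical narrow tool: the model can only ask this one question,
# with a typed argument, so a typo can't turn into a random query.
def get_order_status(db_path: str, order_id: int) -> str:
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT status FROM orders WHERE id = ?",  # parameterized, never string-interpolated
            (order_id,),
        ).fetchone()
        return row[0] if row else "not found"
    finally:
        conn.close()
```

      The tool's name, argument type, and return shape are then exactly what gets stated strictly in the prompt, as described above.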

That's actually the state of autonomous coding in 2026, scale the output, skip the verification.

  • Also, since Firefox is FOSS and any model has presumably been trained on the codebase of at least Firefox, if not also Chromium, it's not a shock that agents are able to generate similar code!

I wonder who they actually tried to impress with that? People who understand and appreciate the difficulty of building a browser from scratch would surely want to understand what you (or your agent) did, in enough depth that they would notice if you didn't.

This is par for the course with this AI slop. Most of the big claims about LLM productivity have completely lacked any backing evidence. Big claims require big evidence, but all I've seen so far is loud assertions and pathetic results.

  • I’m happy that this shows that hard work, understanding your codebase, having performant software, having actually working software, and rigorously measuring and providing proof of results still matter.

    There’s a huge difference between using LLMs to offload any hard work and for LLMs to be of some assistance while you are in control and take ownership of the output.

    Unfortunately, the general public probably didn’t try a git clone and cargo build, and took the article at face value.

Can’t help but draw parallels to what working with AI feels like. Your coworker opens a giant, impressive-looking PR and marks it ready for review. It’s then up to someone else on the team to do the actual work of checking. Meanwhile the PR author gets patted on the back by management for being forward-thinking and proactive while everyone else is “nitpicky” and holding progress back.

  • I’m dealing with similar issues.

    It’s reasonable to come up with team rules like:

    - “if the reviewer finds more than 5 issues the PR shall be rejected immediately for the submitter to rework”

    - “if the reviewer needs to take more than 8 hours to thoroughly review the PR it must be rejected and sent back to split up into manageable change sets”

    Etc. Let’s not make externalizing work onto others acceptable behavior.

    • Eight hours to review! Girlie how big are these PRs?

      I can’t imagine saying, “ah, only six hours of heads down time to review this. That’s reasonable.”

      A combination of peer reviewed architecture documentation and incremental PRs should prevent anything taking nearly 8 hours of review.

      1 reply →

  • Not to mention that juniors can now put the entire problem statement into an AI chatbot, which spits out _some_ code. Said juniors then don't understand half the code, yet run it and raise the PR. They don't get a pat on the back, but this causes countless bugs later on. It's much worse, as they don't develop skills of their own; they blindly copy from the AI.

That's kind of hilarious (...ly sad) to read knowing that I have on my desk https://browser.engineering so I literally went the opposite direction some months ago.

Not only did I actually build a Web browser myself, from scratch (OK OK, of course with a working OS and Python and its libraries ;) but mine did work! It took me what, a few hours, maybe a few days adding it all together. Not only did it work (namely, I browsed my own Website with it) but I had fun with it (!), I learned quite a bit from it (including the provable fact that I can indeed build a Web browser, woohoo!) and finally I did it on... I want to say a few kilowatts at most, including my computer (obviously) but also myself and the food I ate along the way.

So... to each their own ¯\_(ツ)_/¯

I feel that getting anywhere into the neighborhood of “kind of working” for a project like this is noteworthy and a huge milestone. Maybe a better headline would be, however: Agents almost create a working browser.

  • Yes, if Cursor claimed "We let autonomous agents run for weeks, and they produced millions of lines of code, and it kind of looks like a browser, and it kind of runs", then I wouldn't have written and published TFA.

    But their claim wasn't so nuanced, it was "hundreds of agents can work on a single codebase autonomously for weeks and build an entire browser from scratch that works (kinda)". Considering the hand-holding that seems to have been required to get it to compile, this claim doesn't seem to hold up to scrutiny.

    • I've watched them today work in the new repo - https://github.com/wilson-anysphere/fastrender/tree/main , adding another 50k lines trying to optimize scroll/rendering performance (spoiler: not really)

      At this point, it's 1.5M LOC without the vendored crates (so basically excluding the JS engine etc.). Compare that to Servo or Ladybird, which are ~300k LOC each and actually happen to work: agents do love slinging slop.

If it just forked Chromium because it found it on the web, it would also claim it made a browser from scratch. An LLM does not know. It is not a person; it is a thing, just an algorithm.

So they prove that if you have enough money to burn you can use AI to generate terabytes of useless junk?

Who would have thought of that?

Key phrase: "They never actually claim this browser is working and functional." This is what most AI "successes" turn out to be when you apply even a modicum of scrutiny.

  • In my personal experience, Codex and Claude Code are definitively capable tools when used in certain ways.

    What Cursor did with their blogpost seems intentionally and outright misleading, since I'm not able to even run the thing. With Codex/Claude Code it's relatively easy to download it and run it to try for yourself.

    • "Definitively capable tools when used in certain ways". This sounds like "if it doesn't work for you, it's because you don't use it the right way", imo.

      Reminds me of SAP/Salesforce.

      11 replies →

    • > Codex and Claude Code are definitively capable tools when used in certain ways.

      They definitely can make some things better and you can do some things faster, but all the efficiency is gonna get sucked up by companies trying to drop more slop.

      1 reply →

I haven’t studied the project that this is a comment on, but: The article notices that something that compiles, runs, and renders a trivial HTML page might be a good starting point, and I would certainly agree with that when it’s humans writing the code. But is it the only way? Instead of maintaining “builds and runs” as a constant and varying what it does, can it make sense to have “a decent-sized subset of browser functionality” as a constant and varying the “builds and runs” bit? (Admittedly, that bit does not seem to be converging here, but I’m curious in more general terms.)

  • In theory you could generate a bunch of code that seems mostly correct and then gradually tweak it until it's closer and closer to compiling/working, but that seems ill-suited to how current AI agents work (or even how people work). AI agents are prone to make very local fixes without an understanding of wider context, where those local fixes break a lot of assumptions in other pieces of code.

    It can be very hard to determine if an isolated patch that goes from one broken state to a different broken state is on net an improvement. Even if you were to count compile errors and attempt to minimize them, some compile errors can demonstrate fatal flaws in the design while others are minor syntax issues. It's much easier to say that broken tests are very bad and should be avoided completely, as then it's easier to ensure that no patch makes things worse than it was before.
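    That asymmetry can be sketched with a toy simulation (purely illustrative, not a model of any real agent): a gate that rejects any patch regressing currently-working code makes progress monotone, while ungated patching just wanders between broken states:

```python
import random

# Toy model: the "codebase" is a set of outstanding defects. Each patch
# fixes one defect but often has collateral damage elsewhere, like a
# local fix that breaks assumptions in other code.
def repair(gated: bool, n_defects: int = 20, steps: int = 4000, seed: int = 0) -> int:
    rng = random.Random(seed)
    broken = set(range(n_defects))
    for _ in range(steps):
        if not broken:
            break
        fix = rng.choice(sorted(broken))
        # 80% of patches also break two random spots in the codebase
        collateral = (
            {rng.randrange(n_defects) for _ in range(2)}
            if rng.random() < 0.8
            else set()
        )
        if gated and collateral - broken:
            continue  # gate: reject any patch that breaks something currently working
        broken.discard(fix)
        broken |= collateral
    return len(broken)  # defects remaining
```

    Under the gate the defect count can never increase, so it eventually reaches zero; without it the count drifts around an equilibrium, and "more steps" stops meaning "more progress".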

    • > generate a bunch of code that seems mostly correct and then gradually tweak it until it's closer and closer to compiling/working

      The diffusion model of software engineering

  • ...What use is code if it doesn't build and run? What other way is there to build a browser that doesn't involve 'build and run'?

    Writing junk in a text file isn't the hard part.

    • Obviously, it has to eventually build and run if there’s to be any point to it, but is it necessary that every, or even any, step along the way builds and runs? I imagine some sort of iterative set-up where one component generates code, more or less "intelligently", and others check it against the C, HTML, JavaScript, CSS and what-have-you specs, and the whole thing iterates until all the checking components are happy. The components can’t be completely separate, of course, they’d have to be more or less intermingled or convergence would be very slow (like when lcamtuf had his fuzzer generate a JPEG out of an empty file), but isn’t that basically what (large) neural networks are; tangled messes of interconnected functions that do things in ways too complicated for anyone to bother figuring out?

      3 replies →

  • > Can it make sense to have “a decent-sized subset of browser functionality” as a constant and varying the “builds and runs” bit?

    I mean by definition something that doesn't build and run doesn't have any browser-like functionality at all.

Dear god please let AI get forever stuck at this point because it would be so funny

  • Just view the "input cost" vs "output accuracy" graph.

    It _is_ stuck at this point.

    There's so much money involved no one wants to admit it out loud.

    They have no path to the necessary exponential gains and no one is actually working on it.

    • Hilarious thing to say when we've just had some of the biggest leaps ever with Gemini 3 and Opus 4.5.

    • The greatest grift of all time.

      I don’t mean the tech itself, which is kind of useful. I mean the 99% value inflation of a kind-of-useful tool (if you know what you’re doing).

      2 replies →

    • Just one more new model bro the next one is AGI bro just give me a trillion dollars and I’ll build the datacenters and everything will be perfect bro I promise bro please

  • Even if it doesn't see any improvements beyond this point it wouldn't be a big deal. It's good enough for most programmers and any improvements are just a bonus.

    • The masters of mankind are yearning to replace expensive tech workers with this. With agentic versions of LLMs, we are at a point now where they can (and should) certainly try, and create a more hilarious world.

Out of curiosity, what is the most difficult thing about building a browser?

  • The very long task list.

    Browsers contain several high-complexity pieces, each of which could take a while to build on its own, interconnected with reasonably verbose APIs that need to be implemented, or at least stubbed out, for code not to crash. There is also the difficulty of matching existing implementations quirk for quirk.

    I guess the complexity is on-par with operating systems, but with the added compatibility problems that in order to be useful it doesn't just have to load sites intended to be compatible with it, it has to handle sites people actually use on the internet, and those are both a moving target, and tend to use lots of high complexity features that you have to build or at least stub out before the site will even work.

  • In all sincerity, this question is almost identical to "what's the most difficult thing about building an operating system" as a modern browser is tens of millions of lines of code that can run sophisticated applications. It has a network stack, half a dozen parsers, frame construction and reflow modules, composite, render and paint components, front end UI components, an extensibility framework, and more. Each one of these must enable supporting backward compatibility for 30 year old content as well as ridiculously complex contemporary web apps. And it has to load and render sites that a completely programming illiterate fool like me wrote. It must do this all in a performant and secure way using minimal system resources. Also, it probably also must run on Mac, Windows, Linux, Android, iOS, and maybe more.

  • Check out the list of all CSS specifications [1], and then open any one of them and see how lengthy and elaborate each is. Then do the same for each version of the spec published over the last thirty years. Before you can start, you must read and understand all of this at a great level of depth. Still, specifications never tell the complete story. You must be aware of all the nuances that are implied by each requirement in the spec and know how to handle the zillion corner cases that will crop up inevitably.

    And this is just one part. Not even considering the fully sandboxed, mini operating system for running webapps.

    [1] https://www.w3.org/Style/CSS/specs.en.html

I am just so utterly tired of AI companies lying about everything, constantly without end.

The things that modern machine learning can do are absolutely incredible, mindblowing and have myriad uses. But this culture of startup scams to siphon money out of the economy and into the bank accounts of a few investment firms and a couple "visionaries" has just turned what should be an exciting field full of technical advancement into a deluge of mental sewage that's constantly pumped into our faces.

there’s a curve where the conservative middle of AI marketing stunts is held to a higher level of criticism than headlines on either extreme

> company claims they "built a browser" from scratch

> looks inside

> completely useless and busted

30 billion dollar VS Code fork, everyone. When do we start looking at these people for what they are: snake oil salesmen?

They slop laundered the FOSS Servo code into a broken mess and called it a browser, but dumbasses with money will make line go up based on lies. EFF right off.

If this is what makes the AI bubble pop I'll laugh so hard.

  • Wishful thinking. They’re trying (and maybe succeeding) to do a military-industrial-complex-style thing with AI.

    • Probably, but this is one of the few cases where instead of being told how amazing some AI tool is we are shown just what it can do.

This is why AI skeptics exist. We’re now at the point where you can make entirely unsubstantiated claims about AI capability, and even many folks on HN will accept it with a complete lack of discernment. The hype is out of control.

  • > folks on HN will accept it with a complete lack of discernment

    Well, I'm a heavy LLM user, I "believe" LLM helps me a lot for some tasks, but I'm also a developer with decades of experience, so I'm not gonna claim it'll help non-programmers to build software, or whatever. They're tools, not solutions in themselves.

    But even us "folks on HN" who generally keep up with where the ecosystem is going, have a limit I suppose. You need to substantiate what you're saying, and if you're saying you've managed to create a browser, better let others verify that somehow.

    • > but I'm also a developer with decades of experience, so I'm not gonna claim it'll help non-programmers to build software, or whatever. They're tools, not solutions in themselves.

      Also with decades experience, I'd say that it depends how big the non-programmer is dreaming:

      To agree with you: a well-meaning friend sent an entrepreneur my direction, whose idea was "Uber for aircraft". I tried to figure out exactly what they meant, ending the conversation when I realised all the answers were rephrasings of that vague three-word pitch, and that they didn't really know what they wanted to do in any specific, enumerable sense.

      LLMs can't solve the problem when even the person asking doesn't know what they want.

      But at the other end of the scale, I've been asked to give an estimate for an app which, in its entirety, would've been one day's work even with the QA, acceptance testing, and going through the Apple App Store upload process. Like, I kept asking if there was any other hidden complexity, and nope, the entire pitch was what you'd give as a pre-interview code challenge.

      An LLM would've spat out the solution to that in less time than I spent with the people who'd asked me to estimate it.

Lesson 1:

Always take any pronouncement from an AI company (heavily dependent on VC money and public sentiment on AI) with a heavy grain of salt.

hype over reality

I’m building an AI startup myself and I know that world: it’s full of hypesters and hucksters, unfortunately. Also, social media communication + low attention spans + AI slop communication are a blight upon today’s engineering culture.

(this has been fixed)

  • Thank you for telling me about the email, it had a typo :( Been fixed now.

    Regarding the downvotes, I think it's because it's feeling like you're pushing your project although it isn't really super relevant to the topic. The topic is specifically about Cursor failing to live up to their claims.

I think it's only a matter of time until this becomes reality. It's almost inevitable.

My prediction last year was already that in the distant future - more than 10 years into the future - operating systems will create software on the fly. It will be a basic function of computers. However, there might remain a need for stable, deterministic software, the two human-machine interaction models can live together. There will be a need for software that does exactly what one wants in a dumb way and there will be a need for software that does complex things on the fly in an overall less reliable ad hoc way.

  • We might cure cancer in 10 years. We could have Martian colonists in the next decade. Everyone might be commuting to work with a jet pack. Literally anything could happen, especially given a long enough time horizon.

    • You do realize that AI can already today write fairly complex software autonomously, don't you? It's not as if I haven't tested that. It works quite well for certain tasks and with certain programming languages.

      Anyone who knows history knows that people initially tend to underestimate the impact of new technologies, yet few people learn from that lesson.

The amount of negativity in the original post was astounding.

People were making all sorts of statements like:

- “I cloned it and there were loads of compiler warnings”

- “the commit build success rate was a joke”

- “it used 3rd party libs”

- “it is AI slop”

What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

If you are hung up on commit build quality, or code quality, you are completely missing the point, and I fear for your job prospects. These things will get better; they will get safer as the workflows get tuned; they will scale well beyond any of us.

Don’t look at where the tech is. Look where it’s going.

  • As mentioned elsewhere (I'm the author of this blogpost), I'm a heavy LLM user myself, use it everyday as a tool, get lots of benefits from it. It's not a "hit post" on using LLM tools for development, it's a post about Cursor making grand claims without being able to back them up.

    No one is hung up on the quality, but there is a ground fact if something "compiles" or "doesnt". No one is gonna claim a software project was successful if the end artifact doesn't compile.

    • I think for the point of the article, it appeared to, at some point, render homepages for select well known sites. I certainly did not expect this to be a serious browser, with any reliability or legs. I don’t think that is dishonest.

      2 replies →

  • > What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

    Correct, but Gas Town [1] already happened and, what's more, _actually worked_, so this experiment is both useless (because it doesn't demonstrate working software) _and_ derivative (because we've already seen that, with spend similar to that of a single developer, you can set up a project that churns out more code than any human could read in a week).

    [1]: https://github.com/steveyegge/gastown

  • > What they all seem to be just glossing over is how the project unfolded: without human intervention, using computers, in an exceptionally accelerated time frame, working 24hr/day.

    The reason I have yet to publish a book is not because I can't write words. I got to 120k words or so, but they never felt like the right words.

    Nobody's giving me (nor should they give me) a participation trophy for writing 120k words that don't form a satisfying novel.

    Same's true here. We all know that LLMs can write a huge quantity of code. Thing is, so does:

      yes 'printf("Hello World!");'
    

    The hard part, the entire reason to either be afraid for our careers or thrilled we can switch to something more productive than being code monkeys for yet-another-CRUD-app (depending on how we feel), that's the specific test that this experiment failed at.

  • Spending 24h/day to build nothing isn't impressive - it's really, really bad. That's worse than spending 8h/day to build nothing.

    If the piece of shit can't even compile, it's equivalent to 0 lines of code.

    > Don’t look at where the tech is. Look where it’s going.

    Given that the people making the tech seem incapable of not lying, that doesn't give me hope for where it's going!

    Look, I think AI and LLMs in particular are important. But the people actively developing them do not give me any confidence. And, neither do comments like these. If I wanted to believe that all of this is in vain, I would just talk to people like you.

  • >If you are hung up on commit build quality

    I'm sorry but what? Are you really trying to argue that it doesn't matter that nothing works, that all it produced is garbage and that what is really important is that it made that garbage really quickly without human oversight?

    That's.....that's not success.

    • Quality absolutely matters, but it's hyper context dependent.

      Not everything needs to, or should have the same quality standards applied to them. For the purposes of the Cursor post, it doesn't bother me that most of the commits produced failed builds. I assume, from their post, that at some points, it was capable of building, and rendering the pages shown in the video on the post. That alone, is the thing that I think is interesting.

      Would I use this browser? Absolutely not. Do I trust the code? Not a chance in hell. Is that the point? No.

      3 replies →

  • It is hard to look at where it is going when there are so many lies about where the tech is today. There are extraordinary claims made on Twitter all the time about the technology, but when you look into things, it’s all just smoke and mirrors, the claims misrepresent the reality.

  • What a silly take. Where the tech is is extremely relevant. The reality of this blog post is that it shows the tech is clearly not going anywhere better, whatever they imply. 24 hours of useless code is still useless code.

    This idea that quality doesn't matter is silly. Quality is critical for things to work, scale, and be extensible. By either LLMs or humans.

  • People who spend time poking holes in random vendor claims remind me of the folks you see in videos, standing on the beach during a tsunami warning: eyes fixed on the horizon looking for a hundred-foot wave, oblivious to the shore in front of them rapidly being gobbled up by the sea.

    • > oblivious to the shore in front of them rapidly being gobbled up by the sea

      Am I misunderstanding this metaphor? Tsunamis pull the sea back before making landfall.