Comment by somebee

4 years ago

Tbh, I think the js-framework-benchmark is flawed. It mostly tests the performance of the browser. I should write a whole blog post about this. Just as an example, all the table benchmarks use a table with a non-fixed width, which results in a full repaint AND layout of the whole table and page whenever a cell changes. If you change the table to a fixed width (as most real-world tables are), the relative difference between the frameworks increases by a factor of 5 or more.
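
If you want to try that variant yourself, a minimal sketch (the selector and width are arbitrary assumptions, not taken from js-framework-benchmark itself):

```ts
// Minimal sketch: constrain the benchmark table so a single cell change
// no longer forces a layout of the whole table. Selector and width are
// placeholders, not part of the actual benchmark.
const table = document.querySelector<HTMLTableElement>('table');
if (table) {
  table.style.tableLayout = 'fixed'; // column widths stop depending on cell content
  table.style.width = '960px';       // any fixed width will do
}
```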

And when you benchmark the speed of creating 10,000 DOM elements in an instant, probably less than 5% of the time is actually spent inside the framework one is supposed to be testing.
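
A rough way to see that split for yourself (here `renderRows` is just a placeholder for whatever call the framework under test exposes):

```ts
// Rough sketch: separate the time spent in the framework's own code from the
// time until the browser has actually laid out and painted the result.
// `renderRows` is a placeholder for the framework call being benchmarked.
function measureCreate(renderRows: (n: number) => void, n = 10_000) {
  const start = performance.now();
  renderRows(n);                       // framework work + DOM insertion
  const scriptDone = performance.now();

  // Double requestAnimationFrame: the second callback runs after the frame
  // containing the insertion has been rendered, so layout/paint is included.
  requestAnimationFrame(() => {
    requestAnimationFrame(() => {
      const frameDone = performance.now();
      console.log(`inside framework/script: ${(scriptDone - start).toFixed(1)} ms`);
      console.log(`until next painted frame: ${(frameDone - start).toFixed(1)} ms`);
    });
  });
}
```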

I stand by my claim in the mentioned article that tiny changes to a larger DOM tree are a far better indicator of real-world performance than anything else. Here Imba really is orders of magnitude faster than React.

The last time I tested it, Imba was more than 10x faster than Svelte as well, but I'm not proficient enough in Svelte to claim that as a fact, and I have tremendous respect for Rich Harris and everything he's done with Svelte and other libraries.

Setting aside performance comparisons, how does Imba's approach compare to Svelte's from a design perspective? From your Meet the Memoized DOM article, I take it that Imba is basically converting declarative code into imperative code that mutates the DOM – at first glance, that sounds very similar to Svelte's compiler-driven approach.

Are the two strategies as similar as they sound, or am I misunderstanding something?

This is a cool project, somebee. Interested to explore more.

On benchmarking: I went through the same concerns and ended up building a little benchmarking tool for a simple reactive UI library I'm working on. It's not super user-friendly yet, but it does a good job of profiling tasks.

You can write custom benchmarks that clearly separate pre-setup work, rather than relying on ready-made benchmarks (a bit of a pain initially, but it helps a lot to fine-tune at the unit level going forward).

It uses the Chrome DevTools Protocol (CDP) through Puppeteer and lets you analyze execution durations separately (Scripting, Layout, Paint, etc.). Plus, it saves the raw JSON profiling data, so you can import and examine it visually in the timeline of the DevTools Performance tab.

I think it will be helpful: https://github.com/dumijay/pfreak. This is how the results look: https://caldom.org/benchmark/
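
(Not the tool's actual code, but the general shape of that Puppeteer/CDP approach looks roughly like this; the URL and selector below are placeholders.)

```ts
// Rough sketch of tracing a page with Puppeteer (which drives Chrome via CDP).
// The resulting trace.json can be loaded into the DevTools Performance tab.
// URL and selector are placeholders, not taken from the tool linked above.
import puppeteer from 'puppeteer';

async function profile(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  await page.tracing.start({ path: 'trace.json' });
  await page.click('#create-rows'); // the work being measured (hypothetical button)
  await page.tracing.stop();        // trace.json is now on disk

  await browser.close();
}

profile('https://example.com/benchmark').catch(console.error);
```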

Flawed in what sense? I don't doubt that there are some conceptual drawbacks, and this only benchmarked Chrome, not other browsers. I do think there's some utility in relative comparisons that have a standard/fixed baseline, as they still seem to show what overhead a framework/library brings to the table.

FWIW, my initial impression of Imba is that it's very impressive. I do think you rightly point out that, at this point, it may still be hard to leave the larger ecosystems of React/Vue/etc. The DOM/UI speed of a project's JS toolkit generally has not had any meaningful impact on the projects I've worked on in the last several years - the data size, audience, and app space just don't really call for it. However... as my needs change, Imba will be something I'll revisit. Thank you.

  • > The DOM/UI speed of a project's JS toolkit generally has not had any meaningful impact on the projects I've worked on in the last several years

    Maybe you're the exception among your peers or something, but I'd wager you're wrong. Benchmark or no benchmark, imba.io and the site for Scrimba are way snappy. In contrast, when I find myself having to derp around on a landing page or a UI made with React or other contemporary frameworks, I can feel the bloat. Is it possible that being elbow-deep in this stuff has dulled your senses?

    • Not sure how you got from this that I was saying there's no difference between React and Imba, or that it's not snappy/fast.

      I was just saying that for the majority of LOB apps I work on, whether a table of 2000 entries renders in 0.2 seconds or 0.3 seconds has no meaningful impact on the client projects. Even though that's a 50% slowdown, or a 33% speedup, depending on how you measure it, the difference doesn't matter for these projects. If we got up to 20,000 entries and were hitting 2s vs 3s, that might be noticeable and something to address.