I would recommend pagination for a table of that size.
Agreed. No reason to show more than ~100 entries in a single table. Event sourcing isn't about UI patterns but rather one level beyond them: the "back of the frontend" [1]:
[1]: https://bradfrost.com/blog/post/front-of-the-front-end-and-b...
AgGrid, for example, virtualises the dataset and easily renders 100k records: https://www.ag-grid.com/example/
In our app we render large datasets (e.g. 40-50k records) and provide filtering/searching with RxJS.
Search even uses Levenshtein distance, and the entire collection is sorted by similarity score.
Works like a charm.
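For anyone curious, the shape of that pipeline is roughly this (a minimal TypeScript sketch; the `Row` type, the hand-rolled `levenshtein` helper, and the 150ms debounce are my own illustrative choices, not the actual app code):

```ts
import { fromEvent, debounceTime, distinctUntilChanged, map } from "rxjs";

type Row = { id: number; name: string };
declare const records: Row[]; // the 40-50k in-memory records

// Classic two-row dynamic-programming Levenshtein distance.
function levenshtein(a: string, b: string): number {
  const m = a.length, n = b.length;
  let prev = Array.from({ length: n + 1 }, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    const curr = [i];
    for (let j = 1; j <= n; j++) {
      curr[j] = Math.min(
        prev[j] + 1,                                  // deletion
        curr[j - 1] + 1,                              // insertion
        prev[j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
    prev = curr;
  }
  return prev[n];
}

const input = document.querySelector<HTMLInputElement>("#search")!;

const results$ = fromEvent(input, "input").pipe(
  debounceTime(150), // don't re-score 50k rows on every keystroke
  map(() => input.value.toLowerCase().trim()),
  distinctUntilChanged(),
  map(query =>
    records
      .map(row => ({ row, score: levenshtein(query, row.name.toLowerCase()) }))
      .sort((a, b) => a.score - b.score) // most similar first
      .map(scored => scored.row)
  )
);

// Feed whatever renders the (virtualized) table:
results$.subscribe(sorted => console.log(sorted.slice(0, 20)));
```

The debounce matters more than the distance function here: re-scoring and sorting 50k short strings is fast but not free, so you don't want it on every keystroke.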
When you go past 100k-200k records on the client side, typically fetched with the event sourcing pattern, you must resort to WASM. Here's a Rust-based example with 150k records to tackle the situation: https://mpa.nuejs.org/app/?rust
The React example w/ the complicated table API will work fine for zillions of records. Virtualization is not complex math, and there are many libraries that will implement it for you in various languages.
I just tried making an array w/ 1 million items in my browser console, `let nextId = 0; Array.from({ length: 1_000_000 }, () => ({ id: ++nextId, data: Math.random() }))`, without issue. Virtualization is just simple arithmetic to select the firstRender and lastRender indexes in the array. I don't think you need WASM for this.
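To spell out that arithmetic (a hand-rolled sketch; the names `firstRender`/`lastRender` follow the comment above, and the `overscan` padding is my own addition):

```ts
// Fixed-row-height windowing: compute which slice of the array to render.
function visibleRange(
  scrollTop: number,      // container.scrollTop
  viewportHeight: number, // container.clientHeight
  rowHeight: number,      // fixed pixel height per row
  totalRows: number,
  overscan = 5            // extra rows above/below to avoid blank flashes
) {
  const firstRender = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const lastRender = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { firstRender, lastRender };
}
```

The DOM only ever holds `lastRender - firstRender + 1` rows plus a spacer element sized to `totalRows * rowHeight` to keep the scrollbar honest, so the size of the backing array barely matters.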
> you must resort to WASM
Where does the 'must' come from? A React component will trivially handle 200k records with list virtualisation in plain JavaScript.
There is a threshold (depending on the app) where JS crashes with a stack overflow exception, and only WASM can continue from there. In the Nue example with user records the threshold was around 150k records (varying only slightly by browser).
Can you elaborate on that?
https://bvaughn.github.io/react-virtualized/#/components/Lis...
This example seems to be able to do 200k+ rows without any problems at all. For me it's smooth up to 10,000,000 even with dynamic row heights. Is there something I'm missing?
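For reference, the linked demo boils down to something like this (a sketch against react-virtualized's documented `List` props; the row count and pixel sizes are arbitrary):

```tsx
import * as React from "react";
import { List } from "react-virtualized";

const ROW_COUNT = 10_000_000;

// react-virtualized hands each visible row its index plus an absolute-position
// style; rendering without that style would stack every row at the top.
function rowRenderer({ index, key, style }: {
  index: number;
  key: string;
  style: React.CSSProperties;
}) {
  return (
    <div key={key} style={style}>
      Row #{index}
    </div>
  );
}

export const BigList = () => (
  <List
    width={600}
    height={400}
    rowCount={ROW_COUNT}
    rowHeight={24} // fixed here; the demo's dynamic heights use CellMeasurer
    rowRenderer={rowRenderer}
  />
);
```

Rows exist only as indexes until they scroll into view, which is why 10 million behaves the same as 200k.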