
Comment by bram2w

2 days ago

When I started working on Baserow (this seems similar based on the roadmap) a couple of years ago, I thought it would be a big challenge to quickly render a million rows in the browser. Introducing a system that fetches a page of rows based on the scroll offset, combined with a small debounce, did the trick. We only had a couple of field types, and it was all incredibly fast.
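Roughly, that idea looks like this in the browser. This is only a minimal sketch; `fetchRows`, `ROW_HEIGHT`, `PAGE_SIZE`, and the `/api/rows` endpoint are illustrative names, not our actual API:

```typescript
const ROW_HEIGHT = 33;   // assumed fixed row height in pixels
const PAGE_SIZE = 100;   // rows fetched per request
const DEBOUNCE_MS = 100; // wait for scrolling to settle before fetching

interface Row {
  id: number;
  [field: string]: unknown;
}

// Hypothetical backend call: fetch one page of rows by offset.
async function fetchRows(offset: number, limit: number): Promise<Row[]> {
  const res = await fetch(`/api/rows?offset=${offset}&limit=${limit}`);
  return res.json();
}

let debounceTimer: ReturnType<typeof setTimeout> | undefined;

function onScroll(container: HTMLElement, render: (offset: number, rows: Row[]) => void) {
  // Translate the scroll position into a row offset, aligned to a page boundary.
  const firstVisibleRow = Math.floor(container.scrollTop / ROW_HEIGHT);
  const offset = Math.floor(firstVisibleRow / PAGE_SIZE) * PAGE_SIZE;

  // Debounce: only fetch once the user stops scrolling for a moment.
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(async () => {
    const rows = await fetchRows(offset, PAGE_SIZE);
    render(offset, rows);
  }, DEBOUNCE_MS);
}
```

With only a handful of simple field types, each page request is a cheap `LIMIT`/`OFFSET` query, which is why this felt fast even with a million rows.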

What makes performance complicated for a no-code database is when you have 30 interconnected tables, some with 200 fields, containing many formulas or other computed fields like lookups or rollups. Updating a single cell can result in thousands of other rows that must be updated across different tables. If there are 30 users making constant changes, locking PostgreSQL rows under the hood while the formulas are recalculated, plus a couple of n8n workflows making many API requests to those tables, that's when things get interesting. Especially in combination with features like webhooks, real-time updates, 100+ filters, grouping, 26 field types, date dependencies, aggregations, and importing/exporting whole databases.
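To give a feel for the fan-out, here is a minimal sketch (not our actual implementation; the field names and the `dependents` map are made up): computed fields form a dependency graph, and one cell edit has to be propagated to everything that transitively depends on it.

```typescript
type FieldId = string; // e.g. "orders.total" or "customers.lifetime_value"

// fieldId -> fields that read from it (formulas, lookups, rollups)
const dependents = new Map<FieldId, FieldId[]>([
  ["order_lines.price",        ["orders.total"]],
  ["orders.total",             ["customers.lifetime_value", "orders.total_with_tax"]],
  ["customers.lifetime_value", ["customers.tier"]],
]);

// Hypothetical recalculation of one computed field; in practice this is a
// bulk UPDATE over many rows, typically inside a transaction holding row locks.
function recalculate(field: FieldId): void {
  console.log(`recalculating ${field}`);
}

// Breadth-first propagation from the edited field.
function propagateUpdate(editedField: FieldId): void {
  const queue: FieldId[] = [...(dependents.get(editedField) ?? [])];
  const seen = new Set<FieldId>();

  while (queue.length > 0) {
    const field = queue.shift()!;
    if (seen.has(field)) continue;
    seen.add(field);
    recalculate(field);
    queue.push(...(dependents.get(field) ?? []));
  }
}

// Editing a single price cell cascades through three other computed fields,
// potentially touching thousands of rows across tables.
propagateUpdate("order_lines.price");
```

Now imagine 30 users and a few workflows triggering this concurrently, each propagation holding row locks while it runs.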

When implementing a new feature, I've heard users say it's not complicated because it's just adding a checkbox. Making it run at scale and keeping things performant is what makes it complicated.