Comment by hahahahhaah

16 days ago

One index scan beats 200 index lookups though surely?

I.e. sometimes one query is cheaper; it's not just about the network anymore.

Also you can run your "big" DB like postgres on the same machine too. No law against that.

For analytic queries, yes, a single SQL query often beats many small ones. The query optimizer is allowed to see more opportunities to optimize and avoid unnecessary work.

Most SQLite queries, however, are not analytic queries. They're more like record retrievals.

So hitting a SQLite table with 200 "queries" is similar to hitting a webserver with 200 "GET" commands.
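To put that in code, here is a minimal sketch using Python's standard sqlite3 module; the posts table and its columns are invented for illustration. Each lookup is an in-process call into the library rather than a network round trip.

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1000)])

# "N+1"-style access: one query per record, roughly analogous to 200 GETs
# against a local web server. Each one is just a function call into SQLite.
titles = []
for post_id in range(200):
    row = conn.execute("SELECT title FROM posts WHERE id = ?", (post_id,)).fetchone()
    titles.append(row[0])
print(len(titles))
```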

In terms of ergonomics, SQLite feels more like an application file format with a SQL interface (though it is an embedded relational database).

https://www.sqlite.org/appfileformat.html
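As a rough illustration of the "application file format with a SQL interface" idea (the file name and schema below are made up), the application's document is just a .sqlite file that later runs reopen and query:

```
import sqlite3

# The "document" is a single SQLite file; schema is invented for this sketch.
doc = sqlite3.connect("drawing.sqlite")
doc.execute("CREATE TABLE IF NOT EXISTS shapes (id INTEGER PRIMARY KEY, kind TEXT, x REAL, y REAL)")
doc.execute("INSERT INTO shapes (kind, x, y) VALUES (?, ?, ?)", ("circle", 10.0, 20.0))
doc.commit()
doc.close()

# A later run of the program "opens the document" by connecting again
# and reads it back with ordinary SQL.
doc = sqlite3.connect("drawing.sqlite")
print(doc.execute("SELECT kind, x, y FROM shapes ORDER BY id").fetchall())
doc.close()
```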

  • > The query optimizer is allowed to see more opportunities to optimize and avoid unnecessary work.

    Let's also not forget that db servers can have a memory, in that they can tweak query optimization based on previous queries or scans or whatever state is relevant. SQLite has no memory, in that sense. All query optimizations it makes are based solely upon the single query being processed.
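For what it's worth, you can watch SQLite plan an individual statement with EXPLAIN QUERY PLAN. A small sketch, with invented table and index names:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX users_email ON users(email)")

# The plan is produced per statement; inspecting it shows what the
# optimizer chose for this one query.
for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?", ("a@example.com",)):
    print(row)  # the detail column should mention a SEARCH using users_email
```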

Depends. Throughput is probably higher, but the latency of a big scan might be larger than that of a small lookup, so many small lookups might feel more responsive if they're each rendered independently. The example on the page doesn't look like it can be merged into a single scan. I'm not a SQL expert, but at a glance it does look like it could maybe be compressed into one or two dozen larger lookups.
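A sketch of what that kind of batching might look like, assuming simple point lookups by id that can be grouped into IN (...) lists; the table, ids, and batch size are invented:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item {i}") for i in range(1000)])

wanted = list(range(200))
batch_size = 25  # turns 200 point lookups into 8 slightly larger ones

rows = []
for start in range(0, len(wanted), batch_size):
    batch = wanted[start:start + batch_size]
    placeholders = ",".join("?" * len(batch))
    rows += conn.execute(
        f"SELECT id, payload FROM items WHERE id IN ({placeholders})", batch
    ).fetchall()
print(len(rows))  # 200
```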

One query isn't cheaper than two queries that do the same amount of IO and processing and operate in the same memory space.

  • How is it the same IO?

    Each query needs to navigate the index then read. The two queries do that twice.

    Is it faster to read pages 30-50 of a book by:

    a) Go to page 30, read until 50

    b) Go to page 30, read that page, close book, open book, go to page 31 and so on.

    Each time you open the book, you have to binary search to find the page again (a rough sketch of this comparison follows below).

    • It needs to read the index twice, sure, but that's also likely to be cached? Guessing though.
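A rough sketch of that comparison in SQLite terms, one range scan versus repeated point lookups over the same rows; the table name and row counts are invented and the timings are only illustrative, not a real benchmark:

```
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (page INTEGER PRIMARY KEY, text TEXT)")
conn.executemany("INSERT INTO pages VALUES (?, ?)",
                 [(i, f"page {i}") for i in range(1, 100001)])

# a) one range scan over the primary key: pages 30..50
t0 = time.perf_counter()
scan = conn.execute("SELECT text FROM pages WHERE page BETWEEN 30 AND 50").fetchall()
t1 = time.perf_counter()

# b) 21 separate point lookups, each descending the b-tree again
lookups = [conn.execute("SELECT text FROM pages WHERE page = ?", (p,)).fetchone()
           for p in range(30, 51)]
t2 = time.perf_counter()

print(f"range scan: {t1 - t0:.6f}s  point lookups: {t2 - t1:.6f}s")
```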

  • Yes, (index) scans are rarely faster in typical web apps.

    Unless you have toy amounts of data... or are doing batch operations, which is not typical (and can be problematic for other transactions due to locking, etc.)

    • I admit it is rare. It is more likely if the app has search and the DB has been optimised to bring the needed retrieved data onto the index. But it isn't like I haven't reached for a clustered index a few times.
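A small sketch of that kind of optimisation in SQLite, using a covering index so the searched-for data lives on the index b-tree; the table and index names are invented:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
# Covering index: the column being returned is carried on the index itself,
# so the lookup never has to touch the main table b-tree.
conn.execute("CREATE INDEX docs_title_body ON docs(title, body)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT body FROM docs WHERE title = ?", ("hello",)
).fetchall()
print(plan)  # the plan detail should report a COVERING INDEX search
```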