It turns out if you run SQLiteFS on SQLiteFS, performance increases exponentially, and as the number of SQLiteFS layers increases, seek and read times asymptotically approach zero. In other words, this is the world’s first O(0) file system.
At this rate, with only a few performance tweaks, SQLiteFS should be able to retrieve files before you even ask for them.
The problem is that it sends you files before you need them, which clogs up throughput as you try to figure out what the files are supposed to be for. It's more efficient to slow everything back down to NVMe speeds.
Jokes aside, if I understand correctly, the performance increase from SQLite is still assumed to come from the reduced number of OPEN and CLOSE calls. If that's the case, then won't it be as slow as however many SQLite layers you're running? Meaning it should, in theory, lose performance?!
Less humorously, if one were generating a few files for compilation, or caching for a service, an SQLiteFS using :memory: might be just the thing.
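A minimal sketch of that idea, using Python's stdlib sqlite3 module with an in-memory database as a blob store for build artifacts or cached responses (the table layout and helper names here are illustrative, not part of any actual SQLiteFS):

```python
import sqlite3
from typing import Optional

# An in-memory SQLite database as a simple path -> blob cache.
# Every read goes through one long-lived connection, so there are no
# per-file open()/close() syscalls as there would be on a real filesystem.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, data BLOB)")

def put_file(path: str, data: bytes) -> None:
    # INSERT OR REPLACE makes repeated cache writes idempotent.
    db.execute(
        "INSERT OR REPLACE INTO files (path, data) VALUES (?, ?)",
        (path, data),
    )

def get_file(path: str) -> Optional[bytes]:
    row = db.execute(
        "SELECT data FROM files WHERE path = ?", (path,)
    ).fetchone()
    return row[0] if row else None

# Example: stash a compiled object file, then read it back.
put_file("build/main.o", b"\x7fELF...object code...")
artifact = get_file("build/main.o")
```

Since :memory: databases vanish when the connection closes, this only makes sense for transient caches; anything that must survive a restart would need an on-disk database file instead.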
Obviously adding redundant layers will reduce performance.