Comment by brandmeyer
5 years ago
My team has a few TB of data in SQLite files that are themselves dozens of GB each.
We're using them as a replacement for leveldb's sstables, but with the structure of full SQL. It is highly effective.
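A minimal sketch of that pattern, for anyone unfamiliar with it (the file name, table name, and schema here are my own illustration, not the commenter's actual layout): a WITHOUT ROWID SQLite table keeps rows ordered by the primary key, so it behaves like a sorted key-value map and supports sstable-style range scans while still allowing arbitrary SQL.

    import sqlite3

    conn = sqlite3.connect("shard-000.sqlite")

    # WITHOUT ROWID stores rows directly in the primary-key B-tree,
    # i.e. in key order, which is what gives the sstable-like sorted access.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS kv (key BLOB PRIMARY KEY, value BLOB) WITHOUT ROWID"
    )

    def range_scan(lo: bytes, hi: bytes):
        """Ordered iteration over keys in [lo, hi), like an sstable iterator."""
        return conn.execute(
            "SELECT key, value FROM kv WHERE key >= ? AND key < ? ORDER BY key",
            (lo, hi),
        )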
Do you think your team’s usage of SQLite is representative of the average SQLite user’s?
This is the fundamental flaw of 80% thinking. The fact that SQLite continues to reach for more users is what has made it such a successful general-purpose tool.
You didn’t answer the question.
> The fact that SQLite continues to reach for more users is what has made it such a successful general-purpose tool.
I never disputed this. You’re responding to a straw man.
Where has it been suggested that this is the best solution for "the average SQLite user", rather than a tool you can use if it fits your requirements? As for your 10MB number, the article starts by saying you can probably just download the entire database if it isn't above that exact same number.
I made two claims:
> this would be unusable over high latency links.
That is objectively true.
> SQLite databases of pure data usually aren’t over 10MB in size.
No one here has refuted this point.
Any other counterargument addresses a claim I did not make.