> Search the issues of the duckdb GitHub there’s at least 110 open and closed oom (out of memory) and maybe 400 to 500 that reference “memory”.
Ah, missed this the first time around. Will check this out. And yes, I noticed that DuckDB rather aggressively tries to use the resources of your computer.
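(For context, not from the thread: DuckDB does expose settings to cap its resource usage. A minimal sketch, assuming a recent DuckDB version where `memory_limit` and `threads` are supported settings:)

```sql
-- Cap DuckDB's memory budget and worker threads instead of letting it
-- claim most of the machine's RAM and cores (defaults vary by version).
SET memory_limit = '4GB';
SET threads = 4;
```

Queries that exceed the limit may spill to disk or fail with an out-of-memory error rather than exhausting the whole machine.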
I filed many issues. They were autoclosed after 3 months of inactivity.
No, I tried ClickHouse instead, which worked without crashing or manual memory tuning.
Understood: SQLite is to Postgres as DuckDB is to ClickHouse.
I don’t see the analogy, if you’re using it to excuse crashing on small data sets and indexes.
SQLite isn’t small and crashy, it’s small and reliable.
There’s something fundamentally wrong with the codebase/architecture if there are so many memory problems.
And the absolute baseline requirement for a production database is no crashes.