DuckDB also runs in Excel, by the way, via the free xlwings Lite add-in that you can install from the add-in store. It uses the Python package and lets you write scripts and custom functions, as well as use a Jupyter-like notebook workflow.
If you start with Excel, I'll counter with Postgres: https://github.com/duckdb/pg_duckdb. I haven't found the time to check this on one of our installations, though.
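To make the Excel route concrete: under the hood it's the standard duckdb Python package, so a custom-function body is only a few lines. A minimal sketch, with the xlwings wiring omitted and the file and column names invented:

```python
import duckdb

# Aggregate a CSV and return a DataFrame that the add-in can spill
# back into the sheet. 'sales.csv', 'region', and 'amount' are
# placeholder names for this example.
def sales_by_region():
    return duckdb.sql("""
        SELECT region, sum(amount) AS total
        FROM read_csv_auto('sales.csv')
        GROUP BY region
        ORDER BY total DESC
    """).df()
```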
It's been a lifesaver for some analysis I had to do on 70GB of Cloudflare logs.
I'd be lost without it for log analysis. It's like a Swiss Army knife for making sense of disparate or large sets of data. So easy to pull up, so cooperative with data that's easy to compose from curl and bash, etc. It makes life so much easier.
Yep. And easy to reuse as well since it's just SQL.
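For a flavor of the "easy to pull up" part: DuckDB can read files straight off HTTP(S) via its httpfs extension (recent versions usually autoload it), so a throwaway log summary is a single query. A sketch, with the URL and field names as placeholders:

```python
import duckdb

# Tally response codes in a JSON-lines log without downloading it first.
duckdb.sql("""
    SELECT status, count(*) AS n
    FROM read_json_auto('https://example.com/logs/app.jsonl')
    GROUP BY status
    ORDER BY n DESC
""").show()
```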
So is DuckDB a database, or a CLI tool to query all sorts of file formats using SQL statements? I've used it as a CLI tool, but I somehow don't understand the comparison to a database, which stores your data reliably besides responding to your SQL queries.
My personal use case is a replacement for pandas for ad hoc analysis in Jupyter notebooks, which I have to do very often these days. If I had to store the data I'd pick S3+Glue+Athena.
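The short answer to the question above is "both": the same in-process engine runs as an ephemeral query tool or against a persistent database file. A minimal sketch (the file name and data are invented):

```python
import duckdb
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "a"], "amount": [3, 5, 7]})

# Tool mode: the default connection is in-memory and can query files
# or in-scope DataFrames directly; nothing is persisted.
duckdb.sql("SELECT name, sum(amount) AS total FROM df GROUP BY name").show()

# Database mode: connect to a file and tables are stored durably,
# with transactions, like any other database.
con = duckdb.connect("analytics.duckdb")
con.execute("CREATE TABLE IF NOT EXISTS sales AS SELECT * FROM df")
con.close()
```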
I benchmarked DuckDB 1.5.2 with the latest Java JDBC driver, which now supports user-defined functions. This allows very fast modifications: https://sqg.dev/blog/java-duckdb-benchmark/
Data engineer here: I use this all the time. It's amazing. For most of the data sizes we deal with, it's perfect.
> For most of the data sizes we deal with, it's perfect.
Interested here: for me it works for out-of-core work. Where is the limit? On a related note: do you need to handle concurrency restrictions?
I must be doing something wrong, but if I try a huge join on a table bigger than my RAM, I get crashes no matter the flags or spill-to-disk modes I enable. I'm sure I'm doing something wrong.
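For what it's worth, the knobs usually pointed at for larger-than-memory work are below; whether they rescue a particular join depends on the query shape and the DuckDB version, so treat this as a sketch rather than a fix:

```python
import duckdb

# Spilling needs somewhere to spill: a file-backed database (or at
# least a temp_directory) rather than a purely in-memory connection.
con = duckdb.connect("big.duckdb")
con.execute("SET memory_limit = '8GB'")             # cap usage before the OS OOM-kills the process
con.execute("SET temp_directory = '/tmp/duck_tmp'") # where out-of-core operators write spill files
con.execute("SET preserve_insertion_order = false") # relaxes a memory-hungry ordering guarantee
```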
Did they finally enable full SIMD, or do they keep insisting it's okay not to have it?
Hm, our internal benchmarking shows something like a 30x speedup compared to SQLite (https://github.com/ClickHouse/ClickBench shows an even greater speedup due to not considering cache size). Doing the back-of-the-envelope math, I'd estimate 8x for multithreading and 4x for SIMD. Should we expect even more?
fwiw:
"Performance Does DuckDB use SIMD? DuckDB does not use explicit SIMD (single instruction, multiple data) instructions because they greatly complicate portability and compilation. Instead, DuckDB uses implicit SIMD, where we go to great lengths to write our C++ code in such a way that the compiler can auto-generate SIMD instructions for the specific hardware. As an example why this is a good idea, it took 10 minutes to port DuckDB to the Apple Silicon architecture."
https://duckdb.org/faq
I use duckdb often too, but the way it is being hyped in these comments makes me feel like I'm missing out on some insane use case.
I basically use it to load CSV, JSONL, Parquet, and other formats and do arbitrary transformations. Are people doing something else with it?
Probably because you don't have to do those arbitrary transformations that often. I do, being in a security-related role. But I wouldn't have recognized its usefulness in my previous roles as a front-end/back-end dev.
Maybe you are unconsciously doing the right thing(TM) already? So try doing it with SQLite instead :)
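For concreteness, the load-and-transform workflow described a couple of comments up usually boils down to a single COPY. A sketch, with file and column names invented:

```python
import duckdb

# Read JSON lines, aggregate, and write Parquet in one statement.
duckdb.sql("""
    COPY (
        SELECT ts::DATE AS day, user_id, count(*) AS events
        FROM read_json_auto('events.jsonl')
        GROUP BY ALL
    ) TO 'daily_events.parquet' (FORMAT PARQUET)
""")
```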
duckdb is a generational technology innovation. insanely good ergonomics, great performance, it's awesome.
Can confirm: together with `dbt` and `rill` I'm able to do [this](https://github.com/idesis-gmbh/GitHubExperiments/blob/master...) on my laptop.
Whoa, nice! I could see this being useful to people I work with. Do you think it would be a good setup for people who are technical but not great software developers? People who use basic R and Python for ETL and analysis, mostly.
Is rill open source?
Why did you pick rill?
I got introduced to it by Claude the other day as I was interrogating several GB of public CSV files. It seemed magical as it put them all in Parquet files and transformed what I needed into the normalized SQLite for my server. Coding agents seem quite comfortable with it!
The Claude + DuckDB combo is legendary for doing quick analysis of huge datasets. Every time I need to analyze a big-ass CSV (200 MB+), or as you noted a Parquet file, or really anything columnar, I'll tell Claude, "you have duckdb at your disposal for this," and within minutes it's all sorted (no pun intended).
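The CSV-to-Parquet-to-SQLite pipeline mentioned above is only a few statements, which is probably why agents handle it so well. A sketch, where the paths and names are placeholders and the last step relies on DuckDB's sqlite extension:

```python
import duckdb

# Sweep a directory of CSVs into a single Parquet file.
duckdb.sql("COPY (FROM 'raw/*.csv') TO 'all.parquet' (FORMAT PARQUET)")

# Push the normalized slice into a SQLite file for the server.
duckdb.sql("ATTACH 'server.db' AS app (TYPE sqlite)")
duckdb.sql("""
    CREATE TABLE app.users AS
    SELECT DISTINCT user_id, email FROM 'all.parquet'
""")
```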
I use it almost daily. Any time I benchmark changes or analyze logs, I collect the data I need as CSV and analyze it with duckdb. The flexibility and ease mean I find so much more interesting information. It's indispensable to me now.
SQLite
both
I found it unusable due to out-of-memory errors with a billion-row, 8-column dataset.
It needs manual tuning to avoid those errors, and I couldn't find the right incantation; nor should I need to, since memory management is the job of the db, not me. Far too flaky for any production usage.
That sounds like a rather serious application. Did you file an issue?
No, I tried ClickHouse instead, which worked without crashing or manual memory tuning.
Search the issues on the DuckDB GitHub: there are at least 110 open and closed OOM (out of memory) issues, and maybe 400 to 500 that reference "memory".
Any opinions on DuckLake?
The problem space that DuckLake solves is smaller, but it helped me get a working Metabase dashboard quickly on ~1 TB of data with 128 GB of RAM. Queries were much, much faster than all alternatives.
Some downsides: no unique constraints with indexes (you can accidentally shoot yourself in the foot with double ingestion), and writing is a bit cumbersome if you already have Parquet files.
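For reference, attaching a DuckLake catalog looks roughly like the sketch below, going from the DuckLake docs as I recall them; the paths are placeholders, and the insert-select at the end is the "cumbersome" route for pre-existing Parquet files mentioned above:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")
# A metadata database plus a directory where the Parquet data files live.
con.execute("ATTACH 'ducklake:metadata.ducklake' AS lake (DATA_PATH 'lake_data/')")
# Existing Parquet has to be rewritten into the lake via insert-select.
con.execute("CREATE TABLE lake.events AS SELECT * FROM 'existing/*.parquet'")
```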
I had a very good experience with it last year. I used it at large scale with data that had previously been in Iceberg, and it worked flawlessly. It's only improved since. Highly recommend.
With my enterprise hat on, I'd say Athena + S3 is good enough. Only use DuckDB for ad hoc analysis.
Seems stable enough; they patched a bunch of things.