Quack-Cluster: A Serverless Distributed SQL Query Engine with DuckDB and Ray

4 days ago (github.com)

So DuckDB was developed to finally allow queries over biggish data without needing a cluster, to simplify data analysis... and now we put it on a cluster?

I think there are solutions for that scale of data already, and simplicity is the best feature of DuckDB (at least for me).

  • > "So DuckDB was developed to finally allow queries over biggish data without needing a cluster, to simplify data analysis... and now we put it on a cluster?"

    This is a fair point, but I think there's a middle ground. DuckDB handles surprisingly large datasets on a single machine, but "surprisingly large" still has limits. If you're querying 10TB of parquet files across S3, even DuckDB needs help.

    The question is whether Ray is the right distributed layer for this. Curious what the alternative would be—Spark feels like overkill, but rolling your own coordination is painful.
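To make the "rolling your own coordination" point concrete: the core distributed pattern a system like this needs is just scatter-gather — each worker computes a partial aggregate over its partition, and a coordinator merges the partials. A minimal sketch of that shape in pure Python (a thread pool stands in for Ray workers, and plain lists stand in for parquet partitions; all names here are hypothetical):

```python
# Scatter-gather sketch: workers compute partial aggregates over their
# partitions, the coordinator merges them. A thread pool stands in for
# Ray workers; lists of numbers stand in for parquet partitions.
from concurrent.futures import ThreadPoolExecutor

def partial_agg(partition):
    # Per-worker step: SUM and COUNT over one partition.
    # (AVG can't be merged directly, but SUM/COUNT pairs can.)
    return sum(partition), len(partition)

def distributed_avg(partitions):
    # Scatter: one partial aggregate per partition.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(partial_agg, partitions))
    # Gather: merge the partials into the final answer.
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

if __name__ == "__main__":
    print(distributed_avg([[1, 2, 3], [4, 5], [6]]))  # → 3.5
```

The painful parts are everything this sketch omits: retries, stragglers, shuffles for joins, and spilling — which is the gap Ray (or Spark) fills.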

  • Big fan of this pushback, because there are a lot of projects with that over-engineering-on-the-wrong-base smell (especially with vibecoding now). Though there are use cases where people have lots of medium-sized data divided up. For compliance, I have a lot of reporting data split such that DuckDB instances running in separate processes work amazingly for us, especially with lower complexity than other compute engines in that environment. If I wanted to move everything into a ClickHouse/Trino/Databricks/etc., it would work well, but the compliance complexity skyrockets: we'd need perfect configs and tons of extra time invested to get the same devex.

Interesting take on extending DuckDB beyond single-machine limits. The discussion about "over-engineering" vs. real scale needs resonates with a project I worked on recently: sometimes you hit that awkward middle ground where single-node DuckDB maxes out but full Spark feels like bringing a cannon to a knife fight. The Ray abstraction here is clever for bridging that gap, though the serverless claims seem overstated given Ray's infrastructure requirements.

What is the lifetime of the Ray workers, or, in other words, what is the scalability / scale-to-zero story that makes this serverless?

In my experience, Ray clusters don't scale well and end up costing you more money. You need to run permanent per-user instances, etc.

What you need is a multi-tenancy shared infrastructure that is elastic.

> "Forget about managing complex server infrastructure for your database needs."

So what does this run on then?

No docs; it's not possible to find any deployment guides for Ray on serverless platforms like Lambda, Cloud Functions, or even your own Firecracker VMs.

Instead, every other post mentions EKS or EC2.

The Ray team even expressly rejected Lambda support as far back as 2020 [0]. Uuuuuugh.

No thanks! shiver

I'd rather cut complexity for practically the same benefit and either do it single-machine or have a thin, manageable layer on top of a truly serverless infra, like in this talk [1], "Processing Trillions of Records at Okta with Mini Serverless Databases".

0: https://github.com/ray-project/ray/issues/9983

1: https://www.youtube.com/watch?v=TrmJilG4GXk