My Browser WASM't Prepared for This. Using DuckDB, Apache Arrow and Web Workers

11 days ago (motifanalytics.medium.com)

Tip for all the blog authors: do NOT post code as an image. Especially do not add a fake editor UI and a drop shadow to the image.

In this case, 25 lines of code is 50 kB of image data.

It also cannot be indexed by search engines, nor read with a screen reader.

We use the WASM build of DuckDB quite extensively at Count (https://count.co - 2-3m queries per month). There are a couple of bugs we've noticed, but given that it's pretty much maintained by a single person, it seems impressively reliable!

  • Looking at your insane pricing page I have to assume that you are sponsoring that single person?

    • I'm confused, nothing about their pricing looks that weird. Businesses don't typically have large BI teams, so you can ride that $199/mo ($2,400/year) plan for a long time, which is small enough that most SMBs can probably expense it without approval.


    • Gotta love being downvoted for daring to suggest a company should sponsor the sole open source dev making their whole product possible.

Author here. Thank you all for the comments. I take full responsibility for stupidly using an image for posting the code snippet. Sorry for that! Also, the article was originally posted almost 2 years ago (and "resurrected" with the recent migration to Medium). This is why a fairly old DuckDB version is referenced there. Some of the issues I observed are now gone too.

Obviously, many things have changed since then. We've experimented extensively and moved back and forth on using DuckDB for our internal cloud-processing architecture. We eventually settled on using it just for reading the data and then handling everything else in custom workers. Even using TypeScript, we achieved close to 1M events/s per worker overall with very high scalability. However, our use case is quite distinct. We use a custom query engine (for sequence processing), which has driven many design decisions.
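To make the split concrete, here is a minimal sketch (not our actual code - the function name and data shapes are made up for illustration) of the pattern: the query engine only reads columnar data, and the event-sequence processing is a plain JS/TS hot loop that would run inside a worker fed batches via postMessage.

```javascript
// Illustrative sketch of doing sequence processing in plain JS
// instead of SQL. In the browser this function would live in a
// Web Worker receiving Arrow-style columnar batches; here it is a
// plain function so the hot loop is easy to see.

// Count non-overlapping occurrences of the event sequence
// [first, second] per user, given columnar arrays (one entry per
// event, sorted by user and time).
function countSequences(userIds, eventTypes, first, second) {
  let matches = 0;
  let sawFirst = false;
  let currentUser = null;
  for (let i = 0; i < userIds.length; i++) {
    if (userIds[i] !== currentUser) { // new user: reset the state machine
      currentUser = userIds[i];
      sawFirst = false;
    }
    if (eventTypes[i] === first) {
      sawFirst = true;
    } else if (sawFirst && eventTypes[i] === second) {
      matches++;
      sawFirst = false; // count non-overlapping matches only
    }
  }
  return matches;
}

// Example: two users, looking for "view" followed by "buy".
const users = [1, 1, 1, 2, 2];
const events = ["view", "buy", "view", "view", "buy"];
console.log(countSequences(users, events, "view", "buy")); // → 2
```

A tight loop over flat columnar arrays like this is exactly the kind of code JS engines JIT well, which is why per-worker throughput in the millions of events per second is plausible.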

Overall, I think DuckDB (both the vanilla and the WASM version) is absolutely phenomenal. It has also matured since my original blog post. I believe we'll only see more and more projects using it as their backbone. For example, MotherDuck is doing some amazing things with it (e.g., https://duckdb.org/2023/03/12/duckdb-ui) but there are also many more exciting initiatives.

> [wasm] is executed in a stack-based virtual machine rather than as a native library code.

Wasm's binary format is indeed a stack-based virtual machine, but that is not how it is executed. Optimizing VMs convert it to SSA form and basic blocks, and finally to machine code, much the same way clang or gcc compile native library code.

It is true that wasm has some overhead, but that is due to portability and sandboxing, not the stack-based binary format.

> On top of the above, memory available to WASM is limited by the browser (in case of Chrome, the limit is currently set at 4GB per tab).

wasm64 (the memory64 proposal) solves this by allowing 64-bit pointers and a lot more than 4GB of memory.

The feature is already supported in Chrome and Firefox, but not everywhere else yet.
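Since support is uneven, you can feature-detect it at runtime the way libraries such as wasm-feature-detect do: ask the engine to validate a tiny module that declares a 64-bit-indexed memory. Engines without memory64 reject the limits flag. (The exact bytes below are my own hand-assembly per the memory64 proposal, so treat them as a sketch.)

```javascript
// Minimal module: just a memory section declaring an i64-indexed memory.
const memory64Module = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x05, 0x03, 0x01,                               // memory section, 1 entry
  0x04, 0x00,                                     // limits flag 0x04 = i64 index, min 0 pages
]);

// true on engines with memory64 (recent Chrome/Firefox), false elsewhere.
const hasWasm64 = WebAssembly.validate(memory64Module);
console.log(hasWasm64 ? "wasm64 supported" : "wasm64 not supported");
```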

  • The more I read about WASM the more it sounds like the JVM

    I'm still not clear on what, at its core, it does differently (in a way that couldn't be bolted onto a subset of the JVM)

    • The JVM is designed around Java. That's really the main difference, and it brings some downsides for the goals of wasm, which include running native code - think C++ or Rust. The JVM is great at Java, which relies on runtime inlining etc., but not at C++, which assumes ahead-of-time inlining and other optimizations.


I’ve been toying with the idea of implementing a distributed analytics engine on top of Cloudflare workers and DuckDB.

I’m not sure if this goes against the Cloudflare TOS though (last time I checked they had some provisions against processing images).