
Comment by aleda145

19 days ago

https://kavla.dev/

It's an infinite canvas that runs SQL.

I've been working with data my entire career. I feel like we need to alt+tab so much. What if we just put it all on a canvas?

Currently very WIP, but there's a simple Titanic demo available!

Built with tldraw and DuckDB-WASM, running on Cloudflare Durable Objects.

Look at count.co for a Figma-like approach to databases.

We were using it at work (transitioning to Metabase); it's great for exploring, debugging, and prototyping, but it ends up too much of a tangled spaghetti mess for anything long-term. I would not recommend it for reports or dashboards that face users or other departments.

  • That's super interesting!

    With Kavla I want to lean into the exploring/debugging phase for analytics. "Embrace the mess", in a way.

    My vision is that there will be an "export to dbt" button when you're ready to standardize a dashboard.

    What made you pick count? Was spaghetti the major reason you left count, or something else?

    • The choice to use Count was made before I joined the company; IIRC they migrated to it from Tableau.

We wanted to migrate (to Streamlit, back then) so the SQL wouldn't live locked in a tool but inside our git repository, where we could run tests on the logic, etc. The spaghetti mess was felt too, even if it wasn't the main reason to switch.

But then 1) some team changes happened that pushed us towards Metabase, and 2) we found that Streamlit managed by Snowflake is quite costly, compute-time wise. (The compute server that starts when you open a Streamlit report stays live for tens of minutes, which was unexpected to us.)

      ----

Export to dbt sounds great. Count has "export to SQL", which walks the graph of cell dependencies and collects them into a CTE. I can imagine a way to export into a ZIP of SQL+YML files, with one SQL file per cell.

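That dependency-walking export can be sketched with a topological sort over the cells. This is not Count's actual implementation, just a minimal illustration assuming each cell stores its SQL body and the names of the cells it reads from:

```python
from graphlib import TopologicalSorter

# Hypothetical canvas cells: name -> (SQL body, names of cells it reads from).
cells = {
    "passengers": ("SELECT * FROM titanic", []),
    "survivors": ("SELECT * FROM passengers WHERE survived = 1", ["passengers"]),
    "by_class": ("SELECT pclass, COUNT(*) AS n FROM survivors GROUP BY pclass",
                 ["survivors"]),
}

def export_to_sql(cells, final):
    """Flatten `final` and its upstream cells into one query with CTEs.

    For brevity this emits every cell in dependency order; a real exporter
    would first restrict the graph to the ancestors of `final`.
    """
    order = TopologicalSorter({n: deps for n, (_, deps) in cells.items()})
    ctes = [f"{n} AS ({cells[n][0]})"
            for n in order.static_order() if n != final]
    prefix = ("WITH " + ",\n".join(ctes) + "\n") if ctes else ""
    return prefix + cells[final][0]
```

A dbt export would be much the same walk, only writing one `.sql` file per cell (with `ref()` calls instead of CTE names) plus a YML schema file.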

Somehow I had landed on your page a while back and was impressed with the quality of the landing page and also the concept. Hope to use it in the near future.

Really interesting idea! I've only seen stuff like that in ETL pipelines (which are a pain). This sits somewhere between a Python notebook and an ETL pipeline.

By the way, I just shared it in my company's Slack and it looks like there is no Open Graph data for it. Not a complaint, just pointing it out in case you didn't notice/think of it :)

Best of luck!

The website is great and the examples (like getting distinct values of a table as a prerequisite investigation) really get the point across.

In my job I always end up with big notebooks of data exploration that get messy fast, and it's hard to share anything but the final result. Having a canvas that embraces the non-linear nature is a great idea.

  • That has been my experience as well!

    Aside from the non-linearity, what key features would make you use Kavla instead of a notebook?
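The prerequisite investigation mentioned above (listing a column's distinct values before writing the real query) is the kind of throwaway cell such a canvas holds; a minimal sketch, with stdlib sqlite3 standing in for whatever engine the canvas runs and an invented `titanic` table:

```python
import sqlite3

# Toy stand-in for the demo's Titanic data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titanic (pclass INTEGER, embarked TEXT)")
con.executemany("INSERT INTO titanic VALUES (?, ?)",
                [(1, "S"), (2, "C"), (3, "S"), (3, "Q")])

# Exploratory "cell": which values can `embarked` take?
distinct = [row[0] for row in
            con.execute("SELECT DISTINCT embarked FROM titanic ORDER BY embarked")]
print(distinct)  # → ['C', 'Q', 'S']
```

In a notebook this check gets buried between real analysis steps; on a canvas it can sit off to the side of the query it informed.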

Wow, this seems great for doing interviews with an analyst, or just demoing data generally. Cool product!

  • Interviewing is an interesting use case!

    Is that something you're doing? What pain points do you have as interviewer with existing tools?

As someone who loves SQL and wants to transition from frontend work into a DBA specialty, I am very inspired by this.

  • Thank you!

    What resource(s) are you using for learning SQL and DBA concepts?

    I haven't really thought about Kavla as being a learning environment, maybe you are onto something!