Show HN: DBOS TypeScript – Lightweight Durable Execution Built on Postgres

1 day ago (github.com)

Hi HN - Peter from DBOS here with my co-founder Qian (qianl_cs)

Today we want to share our TypeScript library for lightweight durable execution. We’ve been working on it since last year and recently released v2.0, with a ton of new features and a major API overhaul.

https://github.com/dbos-inc/dbos-transact-ts

Durable execution means persisting the execution state of your program while it runs, so if it is ever interrupted or crashes, it automatically resumes from where it left off.
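The idea can be sketched in a few lines of plain TypeScript (a concept illustration, not the DBOS API; all names here are made up):

```typescript
// A concept sketch of durable execution, not the DBOS API. Completed
// step results are checkpointed (in DBOS's case, to Postgres), so
// re-running an interrupted workflow skips steps that already finished.

type Checkpoints = Map<string, unknown>; // stands in for a Postgres table

function runStep<T>(
  store: Checkpoints,
  workflowId: string,
  stepName: string,
  fn: () => T,
): T {
  const key = `${workflowId}:${stepName}`;
  if (store.has(key)) return store.get(key) as T; // already done: replay the recorded output
  const result = fn(); // first execution: do the work...
  store.set(key, result); // ...and checkpoint it before moving on
  return result;
}

// A two-step workflow that "crashes" between the steps, then is
// re-run and resumes without repeating step one.
const store: Checkpoints = new Map();
let stepOneRuns = 0;

function workflow(crash: boolean): string {
  const greeting = runStep(store, "wf-1", "step_one", () => {
    stepOneRuns++;
    return "hello";
  });
  if (crash) throw new Error("simulated crash");
  return runStep(store, "wf-1", "step_two", () => greeting + ", world");
}

try { workflow(true); } catch { /* the process died mid-workflow */ }
const result = workflow(false); // resumes: "hello, world", step_one ran only once
```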

Durable execution is useful for a lot of things:

- Orchestrating long-running or business-critical workflows so they seamlessly recover from any failure.

- Running reliable background jobs with no timeouts.

- Processing incoming events (e.g. from Kafka) exactly once.

- Running a fault-tolerant distributed task queue.

- Running a reliable cron scheduler.

- Operating an AI agent, or anything that connects to an unreliable or non-deterministic API.

What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.

One big advantage of this approach is that you can add DBOS to ANY TypeScript application–it’s just a library. For example, you can use DBOS to add reliable background jobs or cron scheduling or queues to your Next.js app with no external dependencies except Postgres.

Also, because it’s all in Postgres, you get all the tooling you’re familiar with: backups, GUIs, CLI tools–it all just works.

Want to try DBOS out? Initialize a starter app with:

    npx @dbos-inc/create -t dbos-node-starter

Then build and start your app with:

    npm install
    npm run build
    npm run start

Also check out the docs: https://docs.dbos.dev/

We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions you may have.

Interesting idea. It seems like zodb (https://zodb.org) might enable some similar things for python - by simply being an object database?

Is it possible to mix typescript and python steps?

Could you genericise the PostgreSQL requirement and provide a storage interface we could plug into? I think I have a use for this in Polykey (https://GitHub.com/MatrixAI/Polykey), but we use RocksDB (a transactional embedded key-value DB).

Hello! I'm a co-founder at DBOS here and I'm happy to answer any questions :)

  • Hi there, I think I might have found a typo in your example class in the github README. In the class's `workflow` method, shouldn't we be `await`-ing those steps?

  • Can you change the workflow code for a running workflow that has already advanced some steps? What support does DBOS have for workflow evolution?

  • I know this might sound scripted or cliché, but what is the use case for DBOS?

    • The main use case is to build reliable programs. For example, orchestrating long-running workflows, running cron jobs, and orchestrating AI agents with human-in-the-loop.

      DBOS makes external asynchronous API calls reliable and crashproof, without needing to rely on an external orchestration service.
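The retry behavior can be sketched in plain TypeScript (an illustration of the idea, not the DBOS API; `withRetries` and `flakyApi` are made-up names; DBOS configures retries on its step decorators):

```typescript
// Illustrative sketch of step-level retries for a flaky external API
// call: retry up to maxAttempts, then surface the error to the workflow.

async function withRetries<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn(); // e.g. a call to Stripe or Kafka
    } catch (err) {
      lastError = err; // transient failure: remember it and try again
    }
  }
  throw lastError; // retries exhausted: let the workflow decide what to do
}

// An API that fails twice before succeeding on the third attempt.
let calls = 0;
async function flakyApi(): Promise<string> {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}
// Usage: await withRetries(flakyApi, 5) succeeds once flakyApi does.
```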

  • How do you persist execution state? Does it hook into the Python interpreter to capture referenced variables/data structures etc, so they are available when the state needs to be restored?

    • That work is done by the decorators! They wrap around your functions and store the execution state of your workflows in Postgres, specifically:

      - Which workflows are executing

      - What their inputs were

      - Which steps have completed

      - What their outputs were

      Here's a reference for the Postgres tables DBOS uses to manage that state: https://docs.dbos.dev/explanations/system-tables
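In rough TypeScript terms, that bookkeeping looks something like this (illustrative types only, not the actual schema; the system-tables doc has the real one):

```typescript
// Illustrative shapes for the recorded state, not the real system tables.

type WorkflowStatus = "PENDING" | "SUCCESS" | "ERROR";

interface WorkflowRecord {
  workflowId: string;
  status: WorkflowStatus; // which workflows are executing
  inputs: unknown[];      // what their inputs were
}

interface StepRecord {
  workflowId: string;
  stepId: number;  // which steps have completed
  output: unknown; // what their outputs were
}

// On restart, any workflow still PENDING is re-dispatched; its completed
// steps replay from the recorded StepRecord outputs instead of re-executing.
function workflowsToRecover(records: WorkflowRecord[]): string[] {
  return records.filter((r) => r.status === "PENDING").map((r) => r.workflowId);
}

const pending = workflowsToRecover([
  { workflowId: "wf-1", status: "SUCCESS", inputs: [] },
  { workflowId: "wf-2", status: "PENDING", inputs: ["order-42"] },
]);
```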

> What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.

this is good until the Postgres server fills up with load and you need to scale up / fan out work to a bunch of workers. how do you handle that?

(disclosure: former Temporal employee, but also no hate meant, I'm all for making more good orchestration choices)

  • That's a really good question! Because DBOS is backed by Postgres, it scales as well as Postgres does, so 10K+ steps per second with a large database server. That's good for most workloads. Past that, you can split your workload into multiple services or shard it. Past that, you've probably outscaled any Postgres-based solution (very few services need this scale).

    The big advantages of using Postgres are:

    1. Simpler architecturally, as there are no external dependencies.

    2. You have complete control over your execution state, as it's all on tables on your Postgres server (docs for those tables: https://docs.dbos.dev/explanations/system-tables#system-tabl...)

    • Unaffiliated with DBOS but I agree that Postgres will scale much further than most startups will ever need! Even Meta still runs MySQL under the hood (albeit with a very thick layer of custom ORM).

Do you consider ”durability” to include idempotency? How can you guarantee that without requiring the developer to specify a (verifiable) rollback procedure for each “step?” If Step 1 inserts a new purchase into my local DB, and Step 2 calls the Stripe API to “create a new purchase,” what if Step 2 fails (even after retries, eg maybe my code is using the wrong URL or Stripe banned me)? Maybe you haven’t “committed” the transaction yet, but I’ve got a row in my database saying a purchase exists. Should something clean this up? Is it my responsibility to make sure that row includes something like a “transaction ID” provided by DBOS?

It just seems that the “durability” guarantees get less reliable as you add more dependencies on external systems. Or at least, the reliability is subject to the interpretation of whichever application code interacts with the result of these workflows (e.g. the shipping service must know to ignore rows in the local purchase DB if they’re not linked to a committed DBOS transaction).

  • Yes, if your workflow interacts with multiple external systems and you need it to fully back out and clean up after itself when a step fails, you'll need backup steps – this is basically a saga pattern.

    Where DBOS helps is in ensuring that the entire workflow, including all backup steps, always runs. So if your service is interrupted and that causes the Stripe call to fail, upon restart your program will automatically retry the Stripe call, and if that doesn't work, back out and run the step that closes out the failed purchase.
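That saga pattern can be sketched in plain TypeScript (illustrative only, not the DBOS API; the step names are made up):

```typescript
// Sketch of the saga pattern: each completed step registers a
// compensation; if a later step fails for good, the compensations
// run in reverse order to back the workflow out cleanly.

async function purchaseWorkflow(
  insertPurchase: () => Promise<void>, // Step 1: write the local row
  chargeStripe: () => Promise<void>,   // Step 2: call the external API
  deletePurchase: () => Promise<void>, // compensation for Step 1
): Promise<"completed" | "rolled-back"> {
  const compensations: Array<() => Promise<void>> = [];
  try {
    await insertPurchase();
    compensations.push(deletePurchase);
    await chargeStripe(); // may fail even after step-level retries
    return "completed";
  } catch {
    for (const undo of compensations.reverse()) await undo(); // back out
    return "rolled-back";
  }
}

// Usage: the Stripe call fails permanently, so the compensation
// removes the locally inserted purchase row.
let rowExists = false;
const outcome = purchaseWorkflow(
  async () => { rowExists = true; },
  async () => { throw new Error("Stripe rejected the charge"); },
  async () => { rowExists = false; },
);
```

The point of running this under durable execution is that the workflow, compensations included, is itself recoverable: a crash mid-rollback resumes and finishes the backup steps.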

What are the limits on retroaction? Can retroactive changes revise history?

For example, if I change the code or transactions in a step, how do you reconcile what state to prepare for which transactions? In particular, how do you reconcile deleted or duplicated calls to the DB?

I see the example for running a distributed task queue. The docs aren't so clear though for running a distributed workflow, apart from the comment about using a vm id and the admin API.

We use spot instances for most things to keep costs down and job queues to link steps. Can you provide an example of a distributed workflow setup?

  • Got it! What specifically are you looking for? If you launch multiple DBOS instances connected to the same Postgres database, they'll automatically form a distributed task queue, dividing new work as it arrives on the queue. If you're looking for a lightweight deployment environment, we also have a hosted solution (DBOS Cloud).
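The division of work can be sketched in plain TypeScript (an in-memory stand-in for the Postgres-backed queue; the class and names are illustrative, not the DBOS queue API):

```typescript
// Sketch of the shared-queue model: several worker instances pull from
// one queue and each task is claimed by exactly one of them. In DBOS
// the queue lives in Postgres; this in-memory version is illustrative.

class SharedQueue<T> {
  private tasks: T[] = [];
  enqueue(task: T) { this.tasks.push(task); }
  claim(): T | undefined { return this.tasks.shift(); } // atomic in Postgres via row locking
}

const queue = new SharedQueue<number>();
for (let i = 0; i < 10; i++) queue.enqueue(i); // 10 tasks arrive

// Two "instances" pull from the same queue. Round-robin claiming here
// stands in for two processes polling concurrently.
const workers = ["worker-a", "worker-b"];
const handledBy = new Map<string, number[]>();
for (const w of workers) handledBy.set(w, []);

let turn = 0;
let task: number | undefined;
while ((task = queue.claim()) !== undefined) {
  const w = workers[turn++ % workers.length];
  handledBy.get(w)!.push(task); // each task handled exactly once
}
```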

What is the determinism constraint? I noticed it mentioned several times in blog posts, but one of the use-cases mentioned here is for use with LLMs, which produce non-deterministic outputs.

Why TypeORM over something like https://mikro-orm.io/?

Where is the state stored? In my own pg instance? Or is it stored somewhere in the cloud? Also, a small sample code snippet would be helpful.

  • The state can be stored in any Postgres instance, either locally or in any cloud.

    For code, here's the bare minimum code example for a workflow:

      class Example {
        @DBOS.step()
        static async step_one() {
          ...
        }
    
        @DBOS.step()
        static async step_two() {
          ...
        }
    
        @DBOS.workflow()
        static async workflow() {
          await Example.step_one()
          await Example.step_two()
        }
      }
    

    A step can be any TypeScript function.

    Then we have a bunch more examples in our docs: https://docs.dbos.dev/.

    Or if you want to try it yourself, download a template:

        npx @dbos-inc/create

    • Are there any constraints around which functions can be turned into steps? I assume their state (arguments?) need to be serializable?

      Also, what happens with versioning? What if I want to deploy new code?
