Show HN: DBOS TypeScript – Lightweight Durable Execution Built on Postgres
1 day ago (github.com)
Hi HN - Peter from DBOS here with my co-founder Qian (qianl_cs)
Today we want to share our TypeScript library for lightweight durable execution. We’ve been working on it since last year and recently released v2.0 with a ton of new features and a major API overhaul.
https://github.com/dbos-inc/dbos-transact-ts
Durable execution means persisting the execution state of your program while it runs, so if it is ever interrupted or crashes, it automatically resumes from where it left off.
Durable execution is useful for a lot of things:
- Orchestrating long-running or business-critical workflows so they seamlessly recover from any failure.
- Running reliable background jobs with no timeouts.
- Processing incoming events (e.g. from Kafka) exactly once.
- Running a fault-tolerant distributed task queue.
- Running a reliable cron scheduler.
- Operating an AI agent, or anything that connects to an unreliable or non-deterministic API.
What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.
One big advantage of this approach is that you can add DBOS to ANY TypeScript application–it’s just a library. For example, you can use DBOS to add reliable background jobs or cron scheduling or queues to your Next.js app with no external dependencies except Postgres.
Also, because it’s all in Postgres, you get all the tooling you’re familiar with: backups, GUIs, CLI tools–it all just works.
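For example, a cron-scheduled workflow looks roughly like this (a sketch only; see the docs for the exact decorator names and signatures):

import { DBOS } from "@dbos-inc/dbos-sdk";

class Jobs {
  @DBOS.step()
  static async deleteExpiredSessions() {
    // any ordinary TypeScript code or database call
  }

  // Runs on a cron schedule; each run is checkpointed in Postgres,
  // so an interrupted run is recovered after a restart.
  @DBOS.scheduled({ crontab: "0 * * * *" }) // hourly
  @DBOS.workflow()
  static async hourlyCleanup(_schedTime: Date, _startTime: Date) {
    await Jobs.deleteExpiredSessions();
  }
}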
Want to try DBOS out? Initialize a starter app with:
npx @dbos-inc/create -t dbos-node-starter
Then build and start your app with:
npm install
npm run build
npm run start
Also check out the docs: https://docs.dbos.dev/
We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions you may have.
Interesting idea. It seems like zodb (https://zodb.org) might enable some similar things for python - by simply being an object database?
Is it possible to mix typescript and python steps?
Could you generalize the PostgreSQL requirement and provide a storage interface we could plug into? I think I have a use for this in Polykey (https://GitHub.com/MatrixAI/Polykey), but we use RocksDB (a transactional embedded key-value DB).
That's definitely worth considering! The core algorithms can work with any data store. That said, we're focused on Postgres right now because of its incredible support and popularity.
You could imagine this working well for Cloudflare Workers, especially with time limits on execution. (Or even with the AWS compute market.)
Also this reminds me of orthogonal persistence https://wiki.c2.com/?TransparentPersistence
Did you look into the Smalltalk literature?
Hello! I'm a co-founder at DBOS here and I'm happy to answer any questions :)
Hi there, I think I might have found a typo in your example class in the github README. In the class's `workflow` method, shouldn't we be `await`-ing those steps?
Nice catch. Fixing it :)
Can you change the workflow code for a running workflow that has already advanced some steps? What support does DBOS have for workflow evolution?
It's not recommended--the assumed model is that every workflow finishes on the code version it started. This is managed automatically in our hosted version (DBOS Cloud) and there's an API for self-hosting: https://docs.dbos.dev/typescript/tutorials/development/self-...
That said, we know sometimes you have to do surgery on a long-running workflow, and we're looking at adding better tooling for it. It's completely doable because all the state is stored in Postgres tables (https://docs.dbos.dev/explanations/system-tables).
I know this might sound scripted or cliché, but what is the use case for DBOS?
The main use case is to build reliable programs. For example, orchestrating long-running workflows, running cron jobs, and orchestrating AI agents with human-in-the-loop.
DBOS makes external asynchronous API calls reliable and crashproof, without needing to rely on an external orchestration service.
How do you persist execution state? Does it hook into the Python interpreter to capture referenced variables/data structures etc, so they are available when the state needs to be restored?
That work is done by the decorators! They wrap around your functions and store the execution state of your workflows in Postgres, specifically:
- Which workflows are executing
- What their inputs were
- Which steps have completed
- What their outputs were
Here's a reference for the Postgres tables DBOS uses to manage that state: https://docs.dbos.dev/explanations/system-tables
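To give some intuition, a durable step wrapper conceptually does something like this (a simplified sketch, not our actual schema or implementation; the table name here is made up):

import { Pool } from "pg";

const pool = new Pool(); // connects using the standard PG* environment variables

// Hypothetical table for illustration: step_outputs(workflow_id text, step_id int, output jsonb)
async function runStep<T>(workflowId: string, stepId: number, fn: () => Promise<T>): Promise<T> {
  const prior = await pool.query(
    "SELECT output FROM step_outputs WHERE workflow_id = $1 AND step_id = $2",
    [workflowId, stepId]
  );
  if (prior.rows.length > 0) {
    // This step already finished in an earlier execution (before a crash or restart):
    // reuse its recorded output instead of re-running it.
    return prior.rows[0].output as T;
  }
  const output = await fn(); // first execution of this step
  await pool.query(
    "INSERT INTO step_outputs (workflow_id, step_id, output) VALUES ($1, $2, $3)",
    [workflowId, stepId, JSON.stringify(output)]
  );
  return output;
}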
About workflow recovery: if I'm running multiple instances of my app that uses DBOS and they all crash, how do you divide the work of retrying pending workflows?
Each workflow is tagged by the executor ID that runs it. You can command each new executor to handle a subset of the pending workflows. This is done automatically on DBOS Cloud. Here's the self-hosting guide: https://docs.dbos.dev/typescript/tutorials/development/self-...
Hi, really cool project! This is something I can actually use.
FYI the “Build Crashproof Apps” button in your docs doesn’t do anything.
You'll need to click either the Python or TypeScript icon. We support both languages and will add more icons there.
> What’s unique about DBOS’s take on durable execution (compared to, say, Temporal) is that it’s implemented in a lightweight library that’s totally backed by Postgres. All you have to do to use DBOS is “npm install” it and annotate your program with decorators. The decorators store your program’s execution state in Postgres as it runs and recover it if it crashes. There are no other dependencies you have to manage, no separate workflow server–just your program and Postgres.
This is good until the Postgres server fills up with load and you need to scale up / fan out work to a bunch of workers. How do you handle that?
(Disclosure: former Temporal employee, but no hate meant; I'm all for making more good orchestration choices.)
That's a really good question! Because DBOS is backed by Postgres, it scales as well as Postgres does, so 10K+ steps per second with a large database server. That's good for most workloads. Past that, you can split your workload into multiple services or shard it. Past that, you've probably outscaled any Postgres-based solution (very few services need this scale).
The big advantages of using Postgres are:
1. Simpler architecturally, as there are no external dependencies.
2. You have complete control over your execution state, as it's all on tables on your Postgres server (docs for those tables: https://docs.dbos.dev/explanations/system-tables#system-tabl...)
Unaffiliated with DBOS but I agree that Postgres will scale much further than most startups will ever need! Even Meta still runs MySQL under the hood (albeit with a very thick layer of custom ORM).
Do you consider ”durability” to include idempotency? How can you guarantee that without requiring the developer to specify a (verifiable) rollback procedure for each “step?” If Step 1 inserts a new purchase into my local DB, and Step 2 calls the Stripe API to “create a new purchase,” what if Step 2 fails (even after retries, eg maybe my code is using the wrong URL or Stripe banned me)? Maybe you haven’t “committed” the transaction yet, but I’ve got a row in my database saying a purchase exists. Should something clean this up? Is it my responsibility to make sure that row includes something like a “transaction ID” provided by DBOS?
It just seems that the “durability” guarantees get less reliable as you add more dependencies on external systems. Or at least, the reliability is subject to the interpretation of whichever application code interacts with the result of these workflows (e.g. the shipping service must know to ignore rows in the local purchase DB if they’re not linked to a committed DBOS transaction).
Yes, if your workflow interacts with multiple external systems and you need it to fully back out and clean up after itself after a step fails, you'll need backup steps--this is basically a saga pattern.
Where DBOS helps is in ensuring the entire workflow, including all backup steps, always runs. So if your service is interrupted and that causes the Stripe call to fail, then upon restart your program will automatically retry the Stripe call and, if that doesn't work, back out and run the step that closes out the failed purchase.
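As a sketch (hypothetical step names; not a complete example):

import { DBOS } from "@dbos-inc/dbos-sdk";

class Checkout {
  @DBOS.step()
  static async insertPurchase(orderId: string) {
    // write the pending purchase row to your database
  }

  @DBOS.step()
  static async chargeStripe(orderId: string) {
    // call the Stripe API
  }

  @DBOS.step()
  static async markPurchaseFailed(orderId: string) {
    // compensating step: close out the failed purchase row
  }

  @DBOS.workflow()
  static async purchase(orderId: string) {
    await Checkout.insertPurchase(orderId);
    try {
      await Checkout.chargeStripe(orderId);
    } catch (error) {
      // If the charge ultimately fails, run the compensating step.
      // If the process crashes anywhere in here, the workflow resumes
      // from the last completed step, so this cleanup still runs.
      console.error("Charge failed, backing out", error);
      await Checkout.markPurchaseFailed(orderId);
    }
  }
}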
What are the limits on retroaction? Can retroactive changes revise history?
For example, if I change the code or transactions in a step, how do you reconcile which state to prepare for which transactions? You'd presumably need to reconcile deleted and duplicated calls to the DB.
Generally we recommend against retroaction--the assumed model is that every workflow finishes on the code version it started. This is managed automatically in our hosted version (DBOS Cloud) and there's an API for self-hosting: https://docs.dbos.dev/typescript/tutorials/development/self-...
That said, we know sometimes you have to do surgery on a long-running workflow, and we're looking at adding better tooling for it. It's completely doable because all the state is stored in Postgres tables (https://docs.dbos.dev/explanations/system-tables).
I see the example for running a distributed task queue. The docs aren't so clear though for running a distributed workflow, apart from the comment about using a vm id and the admin API.
We use spot instances for most things to keep costs down and job queues to link steps. Can you provide an example of a distributed workflow setup?
Got it! What specifically are you looking for? If you launch multiple DBOS instances connected to the same Postgres database, they'll automatically form a distributed task queue, dividing new work as it arrives on the queue. If you're looking for a lightweight deployment environment, we also have a hosted solution (DBOS Cloud).
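Roughly like this (a sketch; the class and option names here may not match exactly, so check the queue docs):

import { DBOS, WorkflowQueue } from "@dbos-inc/dbos-sdk";

const queue = new WorkflowQueue("task_queue");

class Tasks {
  @DBOS.workflow()
  static async processItem(item: string) {
    // the actual work for one task
  }
}

async function enqueue() {
  // Any DBOS process connected to the same Postgres database
  // can dequeue and run this workflow.
  await DBOS.startWorkflow(Tasks, { queueName: queue.name }).processItem("item-1");
}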
What is the determinism constraint? I noticed it mentioned several times in blog posts, but one of the use-cases mentioned here is for use with LLMs, which produce non-deterministic outputs.
Great question! A workflow should be deterministic: if called multiple times with the same inputs, it should invoke the same steps with the same inputs in the same order. But steps don't have to be deterministic; they can invoke LLMs, third-party APIs, or any other operation. Docs page on determinism: https://docs.dbos.dev/typescript/tutorials/workflow-tutorial...
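For example (a sketch; callSomeLLM stands in for any LLM client):

import { DBOS } from "@dbos-inc/dbos-sdk";

// Stand-in for any non-deterministic client (an LLM SDK, a third-party API, etc.)
declare function callSomeLLM(prompt: string): Promise<string>;

class Agent {
  // Non-deterministic work lives in steps: DBOS records each step's output,
  // so on recovery the workflow reuses the recorded result instead of
  // calling the API again.
  @DBOS.step()
  static async askLLM(prompt: string): Promise<string> {
    return callSomeLLM(prompt);
  }

  // The workflow itself must be deterministic: same inputs, same steps,
  // same order. Avoid calling Math.random() or Date.now() directly here;
  // put them in steps instead.
  @DBOS.workflow()
  static async answer(question: string): Promise<string> {
    const draft = await Agent.askLLM(question);
    return await Agent.askLLM(`Refine this answer: ${draft}`);
  }
}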
Why typeorm over something like https://mikro-orm.io/?
In addition to TypeORM, DBOS supports several popular ORMs:
- Drizzle (we're also a sponsor of Drizzle): https://docs.dbos.dev/typescript/tutorials/orms/using-drizzl...
- Knex: https://docs.dbos.dev/typescript/tutorials/orms/using-knex
- Prisma: https://docs.dbos.dev/typescript/tutorials/orms/using-prisma
More ORM support is on the way.
Why not always default to using transactions?
Where is the state stored? In my own pg instance? Or is it stored somewhere in the cloud? Also, a small sample code snippet would be helpful.
The state can be stored in any Postgres instance, either locally or in any cloud.
For code, here's the bare minimum code example for a workflow:
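Something along these lines (a sketch; see the docs for the exact current API):

import { DBOS } from "@dbos-inc/dbos-sdk";

class Example {
  @DBOS.step()
  static async stepOne() {
    console.log("Step one completed");
  }

  @DBOS.step()
  static async stepTwo() {
    console.log("Step two completed");
  }

  @DBOS.workflow()
  static async exampleWorkflow() {
    await Example.stepOne();
    await Example.stepTwo();
  }
}

async function main() {
  await DBOS.launch();             // connects to your Postgres instance
  await Example.exampleWorkflow(); // if interrupted, it resumes from the last completed step
  await DBOS.shutdown();
}

main();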
The steps can be any TypeScript function.
Then we have a bunch more examples in our docs: https://docs.dbos.dev/.
Or, if you want to try it yourself, download a template:
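npx @dbos-inc/create -t dbos-node-starter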
Are there any constraints around which functions can be turned into steps? I assume their state (arguments?) need to be serializable?
Also, what happens with versioning? What if I want to deploy new code?
Loved the Supabase coverage from a month ago, showing under the hood what DBOS stores and how the data flows. It made DBOS click for me; before that it felt very abstract.
https://supabase.com/blog/durable-workflows-in-postgres-dbos
https://news.ycombinator.com/item?id=42379974
Is there a way to use it without decorators?
nice work