Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs


Hi HN, we're Sam, Shane, and Abhi.

Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It's kind of fun looking back, since we were only a few months into building at the time. The HN community responded with a lot of enthusiasm and some helpful feedback.

Today, we released the stable version of Mastra 1.0, so we wanted to come back and talk about what's changed.

If you're new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect agents in a local studio, and emit observability data.

Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.

Agent development is changing quickly, so we’ve added a lot since February:

- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`), with TS autocomplete and fallbacks (rough sketch after this list).

- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.

- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals, so we wanted scorers to be easy to attach to Mastra agents, runnable in Mastra studio, and able to save results to Mastra storage.

- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.

(That last one took a bit of time: we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
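
Here's a rough sketch of what the model routing piece looks like (simplified; the docs cover the exact options and fallback config):

    import { Agent } from "@mastra/core/agent";

    // The model string is routed by Mastra; no provider SDK wiring needed.
    const supportAgent = new Agent({
      name: "support-agent",
      instructions: "You help users troubleshoot their orders.",
      model: "openai/gpt-5.2-codex",
    });

    const result = await supportAgent.generate("Summarize the last error I hit.");
    console.log(result.text);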

Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.

We'll be around and happy to answer any questions!

> That last one took a bit of time: we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.

shudders in vietnam war flashbacks. Congrats on the launch, guys!!!

For those who want an independent third-party endorsement, here's the Brex CTO talking about Mastra in their AI engineering stack: http://latent.space/p/brex

  • LOL thanks swyx. Yeah we realized although we _could_ fight that war again...it would be better for everyone if we didn't...

I worked with Mastra for three months and it is awesome. Thank you for making a great product.

One thing to consider is that it felt clunky working with workflows and branching logic with non-LLM agents. I have a strong preference for using rules-based logic and heuristics first. That way, if I do need to bring in the big-gun LLM models, I already have the context engineering solved. To me, an agent means anything with agency. After a couple of weeks of frustration, I started using my own custom branching workflows.

One reason to use rules: they're free and 10,000x faster, with an LLM agent as a fallback when the validation rules don't pass. Instead of running an LLM agent to solve a problem every single time, I can have the LLM write the rules once. The whole thing got messy.
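
Roughly, the pattern looks like this (hypothetical names here, nothing Mastra-specific):

    // Deterministic rules run first; the LLM agent is only the fallback.
    type Order = { total: number; items: string[] };

    const rules: Array<(o: Order) => boolean | string> = [
      (o) => o.total > 0 || "total must be positive",
      (o) => o.items.length > 0 || "order must contain at least one item",
    ];

    async function validateOrder(
      order: Order,
      llmFallback: (order: Order, failures: string[]) => Promise<string>,
    ): Promise<string> {
      // Free, fast, deterministic checks.
      const failures = rules
        .map((rule) => rule(order))
        .filter((r): r is string => typeof r === "string");

      if (failures.length === 0) return "ok";

      // Only pay for the LLM when the cheap rules can't resolve it.
      return llmFallback(order, failures);
    }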

Otherwise, Mastra is best in class for working with TypeScript.

  • I learned that every step that can be solved reasonably without an LLM should be solved without an LLM. Reliability, cost, performance, etc.

    I try to transfer as much work as I can out of LLMs and into deterministic steps. This includes most of the “orchestration” layer which is usually deterministic by nature.

    Sprinkle a little bit of AI in the right places and you’ll get something that appears genuinely intelligent. Rely too much on AI and it’s dumb as fuck.

    Make their tasks very small and simple (ideally, one step), give them only the context and tools that they need and nothing else, and provide them with feedback when they inevitably mess up (ideally, deterministically), and hope for the best.

  • Thank you for using us, and for the feedback!

    Do you have code snippets you can share about how you wanted to write the rules? Want to understand desired grammar / syntax better.

Mastra looks great!

- How do you compare Mastra with Tanstack AI? And/or do you plan to build on top of Tanstack AI, the way you do with the Vercel AI SDK?

- Since there's a Mastra cloud, do you have an idea as to what features will be exclusive to the hosted version?

  • Re: Tanstack AI, really depends on adoption. We've known Tanner since his react-static days and if it takes off we'll def work together.

    Re: Mastra cloud -- this is basically hosted services, eg observability, hosted studio, hosted serverless deployments, as distinct from the framework.

    With server adapters you can now deploy your studio in your infra. We're going to pull multi-project / multi-user Mastra cloud features into a Mastra admin feature so you can run these locally or deploy them on your infra as well (with EE licensing for stuff like RBAC). Stay tuned here.

Ran through quickstart, created my first agent "Friendo" that acts as my best friend, chatted a bit. Nice UI, cool systems, hope to play with it more and build something, but I'm just not sure what yet.

I've been building with Mastra for a couple of weeks now and loving it, so congratulations on reaching 1.0!

It's built on top of Vercel AI elements/SDK and it seems to me that was a good decision.

My mental heuristic is:

Vercel AI SDK = library, low level

Mastra = framework

Then Vercel AI Elements gives you an optional pre-built UI.

However, I read the blog post for the upcoming AI SDK 6.0 release last week, and it seems like it's shifting more towards being a framework as well. What are your thoughts on this? Are these two tools going to align further in the future?

https://vercel.com/blog/ai-sdk-6

  • Have a ton of respect for the AI SDK team. Initially we only used AI SDK model routing, but now we have our own built-in model routing as well.

    I see each of us having different architectures. AI SDK is more low-level, and Mastra is more integrated with storage powering our studio, evals, memory, workflow suspend/resume etc.

    • What a corporate and wishy-washy response that just basically repeated what I said back at me.

      I was hoping to actually engage with you but I guess you just came here to do marketing.

      > AI SDK is more low-level

      AI SDK was more low-level. My question was: since the latest V6 release is moving towards higher-level components, what do you think about that? How will you continue to differentiate your product if Vercel makes moves to eat your lunch?

      That's almost certainly their intention here, following their highly successful Next.js playbook: start by creating low-level dev tools, gradually expand the scope, and make sure all the docs and setup guides steer you towards deploying on their infrastructure.

Congratulations! I’m a fan of the publicity work and general out-of-the-box DX! That stuff matters a lot and I’m happy you’re aware.

I wonder: are there any large general-purpose agent harnesses developed using Mastra? From what I can tell, OpenCode chose not to use it.

A lot of people on here repeat that rolling your own is more powerful than using Langchain or other frameworks and I wonder how Mastra relates to this sentiment.

  • When Langchain was the only option, rolling your own made a lot of sense!

    These days we see things going the other way, where teams that started rolling their own shift over to Mastra so they can focus on the agent vs having to maintain an internal framework.

    The Latent Space article swyx linked earlier includes a quote from the Brex CTO talking about how they did that.

We use TypeScript for our entire stack and it's super dope to see a production-grade framework (with no vendor lock-in) launch!

  • Thanks! That's a lot of why we built Mastra. We wanted something that felt like it was made for us.

Why should I use this over say Strands Agents [1] or Spring AI [2]?

[1]: https://strandsagents.com

[2]: https://spring.io/projects/spring-ai

  • You should use whatever framework you feel like has the best DX / fits your stack best!

    We're TypeScript-first and TypeScript-only, so a lot of the teams who use us are full-stack TypeScript devs who want an agent framework that feels TS-native, easy to use, and feature-complete.

  • Seems like none of these are TypeScript-based? Strands appears to have a TypeScript SDK available, but it isn't natively TS.

    • Language, although an important factor, shouldn't be the only factor in deciding whether to use a tool. I'm curious whether there's something unique Mastra brings to the table compared to the other alternatives.

is "from the Gatsby devs" some how supposed to help the credential? Looks like a cool framework regardless of that.

  • If I had some heartfelt advice for the Mastra devrel team, it would be to shut up about Gatsby.

    I'm a happy Mastra user and I'm biased to their success. But I think linking it to an unrelated project is only going to matter to non-technical CXOs who choose technology based on names not merits. And that's not the audience Mastra needs to appeal to to be successful. Good dev tools and techs trickle from the bottom up in engineering organizations.

    • Thanks for the feedback. We hear from a lot of devs with fond memories of Gatsby but if it cuts the opposite way for you that's also fair!

      Most of us spent a lot of the last decade building Gatsby so it's sort of a personal identity/pride thing for us more than a marketing thing. But maybe we need to keep our identity small! Either way, thanks for saying something, worth thinking about.

From punch cards to assembly, to C, to modern languages and web frameworks, each generation has raised the level of abstraction. Agentic frameworks are the next step.

> a `.network()` method that turns any agent into a routing agent

say more pls?

  • We've always supported letting folks specify their agent hierarchy, eg agent supervisor, workflow orchestrator, mix and match, etc.

    But people kept asking us for an out-of-the-box multi-agent primitive, so we shipped `agent.network()`, which is basically a dynamic hierarchy decided at runtime: pass an array of workflows and agents to the routing agent and let it decide what to do, how long to execute for, and so on!

    https://mastra.ai/docs/agents/networks
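
    Roughly, it can look something like this (simplified sketch -- the exact registration shape and call signature are in the networks docs above):

      import { Agent } from "@mastra/core/agent";

      const researchAgent = new Agent({
        name: "researcher",
        instructions: "Research a topic and return bullet-point notes.",
        model: "openai/gpt-5.2-codex",
      });

      const writerAgent = new Agent({
        name: "writer",
        instructions: "Turn research notes into a short summary.",
        model: "openai/gpt-5.2-codex",
      });

      // The routing agent gets the sub-agents (and/or workflows) and decides
      // at runtime what to call and for how long.
      const router = new Agent({
        name: "router",
        instructions: "Route each request to the right sub-agent.",
        model: "openai/gpt-5.2-codex",
        agents: { researchAgent, writerAgent }, // registration shape simplified
      });

      const run = await router.network("Research topic X and draft a short summary");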

You're not locked into a model, but you likely are locked into a platform. This DX and convenience just shifts where in the stack the lock-in occurs. Not criticizing - just a choice people should be conscious of.

Another useful question to ask: since you’re likely using 1 of 3 frontier models anyway, do you believe Claude Agent SDK will increasingly become the workflow and runtime of agentic work? Or if not Claude itself, will that set the pattern for how the work is executed? If you do, why use a wrapper?

  • Re: lessons from coding agents, we're building some of the key abstractions like sandboxes, filesystem, and skills/knowledge as Mastra primitives over the next month.

    For any agent you're shipping to production, though, you probably want a harness that's open-source so you can more fully control and customize the experience.

    • I think that's fair, totally, but I also think a Skill would be considered a primitive in and of itself by Anthropic. So to me it's still wrapping an open primitive. Anyway, trade-offs.

the framework is great, but how are you gonna make real money?

  • Cloud-hosted observability + studio features (and self-hosted with EE bits).

    You can take a look at the cloud platform at cloud.mastra.ai; it's currently in beta.

    It's the same play we ran at Gatsby to get to several million in ARR in a couple of years.

Off-topic, but how much is AI used these days for generating code at your place? Curious because we've seen a major shift in recent months where almost everything is generated. Still human-checked, with human quality gates. A big difference compared to last year.

  • There's the normal stuff you'd expect -- we're all Opus-pilled, use Claude Code, have a PR review bot, etc. But it's been especially helpful with highly templatized code like our storage adapters: we already have 10-15 working examples, which makes the n+1st adapter almost trivial to write.

So the ultimate real-life use case of this is having a bubble on your site that you click to chat with a bot?! Most users prefer to chat with an actual human being 99% of the time, or immediately ask the bot to chat with one.

  • Less frequently websites and more frequently SaaS apps. For example, Sanity released a content agent in their CMS, and Factorial released an agent inside their HR/payroll product.

    But tons of other use cases too, eg dev teams at Workday and PayPal have built an agentic SRE to triage their alerts, etc.