One thing I'd never use LLMs for, even though I use them daily, a lot, and have since Codex CLI became available, is writing tests wholesale.
Take a look at this 1400-line test file: https://github.com/vm0-ai/vm0/blob/1aaeaf1fed3fd07afaef8668b... and it becomes really clear why we shouldn't yet use LLMs for this without detailed review.
Obviously, you want your tests to test the implementation, not to test that the mocks are working. I didn't read all the code, but a lot of it is not great. Generally, you want to treat your test code like any other production code: build abstractions and a simple design/architecture that lets you heavily reduce test duplication. Otherwise you end up with huge balls of spaghetti that are impossible to get a clear overview of, hard to change reasonably, and hard to understand in terms of what is actually being tested. Like that run.test.ts.
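To make that concrete, here is a made-up sketch in vitest/jest style; runAgent and the sandbox shape are hypothetical stand-ins, not code from the vm0 repo:

    // Hypothetical sketch; runAgent and the sandbox shape are invented here,
    // not taken from the vm0 repository.
    import { it, expect, vi } from "vitest";
    import { runAgent } from "./run"; // assumed unit under test

    // Anti-pattern: the only assertion is that the mock got called, so the
    // test passes even if runAgent does nothing useful with the result.
    it("calls the sandbox", async () => {
      const sandbox = { exec: vi.fn().mockResolvedValue({ exitCode: 0, stdout: "" }) };
      await runAgent("echo hi", sandbox);
      expect(sandbox.exec).toHaveBeenCalled(); // tests the mock, not the behaviour
    });

    // Better: a small factory removes per-test mock boilerplate, and the
    // assertion targets observable behaviour of the unit under test.
    const makeSandbox = (exitCode = 0) => ({
      exec: vi.fn().mockResolvedValue({ exitCode, stdout: "" }),
    });

    it("reports failure when the command exits non-zero", async () => {
      const result = await runAgent("false", makeSandbox(1));
      expect(result.status).toBe("failed"); // behaviour, not mock plumbing
    });

A handful of helpers like makeSandbox is usually all it takes to keep a file like that from ballooning to 1400 lines of copy-pasted setup.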
The whole project seems to be LLM-coded; even the docs are: https://github.com/vm0-ai/vm0/blob/main/.vm0/agents/docs-wri...
That's what I keep thinking about when I see those "once I started using Claude, I cut my development time by 95%" posts. Are they really making 20x the software, or are they and their customers simply believing that, watching all those endless streams of green checkmark and rocket emojis?
I cannot find anything in the documentation that shows exactly where or how this "cloud sandbox" is deployed or maintained.
Is this Docker, Kubernetes, KVM/Xen, AWS, Azure, GCP, Fly.io, some other VM tech, or some rando's basement?
Very little detail and I don't trust this at all.
The canonical solution in this space so far is https://github.com/Dicklesworthstone/agentic_coding_flywheel..., and I would bet that this tool is just a paid version of the same, or worse, just a paid wrapper.
I am also a bit confused by how this is all presented but it seems to be on GitHub too: https://github.com/vm0-ai/vm0
Especially with the wait list sign up.
E2B https://github.com/vm0-ai/vm0/blob/main/turbo/package.json#L...
There is still space left in the title to add more context. How about something like "Vm0: build agents and automate workflows with natural language"?
I built something similar to this, @braid.ink, before LangGraph had their agent builder, because Claude Code kept referencing old documentation. But the problem ended up solving itself once LangGraph shipped their agent builder and Claude Code got better at navigating its documentation.
The only thing I would mention, having built a lot of agents and worked with a lot of plug-ins and MCPs, is that everything is super situation- and context-dependent. It's hard to spin up a general agent that's useful in a production workflow, because it requires so much configuration beyond a standard template. And if you're not very careful about monitoring it, it won't meet your requirements when it's done. When it comes to agents, precision and control are key.
This really resonates - the opacity problem is exactly what makes MCP-based agents hard to trust in production. You can't control what you can't see.
We built toran.sh specifically for this: it lets you watch real API requests from your agents as they happen, without adding SDKs or logging code. Replace the base URL, and you see exactly what the agent sent and what came back.
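For anyone unfamiliar with the pattern, the base-URL swap looks roughly like this; the proxy hostname below is a placeholder for illustration, not toran.sh's actual endpoint:

    // Sketch of the base-URL swap with the OpenAI Node SDK.
    // "proxy.example.com" is a made-up placeholder, not a real endpoint.
    import OpenAI from "openai";

    const client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      // Point the client at a logging proxy instead of the provider;
      // the proxy records each request/response pair and forwards it upstream.
      baseURL: "https://proxy.example.com/v1",
    });

    const reply = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "ping" }],
    });

Nothing else in the agent changes, which is the whole appeal of the approach.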
The "precision and control" point is key though - visibility is step one, but you also need guardrails. We're working on that layer too (keypost.ai for policy enforcement on MCP pipelines).
Would love to hear what monitoring approaches you've found work well for production agent workflows.
Still too much work. Can we get an AI that writes intents for me?
This astroturfed? Moderation fail.
AI slop.
Has there ever been a tool before AI that we just accept as being so actively hostile to users that we need all sorts of third-party bolt-ons to "secure" it? Like, "make sure you run your AI in a sandbox so it doesn't steal your secrets or wipe your hard drive" is for viruses, not actual tooling.
Well, obviously, `npm` has the same destructive power: a package might include a script that steals secrets or wipes a hard drive. But people just assume that usually it doesn't.
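Concretely, any dependency can declare lifecycle scripts that run arbitrary commands at install time; the package and script names here are made up for illustration:

    {
      "name": "innocent-looking-package",
      "version": "1.0.0",
      "scripts": {
        "postinstall": "node ./collect.js"
      }
    }

`npm install --ignore-scripts` (or `ignore-scripts=true` in .npmrc) skips these hooks, at the cost of breaking the packages that legitimately rely on them.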
The Internet?
We should probably stop using adjectives to describe programs/systems.
"Run" X,Y,Z...where, where does it run? "Isolated environment". How isolation was achieved? Is it a VM, if yes then what is the virtualization stack and what it contains? Is it firecracker, just a docker image? What are the defaults and network rules in this isolated environments?