OP here, everything is available on GitHub:
- https://github.com/appdotbuild/agent
- https://github.com/appdotbuild/platform
And we also blogged[1] about how the whole thing works. We're very excited about getting this out, but we still have a ton of improvements we'd like to make. Please let us know if you have any questions!
[1]: https://www.app.build/blog/app-build-open-source-ai-agent
Probably not the question you want to hear, but what template or CSS lib is that for the landing page? I'm really in love with it.
Also open-source: https://github.com/appdotbuild/website.
An important part of the context is missing or was cut off: it's for building apps on top of the Neon platform (an open-source PostgreSQL SaaS).
i.e., inextricably coupled to their services? Or is it a matter of swapping out a few "provider" modules?
Completely agnostic. If you run it locally, we provide a Docker Compose setup; if you have other deployment preferences, pointing to your own DB is just a matter of changing an env var: https://github.com/appdotbuild/agent/blob/main/agent/trpc_ag...
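For illustration, here's roughly what that looks like inside a generated app. The env var name (DATABASE_URL) and the pg client are assumptions on my part, so check the linked file for the real wiring:

    // db.ts: hypothetical sketch of env-driven DB config.
    // Works against Neon, the docker-compose Postgres, or any other
    // Postgres instance; only the connection string changes.
    import { Pool } from "pg";

    const connectionString = process.env.DATABASE_URL; // assumed var name
    if (!connectionString) {
      throw new Error("DATABASE_URL is not set");
    }

    export const pool = new Pool({ connectionString });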
We've also included baseline Cursor rules in case you want to hack on this manually: https://github.com/appdotbuild/agent/tree/main/agent/trpc_ag...
Where we are tied is the LLM provider: you'll need to supply your own keys for Anthropic / Gemini.
We did a couple of runs on top of Ollama + Gemma, so expect support for local LLMs. Can't swear to the timeline, but one of our core contributors recently built a water-cooled rig with a bunch of 3090s, so my guess is "pretty soon".
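To be clear, that support isn't wired in yet, but since Ollama exposes an OpenAI-compatible endpoint, a local run would look something like this (the model name and prompt are just placeholders):

    // Hypothetical local-LLM client: the OpenAI SDK pointed at
    // Ollama's OpenAI-compatible endpoint, talking to a local Gemma.
    // Assumes you've already run `ollama pull gemma2`.
    import OpenAI from "openai";

    const client = new OpenAI({
      baseURL: "http://localhost:11434/v1", // Ollama's default port
      apiKey: "ollama", // required by the SDK, ignored by Ollama
    });

    const response = await client.chat.completions.create({
      model: "gemma2",
      messages: [{ role: "user", content: "Scaffold a todo app schema." }],
    });

    console.log(response.choices[0].message.content);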
The CLI for this feels extremely buggy. I'm attempting to build the application, but the screen is flickering like crazy: https://streamable.com/d2jrvt
Yeah, we have a PR in the works for this (https://github.com/appdotbuild/platform/issues/166); it should be fixed tomorrow!
Alright, sounds good. Question: what LLM does this use out of the box? Is it using the models provided by GitHub (after I give it access)?
Average experience for AI-made/related products.
Exactly. Non-AI projects have always been easy to build without issues. That's why we have so many build systems. We perfected it the first try and then made lots of new versions based on that perfect Makefile.