Comment by davidpolberger
14 days ago
I'm a co-founder of Calcapp, an app builder for formula-driven apps, and I recently received an email from a customer ending their subscription. They said they appreciated being able to kick the tires with Calcapp, but had now fully moved to an AI-based platform. So we're seeing this reality play out in real time.
The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
Sorry to hear about the customer churn, but the MCP-first strategy makes sense to me and seems like it could be really powerful. I also suspect the bring-your-own-agent future will be really exciting, and I've been surprised we haven't seen more of it play out already.
Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value will be captured -- by SaaS vendors, or by the LLM providers themselves.
I saw a Second Brain demo [0] built with no-code, using AI inside the widgets to do the work. It took a little handholding, but it turned out to be very flexible, and it showed me a new way to look at the whole industry.
[0] https://youtu.be/gLaMDOrDGHA?si=CIWVD-TLJrPju1RO
Skills are the way. MCP is decent if you just have a few "endpoints"; otherwise it just pollutes the context.
I highly suggest you expose functionality through GraphQL. It lets users send out an agent with a goal like "Figure out how to do X", and because GraphQL has introspection, the agent can find stuff pretty reliably! It's really lovely as an end user. Best of luck!
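To make the introspection point concrete, here is a minimal sketch of how an agent might discover what a GraphQL API offers. The introspection query shape (`__schema`, `types`) is part of the GraphQL spec, but the schema contents and the `App` type below are hypothetical, and a mock response stands in for a live endpoint:

```python
import json

# An abbreviated form of the standard GraphQL introspection query.
# An agent can POST this to any spec-compliant endpoint to learn
# what types and operations exist -- no hand-written docs needed.
INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    types { name kind description }
  }
}
"""

def list_queryable_types(introspection_result: str) -> list[str]:
    """Extract the object types an agent could explore further."""
    schema = json.loads(introspection_result)["data"]["__schema"]
    return [
        t["name"]
        for t in schema["types"]
        # Skip scalars and the spec's built-in "__"-prefixed meta types.
        if t["kind"] == "OBJECT" and not t["name"].startswith("__")
    ]

# Mock response standing in for what a real endpoint would return:
mock = json.dumps({"data": {"__schema": {
    "queryType": {"name": "Query"},
    "types": [
        {"name": "Query", "kind": "OBJECT", "description": None},
        {"name": "App", "kind": "OBJECT", "description": "A formula-driven app"},
        {"name": "String", "kind": "SCALAR", "description": None},
        {"name": "__Type", "kind": "OBJECT", "description": None},
    ],
}}})

print(list_queryable_types(mock))  # ['Query', 'App']
```

In practice the agent would iterate: list the types, then introspect the fields of whichever type looks relevant to its goal.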
A proper REST API would also work without all the extra overhead of GraphQL.
People may dislike XML, but it is easy to build a REST API with, and it works well as an interface between computer systems where no human has to see the syntax.
It depends mostly on efficiency: GraphQL (or OData, a REST-compliant alternative with more or less the same functionality) gives the client more control out of the box to tune the response it needs. It can control the depth of the associated objects it wants, filter out what it doesn't need, etc. This can make a big difference for a client's performance. I actually like OData more than GraphQL for this purpose, as it is REST-compliant and has standardized more of the protocol.
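A quick sketch of what that client-side control looks like in OData. The `$select`, `$filter`, `$expand`, and `$top` query options are standardized by OData; the base URL, the `Orders` entity, and the field names below are made up for illustration:

```python
from urllib.parse import urlencode

def odata_url(base: str, entity: str,
              select=None, flt=None, expand=None, top=None) -> str:
    """Build an OData query URL; the client decides exactly what it gets."""
    params = {}
    if select:
        params["$select"] = ",".join(select)   # only these fields
    if flt:
        params["$filter"] = flt                # server-side filtering
    if expand:
        params["$expand"] = ",".join(expand)   # pull in related objects
    if top is not None:
        params["$top"] = str(top)              # cap the result size
    # Keep $ , ( ) ' unescaped so the options stay readable;
    # spaces in filter expressions are encoded as '+'.
    query = urlencode(params, safe="$,()'")
    return f"{base}/{entity}?{query}" if query else f"{base}/{entity}"

url = odata_url(
    "https://example.com/odata", "Orders",
    select=["Id", "Total"],
    flt="Total gt 100",
    expand=["Customer"],
    top=10,
)
print(url)
# https://example.com/odata/Orders?$select=Id,Total&$filter=Total+gt+100&$expand=Customer&$top=10
```

The point is that the shaping happens in the URL, so any HTTP client (or agent) can trim the response without server-side changes.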
REST + Swagger I'd say
I tried this recently and found the token overhead makes it prohibitive for any non-trivial schema. Dumping the full introspection result into the context window gets expensive fast and seems to increase hallucination rates compared to just providing specific, narrow tool definitions.
A friend (and colleague, disclaimer) recently pushed this to GitHub. It passes data through a DuckDB layer exactly to avoid context bloat:
https://github.com/agoda-com/api-agent
It's worth taking a look to see multiple approaches to the problem.
Hasura is working on this approach: https://promptql.io