Comment by NitpickLawyer

1 day ago

This is cool! Seems like this is what AutoGen Studio wanted to be, and what a lot of "agentic" libs fell short of: a way to chain things together using natural language.

Quick questions (I only looked at the demo video and briefly skimmed the docs, sorry if the qs are explained somewhere):

- it looks to me like a lot of the heavyweight "logic" is handled via prompts (when a new agent is created, your copilot edits the "prompts"). Have you tested this w/ various models (and especially any open-weights ones) to make sure the flows still work? This reminds me of the very early agent libraries that worked w/ oAI GPTs but not much else.

- if the above assumption is correct, are there plans to use newer libs where a lot of the logic / lifting is done by code instead of simply chaining prompts and hoping the model can handle it? (A2A, pydantic, griptape, etc.)

Thanks!

1. That's right - Rowboat's agent instructions are currently written in structured prompt blocks, and a lot of the logic does live there (with @mentions for tools, other agents, and reusable prompts) - the first sketch below shows roughly how this maps onto the underlying SDK. We support oAI GPTs at the moment (we chose to start with the oAI Agents SDK), but we're actively working on expanding to other LLMs as well; one of our community contributors just created a fork for Rowboat + OpenRouter. Re: performance, we expect other closed LLMs to perform comparably, and (with good prompt hygiene + role instructions) open LLMs as well, as long as each agent's scope is kept precise.

2. We've been discussing both A2A and pydantic! Right now, Rowboat is designed to be prompt-first, but we're integrating more typed interfaces - the second sketch below shows the kind of thing we mean. Design-wise, prompts will likely stay central, encoding part of the logic and acting as the glue layer between more code-based components. Just as code has comments, config, and DSLs, agent systems can benefit from human-readable intent even when the core logic is more structured.
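To make point 1 concrete, here's a minimal sketch of a two-agent handoff using the OpenAI Agents SDK that Rowboat builds on. The agent names, instructions, and the lookup_order tool are hypothetical stand-ins (Rowboat's actual prompt blocks and @mention resolution aren't shown); the point is just that the routing logic lives in natural-language instructions rather than code:

```python
from agents import Agent, Runner, function_tool

# Hypothetical tool - stands in for whatever an @mentioned tool resolves to.
@function_tool
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stubbed out for the example)."""
    return f"Order {order_id}: shipped"

support_agent = Agent(
    name="Support",
    instructions=(
        "Answer order questions. Call the lookup_order tool "
        "whenever the user gives an order ID."
    ),
    tools=[lookup_order],
)

# The routing "logic" here is plain prompt text, not code - the
# prompt-first style described in point 1 above.
triage_agent = Agent(
    name="Triage",
    instructions=(
        "Route order-related questions to the Support agent; "
        "answer everything else yourself."
    ),
    handoffs=[support_agent],
)

result = Runner.run_sync(triage_agent, "Where is order 4821?")
print(result.final_output)
```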
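And for point 2, a sketch of what "prompts as glue plus typed interfaces" could look like, here using pydantic with the same SDK. The TicketTriage model and the instructions are illustrative assumptions, not Rowboat's API:

```python
from pydantic import BaseModel
from agents import Agent, Runner

# Hypothetical typed output: the prompt carries the human-readable intent,
# while the pydantic model pins down the structure of the result.
class TicketTriage(BaseModel):
    category: str
    urgency: int  # e.g. 1 (low) to 5 (critical)
    summary: str

triage_agent = Agent(
    name="Triage",
    instructions=(
        "Read the support ticket, classify it, and keep the "
        "summary to one sentence."
    ),
    output_type=TicketTriage,  # typed interface; the prompt stays the glue
)

result = Runner.run_sync(triage_agent, "My invoice PDF won't download since yesterday.")
print(result.final_output)  # a validated TicketTriage instance
```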

Does that make sense?