Comment by overfeed
15 days ago
> [...]coding agents only get the information they actually need and nothing more
Extrapolating from this concept led me to a hot take I haven't had time to blog about: Agentic AI will revive the popularity of microservices, mostly due to the deleterious effect of context size on agent performance.
Why would they revive the popularity of microservices? They can just as well be used to enforce strict module boundaries within a modular monolith, keeping the codebase coherent without splitting off microservices.
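For illustration, here's a rough sketch of what enforcing such a boundary in CI could look like, assuming a Python codebase (package names are made up; tools like import-linter do this properly):

```python
import ast
from pathlib import Path

# Hypothetical boundary rule: billing may not import from shipping.
SOURCE = Path("myapp/billing")
FORBIDDEN = "myapp.shipping"

def boundary_violations() -> list[str]:
    found = []
    for path in SOURCE.rglob("*.py"):
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            if any(name.startswith(FORBIDDEN) for name in names):
                found.append(str(path))
    return found

# Run in CI: fail the build if an agent (or human) crosses the boundary.
assert not boundary_violations(), boundary_violations()
```

An agent working inside billing then gets an immediate, context-free failure the moment it reaches across the boundary, with no need to split anything into a separate service.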
And that's why they call it a hot take. No, it isn't going to give rise to microservices. You absolutely can have your agent perform high-level decomposition while maintaining a monolith. A well-written, composable spec is awesome. This has been true for human and AI coders for a very, very long time. The trick has always been getting a well-written, composable spec. AI can help with that bit, and I find that is probably the best part of this whole tooling cycle. I can actually interact with an AI to build that spec iteratively. Have it be nice and mean. Have it iterate among many instances and other models, all that fun stuff. It still won't make your idea awesome or make anyone want to spend money on it, though.
In a fresh project that is well documented and set up it might work better. Many of the issues agents have in my work stem from endpoints not always being documented correctly.
A real example that happened to me: the agent forgets to rename an expected parameter in the API spec for service 1. Now, when working on service 2, there is no way for the agent to find this mistake other than giving it access to service 1. And now you are back to "... effect of context size on agent performance ...". For context, we have ~100 services.
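To make the failure mode concrete, here's a toy illustration (all parameter names are hypothetical):

```python
# The agent renamed the parameter in service 1's spec,
# but service 2's client never learned about it.
service1_accepted_params = {"customer_id"}  # renamed here...
service2_request = {"user_id": "42"}        # ...but still sent as before

unknown = set(service2_request) - service1_accepted_params
if unknown:
    # An agent confined to service 2's repo has no way to discover this
    # mismatch; the information lives entirely in service 1.
    print(f"request rejected, unknown parameters: {sorted(unknown)}")
```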
One could argue these issues diminish over time as instruction files are updated, etc., but that also assumes the models follow instructions and don't hallucinate.
That being said, I do use agents quite successfully now - but I have to guide them a bit more than some care to admit.
> In a fresh project that is well documented and set up it might work better.
I guess this may depend on the domain, language, codebase, or some combination of the three. The biggest issue I've had with agents is when they go down the wrong path and it snowballs from there. Suddenly they are loading more context unrelated to the task and getting more confused. Documenting interfaces doesn't help if the source is available to the agent.
My agentic sweet spot is human-designed interfaces. Agents cannot mess up code they don't have access to, e.g. by inadvertently changing both the interface contract and the implementation.
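A rough sketch of what I mean, assuming a Python codebase (all names are made up): the humans own the contract file and keep it out of the agent's reach; the agent only edits the implementation module.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """Human-owned contract; the agent has no write access to this file."""
    def charge(self, account_id: str, amount_cents: int) -> str:
        """Return a transaction id; raise on failure."""
        ...

class AgentWrittenGateway:
    """Agent-owned implementation, kept in a separate module."""
    def charge(self, account_id: str, amount_cents: int) -> str:
        return f"txn-{account_id}-{amount_cents}"  # stand-in for a real call

# A type checker (e.g. mypy) flags any drift from the contract right here,
# so the agent can't quietly change the interface along with the code:
gateway: PaymentGateway = AgentWrittenGateway()
assert gateway.charge("acct-1", 500).startswith("txn-")
```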
> Agent forgets to rename an expected parameter in API spec for service 1
Document and test your interfaces/logic boundaries! I have watched this break many times on human teams: field renames, changes in optionality, undocumented field dependencies, etc., and there are challenging trade-offs around API versioning. Agents can't fix process issues.
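As one example of what such a test can look like, here's a minimal consumer-side contract check, assuming the provider's schema is checked into a shared artifact (field names are hypothetical):

```python
from jsonschema import ValidationError, validate  # pip install jsonschema

USER_SCHEMA = {
    "type": "object",
    "properties": {
        "user_id": {"type": "string"},
        "email": {"type": ["string", "null"]},  # optionality made explicit
    },
    "required": ["user_id"],
    "additionalProperties": False,  # silent renames surface as failures
}

def test_response_matches_contract():
    response = {"user_id": "42", "email": None}  # stand-in for a real call
    try:
        validate(instance=response, schema=USER_SCHEMA)
    except ValidationError as exc:
        raise AssertionError(f"contract broken: {exc.message}") from exc
```

Run on both sides of the boundary, a check like this catches a rename the moment it lands, instead of when a downstream agent (or human) trips over it.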