Comment by quaintdev

2 days ago

This raises the question of whether we should move to minimal microservices so that the whole project lives in the context of the LLM. I hardly have to do anything when I'm working on a small project with an LLM.

Why not take it a step further? Make each function in the codebase its own project. Then the codebase can fit into the context window easily. All you have to do is debug issues between functions calling each other.

  • Wait, is this a joke about Lambda?

    • I don't think it's about left-pad; it's about the idea that complexity increases tremendously when you take a cloud of "small" things all communicating with each other. You've just pushed the complexity elsewhere. Claude can easily crunch the small microservice, but you're pushing the complexity into communication issues, race conditions, etc.


In my experience, the result is just more crawling across the separate microservices and additional reasoning to confirm how it all fits together.

Monolithic codebases are easier to crawl for any problem that can't be conveniently isolated to a single microservice.

  • A good API should be documented, and AI should not have to read the internal code to understand how to use it.

    • Like I said, if your work is already contained neatly inside one microservice then it doesn't matter.

      The same would be true in a monolith: The context to understand what's happening would be contained to a few files.

      When the work starts crossing domains and potentially requires insight into how other pieces work, fail, scale, etc., then the microservice model blows up complexity faster than anything, even if you have the API documented.

Ironically, this is accidentally begging the question: the claim that breaking services up to fit LLM context windows would be good because they would then fit in LLM context windows.

Maybe you're right, but I'm aghast at how much engineering over the last 15 years has been spent breaking up working monoliths to fit better within the pricing model of an external provider (first it was AWS). Those prices can change.

There are good reasons to use microservices but so often they're used for the wrong reasons.

I've done the opposite: moving multiple tightly coupled repos into a single monorepo. It saves the steps of the LLM realizing there's a bigger context, finding the other repo, then scanning/searching it. Especially for fixes that are simply one line each in two repos.

  • I'm a fan of the monorepo in general, even before LLMs. If you're using git, it leverages git's best feature IMO: the commit as a snapshot of the entire repo. I've worked on so many projects where tightly coupled things are split across repos because it's thought of as a best practice, and it just makes it more difficult to figure out what code you are actually running.

Generally speaking, no. Treat your IP (the code that runs your business and makes it competitive or special) as precious, and don't make it subservient to infra. It should be in whatever format (code, architecture, structure) best serves it.

  • And yet so many companies spent the last decade doing exactly that to fit into AWS pricing models.

Orchestration between those services and the integration testing for any reasonably complex change can still be quite large.

The whole service might fit in a context window but the details of the system around it will still be relevant.