Comment by closeparen
19 hours ago
Exactly. There should be no problem having tens of thousands of stateless, I/O bound database-transaction-wrapper endpoints in the same service. You're not going to run out of memory to hold the binary or something. If you want to partition physical capacity between groups of endpoints, you can probably accomplish that at the load balancer.
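As a sketch of that load-balancer approach, assuming an nginx front end and a hypothetical split between a heavy `/reports/` endpoint group and everything else (the pool names and hosts below are invented for illustration):

```nginx
# Hypothetical nginx config: the same monolith binary runs in two pools,
# and physical capacity is partitioned per endpoint group at the LB.
upstream api_pool {
    server app1.internal:8080;
    server app2.internal:8080;
}

upstream reports_pool {
    # Separate, smaller pool: expensive report endpoints
    # can't starve the general API pool.
    server app3.internal:8080;
}

server {
    listen 80;

    location /reports/ {
        proxy_pass http://reports_pool;
    }

    location / {
        proxy_pass http://api_pool;
    }
}
```

Every instance serves the full binary; only routing and pool sizing differ, so no code split is needed to isolate capacity.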
Having the latent capability to serve endpoint A in the binary is not interfering with endpoint B's QPS unless it implies some kind of crazy background job or huge in-memory dataset. Even in this case, monoliths normally have a few different components according to function: API, DB, cache, queue, background worker, etc. You can group workloads by their structure even if their business purposes are diverse.