Comment by adra

3 years ago

Most of our cloud-hosted request/response times are in the 1-10 ms range, and that includes the actual request being processed on the other side. Unless there's a poorly performing O(N) stinker in the mix, most of the latency on a typical request is incurred user -> datacenter, not in machine-to-machine overhead. This article is more than a little bonkers.