Comment by kiitos
1 month ago
database on the same machine as the application server, RPS limits enforced via
var issuedRequests = i + 1;
if (issuedRequests % REQUESTS_PER_SECOND == 0 && issuedRequests < REQUESTS) {
System.out.println("%s, %d/%d requests were issued, waiting 1s before sending next batch..."
.formatted(LocalDateTime.now(), issuedRequests, REQUESTS));
Thread.sleep(1000);
}
don't take any conclusions away from this post, friends
That's intentional; I wanted to test at the REQUESTS_PER_SECOND max in every test case.
Same with the db - I wanted to see what kind of load a system (not just the app) deployed to a single machine can handle.
It can obviously be optimized even further; I didn't try to do that in the article.
Based on that code snippet, and making some (possibly unjustified) assumptions about the rest of the code, your actual request rate could be as low as 50% of your claimed request rate:
Suppose it takes 0.99s to send REQUESTS_PER_SECOND requests. Then you sleep for 1s. Result: You send REQUESTS_PER_SECOND requests every 1.99s. (If sending the batch of requests could take longer than a second, then the situation gets even worse.)
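The fix this critique points at is deadline-based pacing: sleep only for whatever remains of the current one-second window instead of a flat second, so batch send time doesn't stretch the period. A minimal sketch (my own illustration, not the article's code; the REQUESTS / REQUESTS_PER_SECOND constants and the no-op issueRequest stand-in are assumed):

```java
import java.util.concurrent.TimeUnit;

public class PacedLoadGen {
    static final int REQUESTS = 10;            // assumed demo values
    static final int REQUESTS_PER_SECOND = 5;

    public static void main(String[] args) throws InterruptedException {
        long windowStart = System.nanoTime();
        for (int i = 0; i < REQUESTS; i++) {
            issueRequest(i); // stand-in for the real HTTP call
            int issued = i + 1;
            if (issued % REQUESTS_PER_SECOND == 0 && issued < REQUESTS) {
                // sleep only the remainder of the 1s window, not a fixed 1s
                long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - windowStart);
                long remainingMs = 1000 - elapsedMs;
                if (remainingMs > 0) Thread.sleep(remainingMs);
                windowStart = System.nanoTime();
            }
        }
    }

    static void issueRequest(int i) { /* no-op stand-in for the demo */ }
}
```

With this pacing, a batch that takes 0.99s to send is followed by only ~0.01s of sleep, keeping the period at ~1s rather than ~1.99s.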
The issue GP has with app and DB on the same box is a red herring -- that was explicitly the condition under test.
That's not true; it's sleeping while the thread pool is busy doing requests - analyze the code again :)
i mean the details are far beyond what can be effectively communicated in an HN comment, but if your loadgen tool is doing anything like sleep(1000ms), it is definitely not generating any kind of sound requests-per-second load against its target
and, furthermore, if the application and DB are co-located on the same machine, you're co-mingling service loads, and definitely not measuring or capturing any kind of useful load numbers, in the end
tl;dr is that these benchmarks/results are ultimately unsound, it's not about optimization, it's about validity
if you want to benchmark the application, then either you (a) mock the DB at as close to 0 cost as you can, or (b) point all application endpoints to the same shared (separate-machine) DB instance, and make sure each benchmark run executes exactly the same set of queries against a DB instance that is 100% equivalent across runs, resetting in between each run
The point of the test was to test a SYSTEM on the same machine, not just the app - db and app are on the same machine by design, not mistake.
Tests, on the other hand, were executed on multiple different machines - it's all described in the article. Sleep works properly, because there's an unbounded thread pool that makes the http requests - each request has its own virtual thread.
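The pattern being described here - the submitting loop returns almost immediately because each request runs on its own virtual thread, so the 1s sleep paces submission rather than completion - can be sketched like this (my reconstruction, not the article's exact code; the 200ms Thread.sleep stands in for a slow HTTP request):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadBatch {
    // Submit `tasks` fire-and-forget jobs, one virtual thread each (Java 21+).
    static int run(int tasks) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(200); // stand-in for a slow HTTP request
                    } catch (InterruptedException ignored) {}
                    completed.incrementAndGet();
                });
            }
            // the loop above finishes in milliseconds; nothing here blocks per request
        } // implicit close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("completed=" + run(100));
    }
}
```

Note this supports the author's point that the sleep doesn't block in-flight requests, but not the rate-accuracy point: if submission itself takes a nontrivial fraction of a second, a fixed 1s sleep still stretches the period.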
you can conclude this may be optimized further and still take his numbers as at least a baseline
it's not about optimization, it's about soundness, and the numbers aren't sound