I have read the autocannon documentation several times, but I cannot make sense of the result tables.
In the first table, the second column shows 2.5% – 5 ms. Does this mean that 2.5% of the 132k requests made in 10.03 s had a latency of 5 ms? And that 50% of the 132k had a latency of 7 ms? And so on for the rest of the first table?
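To make that first interpretation concrete, here is a small sketch of how I imagine a percentile column would be computed. The latency values are made up for illustration (my real run had ~132k samples), and the nearest-rank method is just my guess at how the statistic works:

```python
import math

# Made-up latency samples in ms, NOT my real results.
latencies = sorted([3, 4, 5, 5, 6, 7, 7, 7, 8, 12])

def percentile(sorted_values, p):
    """Value at or below which p percent of the samples fall
    (nearest-rank method)."""
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

print(percentile(latencies, 2.5))  # -> 3
print(percentile(latencies, 50))   # -> 6
```

Is that roughly what the 2.5% and 50% columns report, just over all requests of the run?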
Or does it mean that, over the course of the test, when it was 2.5% of the way through, requests had a latency of 5 ms, and when it was 50% of the way through, requests had a latency of 7 ms? If that is the case, doesn't autocannon start with 100 concurrent connections? Or does it start with a few and ramp up to the limit of 100?
In the second table, the second column shows 1% – 6891 req/sec. Does this mean that 6891 requests were processed in 1 second? And in the next column, that the same 6891 requests were processed in 2.5 seconds? And does that logic continue for the rest of the table?
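My alternative reading of the second table would amount to something like the sketch below: counting how many requests finished inside each one-second window of the run, and then taking statistics over those per-second counts. The timestamps are invented for illustration; I don't know if this is what autocannon actually does:

```python
from collections import Counter

# Made-up completion timestamps in seconds since test start,
# NOT taken from my real run.
timestamps = [0.1, 0.3, 0.9, 1.2, 1.5, 2.4]

# Bucket each completed request into its one-second window.
per_second = Counter(int(t) for t in timestamps)

print(dict(sorted(per_second.items())))  # -> {0: 3, 1: 2, 2: 1}
```

Are the Req/Sec rows percentiles over per-second samples like these, rather than cumulative counts?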
From what I understand, once autocannon opens its concurrent connections, it sends a new request on each socket immediately after the previous one completes. So why, based on my reading above, are more requests per second processed as more time passes?