mirror of
http://git.haproxy.org/git/haproxy.git/
synced 2025-04-17 04:26:59 +00:00
haproxy public development tree
Measures with unbounded execution ratios under 40000 concurrent connections at 100 Gbps showed the following CPU bandwidth distribution between task classes depending on traffic scenarios:

  scenario             TC0   TC1   TC2   observation
  -------------------+-----+-----+-----+---------------------------
  TCP conn rate      :  29,   48,   23   221 kcps
  HTTP conn rate     :  29,   47,   24   200 kcps
  TCP byte rate      :   3,    5,   92   53 Gbps
  splicing byte rate :   5,   10,   85   70 Gbps
  H2 10k object      :  10,   21,   74   client-limited
  mixed traffic      :   4,    7,   89   2*1m+1*0: 11 kcps, 36 Gbps

Thus it seems that we always need a bit of bulk tasks even for short connections, which seems to imply suboptimal processing somewhere, and that there are roughly twice as many tasks (TC1=normal) as regular tasklets (TC0=urgent). This ratio stands even when data forwarding increases. So at first glance it looks reasonable to enforce the following ratio by default:

  - 16% for TL_URGENT
  - 33% for TL_NORMAL
  - 50% for TL_BULK

With this, the TCP connection rate climbs to ~225 kcps, and the mixed traffic pattern shows a more balanced 17 kcps + 35 Gbps with a 35 ms CLI request time, instead of 11 kcps + 36 Gbps and a 400 ms response time. The byte rate tests (1M objects) are not affected at all. This setting looks "good enough" to allow immediate merging, and could be refined later.

It's worth noting that it resists very well to a massive increase of the run queue depth and maxpollevents: with the run queue depth raised from 200 to 10000 and maxpollevents to 10000 as well, the CLI's request time is back to the previous ~400 ms, but the mixed traffic test reaches 52 Gbps + 7500 cps, which was never met with the previous scheduling model, while the CLI used to show ~1 minute response times. The reason is that in the bulk class it becomes possible to perform multiple rounds of recv+send and eliminate objects at once, increasing the L3 cache hit ratio and keeping the connection count low, without degrading latency too much.
Another test with mixed traffic involving 2/3 splicing on huge objects and 1/3 on empty objects without touching any setting reports 51 Gbps + 5300 cps and a 35 ms CLI request time.
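For reference, the run queue depth and poll event experiment described above maps to the global tuning keywords below. The values are the ones used in the test, shown only as a configuration sketch, not as a recommended setting:

```
global
    tune.runqueue-depth 10000
    tune.maxpollevents 10000
```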
.github
contrib
doc
ebtree
examples
include
reg-tests
scripts
src
tests
.cirrus.yml
.gitignore
.travis.yml
BRANCHES
CHANGELOG
CONTRIBUTING
INSTALL
LICENSE
MAINTAINERS
Makefile
README
ROADMAP
SUBVERS
VERDATE
VERSION
The HAProxy documentation has been split into a number of different files for ease of use.

Please refer to the following files depending on what you're looking for:

  - INSTALL for instructions on how to build and install HAProxy
  - BRANCHES to understand the project's life cycle and what version to use
  - LICENSE for the project's license
  - CONTRIBUTING for the process to follow to submit contributions

The more detailed documentation is located in the doc/ directory:

  - doc/intro.txt for a quick introduction on HAProxy
  - doc/configuration.txt for the configuration's reference manual
  - doc/lua.txt for the Lua reference manual
  - doc/SPOE.txt for how to use the SPOE engine
  - doc/network-namespaces.txt for how to use network namespaces under Linux
  - doc/management.txt for the management guide
  - doc/regression-testing.txt for how to use the regression testing suite
  - doc/peers.txt for the peers protocol reference
  - doc/coding-style.txt for how to adopt HAProxy's coding style
  - doc/internals for developer-specific documentation (not all up to date)