Mirror of http://git.haproxy.org/git/haproxy.git/, synced 2025-01-20 04:30:46 +00:00
Commit 4cae3bf631
When using "http-reuse safe", which is the default, a new incoming connection does not automatically reuse an existing connection for the first request, as we don't want to risk losing the contents if we know the client will not be able to replay the request. A side effect is that when dealing with mostly http-close traffic, the reuse rate is extremely low and we keep accumulating server-side connections that may never be reused. At some point we're limited by the number of file descriptors, but when the system is configured with very high FD limits, we can still reach the limit of outgoing source ports and make the system slow down significantly while trying to find an available port for outgoing connections.

A simple test on my laptop with ulimit 100000 and with the following config results in the load immediately dropping after a few seconds:

    listen l1
        bind :4445
        mode http
        server s1 127.0.0.1:8000

As can be seen, the load falls from 38k cps to 400 cps during the first 200 ms (in fact when the source port table is full and connect() takes ages to find a spare port for a new connection):

    $ injectl464 -p 4 -o 1 -u 10 -G 127.0.0.1:4445/ -F -c -w 100
      hits ^hits hits/s  ^h/s   bytes  kB/s  last  errs  tout htime  sdht ptime
      2439  2439  39338 39338  356094  5743  5743     0     0   0.4   0.5   0.4
      7637  5198  38185 37666 1115002  5575  5499     0     0   0.7   0.5   0.7
      7719    82  25730   820 1127002  3756   120     0     0  21.8  18.8  21.8
      7797    78  19492   780 1138446  2846   114     0     0  61.4   2.5  61.4
      7877    80  15754   800 1150182  2300   117     0     0  58.6   0.5  58.6
      7920    43  13200   430 1156488  1927    63     0     0  58.9   0.3  58.9

At this point, lots of connections are indeed in use, for only 10 connections on the frontend side:

    $ ss -ant state established | wc -l
    39022

This patch makes sure we never keep more idle connections than we've ever had outstanding requests on a server. This way the total number of idle connections will never exceed the sum of maximum connections. Thus highly loaded servers will be able to get many connections and lightly loaded servers will keep fewer. Ideally we should apply similar limits per process and per backend, but in practice this already addresses the issues pretty well:

    $ injectl464 -p 4 -o 1 -u 10 -G 127.0.0.1:4445/ -F -c -w 100
      hits ^hits hits/s  ^h/s   bytes  kB/s  last  errs  tout htime  sdht ptime
      4423  4423  40209 40209  645758  5870  5870     0     0   0.2   0.4   0.2
      8020  3597  40100 39966 1170920  5854  5835     0     0   0.2   0.4   0.2
     12037  4017  40123 40170 1757402  5858  5864     0     0   0.2   0.4   0.2
     16069  4032  40172 40320 2346074  5865  5886     0     0   0.2   0.4   0.2
     20047  3978  40013 39386 2926862  5842  5750     0     0   0.3   0.4   0.3
     24005  3958  40008 39979 3504730  5841  5837     0     0   0.2   0.4   0.2

    $ ss -ant state established | wc -l
    234

This patch must be backported to 2.0. It could be useful in 1.9 as well, even though pools and reuse are not enabled by default there.
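To make the accounting concrete, here is a minimal C sketch of the idea described above. It is not HAProxy's actual code: the `server` fields and the `srv_use_conn()`/`srv_release_conn()` helpers are invented for illustration. The point is simply that each server tracks the highest number of concurrently used connections ever observed, and a connection finishing a request is only parked in the idle list while the idle count stays below that high-water mark; otherwise it is closed.

```c
#include <stdbool.h>

/* Hypothetical per-server counters, for illustration only. */
struct server {
    unsigned int curr_used_conns; /* connections currently serving a request */
    unsigned int max_used_conns;  /* highest value curr_used_conns ever reached */
    unsigned int curr_idle_conns; /* connections currently kept idle for reuse */
};

/* Called when a connection starts being used for a request: raise the
 * high-water mark of outstanding requests if needed. (If the connection was
 * taken from the idle list, curr_idle_conns would be decremented first.)
 */
static void srv_use_conn(struct server *srv)
{
    if (++srv->curr_used_conns > srv->max_used_conns)
        srv->max_used_conns = srv->curr_used_conns;
}

/* Called when a request completes. Returns true if the connection may be
 * kept in the idle list, false if it should be closed: we never keep more
 * idle connections than the maximum number of requests ever outstanding.
 */
static bool srv_release_conn(struct server *srv)
{
    srv->curr_used_conns--;
    if (srv->curr_idle_conns >= srv->max_used_conns)
        return false;            /* over the watermark: close it */
    srv->curr_idle_conns++;      /* below the watermark: keep it idle */
    return true;
}
```

Because the watermark only grows with real demand, busy servers naturally retain a large idle pool, while a server that only ever saw a handful of concurrent requests keeps at most that many idle connections.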
- common
- import
- proto
- types