task_kill() may be used by any thread to kill any task with less overhead
than a regular wakeup. In order to achieve this, it bypasses the priority
tree and inserts the task directly into the shared tasklets list, cast as
a tasklet. The task_list_size is updated to make sure it is properly
decremented after execution of this task. The task will thus be picked by
process_runnable_tasks() after checking the tree and sent to the TL_URGENT
list, where it will be processed and killed.
If the task is bound to more than one thread, its first thread will be the
one notified.
If the task was already queued or running, nothing is done; only the flag
is added so that it gets killed before or after execution. Of course it's
the caller's responsibility to make sure any resources allocated by this
task were already cleaned up or taken over.
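As an illustration, a minimal usage sketch (the caller and its context
are hypothetical; only task_kill() itself comes from this patch):

    #include <haproxy/task.h>

    /* Any thread may kill any task. The caller must already have
     * cleaned up or transferred the resources owned by the task;
     * task_kill() only guarantees the task itself will die. */
    static void stop_background_job(struct task *job)
    {
        task_kill(job);  /* sets the kill flag, queues it as a tasklet */
    }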
This flag, when set, will be used to indicate that the task must die.
At the moment this flag may only be set by the task itself or by the
scheduler when placing it into the TL_NORMAL queue.
The next thread walking algorithm in commit 566df309c ("MEDIUM:
connections: Attempt to get idle connections from other threads.")
proved to be sufficient for most cases, but it still has some rough
edges when threads are unevenly loaded. If one thread wakes up with
10 streams to process in a burst, it will mainly take over connections
from the next one until that one has none left.
This patch implements a rotating index that is stored into the server
list and that any thread taking over a connection is responsible for
updating. This way the scan starts at a mostly random position and
avoids always picking from the same place. This results in a smoother
distribution overall
and a slightly lower takeover rate.
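A rough sketch of the idea (the helper and the exact field usage are
assumptions based on this description):

    /* Sketch: the scan starts at the server's rotating index, and the
     * thread that takes over a connection advances it, so successive
     * scans do not always drain the same sibling first. */
    static struct connection *pick_from_siblings(struct server *srv)
    {
        unsigned int start = HA_ATOMIC_LOAD(&srv->next_takeover) % global.nbthread;
        unsigned int k;

        for (k = 0; k < global.nbthread; k++) {
            unsigned int t = (start + k) % global.nbthread;
            struct connection *conn = try_takeover_from(srv, t); /* hypothetical */

            if (conn) {
                HA_ATOMIC_STORE(&srv->next_takeover, t + 1);
                return conn;
            }
        }
        return NULL;
    }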
There's a tricky behavior that was lost when the idle connections were
made shareable between threads in commit 566df309c ("MEDIUM: connections:
Attempt to get idle connections from other threads."): the ability
to retry from the safe list when looking for any type of idle connection
and not finding one in the idle list.
This is already important when dealing with long-lived connections,
since they ultimately all become safe, but that case is covered by the
fact that unused safe conns end up closing and are not looked up anymore
once connect_server() sees there are none.
But it's even more important when using server-side connections which
periodically close, because the new connections may spend half of their
time in safe state and the other half in the idle state, and failing
to grab one such connection from the right list results in establishing
a new connection.
This patch makes sure that a failure to find an idle connection results
in a new attempt at finding one from the safe list if available. In order
to avoid locking twice, connections are attempted alternately from the
idle then the safe list when picking from siblings. Tests have shown a
~2% performance increase from not locking twice.
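The alternation can be sketched as follows (the helpers and the lock
name are hypothetical; the point is the single lock round per sibling):

    /* For each sibling thread, try the idle list then the safe list
     * under the same lock, so a miss on the first list does not cost
     * a second locking round. */
    static struct connection *grab_idle_or_safe(struct server *srv, int thr)
    {
        struct connection *conn;

        HA_SPIN_LOCK(OTHER_LOCK, &takeover_lock[thr]);
        conn = pick_conn(&srv->idle_conns[thr]);
        if (!conn)
            conn = pick_conn(&srv->safe_conns[thr]);
        HA_SPIN_UNLOCK(OTHER_LOCK, &takeover_lock[thr]);
        return conn;
    }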
A typical test with 10000 connections over 16 threads with 210 servers
having a 1 millisecond response time and closing every 5 requests shows
performance degrading from 120k req/s down to 60-90k with an average
reuse rate of 44%. After the fix, the reuse rate rises to 79%
and the performance becomes stable at 254k req/s. Similarly the previous
test with full keep-alive has now increased from 96% reuse rate to 99%
and from 352k to 375k req/s.
No backport is needed as this is 2.2-only.
The problem with the way idle connections currently work is that it's
easy for a thread to steal all of its siblings' connections, then release
them, then have another one do the same, and so on. This happens even
more easily due to scheduling latencies, or to merged events inside the
same poll loop, which, when dealing with a fast server responding in
sub-millisecond delays, can really result in only one thread being fully
at work at a time.
In such a case, we perform a huge amount of takeover() which consumes
CPU and requires quite some locking, sometimes resulting in lower
performance than expected.
In order to fight against this problem, this patch introduces a new server
setting "pool-low-conn", whose purpose is to dictate when it is allowed to
steal connections from a sibling. As long as the number of idle connections
remains at least as high as this value, it is permitted to take over another
connection. When the idle connection count becomes lower, a thread may only
use its own connections or create a new one. By proceeding like this even
with a low number (typically 2*nbthreads), we quickly end up in a situation
where all active threads have a few connections. It then becomes possible
to connect to a server without bothering other threads the vast majority
of the time, while still being able to use these connections when the
number of available FDs becomes low.
We also use this threshold instead of global.nbthread in the connection
release logic, allowing more extra connections to be kept if needed.
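For illustration, the permission rule boils down to a check of this
shape (field names are assumptions, not the exact code):

    /* Stealing from a sibling is only permitted while the server
     * still holds at least "pool-low-conn" idle connections. */
    static inline int takeover_allowed(const struct server *srv)
    {
        return srv->curr_idle_conns >= srv->low_idle_conns;
    }

In a configuration this takes the form of e.g. "server srv1
192.168.1.10:80 pool-low-conn 32".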
A test performed with 10000 concurrent HTTP/1 connections, 16 threads
and 210 servers with 1 millisecond of server response time showed the
following numbers:
haproxy 2.1.7: 185000 requests per second
haproxy 2.2: 314000 requests per second
haproxy 2.2 lowconn 32: 352000 requests per second
The takeover rate goes down from 300k/s to 13k/s. The difference is
further amplified as the response time shrinks.
There was a logic bug in commit ddfe0743d ("MEDIUM: server: use the two
thresholds for the connection release algorithm"): instead of keeping
only our first idle connection when FDs become scarce, the condition was
inverted, enforcing this constraint unless FDs are scarce. This results
in fewer idle connections than permitted being kept under normal
conditions.
No backport needed.
In conn_backend_get() we can avoid locking other servers when trying
to steal their connections when we know for sure they will not have
one, so let's do it to lower the contention on the lock.
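A sketch of the shortcut (the per-thread counter follows the
description; the lock name and loop context are assumptions):

    /* Inside the sibling scan, a thread whose per-thread idle counter
     * is zero is skipped before its lock is ever taken. */
    for (t = 0; t < global.nbthread; t++) {
        if (t == tid || !srv->curr_idle_thr[t])
            continue;                        /* known empty: don't lock */
        HA_SPIN_LOCK(OTHER_LOCK, &takeover_lock[t]);
        /* ... attempt the takeover from thread t ... */
        HA_SPIN_UNLOCK(OTHER_LOCK, &takeover_lock[t]);
    }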
This command reuses the existing "show servers state" to also dump the
state of active and idle connections. The main use is to serve as a
debugging tool to troubleshoot connection reuse issues.
Actually the cleanup in commit 6ff8143f7 ("BUG/MINOR: proxy: fix
dump_server_state()'s misuse of the trash") allowed to spot that the
trash is never reset when dumping a server's state. I couldn't manage
to make it dump garbage even with large setups, nor could I find where
it's cleared between successive calls while other handlers explicitly
invoke chunk_reset(), so it seems to work a bit by luck.
Let's use chunk_printf() here for each turn; it makes things clearer.
This could be backported along with previous patch, especially if any
user reports occasional garbage appearing in the show servers output.
dump_server_state() claims to dump into a buffer but instead it writes
into a buffer then dumps the trash into the channel, so it only supports
being called with buf=&trash and doesn't need this buffer. There doesn't
seem to be any current impact of this mistake since the function is called
from one location only.
A backport may be performed if it helps fixing other bugs but it will not
fix an existing bug by itself.
This patch adds a missing break to end the loop in the case where '%['
is not properly closed with ']'.
The issue has been introduced with commit cd0d2ed ("MEDIUM: log-format:
make the LF parser aware of sample expressions' end").
In the pat_match_str() and pat_match_beg() functions, a trailing zero is
systematically added at the end of the string, even if the buffer is not
large enough to accommodate it. This is a possible buffer overflow. For
instance, when the alpn is matched against a list of strings, the sample
fetch is filled with a non-null-terminated string returned by the SSL
library. No trailing zero must be added at the end of this string, because
it would land outside the buffer.
So, to fix the bug, a trailing zero is added only if the buffer is large
enough to accommodate it. Otherwise, the sample fetch is duplicated. The
smp_dup() function adds a trailing zero to the duplicated string,
truncating it if it is too long.
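A simplified sketch of the fix (the buffer fields match the 2.2 sample
API; the helper itself is illustrative):

    /* Write the trailing zero only when there is room left in the
     * sample's buffer; otherwise duplicate the sample, since smp_dup()
     * NUL-terminates the copy and truncates it if needed. */
    static int ensure_trailing_zero(struct sample *smp)
    {
        if (smp->data.u.str.data < smp->data.u.str.size) {
            smp->data.u.str.area[smp->data.u.str.data] = '\0';
            return 1;
        }
        return smp_dup(smp);
    }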
This patch should fix issue #718. It must be backported to all supported
versions.
When not sharing pools (i.e. when building with -DDEBUG_DONT_SHARE_POOLS)
we have about 47 pools right now, while MAX_BASE_POOLS is only 32, meaning
that only the first 32 ones will benefit from a per-thread cache entry.
This totally kills performance when pools are not shared (roughly -20%).
Let's double the limit to gain some margin, and make it possible to set
it as a build option.
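The build-time override uses the classic guard pattern (the new default
follows this message; the exact header location is implied):

    /* Allow overriding from the build command line, e.g.
     * make ... DEFINE="-DMAX_BASE_POOLS=128"; otherwise use the
     * doubled default. */
    #ifndef MAX_BASE_POOLS
    #define MAX_BASE_POOLS 64
    #endif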
It might be useful to backport this to stable versions as they're likely
to be affected as well.
If the owning task is already dying (context was destroyed by fcgi_takeover)
there's no point taking the lock then releasing it later since all the code
in between is conditioned by a non-null context. Let's simplify this.
If the owning task is already dying (context was destroyed by h2_takeover)
there's no point taking the lock then releasing it later since all the code
in between is conditioned by a non-null context. Let's simplify this.
If the owning task is already dying (context was destroyed by h1_takeover)
there's no point taking the lock then releasing it later since all the code
in between is conditioned by a non-null context. Let's simplify this.
In commit 3ef7a190b ("MEDIUM: tasks: apply a fair CPU distribution
between tasklet classes") we compute a total weight to be used to
split the CPU time between queues. There is a mention that the
total cannot be null, which is based on the fact that we only get
there if thread_has_task() returns non-zero. But there is a very
small race which can break this assumption: if two threads conflict
on MT_LIST_ADDQ() on an empty shared list and both roll back before
trying again, it is possible that a first call to MT_LIST_ISEMPTY()
sees the first thread install itself, then the second call sees the
list empty once both have rolled back. Thus we could proceed with the
queue while it's temporarily empty and compute max lengths using a
divide by zero. This case is very hard to trigger, it seldom happens
even on 16 threads at 400k req/s.
Let's simply test for max_total and leave the loop when we've not
found any work.
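The fix is essentially a guard of this shape before the division
(placement per the description above):

    /* If the race left all queues empty, the total weight is zero and
     * must not be used as a divisor: leave, there is no work anyway. */
    if (!max_total)
        return;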
No backport is needed, that's 2.2-only.
The parsing of http deny rules with no argument, or with only the
deny_status argument, is buggy if followed by an ACL expression (starting
with the "if" or "unless" keyword). Instead of using the proxy errorfiles,
a dummy error is used. To fix the bug, the parsing function must also
check for the "if" or "unless" keyword in such cases.
This patch should fix issue #720. No backport is needed.
The algorithm improvement in bdb86bd ("MEDIUM: server: improve estimate
of the need for idle connections") is still not enough because there's
a hard threshold between being below and above the FD count, so it
continues to end up with many killed connections.
Here we're proceeding differently. Given that there are two configured
limits, a low and a high one: we drop connections when the high limit
is reached (which the killing task already does anyway); between the
low and the high threshold, we only keep the connection if our idle
entries are empty (with a preference for safe ones); and below the low
threshold, we keep any connection so as to give it a chance of being
reused or taken over by another thread.
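The decision can be sketched like this (threshold and counter names are
assumptions drawn from the description):

    /* Release-time decision: above the high limit, drop; between the
     * two limits, keep only if our own idle entries are empty; below
     * the low limit, always keep. */
    static int keep_released_conn(const struct server *srv, int thr,
                                  unsigned int low_limit, unsigned int high_limit)
    {
        if (srv->curr_idle_conns >= high_limit)
            return 0;                          /* FDs scarce: drop it   */
        if (srv->curr_idle_conns >= low_limit)
            return !srv->curr_idle_thr[thr];   /* keep only the first   */
        return 1;                              /* plenty of room: keep  */
    }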
Proceeding like this results in far fewer dropped connections; we
typically see a 99.3% reuse rate (76k conns for 10M requests over 200
servers and 4 threads, with 335k takeovers or 3%), and much less CPU
usage variations because there are no more bursts to try to kill extra
connections.
It should be possible to further improve this by counting the number
of threads using a server and trying to optimize the amount of
per-thread idle connections so that it is approximately balanced among
the threads.
The idle server connection estimates brought in commit bdb86bd ("MEDIUM:
server: improve estimate of the need for idle connections") were committed
without the minimum of 1 idle conn needed for the current thread. The net
effect is that there are bursts of dropped connections when the load varies
because there's no provision for the last connection.
No backport needed, this is 2.2-dev.
Commit d645574 ("MINOR: soft-stop: let the first stopper only signal
other threads") introduced a minor mistake which is that when a stopping
thread signals all other threads, it also signals itself. When
single-threaded, the process constantly wakes up while waiting for
last connections to exit. Let's reintroduce the lost mask to avoid
this.
No backport is needed, this is 2.2-dev only.
This reverts previous commit 347bbf79d20e1cff57075a8a378355dfac2475e2.
The original code was correct. This patch resulted from a mistaken analysis
and breaks the scheduler:
########################## Starting vtest ##########################
Testing with haproxy version: 2.2-dev11-90b7d9-23
# top TEST reg-tests/lua/close_wait_lf.vtc TIMED OUT (kill -9)
# top TEST reg-tests/lua/close_wait_lf.vtc FAILED (10.008) signal=9
1 tests failed, 0 tests skipped, 88 tests passed
Program terminated with signal SIGABRT, Aborted.
[Current thread is 1 (Thread 0x7fb0dac2c700 (LWP 11292))]
(gdb) bt
#0 0x00007fb0e7c143f8 in raise () from /lib64/libc.so.6
#1 0x00007fb0e7c15ffa in abort () from /lib64/libc.so.6
#2 0x000000000053f5d6 in ha_panic () at src/debug.c:269
#3 0x00000000005a6248 in wdt_handler (sig=14, si=<optimized out>, arg=<optimized out>) at src/wdt.c:119
#4 <signal handler called>
#5 0x00000000004fbccd in tasklet_wakeup (tl=0x1b5abc0) at include/haproxy/task.h:351
#6 listener_accept (fd=<optimized out>) at src/listener.c:999
#7 0x00000000004262df in fd_update_events (evts=<optimized out>, fd=6) at include/haproxy/fd.h:418
#8 _do_poll (p=<optimized out>, exp=<optimized out>, wake=<optimized out>) at src/ev_epoll.c:251
#9 0x0000000000548d0f in run_poll_loop () at src/haproxy.c:2949
#10 0x000000000054908b in run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:3067
#11 0x00007fb0e902b684 in start_thread () from /lib64/libpthread.so.0
#12 0x00007fb0e7ce5eed in clone () from /lib64/libc.so.6
(gdb) up
#5 0x00000000004fbccd in tasklet_wakeup (tl=0x1b5abc0) at include/haproxy/task.h:351
351 if (MT_LIST_ADDQ(&task_per_thread[tl->tid].shared_tasklet_list, (struct mt_list *)&tl->list) == 1) {
If the commit above is ever backported, this one must be as well!
In MT_LIST_ADDQ() and MT_LIST_ADD() we can't just check if the element is
already in a list, because there's a small race condition: it could be added
between the time we check and the time we actually set its next and prev
pointers. So we have to lock it first.
This should be backported to 2.1.
The max_used_conns value is used as an estimate of the needed number of
connections on a server to know how many to keep open. But this one is
not reported, making it hard to troubleshoot reuse issues. Let's export
it in the sessions/current column.
Starting with commit 079cb9a ("MEDIUM: connections: Revamp the way idle
connections are killed") we started to improve the way to compute the
need for idle connections. But the condition to keep a connection idle
or drop it when releasing it was not updated. This often results in
storms of closes when certain thresholds are met, and long series of
takeover() calls when there aren't enough connections left for a thread
on a server.
This patch tries to improve the situation this way:
- it keeps an estimate of the number of connections needed for a server;
this estimate is a copy of the max over the previous purge period, or
the max of what is seen over the current period; it differs from
max_used_conns in that the latter is a counter that's reset on each
purge period;
- when releasing, if the number of current idle+used connections is
lower than this last estimate, then we'll keep the connection;
- when releasing, if the current thread's idle conns head is empty,
and we don't exceed the estimate by the number of threads, then
we'll keep the connection;
- when cleaning up connections, we consider the max of the last two
periods to avoid killing too many idle conns when facing bursty
traffic.
Thanks to this we can better converge towards a situation where, provided
there are enough FDs, each active server keeps at least one idle connection
per thread all the time, with a total number close to what was needed over
the previous measurement period (as defined by pool-purge-delay).
On tests with large numbers of concurrent connections (30k) and many
servers (200), this has considerably smoothed the CPU usage pattern,
increased the reuse rate and roughly halved the takeover rate.
There's a minor glitch with the way idle connections start to be evicted.
The lookup always goes from thread 0 to thread N-1. This causes depletion
of connections on the first threads and abundance on the last ones. This
is visible with the takeover() stats below:
$ socat - /tmp/sock1 <<< "show activity"|grep ^fd ; \
sleep 10 ; \
socat - /tmp/sock1 <<< "show activity"|grep ^fd
fd_takeover: 300144 [ 91887 84029 66254 57974 ]
fd_takeover: 359631 [ 111369 99699 79145 69418 ]
There are respectively 19k, 15k, 13k and 11k takeovers for only 4 threads,
indicating that the first thread needs a foreign FD nearly twice as often
as the 4th one.
This patch changes this so that all threads are scanned in round robin
fashion, starting with the current one. The takeovers now happen in a much
more distributed way (about 4 times 9k):
fd_takeover: 1420081 [ 359562 359453 346586 354480 ]
fd_takeover: 1457044 [ 368779 368429 355990 363846 ]
There is no need to backport this, as this happened along a few patches
that were merged during 2.2 development.
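The change amounts to starting the walk at the calling thread (sketch;
the per-thread lookup is elided):

    /* Scan threads starting from the current one and wrap around,
     * instead of always walking from thread 0 to N-1. */
    for (k = 0; k < global.nbthread; k++) {
        int t = (tid + k) % global.nbthread;
        /* ... look for an idle connection owned by thread t ... */
    }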
The FD takeover operation might have certain impacts explaining
unexpected activities, so it's important to report such a counter
there. We thus count the number of times a thread has stolen an
FD from another thread.
The servers have internal states describing the status of idle connections,
unfortunately these were not exported in the stats. This patch adds the 3
following gauges:
- idle_conn_cur : Current number of unsafe idle connections
- safe_conn_cur : Current number of safe idle connections
- used_conn_cur : Current number of connections in use
DEBUG_FD was added by commit 38e8a1c in 2.2-dev, and "show fd" was
slightly modified to still allow printing orphaned/closed FDs if their
count is non-zero. But bypassing the existing test made it possible
to dereference fdt.owner which can be null. Let's adjust the condition
to avoid this.
No backport is needed.
The LRU cache head was an array of lists, which causes false sharing
between 4 and 8 threads in the same cache line. Let's move it to the
thread_info structure instead. There's no need to do the same for the
pool_cache[] array since it's already quite large (32 pointers each).
By doing this the request rate increased by 1% on a 16-thread machine.
pool-t.h was mistakenly including the full-blown includes for threads,
lists and api instead of the types, and as such, CONFIG_HAP_LOCAL_POOLS
and CONFIG_HAP_LOCKLESS_POOLS were not visible everywhere.
The thread_info struct is convenient for storing various per-thread info
without having to resort to thread_local storage, which is slow and
painful to initialize.
The problem is that by having this one in thread.h it's very difficult
to add more entries there, because everyone already includes thread.h,
so conversely thread.h cannot reference certain types.
There's no point in having this there; instead let's create a new pair
of files, tinfo{,-t}.h, which declare the structure. This way it will
become possible to extend them with other includes and have certain
files store their own types there.
In tcpcheck_eval_connect(), if we're targeting a server, increase its
curr_used_conns when creating a new connection, as the counter will be
decreased later when the connection is destroyed and conn_free() is called.
In connect_server(), we want to increase curr_used_conns only if the
connection is new or if it comes from an idle pool, otherwise it means
the connection is already used by at least one other stream and is
already accounted for.
We used to have 3 thread-based arrays for toremove_lock, idle_cleanup,
and toremove_connections. The problem is that these items are small,
and that this creates false sharing between threads since it's possible
to pack up to 8-16 of these values into a single cache line. This can
cause real damage where there is contention on the lock.
This patch creates a new array of struct "idle_conns" that is aligned
on a cache line and which contains all three members above. This way
each thread has access to its variables without hindering the other
ones. Just doing this increased the HTTP/1 request rate by 5% on a
16-thread machine.
The definition was moved to connection.{c,h} since it appeared a more
natural evolution of the ongoing changes given that there was already
one of them declared in connection.h previously.
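The grouping looks roughly like this (member types are assumptions; the
alignment attribute is what prevents the false sharing):

    /* One cache-line aligned bundle per thread, so a thread touching
     * its own entry never invalidates a sibling's cache line. */
    struct idle_conns {
        HA_SPINLOCK_T toremove_lock;
        struct mt_list toremove_conns;
        struct task *cleanup_task;
    } __attribute__((aligned(64)));

    /* one entry per thread: struct idle_conns idle_conns[MAX_THREADS]; */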
It looked strange to see pool_evict_from_cache() always very present
in "perf top", but there was actually a reason for this: while b_free()
uses pool_free(), which properly disposes of the buffer into the local
cache, and b_alloc_fast() allocates using pool_get_first(), which
considers the local cache, b_alloc_margin() does not consider the local
cache at all since it only uses __pool_get_first(), which allocates
exclusively from the shared pools.
The impact is that basically everywhere a buffer is allocated (muxes,
streams, applets), it's always picked from the shared pool (hence
involves locking) and is released to the local one, making it grow
until a flush is triggered via pool_evict_from_cache().
Buffer usage is thus not thread-local at all, and causes eviction of
a lot of possibly useful objects from the local caches.
Just fixing this results in a 10% request rate increase in an HTTP/1 test
on a 16-thread machine.
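The fix boils down to switching allocators (sketch; the function names
come straight from this message):

    /* Before, buffer areas came from __pool_get_first(), i.e. the
     * shared pools only, taking the lock every time. After, they go
     * through pool_get_first(), which considers the thread-local
     * cache first, just like b_alloc_fast() already does. */
    static void *alloc_buffer_area(struct pool_head *pool)
    {
        return pool_get_first(pool);
    }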
This bug was caused by recent commit ed891fd ("MEDIUM: memory: make local
pools independent on lockless pools") merged into 2.2-dev9, so no backport
is needed.
"show sess" and particularly "show sess all" can be very slow when dumping
lots of information, and while dumping, new sessions might appear, making
the output really endless. When threads are used, this causes a double
problem:
- all threads are paused during the dump, so an overly long dump degrades
the quality of service;
- since all threads are paused, more events get postponed, possibly
resulting in more streams to be dumped on next invocation of the dump
function.
This patch addresses this long-lasting issue by doing something simple:
the CLI's stream is moved to the end of the streams list, serving as an
identifiable marker to end the dump, because all entries past it were
added after the command was entered. As a result, the CLI's stream always
appears as the last one.
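The trick is a simple requeue before the dump starts (sketch using the
2.2 list macros; the stream and list names are assumptions):

    /* Move the CLI's own stream to the tail of the streams list: any
     * stream found past it was created after the command was entered,
     * so the dump can stop there. */
    LIST_DEL(&cli_strm->list);
    LIST_ADDQ(&streams, &cli_strm->list);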
It may make sense to backport this to stable branches where dumping live
streams is difficult as well.
Commit cd4159f ("MEDIUM: mux_h2: Implement the takeover() method.")
added a return in the middle of the function, and as usual with such
stray return statements, some unrolling was lost. Here it's only the
TRACE_LEAVE() call, so it's mostly harmless. That's 2.2 only, no
backport is needed.
Released version 2.2-dev11 with the following main changes:
- REGTEST: Add a simple script to tests errorfile directives in proxy sections
- BUG/MEDIUM: fcgi-app: Resolve the sink if a fcgi-app logs in a ring buffer
- BUG/MINOR: spoe: correction of setting bits for analyzer
- BUG/MINOR: cfgparse: Support configurations without newline at EOF
- MINOR: cfgparse: Warn on truncated lines / files
- BUG/MINOR: http_ana: clarify connection pointer check on L7 retry
- MINOR: debug: add a new DEBUG_FD build option
- BUG/MINOR: tasks: make sure never to exceed max_processed
- MINOR: task: add a new pointer to current tasklet queue
- BUG/MEDIUM: task: be careful not to run too many tasks at TL_URGENT
- BUG/MINOR: cfgparse: Fix argument reference in PARSE_ERR_TOOMANY message
- BUG/MINOR: cfgparse: Fix calculation of position for PARSE_ERR_TOOMANY message
- BUG/MEDIUM: ssl: fix ssl_bind_conf double free
- MINOR: ssl: free bind_conf_node in crtlist_free()
- MINOR: ssl: free the crtlist and the ckch during the deinit()
- BUG/MINOR: ssl: fix build with ckch_deinit() and crtlist_deinit()
- BUG/MINOR: ssl/cli: certs added from the CLI can't be deleted
- MINOR: ssl: move the ckch/crtlist deinit to ssl_sock.c
- MEDIUM: tasks: apply a fair CPU distribution between tasklet classes
- MINOR: tasks: make current_queue an index instead of a pointer
- MINOR: tasks: add a mask of the queues with active tasklets
- MINOR: tasks: pass the queue index to run_task_from_list()
- MINOR: tasks: make run_tasks_from_lists() scan the queues itself
- MEDIUM: tasks: add a tune.sched.low-latency option
- BUG/MEDIUM: ssl/cli: 'commit ssl cert' crashes when no private key
- BUG/MINOR: cfgparse: don't increment linenum on incomplete lines
- MINOR: tools: make parse_line() always terminate the args list
- BUG/MINOR: cfgparse: report extraneous args *after* the string is allocated
- MINOR: cfgparse: sanitize the output a little bit
- MINOR: cli/ssl: handle trailing slashes in crt-list commands
- MINOR: ssl: add the ssl_s_* sample fetches for server side certificate
- BUG/MEDIUM: http-ana: Don't loop trying to generate a malformed 500 response
- BUG/MINOR: stream-int: Don't wait to send truncated HTTP messages
- BUG/MINOR: http-ana: Set CF_EOI on response channel for generated responses
- BUG/MINOR: http-ana: Don't wait to send 1xx responses generated by HAProxy
- MINOR: spoe: Don't systematically create new applets if processing rate is low
- DOC: fix some typos in the ssl_s_{s|i}_dn documentation
- BUILD: fix ssl_sample.c when building against BoringSSL
- CI: travis-ci: switch BoringSSL builds to ninja
- CI: extend spellchecker whitelist
- DOC: assorted typo fixes in the documentation
- CLEANUP: assorted typo fixes in the code and comments
- MINOR: http: Add support for http 413 status
- REGTEST: ssl: tests the ssl_f_* sample fetches
- REGTEST: ssl: add some ssl_c_* sample fetches test
- DOC: ssl: update the documentation of "commit ssl cert"
- BUG/MINOR: cfgparse: correctly deal with empty lines
- BUG/MEDIUM: fetch: Fix hdr_ip misparsing IPv4 addresses due to missing NUL
The IPv4 code did not take into account that the header value might not
contain a trailing NUL byte, possibly reading stray data after the header
value, making the parse fail and falling through to the IPv6 branch. That
branch adds the missing NUL, but fails to parse IPv4 addresses.
Fix this issue by always adding the trailing NUL.
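A self-contained sketch of the approach (simplified; the real fix works
on the sample fetched from the header):

    #include <arpa/inet.h>
    #include <string.h>

    /* Always copy the header value and add the trailing NUL before
     * parsing, since the value taken from the buffer is not
     * NUL-terminated. */
    static int parse_hdr_ipv4(const char *val, size_t len, struct in_addr *out)
    {
        char tmp[INET_ADDRSTRLEN];

        if (len >= sizeof(tmp))
            return 0;
        memcpy(tmp, val, len);
        tmp[len] = '\0';              /* the NUL that was missing */
        return inet_pton(AF_INET, tmp, out) == 1;
    }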
The bug was reported on GitHub as issue #715.
It's not entirely clear when this bug started appearing; possibly earlier
versions of smp_fetch_hdr guaranteed the NUL termination. However the NUL
in the IPv6 case was added together with IPv6 support, hinting that at
that point in time the NUL was not guaranteed.
The commit that added IPv6 support was 69fa99292e
which first appeared in HAProxy 1.5. This patch should be backported to
1.5+, taking into account the various buffer / chunk changes and the movement
across different files.
Issue 23653 in oss-fuzz reports a heap overflow bug which is in fact a
bug introduced by commit 9e1758efb ("BUG/MEDIUM: cfgparse: use
parse_line() to expand/unquote/unescape config lines") to address
oss-fuzz issue 22689, which was only partially fixed by commit 70f58997f
("BUG/MINOR: cfgparse: Support configurations without newline at EOF").
Actually on an empty line, end == line so we cannot dereference end-1
to check for a trailing LF without first being sure that end is greater
than line.
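The guard is as simple as this (sketch matching the description):

    /* Only dereference end-1 when the line is non-empty: on an empty
     * line end == line and end[-1] would read before the buffer. */
    if (end > line && end[-1] == '\n')
        end--;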
No backport is needed, this is 2.2 only.
Test the following ssl sample fetches:
ssl_c_der, ssl_c_sha1,hex, ssl_c_notafter, ssl_c_notbefore,
ssl_c_sig_alg, ssl_c_i_dn, ssl_c_s_dn, ssl_c_serial,hex, ssl_c_key_alg,
ssl_c_version
This reg-test can be used as far back as haproxy 1.6.
Test the following ssl sample fetches:
ssl_f_der, ssl_f_sha1,hex, ssl_f_notafter, ssl_f_notbefore,
ssl_f_sig_alg, ssl_f_i_dn, ssl_f_s_dn, ssl_f_serial,hex, ssl_f_key_alg,
ssl_f_version
This reg-test can be used as far back as haproxy 1.5.