This macro is similar to LIST_INLIST() except that it is guaranteed to
perform the test atomically, so that even if LIST_INLIST() is instrumented
with debugging code to perform extra consistency checks, it will not fail
when used in the context of barriers and atomic ops.
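A minimal sketch of what such an atomic membership test can look like, assuming
HAProxy's HA_ATOMIC_LOAD() and self-initialized (self-pointing) detached list
elements; the macro name below is illustrative, not necessarily the real one:

  #include <haproxy/atomic.h>
  #include <haproxy/list.h>

  /* Illustrative only: load the forward pointer exactly once, so that an
   * instrumented consistency check cannot observe a half-updated element
   * while another thread is concurrently attaching/detaching it.
   */
  #define MY_LIST_INLIST_ATOMIC(el) ({                \
          typeof(el) __el = (el);                     \
          HA_ATOMIC_LOAD(&__el->n) != __el;           \
  })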
perf top shows that we spend a lot of time trying to read item->type in
the lookup loop, because the node is placed after the very long name,
so when the node is found, no data is in the cache yet. Let's simply
move the node higher up in the struct. This causes the CPU usage of
resolv_validate_dns_response() to drop by 4 points.
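For illustration only (the field names below are made up, not the actual struct
from the resolvers code), the change boils down to placing the tree node and
the small hot fields before the large name buffer, so that the fields read
right after the lookup sit in the cache lines already fetched with the node:

  struct answer_item {
          struct eb32_node link;   /* tree node: fetched first by the lookup  */
          int type;                /* read immediately after the lookup       */
          /* ... other small, frequently-read fields ...                      */
          char name[256];          /* long name: large and cold, keep it last */
  };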
A bogus test in b_get_varint(), b_put_varint(), b_peek_varint() shifts
the end of the buffer by one byte. Since the bug is the same in the read
and write functions, the buffer contents remain compatible, which explains
why this bug was not detected earlier. But if the buffer ends on an
aligned address or page, it can result in a one-byte overflow which will
typically cause a crash or inconsistent behavior.
This API is only used by rings (e.g. for traces and boot messages) and
by DNS responses, so the probability of hitting it is extremely low, but a
crash on boot was observed.
This must be backported to 2.2.
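As a rough illustration of the failure mode, here is a simplified flat-buffer
varint writer (not HAProxy's actual b_put_varint(), which also handles buffer
wrapping and a different encoding); the boundary test must reject a position
equal to the end, otherwise the last byte lands one position past the storage
area:

  #include <stdint.h>

  /* returns the number of bytes written, or -1 if the value does not fit
   * between <pos> and <end> (excluded).
   */
  static inline int put_varint(char *pos, const char *end, uint64_t v)
  {
          char *start = pos;

          while (v >= 0x80) {
                  if (pos >= end)      /* an off-by-one test here ("pos > end") */
                          return -1;   /* would allow a one-byte overflow       */
                  *pos++ = (char)((v & 0x7F) | 0x80);
                  v >>= 7;
          }
          if (pos >= end)
                  return -1;
          *pos++ = (char)v;
          return pos - start;
  }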
We're using an XXH32() on the record to insert it into or look it up from
the tree. This way we don't change the rest of the code, the comparisons
are still made on all fields and the next node is visited on mismatch. This
also allows round-robin to continue to be used between identical nodes.
Just doing this is sufficient to see the CPU usage go down from ~60-70% to
4% at ~2k DNS requests per second for a farm with 300 servers. A larger
config with 12 backends of 2000 servers each shows ~8-9% CPU for 6-10000
DNS requests per second.
It would probably be possible to go further with multiple levels of indexing
but it's not worth it, and it's important to remember that tree nodes take
space (the struct answer_list went up from 576 to 600 bytes).
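Here is a sketch of the indexing scheme with illustrative names (struct
answer_item, struct rec_data, fields_match() and the tree root are assumptions,
not the real code): the hash only narrows the search, the full fields are still
compared and the next duplicate is visited on mismatch, preserving insertion
order:

  #include <import/eb32tree.h>
  #include <import/xxhash.h>

  /* insertion: the eb32 key is a hash of the record's fields */
  static void insert_record(struct eb_root *tree, struct answer_item *item)
  {
          item->link.key = XXH32(&item->data, sizeof(item->data), 0);
          eb32_insert(tree, &item->link);
  }

  /* lookup: jump to the first node carrying this hash, confirm the fields,
   * and move to the next duplicate on mismatch (insertion order preserved).
   */
  static struct answer_item *find_record(struct eb_root *tree,
                                         const struct rec_data *rec)
  {
          uint32_t key = XXH32(rec, sizeof(*rec), 0);
          struct eb32_node *node = eb32_lookup(tree, key);

          while (node && node->key == key) {
                  struct answer_item *item = eb32_entry(node, struct answer_item, link);

                  if (fields_match(item, rec))   /* full comparison, as before */
                          return item;
                  node = eb32_next(node);
          }
          return NULL;
  }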
With SRV records, a huge amount of time is spent looking for records
by walking long lists. It is possible to reduce this by indexing values
in trees instead. However the whole code relies a lot on the list
ordering, and even implements some round-robin on it to distribute IP
addresses to servers.
This patch starts carefully by replacing the list with an eb32 tree
that is still used like a list, with a constant key 0. Since ebtrees
preserve insertion order for duplicates, the tree walk visits the nodes
in the exact same order it did with the lists. This makes it possible to implement
the required infrastructure without changing the behavior.
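A minimal sketch of this first step (again with illustrative names): every
node is inserted with key 0, and since ebtrees keep duplicates in insertion
order, a plain first/next walk behaves exactly like the old list walk:

  /* insertion: constant key, duplicates preserve insertion order */
  item->link.key = 0;
  eb32_insert(&res->answer_tree, &item->link);

  /* walk: visits the items in the exact order they were inserted */
  for (node = eb32_first(&res->answer_tree); node; node = eb32_next(node)) {
          item = eb32_entry(node, struct answer_item, link);
          /* ... same processing as the former list iteration ... */
  }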
When cleaning up the code to remove most explicit task masks in commit
beeabf531 ("MINOR: task: provide 3 task_new_* wrappers to simplify the
API"), a mistake was made with the external checks, where the call uses
task_new_on(1) instead of task_new_on(0) due to the confusion with the
previous mask 1.
No backport is needed as that's only 2.5-dev.
This one was used to indicate whether the callee had to follow a particularly
safe code path when removing resolutions. Since the code now uses a kill
list, this is not needed anymore.
When scanning resolution.curr it's possible to try to free some
resolutions which will themselves result in freeing other ones. If
one of these other ones is exactly the next one in the list, the list
walk visits deleted nodes and causes memory corruption, double-frees
and so on. The approach taken using the "safe" argument to some
functions seems to work but it's extremely brittle as it is required
to carefully check all call paths from process_resolvers() and set
the argument to 1 there to refrain from deleting entries, so the bug
is very likely to come back after some tiny changes to this code.
A variant was tried, checking at various places that the current task
corresponds to process_resolvers(), but this is also quite brittle, even
though a bit less so.
This patch uses another approach which consists in carefully unlinking
elements from the list and deferring their removal by placing them in a
kill list instead of deleting them synchronously. The real benefit here
is that the complexity only has to be placed where the complications
are.
A thread-local list is fed with elements to be deleted before scanning
the resolutions, and it's flushed at the end by picking the first one
until the list is empty. This way we never dereference the next element
and do not care whether or not it is still in the list. One function,
resolv_unlink_resolution(), is exported and used outside, so it had to
be modified to use this list as well. Internal code has to use
_resolv_unlink_resolution() instead.
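Below is a rough sketch of the kill-list idea with hypothetical names
(death_row, defer_kill(), flush_kill_list() and free_resolution() are not the
real identifiers): deletion during the walk only unlinks and queues the entry,
and the actual freeing happens afterwards by always popping the first element,
so no saved "next" pointer can ever be invalidated by a nested removal:

  #include <haproxy/list.h>

  static THREAD_LOCAL struct list death_row = LIST_HEAD_INIT(death_row);

  static void defer_kill(struct resolv_resolution *res)
  {
          /* detach from whatever list it is in and queue it for later removal */
          LIST_DEL_INIT(&res->list);
          LIST_APPEND(&death_row, &res->list);
  }

  static void flush_kill_list(void)
  {
          struct resolv_resolution *res;

          /* always pick the first element: we never dereference a "next"
           * pointer that a nested deletion could have freed.
           */
          while (!LIST_ISEMPTY(&death_row)) {
                  res = LIST_ELEM(death_row.n, struct resolv_resolution *, list);
                  LIST_DEL_INIT(&res->list);
                  free_resolution(res);
          }
  }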
The code as it is uses crossed lists between many elements, and at
many places the code relies on list iterators or emptiness checks,
which does not work with only LIST_DELETE. Further, it is quite
difficult to place debugging code and checks in the current situation,
and gdb is helpless.
This code replaces all LIST_DELETE calls with LIST_DEL_INIT so that
it becomes possible to trust the lists.
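The difference between the two macros is what they leave behind in the
element (sketch below, assuming HAProxy's list.h):

  #include <haproxy/list.h>

  LIST_DELETE(&item->list);    /* unlinked, but item->list.n/p still point into
                                * the old list: LIST_INLIST() and debugging
                                * checks cannot tell that it is gone.
                                */
  LIST_DEL_INIT(&item->list);  /* unlinked AND reset to a self-pointing empty
                                * node: its state can be trusted afterwards and
                                * emptiness checks or re-deletion are safe.
                                */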
This function allocates requesters by hand for each and every type. This
is complex and error-prone, and it doesn't even initialize the list part,
leaving dangling pointers that complicate debugging.
This patch introduces a new function resolv_get_requester() that either
returns the current pointer if valid or tries to allocate a new one and
links it to its destination. Then it makes use of it in the function
above to clean it up quite a bit. This makes it possible to remove
complicated but unneeded tests.
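A hypothetical shape for such a helper (the real prototype and fields in
resolvers.c may differ): return the existing requester when one is already
set, otherwise allocate it, initialize it fully including its list part, and
link it to its destination:

  static struct resolv_requester *resolv_get_requester(struct resolv_requester **ptr)
  {
          struct resolv_requester *req = *ptr;

          if (req)
                  return req;              /* current pointer is valid, reuse it */

          req = calloc(1, sizeof(*req));
          if (!req)
                  return NULL;

          LIST_INIT(&req->list);           /* never leave dangling list pointers */
          *ptr = req;                      /* link it to its destination */
          return req;
  }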
Similar to the previous patch, the answer's list was only initialized the
first time it was added to a list, leading to bogus outdated pointers
appearing when debugging code is added around it to watch it. Let's make
sure it's always initialized upon allocation.
The query_list is physically stored in the struct resolution itself,
so the structure contains a list head whose items are stored within
that same structure (and there is a single item). But the list is
first initialized in
resolv_validate_dns_response(), while it's scanned in
resolv_process_responses() later after calling the former. First,
this results in crashes as soon as the code is instrumented a little
bit for debugging, as elements from a previous incarnation can appear.
But in addition to this, the presence of an element is checked by
verifying that the return of LIST_NEXT() is not NULL, whereas it is
never NULL, even for an empty list, resulting in bugs or crashes
if the number of responses does not match the list's contents. This
is easily triggered by testing for the list non-emptiness outside of
the function.
Let's make sure the list is always correct, i.e. it's initialized to
an empty list when the structure is allocated, elements are checked by
first verifying the list is not empty, they are deleted once checked,
and in any case at the end so that there are no dangling pointers.
This should be backported, but only as long as the patch fits without
modifications, as adaptations can be risky there given that bugs tend
to hide each other.
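Here is a sketch of the intended lifecycle (field and type names are
illustrative): the head is initialized when the resolution is allocated,
presence is tested with LIST_ISEMPTY() rather than by comparing LIST_NEXT()
against NULL, and the element is deleted once it has been checked:

  LIST_INIT(&res->response.query_list);            /* at allocation time */

  if (!LIST_ISEMPTY(&res->response.query_list)) {
          query = LIST_NEXT(&res->response.query_list,
                            struct resolv_query_item *, list);
          /* ... check the query against the response ... */
          LIST_DEL_INIT(&query->list);             /* no dangling pointer left */
  }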
Depending on the code that precedes the loop, gcc may emit this warning:
src/resolvers.c: In function 'resolv_process_responses':
src/resolvers.c:1009:11: warning: potential null pointer dereference [-Wnull-dereference]
1009 | if (query->type != DNS_RTYPE_SRV && flags & DNS_FLAG_TRUNCATED) {
| ~~~~~^~~~~~
However, after carefully checking, r_res->header.qdcount is exclusively 1
when reaching this place, which forces the for() loop to enter for at
least one iteration, and <query> to be set. Thus there's no code path
leading to a null deref. It's possibly just because the assignment is
too far away and the compiler cannot figure out that the condition is always OK.
Let's just mark it to please the compiler.
This code is dangerous enough that we certainly don't want external code
to ever approach it, let's not export unnecessary functions like this one.
It was made static and a comment was added about its purpose.
There is a fundamental design bug in the resolvers code which is that
a list of active resolutions is being walked to try to delete outdated
entries, and that the code responsible for removing them also removes
other elements, including the next one which will be visited by the
list iterator. This randomly causes a use-after-free condition leading
to crashes, infinite loops and various other issues such as random memory
corruption.
A first fix for this memory issue was brought by commit 0efc0993e
("BUG/MEDIUM: resolvers: Don't release resolution from a requester
callbacks"). While preparing for more fixes, some code was factored by
commit 11c6c3965 ("MINOR: resolvers: Clean server in a dedicated function
when removing a SRV item"), which inadvertently always passed "0" as the
"safe" argument instead of propagating the caller's value, missing one
case of removal protection. This patch reintroduces the correct argument.
This must be backported with all fixes above.
Cc: Christopher Faulet <cfaulet@haproxy.com>
A few places have been caught triggering late bugs recently, always cases
of use-after-free because a freed element was still found in one of the
lists. This patch adds a few checks for such elements in dns_session_free()
before the final pool_free() and dns_session_io_handler() before adding
elements to lists to make sure they remain consistent. They do not trigger
anymore now.
When dns_session_release() calls dns_session_free(), it was shown that
it might still be attached there:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00000000006437d7 in dns_session_free (ds=0x7f895439e810) at src/dns.c:768
768 BUG_ON(!LIST_ISEMPTY(&ds->ring.waiters));
[Current thread is 1 (Thread 0x7f895bbe2700 (LWP 31792))]
(gdb) bt
#0 0x00000000006437d7 in dns_session_free (ds=0x7f895439e810) at src/dns.c:768
#1 0x0000000000643ab8 in dns_session_release (appctx=0x7f89545a4ff0) at src/dns.c:805
#2 0x000000000062e35a in si_applet_release (si=0x7f89545a5550) at include/haproxy/stream_interface.h:236
#3 0x000000000063150f in stream_int_shutw_applet (si=0x7f89545a5550) at src/stream_interface.c:1697
#4 0x0000000000640ab8 in si_shutw (si=0x7f89545a5550) at include/haproxy/stream_interface.h:437
#5 0x0000000000643103 in dns_session_io_handler (appctx=0x7f89545a4ff0) at src/dns.c:725
#6 0x00000000006d776f in task_run_applet (t=0x7f89545a5100, context=0x7f89545a4ff0, state=81924) at src/applet.c:90
#7 0x000000000068b82b in run_tasks_from_lists (budgets=0x7f895bbbf5c0) at src/task.c:611
#8 0x000000000068c258 in process_runnable_tasks () at src/task.c:850
#9 0x0000000000621e61 in run_poll_loop () at src/haproxy.c:2636
#10 0x0000000000622328 in run_thread_poll_loop (data=0x8d7440 <ha_thread_info+64>) at src/haproxy.c:2807
#11 0x00007f895c54a06b in start_thread () from /lib64/libpthread.so.0
#12 0x00007f895bf3772f in clone () from /lib64/libc.so.6
(gdb) p &ds->ring.waiters
$1 = (struct list *) 0x7f895439e8a8
(gdb) p ds->ring.waiters
$2 = {
n = 0x7f89545a5078,
p = 0x7f89545a5078
}
(gdb) p ds->ring.waiters->n
$3 = (struct list *) 0x7f89545a5078
(gdb) p *ds->ring.waiters->n
$4 = {
n = 0x7f895439e8a8,
p = 0x7f895439e8a8
}
Let's always detach it before freeing so that it remains possible to
check the dns_session's ring before releasing it, and possibly catch
bugs.
The barrier is insufficient here to protect the waiters list as we can
definitely catch situations where ds->waiter shows an inconsistency
whereby the element is not attached when entering the "if" block but
is already attached by the time we attach it later.
This patch uses a larger lock to maintain consistency. Without it the
code would crash in 30-180 minutes under heavy stress, always showing
the same problem (ds->waiter->n->p != &ds->waiter). Now it seems to
always resist, suggesting that this was indeed the problem.
This will have to be backported to 2.4.
Using TCP, after a session release and free, the session can remain
attached to the list of sessions with a response message waiting for
a commit (ds->waiter). This results in a use-after-free of this
session.
Also, on some error paths and after a free, a session could remain
attached to the lists of available idle/free sessions (ds->list).
This patch ensures the session is removed from those external lists
before a free.
This patch should be backported to all versions including DNS over
TCP support (2.4).
When an HTTP response is parsed, early parsing errors are not properly
handled. When this error is reported by the multiplexer, nothing is copied
into the input buffer. The HTX message remains empty but the
HTX_FL_PARSING_ERROR flag is set. In addition CS_FL_EOI is set on the
conn-stream. This last flag must be handled to prevent subscription for
receive events. Otherwise, in the best case, a L7 timeout error is
reported. But a transient loop is also possible if a shutdown is received
because the multiplexer notifies the check of the event while the check
never handles it and waits for more data.
Now, if CS_FL_EOI flag is set on the conn-stream, expect rules are
evaluated. Any error must be handled there.
Thanks to @kazeburo for his valuable report.
This patch should fix issue #1420. It must be backported at least to
2.4. On 2.3 and 2.2, there is no loop but the wrong error is reported (empty
response instead of invalid one). Thus it may also be backported as far as
2.2.
If a channel error (READ_ERROR|READ_TIMEOUT|WRITE_ERROR|WRITE_TIMEOUT) is
detected by the stream, in process_stream(), FLT_END analyzers must be
preserved. It is important to be sure to end filter analysis and be able to
release the stream.
First, filters may release some resources when FLT_END analyzers are
called. Then, the CF_FL_ANALYZE flag is used to sync end of analysis for the
request and the response. If FLT_END analyzer is ignored on a channel, this
may block the other side and freeze the stream.
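As an illustration of the intent (not necessarily the exact patch), on such a
channel error only the FLT_END analyzers are kept so that filters still get
their end-of-analysis callback on both sides and the stream can be released:

  if ((req->flags | res->flags) &
      (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) {
          req->analysers &= AN_REQ_FLT_END;   /* keep only the filters' end analyzer */
          res->analysers &= AN_RES_FLT_END;
  }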
This patch must be backported to all stable versions.
Replace the test based on the enum value of the algorithm by an explicit
switch statement in case someone reorders it for some reason (while
still managing not to break the regtest).
In order for all the error return values to be distributed on the same
side (instead of surrounding the success code), the return values
for errors other than a simple verification failure are switched to
negative values. This way the result of the jwt_verify converter can be
compared strictly to 1 as well as relative to 0 (any <= 0 return value is
an error).
The documentation was also modified to discourage conversion of the
return value into a boolean (which would definitely not work).
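For illustration, the convention looks like the following enum (names are
illustrative, the real ones in the JWT code may differ); success is strictly 1,
a failed signature check is 0 and every other error is negative, which is why a
boolean conversion would wrongly treat negative errors as true:

  enum jwt_verify_result {
          JWT_VRFY_OK            =  1,   /* signature successfully verified   */
          JWT_VRFY_KO            =  0,   /* signature check failed            */
          JWT_VRFY_UNKNOWN_ALG   = -1,   /* unknown/unsupported algorithm     */
          JWT_VRFY_INVALID_TOKEN = -2,   /* malformed token                   */
          JWT_VRFY_OUT_OF_MEMORY = -3,   /* internal allocation failure       */
  };

  /* the only safe success tests are "ret == 1" or "ret > 0" */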
resolv_hostname_cmp() is bogus: it is applied to labels and not plain
names, but doesn't make any distinction between length prefixes and
characters, so it passes the label lengths through tolower() as well.
The only reason it doesn't break is that labels cannot be larger than
63 bytes, and that none of the common character encodings have upper
case letters among the first 63 byte values, which could be turned
into a different value via tolower().
Now that all labels are stored in lower case, we don't need to burn
CPU cycles in tolower() at run time and can use memcmp() instead of
resolv_hostname_cmp(). This results in a ~22% lower CPU usage on large
farms using SRV records:
before:
18.33% haproxy [.] resolv_validate_dns_response
10.58% haproxy [.] process_resolvers
10.28% haproxy [.] resolv_hostname_cmp
7.50% libc-2.30.so [.] tolower
46.69% total
after:
24.73% haproxy [.] resolv_validate_dns_response
7.78% libc-2.30.so [.] __memcmp_avx2_movbe
3.65% haproxy [.] process_resolvers
36.16% total
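For illustration, once both encoded names are stored in lower case, comparing
them reduces to a length check plus a single memcmp() over the raw bytes,
length prefixes included, with no per-character tolower() call:

  #include <string.h>

  /* <a> and <b> are DNS-encoded names, e.g. "\x07example\x03com" */
  static inline int dn_eq(const char *a, int alen, const char *b, int blen)
  {
          return alen == blen && memcmp(a, b, alen) == 0;
  }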
The whole code relies on performing case-insensitive comparison on
lookups, which is extremely inefficient.
Let's make sure that all labels to be looked up or sent are first
converted to lower case. Doing so is also an opportunity to eliminate
an inefficient memcpy() in resolv_dn_label_to_str() that essentially
runs over a few unaligned bytes at once. As a side note, that call
was dangerous because it relied on a sign-extended size taken from a
string that had to be sanitized first.
This is tagged medium because while this is 100% safe, it may cause
visible changes on the wire at the packet level and trigger bugs in
test programs.
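A simplified sketch of the conversion (unlike the real code it performs no
validation of the label lengths): walk the encoded name label by label,
lowering only the characters and leaving the length prefixes untouched:

  #include <ctype.h>

  static void dn_tolower(char *dn)
  {
          while (*dn) {
                  int len = (unsigned char)*dn++;   /* label length prefix */

                  while (len--) {
                          *dn = tolower((unsigned char)*dn);
                          dn++;
                  }
          }
  }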
Released version 2.5-dev10 with the following main changes :
- MINOR: initcall: Rename __GLOBL and __GLOBL1.
- MINOR: rules: add a new function new_act_rule() to allocate act_rules
- MINOR: rules: add a file name and line number to act_rules
- MINOR: stream: report the current rule in "show sess all" when known
- MINOR: stream: report the current filter in "show sess all" when known
- CLEANUP: stream: Properly indent current_rule line in "show sess all"
- BUG/MINOR: lua: Fix lua error handling in `hlua_config_prepend_path()`
- CI: github: switch to OpenSSL 3.0.0
- REGTESTS: ssl: Fix references to removed option in test description
- MINOR: ssl: Add ssllib_name_startswith precondition
- REGTESTS: ssl: Fix ssl_errors test for OpenSSL v3
- REGTESTS: ssl: Reenable ssl_errors test for OpenSSL only
- REGTESTS: ssl: Use mostly TLSv1.2 in ssl_errors test
- MEDIUM: mux-quic: rationalize tx buffers between qcc/qcs
- MEDIUM: h3: properly manage tx buffers for large data
- MINOR: mux-quic: standardize h3 settings sending
- CLEANUP: h3: remove dead code
- MINOR: mux-quic: implement standard method to detect if qcc is dead
- MEDIUM: mux-quic: defer stream shut if remaining tx data
- MINOR: mux: remove last occurences of qcc ring buffer
- MINOR: quic: handle CONNECTION_CLOSE frame
- REGTESTS: ssl: re-enable set_ssl_cert_bundle.vtc
- MINOR: ssl: add ssl_fc_is_resumed to "option httpslog"
- MINOR: http: Add http_auth_bearer sample fetch
- MINOR: jwt: Parse JWT alg field
- MINOR: jwt: JWT tokenizing helper function
- MINOR: jwt: Insert public certificates into dedicated JWT tree
- MINOR: jwt: jwt_header_query and jwt_payload_query converters
- MEDIUM: jwt: Add jwt_verify converter to verify JWT integrity
- REGTESTS: jwt: Add tests for the jwt_verify converter
- BUILD: jwt: fix declaration of EVP_KEY in jwt-h.h
- MINOR: proto_tcp: use chunk_appendf() to ouput socket setup errors
- MINOR: proto_tcp: also report the attempted MSS values in error message
- MINOR: inet: report the faulty interface name in "bind" errors
- MINOR: protocol: report the file and line number for binding/listening errors
- MINOR: protocol: uniformize protocol errors
- MINOR: resolvers: fix the resolv_str_to_dn_label() API about trailing zero
- BUG/MEDIUM: resolver: make sure to always use the correct hostname length
- BUG/MINOR: resolvers: do not reject host names of length 255 in SRV records
- MINOR: resolvers: fix the resolv_dn_label_to_str() API about trailing zero
- MEDIUM: listeners: split the thread mask between receiver and bind_conf
- MINOR: listeners: add clone_listener() to duplicate listeners at boot time
- MEDIUM: listener: add the "shards" bind keyword
- BUG/MEDIUM: resolvers: use correct storage for the target address
- MINOR: resolvers: merge address and target into a union "data"
- BUG/MEDIUM: resolvers: fix truncated TLD consecutive to the API fix
- BUG/MEDIUM: jwt: fix base64 decoding error detection
- BUG/MINOR: jwt: use CRYPTO_memcmp() to compare HMACs
- DOC: jwt: fix a typo in the jwt_verify() keyword description
- BUG/MEDIUM: sample/jwt: fix another instance of base64 error detection
- BUG/MINOR: http-ana: Don't eval front after-response rules if stopped on back
- BUG/MINOR: sample: Fix 'fix_tag_value' sample when waiting for more data
- DOC: config: Move 'tcp-response content' at the right place
- BUG/MINOR: proxy: Use .disabled field as a bitfield as documented
- MINOR: proxy: Introduce proxy flags to replace disabled bitfield
- MINOR: sample/arg: Be able to resolve args found in defaults sections
- MEDIUM: proxy: Warn about ambiguous use of named defaults sections
- MINOR: proxy: Be able to reference the defaults section used by a proxy
- MINOR: proxy: Add PR_FL_READY flag on fully configured and usable proxies
- MINOR: config: Finish configuration for referenced default proxies
- MINOR: config: No longer remove previous anonymous defaults section
- MINOR: tcpcheck: Support 2-steps args resolution in defaults sections
- MEDIUM: rules/acl: Parse TCP/HTTP rules and acls defined in defaults sections
- MEDIUM: tcp-rules: Eval TCP rules defined in defaults sections
- MEDIUM: http-ana: Eval HTTP rules defined in defaults sections
- BUG/MEDIUM: sample: Cumulate frontend and backend sample validity flags
- REGTESTS: Add scripts to test support of TCP/HTTP rules in defaults sections
- DOC: config: Add documentation about TCP/HTTP rules in defaults section
- DOC: config: Rework and uniformize how TCP/HTTP rules are documented
- BUG/MINOR: proxy: Release ACLs and TCP/HTTP rules of default proxies
- BUG/MEDIUM: cpuset: fix cpuset size for FreeBSD
- BUG/MINOR: sample: fix backend direction flags consecutive to last fix
- BUG/MINOR: listener: fix incorrect return on out-of-memory
- BUG/MINOR: listener: add an error check for unallocatable trash
- CLEANUP: listeners: remove unreachable code in clone_listener()
Coverity reported in issue #1416 that label oom3 is not reachable in
function clone_listener() added by commit 59a877dfd ("MINOR: listeners:
add clone_listener() to duplicate listeners at boot time"). The code
leading to it was removed during the development of the function, but
not the label itself.
Coverity noticed in issue #1416 that a missing allocation error was
introduced in tcp_bind_listener() with the rework of error messages by
commit ed1748553 ("MINOR: proto_tcp: use chunk_appendf() to ouput socket
setup errors"). In practice nobody will ever face it but better address
it anyway.
No backport is needed.
When the clone_listener() function was added in commit 59a877dfd
("MINOR: listeners: add clone_listener() to duplicate listeners at
boot time"), a stupid bug was introduced when splitting the error
path because while the first case where calloc fails will leave NULL
in the output value, the other cases will return the pointer to a
freed area. This was reported by Coverity in issue #1416.
In practice nobody will face it (out-of-memory while checking config),
but let's fix it.
No backport is needed.
Commit 7a06ffb85 ("BUG/MEDIUM: sample: Cumulate frontend and backend
sample validity flags") introduced a typo confusing the request and
the response direction when checking for validity of a rule applied
to a backend. This was reported by Coverity in issue #1417.
This needs to be backported where the patch above is backported.
Fix the macro used to retrieve the max number of CPUs on FreeBSD. MAXCPU
is not properly defined in userspace and is always set to 1 regardless of
the machine's architecture. Replace it with CPU_SETSIZE.
See https://freebsd-hackers.freebsd.narkive.com/gw4BeLum/smp-in-machine-params-h#post6
Without this, the following config file is rejected on FreeBSD even if
the machine is SMP :
global
cpu-map 1-2 0-1
This must be backported up to 2.4.
It is now possible to have TCP/HTTP rules and ACLs defined in defaults
sections. So we must try to release the corresponding lists when a default
proxy is destroyed.
No backport needed.
Now all these rules are documented using the same structure. First there is
a general description with the list of all supported actions. Then all
actions are described in details. Thus, it is easy to have a quick list of
all supported actions and this avoids having a huge description with all
info about these actions. In addition, when it is possible, we make a
reference to already documented parts.
Documentation of each directive that can now be used in defaults section was
updated to explain how it works. A special mark was added to specify when a
keyword is supported by defaults sections with a name but not anonymous
ones. In this case an exclamation mark is added.
3 scripts are added:
* startup/default_rules.vtc to check configuration parsing
* http-rules/default_rules.vtc to check evaluation of HTTP rules
* tcp-rules/default_rules.vtc to check evaluation of TCP rules
When the sample validity flags are computed to check if a sample is used in
a valid scope, the flags depending on the proxy capabilities must be
cumulated. Historically, for a sample on the request, only the frontend
capability was used to set the sample validity flags while for a sample on
the response only the backend was used. But it is a problem for listen or
defaults proxies. For those proxies, all frontend and backend samples should
be valid. However, at many places, only frontend ones are possible.
For instance, it is impossible to set the backend name (be_name) into a
variable from a listen proxy.
This bug exists on all stable versions. Thus this patch should probably be
backported. But with some caution because the code has probably changed
several times. Note that nobody has ever noticed this issue. So the need to
backport this patch must be evaluated for each branch.
As for TCP rules, HTTP rules from defaults section are now evaluated. These
rules are evaluated before those of the proxy. The same default ruleset
cannot be attached to the frontend and the backend. However, at this stage,
we take care not to execute the same ruleset twice. So, in theory, a
frontend and a backend could use the same defaults section. In this case,
the default ruleset is executed before all others and only once.
TCP rules from defaults section are now evaluated. These rules are evaluated
before those of the proxy. For L7 TCP rules, the same default ruleset cannot
be attached to the frontend and the backend. However, at this stage, we take
care not to execute the same ruleset twice. So, in theory, a frontend and a
backend could use the same defaults section. In this case, the default
ruleset is executed before all others and only once.
TCP and HTTP rules can now be defined in defaults sections, but only those
with a name. Because these rules may use conditions based on ACLs, ACLs can
also be defined in defaults sections.
However there are some limitations:
* A defaults section defining TCP/HTTP rules cannot be used by a defaults
section
* A defaults section defining TCP/HTTP rules cannot be used by a listen
section
* A defaults section defining TCP/HTTP rules cannot be used by frontends
and backends at the same time
* A defaults section defining 'tcp-request connection' or 'tcp-request
session' rules cannot be used by backends
* A defaults section defining 'tcp-response content' rules cannot be used
by frontends
The TCP request/response inspect-delay of a proxy is now inherited from the
defaults section it uses. For now, these rules are only parsed. No evaluation is
performed.
With commit eaba25dd9 ("BUG/MINOR: tcpcheck: Don't use arg list for
default proxies during parsing"), we restricted the use of sample fetches in
tcpcheck rules defined in a defaults section to those depending on explicit
arguments only. This means a tcpcheck rule defined in a defaults section
could not rely on arguments left unresolved during configuration parsing.
Thanks to recent changes, it is now possible again.
This patch is mandatory to support TCP/HTTP rules in defaults sections.