The xprt_quic module was too large and did not reflect the true
architecture, in contrast to the other protocols in haproxy.
Extract the code related to the XPRT layer and keep it under the
xprt_quic module. This code should only contain a simple API to
communicate between the QUIC lower layer and the connection/MUX.
The vast majority of the code has been moved into a new module named
quic_conn. This module is responsible for the implementation of the
QUIC lower layer. Conceptually, it overlaps with the kernel TCP
implementation when comparing the QUIC and HTTP/1-2 stacks of haproxy.
This should be backported up to 2.6.
There was some identical code between the xprt_quic and quic_enc
modules, namely the helpers on the QUIC varint type. Keep only the
version in the quic_enc file: this should help reduce the dependency
on the xprt_quic module.
Note that quic_max_int_by_size() has been removed and is replaced by the
identical quic_max_int().
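As a reminder, QUIC varints (RFC 9000) use 2 prefix bits to encode on
1, 2, 4 or 8 bytes, so a helper like quic_max_int() boils down to
something like this minimal sketch (not the actual haproxy code):

    #include <stddef.h>
    #include <stdint.h>

    /* Largest integer a QUIC varint of <sz> bytes can encode: the 2
     * prefix bits leave 6, 14, 30 or 62 usable bits.
     */
    static inline int64_t quic_max_int(size_t sz)
    {
        switch (sz) {
        case 1: return (1LL <<  6) - 1;
        case 2: return (1LL << 14) - 1;
        case 4: return (1LL << 30) - 1;
        case 8: return (1LL << 62) - 1;
        }
        return -1;
    }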
This should be backported up to 2.6.
Clean up the QUIC sources by adjusting the list of included headers to
the actual dependencies of each source file.
In some cases, xprt_quic.h was removed from the include list. This is
useful to help reduce the dependency on this single file and clean up
the QUIC haproxy architecture.
This should be backported up to 2.6.
Two prototypes in the quic_tls module were not identical to the actual
function definitions.
* quic_tls_decrypt2(): the const attribute on the second argument is
  dropped, to be able to use it with EVP_CIPHER_CTX_ctrl(). As a
  consequence of this change, the token field of quic_rx_packet is now
  declared as non-const.
* quic_tls_generate_retry_integrity_tag(): the second argument type
  differed between the two. Adjust this by fixing it to unsigned char
  to match the EVP_EncryptUpdate() SSL function.
This situation did not seem to have any visible effect. However, this
is clearly undefined behavior and should be treated as a bug.
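For illustration, a hypothetical, simplified example of such a
mismatch; when the source file does not include the header, the
compiler cannot catch it, and the behavior is undefined even when it
appears to work:

    /* foo.h: the declaration promises not to modify <in> */
    int decrypt(unsigned char *out, const unsigned char *in, size_t len);

    /* foo.c (does not include foo.h): the const is missing, so the
     * declaration and the definition have incompatible types
     */
    #include <string.h>
    int decrypt(unsigned char *out, unsigned char *in, size_t len)
    {
        memcpy(out, in, len);
        return 0;
    }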
This should be backported up to 2.6.
Some variables related to QUIC TLS were defined in a header file:
their definitions are now properly moved into the implementation file,
with only declarations left in the header.
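A minimal sketch of the pattern, using a hypothetical variable name:

    /* quic_tls.h: only a declaration, visible to all includers */
    extern const unsigned char quic_initial_salt[20];

    /* quic_tls.c: the single definition lives here */
    const unsigned char quic_initial_salt[20] = {
        0x38, 0x76, 0x2c, 0xf7, /* ... remaining bytes ... */
    };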
This should be backported up to 2.6.
In github issue #1878, Bart Butler reported observing turn-around states
(1 second pause) after connection retries going to different servers,
while this ought not to happen.
In fact it does happen because back_handle_st_cer() enforces the TAR
state for any algo that's not round-robin. This means that even leastconn
has it, as well as hashes after the number of servers changed.
Prior to doing that, the call to stream_choose_redispatch() has already
had a chance to perform the correct choice and to check the algo and
the number of retries left. So instead we should just let that function
deal with the algo when needed (and focus on deterministic ones), and
let the former just obey. Bart confirmed that the fixed version works
as expected (no more delays during retries).
This may be backported to older releases, though it doesn't seem very
important. At least Bart would like to have it in 2.4 so let's go there
for now after it has cooked a few weeks in 2.6.
Idle connections do not work on 32-bit machines due to an alignment
issue causing the connection nodes to be indexed with their lower 32
bits set to zero and the higher 32 ones containing the lower 32 bits
of the hash. The cause is the use of an ebmb_node followed by 64-bit
aligned data: on this platform ebmb_node is only 32-bit aligned,
leaving a hole before the following hash, which is a uint64_t:
$ pahole -C conn_hash_node ./haproxy
struct conn_hash_node {
        struct ebmb_node     node;        /*     0    20 */

        /* XXX 4 bytes hole, try to pack */

        int64_t              hash;        /*    24     8 */
        struct connection *  conn;        /*    32     4 */

        /* size: 40, cachelines: 1, members: 3 */
        /* sum members: 32, holes: 1, sum holes: 4 */
        /* padding: 4 */
        /* last cacheline: 40 bytes */
};
Instead, eb64 nodes should be used when it comes to simply storing a
64-bit key, and that is what this patch does.
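A sketch of the resulting node (exact field order may differ): the
64-bit key is part of the eb64_node itself and properly aligned, so
the hole disappears:

    struct conn_hash_node {
        struct eb64_node node;   /* carries the 64-bit hash as its key */
        struct connection *conn; /* connection owning this node */
    };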
For backports, a variant consisting in simply marking the "hash" member
with a "packed" attribute on the struct also does the job (tested), and
might be preferable if the fix is difficult to adapt. Only 2.6 and 2.5
are affected by this.
Correct a comment in include/haproxy/global-t.h and
include/haproxy/tools.h.
Replace the CLI command 'set global-key <key>' with
'set anon global-key <key>' in order to find it easily when you don't
remember it: the suggestions can guide you when you just type
'set anon'.
No backport needed, except if anonymization mechanism is backported.
Add an option to dump the line numbers of the configuration file when
it is dumped. Other options can easily be added. Options are separated
by ',' when typing the command line:
'./haproxy -dC[key],line -f [file]'
No backport needed, except if anonymization mechanism is backported.
Add a <hasport> parameter to return a simple hash or the ipstring when
the ipstring has no port. Do not hash if the scramble key is null. Add
the option PA_O_PORT_RESOLVE to str2sa_range(). Add a case for UNIX
addresses.
These modifications allow hash_ipanon() to be used in the CLI section
in order to dump the same anonymized form of an address in the
configuration file and on the CLI.
No backport needed, except if anonymization mechanism is backported.
While reading the recent changes around mt_list_for_each_entry_safe() I
noticed a spurious "q" at the beginning of a line introduced by commit
455843721 ("CLEANUP: list: Fix mt_list_for_each_entry_safe indentation")
and that visually confusing multi-line comments missing the trailing '\'
character were introduced by previous commit 60cffbaca ("MINOR: list:
documenting mt_list_for_each_entry_safe() macro"), which at first glance
made the macro look broken. In addition, multi-line comments must end
with a "*/" on its own line to instantly spot where it ends without
having to read the whole line, like this:
/* we know from the above that foo is always valid
* here so it's safe to end the string:
*/
*(unsigned char *)foo = 0;
Not like this:
/* we know from the above that foo is always valid
* here so it's safe to end the string: */
*(unsigned char *)foo = 0;
Finally, the macro's main comment mentioned the wrong macro name and
types, and was randomly indented.
Upon a reload with the master CLI, the FD of the master CLI session is
received by the internal socketpair listener.
This session is used to display the status of the reload and will then
close.
- Adding some comments in mt_list_for_each_entry_safe() macro to make it
somehow understandable.
The macro is performing critical stuff but was not documented at all.
Moreover, nested loops with conditional tricks are used,
making it even harder to understand the steps performed in it.
- Updating mt_list_for_each_entry_safe usage example.
- Added a "FIXME:" comment in a specific condition that seems to
never be reached even when deeply stress-testing mt_lists
(using test_list binary provided in the repository).
Pollers that support busy polling spend a lot of time (and cause
contention) updating the global date when they're looping over themselves
while it serves no purpose: what's needed is only an update on the local
date to know when to stop looping.
This patch splits clock_update_date() into a pair of local and global
update functions, so that pollers can be easily improved.
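A sketch of the resulting split, assuming the names
clock_update_local_date() and clock_update_global_date(); busy-polling
pollers can then call only the cheap local variant inside their spin
loop:

    void clock_update_date(int max_wait, int interrupted)
    {
        /* local date: enough to decide when to stop looping */
        clock_update_local_date(max_wait, interrupted);
        /* global date: shared, contended, only when really needed */
        clock_update_global_date();
    }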
Patrick Hemmer reported improper log behavior when using log-format to
escape log data (+E option): some bytes were truncated from the
output:
- the escape_string() function now takes an extra parameter that
  allows the caller to specify an input string stop pointer in case
  the input string is not guaranteed to be zero-terminated (see the
  sketch below).
- minor checks were added into lf_text_len() to make sure the dst
  string will not overflow.
- lf_text_len() now makes proper use of the escape_string() function.
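A sketch of the extended escape_string() prototype (the exact argument
list is an assumption):

    /* Encodes <string>, ending at <string_stop> (which may point past
     * a non zero-terminated buffer), into the <start>..<stop> output
     * area, escaping with <escape> the characters flagged in <map>.
     */
    char *escape_string(char *start, char *stop,
                        const char escape, const long *map,
                        const char *string, const char *string_stop);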
This should be backported as far as 1.8.
The MUX QUIC snd_buf operation will return early if a qcs instance was
reset. In this case, the HTX is left untouched and the callback
returns the whole buffer size. This leads to an undefined behavior as
the stream layer is notified about a transfer but does not see its HTX
buffer emptied. In the end, the transfer may stall, which will lead to
a leak on the session.
To fix this, the HTX buffer is now reset when snd_buf is
short-circuited.
This should fix the issue as now the stream layer can continue the
transfer until its completion.
This patch has already been tested by Tristan and is reported to solve
the github issue #1801.
This should be backported up to 2.6.
Factorize the code common to the h3 and hq-interop snd_buf operations.
It is inserted in MUX QUIC's own snd_buf callback.
The h3/hq-interop API has been adjusted to directly receive an HTX
message instead of a plain buffer. This led to extracting part of MUX
QUIC snd_buf into the qmux_http module.
This should be backported up to 2.6.
Extract the functions dealing with HTX outside of MUX QUIC. For the
moment, only the rcv_buf stream operation is concerned.
The main objective is to be able to support both TCP and HTTP proxy mode
with a common base and add specialized modules on top of it.
This should be backported up to 2.6.
QUIC MUX implements several APIs to interface with the stream,
quic-conn and app-ops layers. It is planned to better separate these
roles, possibly by using several files.
The first step is to extract QUIC MUX traces into a dedicated source
file. This will allow traces to be reused in multiple files.
The main objective is to be able to support both TCP and HTTP proxy
modes with a common base and add specialized modules on top of it.
This should be backported up to 2.6.
nb_hreq is a counter on the qcc for active HTTP requests. It is
incremented for each qcs where a full HTTP request was received. It is
decremented when the stream is closed locally:
- on HTTP response fully transmitted
- on stream reset
A bug will occur if a stream is reset without having processed a full
HTTP request: nb_hreq will be decremented whereas it was never
incremented. This will lead to a crash when building with
DEBUG_STRICT=2. If BUG_ON_HOT are not active, the nb_hreq counter will
wrap, which may break the timeout logic for the connection.
This bug was triggered on haproxy.org. It can be reproduced by
simulating the reception of a STOP_SENDING frame instead of a STREAM one
by patching qc_handle_strm_frm():
+ if (quic_stream_is_bidi(strm_frm->id))
+ qcc_recv_stop_sending(qc->qcc, strm_frm->id, 0);
+ //ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len,
+ // strm_frm->offset.key, strm_frm->fin,
+ // (char *)strm_frm->data);
To fix this bug, a qcs is now flagged with a new flag, QC_SF_HREQ_RECV.
It is set when the full HTTP request is received. When the stream is
closed locally, nb_hreq will be decremented only if this flag was set.
This must be backported up to 2.6.
This commit adds a new command line option -dC to dump the configuration
file. An optional key may be appended to -dC in order to produce an
anonymized dump using this key. The anonymizing process uses the same
algorithm as the CLI so that the same key will produce the same hashes
for the same identifiers. This way an admin may share an anonymized
extract of a configuration to match against live dumps. Note that key 0
will not anonymize the output. However, in any case, the configuration
is dumped after tokenizing, thus comments are lost.
In order to allow users to dump internal states using a specific key
without changing the global one, we're introducing a key in the CLI's
appctx. This key is preloaded from the global one when "set anon on"
is used (and if none exists, a random one is assigned). And the key
can optionally be assigned manually for the whole CLI session.
A "show anon" command was also added to show the anon state, and the
current key if the user has sufficient permissions. In addition, a
"debug dev hash" command was added to test the feature.
Add a uint32_t key in global to hash words with it. A new CLI command
'set global-key <key>' was added to change the global anonymizing key.
The global key may also be set in the configuration using the global
"anonkey" directive. For now this key is not used.
These macros and functions will be used to anonymize strings by
producing a short hash. This will allow config elements to be matched
against dump elements without revealing the original data. This will
later be used to anonymize configuration parts and CLI command output.
For now only strings, identifiers and addresses are supported, but the
model is easily extensible.
Small cleanup on snd_buf for application protocol layer.
* do not export h3_snd_buf
* replace stconn by a qcs argument. This is better as h3/hq-interop only
uses the qcs instance.
This should be backported up to 2.6.
The new functions h1c_show_flags() and h1s_show_flags() decode the flags
state into a string, and are used by dev/flags:
$ ./dev/flags/flags h1c 0x2200
h1c->flags = H1C_F_ST_READY | H1C_F_ST_ATTACHED
$ ./dev/flags/flags h1s 0x190
h1s->flags = H1S_F_BODYLESS_RESP | H1S_F_NOT_FIRST | H1S_F_WANT_KAL
The same was performed for the H2 multiplexer. The H1C and H1S flags
are moved into a dedicated header file. It will mainly be used to be
able to decode mux-h1 flags from the flags utility.
In this patch, we only move the flags to mux_h1-t.h.
H3 SETTINGS emission has recently been delayed. The idea is to send it
with the first STREAM to reduce sendto syscall invocation. This was
implemented in the following patch :
3dd79d378c
MINOR: h3: Send the h3 settings with others streams (requests)
This patch works fine under nominal conditions. However, it will cause a
crash if an HTTP/3 connection is released before having sent any data,
for example when receiving an invalid first request. In this case,
qc_release will first free the qcc.app_ops HTTP/3 application protocol
layer via the release callback. Then qc_send is called to emit any
closing frames built by the app_ops release invocation. However, in
qc_send, as no data has been sent, it will try to complete the
application layer protocol initialization, with a SETTINGS emission
for HTTP/3. Thus, qcc.app_ops is reused, which is invalid as it has
just been freed. This will cause a
crash with h3_finalize in the call stack.
This bug can be reproduced artificially by generating incomplete
HTTP/3 requests. This will in time trigger the http-request timeout
without any data sent. This is done by editing the
qc_handle_strm_frm() function:
- ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len,
+ ret = qcc_recv(qc->qcc, strm_frm->id, strm_frm->len - 1,
strm_frm->offset.key, strm_frm->fin,
(char *)strm_frm->data);
To fix this, the application layer closing API has been adjusted to be
done in two steps. A new shutdown callback is implemented: it is used
by the HTTP/3 layer to generate the GOAWAY frame in the qc_release
prologue. The application layer context qcc.app_ops is then freed
later in qc_release via the release operation, which is now only used
to release app layer resources. This fixes the problem as the
intermediary qc_send invocation will be able to reuse app_ops before
it is freed.
This patch fixes the crash, but it would be better to adjust the H3
SETTINGS emission in case of early connection closing: in this case,
there is no need to send them. This should be implemented in a future
patch.
This should fix the crash recently experienced by Tristan in github
issue #1801.
This must be backported up to 2.6.
With QuicTLS the set_encryption_secrets callback is always called with
both the read_secret and the write_secret.
However this is not the case with LibreSSL, which uses the
set_read_secret()/set_write_secret() mechanism. It still provides the
set_encryption_secrets() callback, which is called with a NULL
parameter for the write_secret during the read, and for the
read_secret during the write.
The key exchange was not designed in haproxy to be called separately
for read and write, so this patch allows calls with a NULL read or
write key.
httpclient_new_from_proxy() is a variant of httpclient_new() which
allows creating the requests from a different proxy.
The proxy and its 2 servers are now stored in the httpclient structure.
The proxy must have been created with httpclient_create_proxy() to be
used.
The httpclient_postcheck() callback will finish the initialization of
all proxies created with PR_CAP_HTTPCLIENT.
httpclient_create_proxy() is a function which creates a proxy that could
be used for the httpclient. It will allocate a proxy, a raw server and
an ssl server.
This patch moves most of the code from httpclient_precheck() into a
generic function httpclient_create_proxy().
The proxy will have the PR_CAP_HTTPCLIENT capability.
This could be used for specific httpclient instances that need
different proxy settings.
The init of the tcp sink, particularly for SSL, was done too early in
the code, during parsing, and this could cause a crash, especially if
nbthread was not configured.
This was detected by William using ASAN on a new regtest on log
forward.
This patch adds the 'struct proxy' created for a sink to a list, and
this list is now submitted to the same init code as the main proxies
list or the log_forward's proxies list. Doing this, we are assured to
use the right init sequence. It also removes the SSL init code from
the post section parsing.
This patch should be backported as far as v2.2
Note: this fix uses 'goto' labels created by commit
'BUG/MAJOR: log-forward: Fix log-forward proxies not fully initialized'
but this code didn't exist before v2.3 so this patch needs to be
adapted for v2.2.
The new functions h2c_show_flags() and h2s_show_flags() decode the flags
state into a string, and are used by dev/flags:
$ ./dev/flags/flags h2c 0x0600
h2c->flags = H2_CF_DEM_IN_PROGRESS | H2_CF_DEM_SHORT_READ
$ ./dev/flags/flags h2s 0x7003
h2s->flags = H2_SF_HEADERS_RCVD | H2_SF_OUTGOING_DATA | H2_SF_HEADERS_SENT \
| H2_SF_ES_SENT | H2_SF_ES_RCVD
Originally in 1.8 we wanted to have an independent mux that could possibly
be disabled and would not impose dependencies on the outside. Everything
would fit into a single C file and that was fine.
Nowadays muxes are unavoidable, and not being able to easily inspect them
from outside is sometimes a bit of a pain. In particular, the flags utility
still cannot be used to decode their flags.
As a first step towards this, this patch moves the flags and enums to
mux_h2-t.h, as well as the two state decoding inline functions. It also
dropped the H2_SS_*_BIT defines that nobody uses. The mux_h2.c file remains
the only one to include that for now.
The new function is fd_show_flags() and it reports known FD flags:
$ ./dev/flags/flags fd 0x000121
fd->flags = FD_POLL_IN | FD_EV_READY_W | FD_EV_ACTIVE_R
The fallback macros for when stdio is not there didn't have the "..."
and were causing build issues on platforms with stricter dependencies
between includes.
This patch is a prerequisite for #1626.
Adding PAUSED state to the list of available proxy states.
The flag is set when the proxy is paused at runtime (pause_listener()).
It is cleared when the proxy is resumed (resume_listener()).
It should be backported to 2.6, 2.5 and 2.4
A minor API change was performed in listener(.c/.h) to restore consistency
between stop_listener() and (resume/pause)_listener() functions.
LISTENER_LOCK was never locked prior to calling stop_listener():
lli variable hint is thus not useful anymore.
Added PROXY_LOCK locking in (resume/pause)_listener() functions
with related lpx variable hint (prerequisite for #1626).
It should be backported to 2.6, 2.5 and 2.4
The new function is strm_et_show_flags(). Only the error type is
handled at the moment, as a bit more complex logic is needed to
mix the values and enums present in some fields.
The two new functions are se_show_flags() and sc_show_flags().
Maybe something could be done for SC_ST_* values but as it's a
small enum, a simple switch/case should work fine.
The two new functions are chn_show_analysers() and chn_show_flags().
They work on an existing buffer so one was declared in flags.c for this
purpose. File flags.c does not have to know about channel flags anymore.
Some of our flags have enums inside a mask. The new macro __APPEND_ENUM
is able to deal with that by comparing the flag's value against an exact
one under the mask. One needs to take care of eliminating the zero value
though, otherwise delimiters will not always be properly placed (e.g. if
some flags were dumped before and what remains is exactly zero). The
bits of the mask are cleared only upon exact matches.
The "flags" utility is useful but painful to maintain up to date. This
commit aims at providing a low-maintenance solution to keep flags up to
date, by proposing some macros that build a string from a set of flags
in a way that requires the least possible verbosity.
The idea will be to add an inline function dedicated to this just after
the flags declaration, and enumerate the flags one is interested in, and
that function will fill a string based on them.
Placing this inside the type files allows both haproxy and external tools
like "flags" to use it, but comes with a few constraints. First, the
files will be slightly less readable if these functions are huge, so they
need to stay as compact as possible. Second, the function will need
snprintf() and we don't want to include stdio.h in type files as it
proved to be particularly heavy and to cause definition headaches in
the past.
As such the file here only contains a macro enclosed in #ifdef EOF (that
is defined in stdio), and provides an alternate empty one when no stdio
is defined. This way it's the caller that has to include stdio first or
it won't get anything back, and in practice the locations relying on
this always have it.
The macro has to be used in 3 steps:
- prologue: dumps 0 and exits if the value is zero
- flags: the macro can be recursively called and it will push the
flag from bottom to top so that they appear in the same order as
today without requiring to be declared the other way around
- epilogue: dump remaining flags that were not identified
The macro was arranged so that a single character can be used with no
other argument to declare all flags at once. Example:
#define _(n, ...) __APPEND_FLAG(buf, len, del, flg, n, #n, __VA_ARGS__)
_(0);
_(X_FLAG1, _(X_FLAG2, _(X_FLAG3, _(X_FLAG4))));
_(~0);
#undef _
Existing files will have to be updated to rely on it, and more files
could come soon.
This is the ->finalize application callback which prepares the
unidirectional STREAM frames for the h3 settings and wakes up the mux
I/O handler to send them. As haproxy is at the same time always
waiting for the client request, this makes haproxy call sendto() to
send only about 20 bytes of stream data. Furthermore, in case of heavy
loss, this gives short h3 requests fewer chances to succeed.
Drawback: as at this time the mux sends its streams in ascending ID
order, stream 0 is always embedded before the unidirectional stream 3
carrying the h3 settings. Nevertheless, as these settings may be lost
and received after other h3 request streams, this is permitted by the
RFC.
Perhaps there is a better way to do this. This will have to be checked
with Amaury.
Must be backported to 2.6.
It is possible to speed up the handshake completion, but only once per
connection, as mentioned in RFC 9002 "6.2.3. Speeding up Handshake
Completion".
Add a flag to prevent this process from being run several times
(see https://www.rfc-editor.org/rfc/rfc9002#name-speeding-up-handshake-compl).
Must be backported to 2.6.
This removes all the hard-coded 8-bit and 256 entries to use a pair of
macros instead so that we can more easily experiment with larger table
sizes if needed.
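A minimal sketch of the idea, with hypothetical macro names:

    /* one knob instead of scattered 8s and 256s */
    #define HASH_BITS    8
    #define HASH_BUCKETS (1U << HASH_BITS)

    /* indexed with hash & (HASH_BUCKETS - 1); scales with HASH_BITS */
    static unsigned int counters[HASH_BUCKETS];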
There used to be one tid for tasklets and a thread_mask for tasks.
Since 2.7, both tasks and tasklets use a tid (albeit with a very
slight semantic difference for the negative value), so in order to
limit code duplication and to ease debugging, it makes sense to move
tid into the common part. One limitation is that it will leave a hole
in the structure, but we now have the wake_date which is always
present and can move there as well to plug the hole.
This results in something overall pretty clean (and cleaner than before),
with the low-level stuff (state,tid,process,context) appearing first, then
the caller stuff (caller,wake_date,calls,debug) next, and finally the
type-specific stuff (rq/wq/expire/nice).
Instead of storing an index that's swapped at every call, let's use the
two pointers as a shifting history. Now we have a permanent "caller"
field that records the last caller, and an optional prev_caller in the
debug section enabled by DEBUG_TASK that keeps a copy of the previous
caller one. This way, not only is it much easier to follow what's
happening during debugging, but it saves 8 bytes in the struct task in
debug mode and still keeps it under 2 cache lines in nominal mode, and
this will finally be usable everywhere and later in profiling.
The caller_idx was also used as a hint that the entry was freed, in order
to detect wakeup-after-free. This was changed by setting caller to -1
instead and preserving its value in caller[1].
Finally, the operations were made atomic. That's not critical but since
it's used for debugging and race conditions represent a significant part
of the issues in multi-threaded mode, it seems wise to at least eliminate
some possible factors of faulty analysis.
appctx_wakeup() relies on task_wakeup(), but since it calls it from a
function, the calling place is always appctx_wakeup() itself, which is
not very useful.
Let's turn it to a macro so that we can log the location of the caller
instead. As an example, the cli_io_handler() which used to be seen as
this:
(gdb) p *appctx->t.debug.caller[0]
$10 = {
func = 0x9ffb78 <__func__.37996> "appctx_wakeup",
file = 0x9b336a "include/haproxy/applet.h",
line = 110,
what = 1 '\001',
arg8 = 0 '\000',
arg32 = 0
}
Now shows the more useful:
(gdb) p *appctx->t.debug.caller[0]
$6 = {
func = 0x9ffe80 <__func__.38641> "sc_app_chk_snd_applet",
file = 0xa00320 "src/stconn.c",
line = 996,
what = 6 '\006',
arg8 = 0 '\000',
arg32 = 0
}
This reduces the task struct by 8 bytes, reduces the code size a little
bit by simplifying the calling convention (one argument dropped), and
as a bonus provides the function name in the caller.
The memstats code currently defines its own file/function/line number,
type and extra pointer. We don't need to keep them separate and we can
easily replace them all with just a struct ha_caller. Note that the
extra pointer could be converted to a pool ID stored into arg8 or
arg32 and be dropped as well, but this would first require to define
IDs for pools (which we currently do not have).
The purpose of this structure is to assemble all constant parts of a
generic calling point for a specific event. These ones are created by
the compiler as a static const element outside of the code path, so
they cost nothing in terms of CPU, and a pointer to that descriptor
can be passed to the place that needs it. This is very similar to what
is being done for the mem_stat stuff.
This will be useful to simplify and improve DEBUG_TASK.
There are a few places where it's convenient to hash a pointer to compute
a statistics bucket. Here we're basically reusing the hash that was used
by memory profiling with a minor update that the multiplier was corrected
to be prime and stand by its promise to have equal numbers of 1 and 0,
and that 32-bit platforms won't lose range anymore.
A two-pointer variant was also added.
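A minimal sketch of the single-pointer variant, assuming the classic
64-bit golden-ratio multiplier (the actual constant in the patch may
differ); <bits> is expected to be between 1 and 32:

    static inline unsigned int ptr_hash(const void *p, unsigned int bits)
    {
        unsigned long long x = (unsigned long)p;

        x *= 0x9E3779B97F4A7C15ULL; /* odd, well-spread bit pattern */
        return x >> (64 - bits);    /* keep the top <bits> bits, so
                                     * 32-bit platforms keep full range */
    }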
It was a mistake to put these two fields in the struct task. This
was added in 1.9 via commit 9efd7456e ("MEDIUM: tasks: collect per-task
CPU time and latency"). These fields are used solely by streams in
order to report the measurements via the lat_ns* and cpu_ns* sample
fetch functions when task profiling is enabled. For the rest of the
tasks, this is pure CPU waste when profiling is enabled, and memory
waste 100% of the time, as the point where these latencies and usages
are measured is in the profiling array.
Let's move the fields to the stream instead, and have process_stream()
retrieve the relevant info from the thread's context.
The struct task is now back to 120 bytes, i.e. almost two cache lines,
with 32 bits still available.
The profile entry that corresponds to the current task/tasklet being
profiled is now stored into the thread's context. This will allow it
to be accessed from the tasks themselves. This is needed for an upcoming
fix.
When task profiling is enabled, the scheduler can measure and report
the cumulated time spent in each task and their respective latencies. But
this was wrong for tasks with few wakeups as well as for self-waking ones,
because the call date needed to measure how long it takes to process the
task is retrieved in the task itself (->wake_date was turned to the call
date), and we could face two conditions:
- a new wakeup while the task is executing would reset the ->wake_date
field before returning and make abnormally low values being reported;
that was likely the case for task_run_applet for self-waking applets;
- when the task dies, NULL is returned and the call date couldn't be
retrieved, so that CPU time was not being accounted for. This was
particularly visible with process_stream() which is usually called
only twice per request, and whose time was systematically halved.
The cleanest solution here is to keep in mind that the scheduler already
uses quite a bit of local context in th_ctx, and place the intermediary
values there so that they cannot vanish. The wake_date has to be reset
immediately once read, and only its copy is used along the function. Note
that this must be done both for tasks and tasklets, and that until recently
tasklets were also able to report wrong values due to their sole dependency
on TH_FL_TASK_PROFILING between tests.
One nice benefit for future improvements is that such information will now
be available from the task without having to be stored into the task itself
anymore.
Since the tasklet part was computed on wrapping 32-bit arithmetic and
the task one was on 64-bit, the values have now been consistently
moved to 32-bit as it's already largely sufficient (4s spent in a task
is more
than twice what the watchdog would tolerate). Some further cleanups might
be necessary, but the patch aimed at staying minimal.
Task profiling output after 1 million HTTP request previously looked like
this:
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 2012338 4.850s 2.410us 12.91s 6.417us
process_stream 2000136 9.594s 4.796us 34.26s 17.13us
sc_conn_io_cb 2000135 1.973s 986.0ns 30.24s 15.12us
h1_timeout_task 137 - - 2.649ms 19.34us
accept_queue_process 49 152.3us 3.107us 321.7yr 6.564yr
main+0x146430 7 5.250us 750.0ns 25.92us 3.702us
srv_cleanup_idle_conns 1 559.0ns 559.0ns 918.0ns 918.0ns
task_run_applet 1 - - 2.162us 2.162us
Now it looks like this:
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 2014194 4.794s 2.380us 13.75s 6.826us
process_stream 2000151 20.01s 10.00us 36.04s 18.02us
sc_conn_io_cb 2000148 2.167s 1.083us 32.27s 16.13us
h1_timeout_task 198 54.24us 273.0ns 3.487ms 17.61us
accept_queue_process 52 158.3us 3.044us 409.9us 7.882us
main+0x1466e0 18 16.77us 931.0ns 63.98us 3.554us
srv_cleanup_toremove_conns 8 282.1us 35.26us 546.8us 68.35us
srv_cleanup_idle_conns 3 149.2us 49.73us 8.131us 2.710us
task_run_applet 3 268.1us 89.38us 11.61us 3.871us
Note the two-fold difference on process_stream().
This feature is essentially used for debugging so it has extremely limited
impact. However it's used quite a bit more in bug reports and it would be
desirable that at least 2.6 gets this fix backported. It depends on at least
these two previous patches which will then also have to be backported:
MINOR: task: permanently enable latency measurement on tasklets
CLEANUP: task: rename ->call_date to ->wake_date
This field is misnamed because its real and important content is the
date the task was woken up, not the date it was called. It temporarily
holds the call date during execution but this remains confusing. In
fact before the latency measurements were possible it was indeed a call
date. Thus is will now be called wake_date.
This change is necessary because a subsequent fix will require the
introduction of the real call date in the thread ctx.
When tasklet latency measurement was enabled in 2.4 with commit b2285de04
("MINOR: tasks: also compute the tasklet latency when DEBUG_TASK is set"),
the feature was conditioned on DEBUG_TASK because the field would add 8
bytes to the struct tasklet.
This approach was not a very good idea because the struct ends on an
int anyway and thus finishes with a 32-bit hole regardless of the
presence of this field. What is true however is that adding it turned
a 64-byte
of this field. What is true however is that adding it turned a 64-byte
struct to 72-byte when caller debugging is enabled.
This patch revisits this with a minor change. Now only the lowest 32
bits of the call date are stored, so they always fit in the remaining
hole, and this allows to remove the dependency on DEBUG_TASK. With
debugging off, we're now seeing a 48-byte struct, and with debugging
on it's exactly 64 bytes, thus still exactly one cache line. 32 bits
allow a latency of 4 seconds on a tasklet, which already indicates a
completely dead process, so there's no point storing the upper bits at
all. And even in the event it would happen once in a while, the lost
upper bits do not really add any value to the debug reports. Also, now
one tasklet wakeup every 4 billion will not be sampled due to the test
on the value itself. Similarly we just don't care, it's statistics and
the measurements are not 9-digit accurate anyway.
There's a subtle (harmless) bug in task_instant_wakeup(). As it uses
some tasklet code instead of some task code, the debug part also acts
on the tasklet equivalent, and the call_date is only set when
DEBUG_TASK is set instead of unconditionally like with tasks. As such,
without this debugging macro, call dates are not updated for tasks
woken this way.
There isn't any impact yet because this function was introduced in 2.6 to
solve certain classes of issues and is not used yet, and in the worst case
it would only affect the reported latency time.
This may be backported to 2.6 in case a future fix would depend on it but
currently will not fix existing code.
The tasklet's call date was not reset, so if profiling was enabled while
some tasklets were in the run queue, their initial random value could be
used to preload a bogus initial latency value into the task profiling bin.
Let's just zero the initial value.
This should be backported to 2.4 as it was brought with initial commit
b2285de04 ("MINOR: tasks: also compute the tasklet latency when DEBUG_TASK
is set"). The impact is very low though.
To work, quic_pin_cid_to_tid() must set cid[0] to a value whose modulo
<global.nbthread> equals <target_tid>. For each integer n,
(n - (n % m)) + d always has d as its modulo m (with d < m).
So, this statement seemed correct:
cid[0] = cid[0] - (cid[0] % global.nbthread) + target_tid;
except when n wraps, or when another modulo is applied to the addition
result. Here, with 8-bit modular arithmetic, if m does not divide 256,
this cannot work for values which wrap when incremented by d.
For instance with n=255, m=3 and d=1, the formula yields 0 (it should
be d).
To fix this, we first limit cid[0] to 255 - <target_tid> to prevent
cid[0] from wrapping.
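A sketch of the resulting sequence:

    /* after clamping, cid[0] - (cid[0] % m) + d <= (255 - d) + d = 255,
     * so the 8-bit addition can no longer wrap
     */
    if (cid[0] > 255 - target_tid)
        cid[0] = 255 - target_tid;
    cid[0] = cid[0] - (cid[0] % global.nbthread) + target_tid;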
Thank you to @esb for having reported this issue in GH #1855.
Must be backported to 2.6
LibreSSL does not implement EVP_chacha20_poly1305() with EVP_CIPHER but
uses the EVP_AEAD API instead:
https://man.openbsd.org/EVP_AEAD_CTX_init
This patch disables this cipher for libreSSL for now.
When building HAProxy with USE_QUIC and libressl 3.6.0, the
ssl_sock_switchtx_cbk symbol is not found because libressl does not
implement the client_hello_cb.
A ssl_sock_switchtx_cbk version for the servername callback is available
but wasn't exported correctly.
This helper will be called for muxes that provide it and will be used
to let the mux provide extra information about the stream attached to
a stream descriptor. A line prefix is passed in argument so that the
mux is free to break long lines without breaking indent. No prefix
means no line breaks should be produced (e.g. for short dumps).
Some recent traces started to show confusing stream pointers ending with
0xe. The reason was that the stream's obj_type was almost unused in the
past and was stuffed in a hole in the structure. But now it's present in
all "show sess all" outputs and having to mentally match this value against
another one that's 0x17e lower is painful. The solution here is to move the
obj_type at the top, like in almost every other structure, but without
breaking the efficient layout.
This patch moves a few fields around and manages to both plug some holes
(16 bytes saved, 976 to 960) and avoid channels needlessly crossing cache
boundaries (res was spread over 3 lines vs 2 now).
Nothing else was changed. It would be desirable to backport this to 2.6
since it's where dumps are currently being processed the most.
As outlined in commit f7ebe584d7 ("BUILD: debug: Add braces to if
statement calling only CHECK_IF()"), the BUG_ON() family of macros
is incorrectly defined to be empty when debugging is disabled, and
that can lead to trouble. Make sure they always fall back to the
usual "do { } while (0)". This may be backported to 2.6 if needed,
though no such issue was met there to date.
This bug arrived with this commit:
"MINOR: quic: Add reusable cipher contexts for header protection"
haproxy could crash because of missing cipher context initializations
for the header protection and draft-v2 Initial secrets. This was due
to the fact that these initializations, both for RX and TX secrets,
were done outside of qc_new_isecs(). The role of this function is
definitely to initialize these cipher contexts in addition to the
derived secrets. Indeed this function is called by qc_new_conn(),
which initializes the connection, but also by qc_conn_finalize(),
which calls qc_new_isecs() in case a different QUIC version was
negotiated by the peers from the one used by the client for its first
Initial packet.
This was reported by "v2" QUIC interop test with at least picoquic as client.
Must be backported to 2.6.
Shut the connect() warning of resolvers_finalize_config() when the
configuration was not emitted manually.
This shuts the warning for the "default" resolvers which is created
automatically for the httpclient.
Must be backported in 2.6.
It is only a real problem for agent-checks when there is no agent string to
send. The condition to disable TCP_QUICKACK was only based on the action
type following the connect one. But it is not always accurate. Indeed, for
agent-checks, there is always a SEND action. But if there is no "agent-send"
string defined, nothing is sent. In this case, this adds 200ms of latency
with no reason.
To fix the bug, a flag is now used on the CONNECT action to instruct there
are data that should be sent after the connect. For health-checks, this flag
is set if the action following the connect is a SEND action. For
agent-checks, it is set if an "agent-send" string is defined.
This patch should fix the issue #1836. It must be backported as far as 2.2.
Replace the ->rx.pqpkts MT_LIST member of the quic_enc_level struct
with a regular LIST. Same thing for the ->list MT_LIST member of the
quic_rx_packet struct.
Update the code accordingly. This was a reminiscence of the
multithreading support (several threads per connection).
Must be backported to 2.6
Some clients send a CONNECTION_CLOSE frame without acknowledging the
STREAM data haproxy has sent. In this case, when closing the
connection, if there were remaining data in the QUIC stream buffers,
they were not released.
Add a <closing> boolean argument to qc_stream_desc_free() to force the
release of the stream buffer memory upon connection closing.
Thank you to Tristan for having reported such a memory leak issue in GH #1801.
Must be backported to 2.6.
In ticket #1805 a user is impacted by the limitation of the size of
the CLI buffer when updating a ca-file.
This patch allows a user to append new certificates to a ca-file
instead of trying to put them all with "set ssl ca-file".
The implementation uses a new function ssl_store_dup_cafile_entry()
which duplicates a cafile_entry and its X509_STORE.
ssl_store_load_ca_from_buf() was modified to take an append parameter
so we could share the function for "set" and "add".
In order to be able to append new CAs to a cafile_entry,
ssl_store_load_ca_from_buf() was reworked and an "append" parameter
was added.
added.
The function is able to keep the previous X509_STORE which was already
present in the cafile_entry.
Implement quic_tls_rx_hp_ctx_init() and quic_tls_tx_hp_ctx_init() to
initialize such header protection cipher contexts for each RX and TX
part and for each packet number space, only once per connection.
Make qc_new_isecs() call these two functions to initialize the cipher contexts
of the Initial secrets. Same thing for ha_quic_set_encryption_secrets() to
initialize the cipher contexts of the subsequent derived secrets (0RTT, 1RTT,
Handshake).
Modify qc_do_rm_hp() and quic_apply_header_protection() to reuse these
cipher contexts.
Note that there is no need to modify the key update for the header protection.
The header protection secrets are never updated.
The CLI needs to reset the svcctx between commands, and there was nothing
done to handle this. Let's add appctx_reset_svcctx() to do that, it's the
closing equivalent of appctx_reserve_svcctx().
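A minimal sketch, mirroring appctx_reserve_svcctx():

    /* resets the svcctx so the next CLI command reserves its own */
    static inline void appctx_reset_svcctx(struct appctx *appctx)
    {
        appctx->svcctx = NULL;
    }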
This will have to be backported to 2.6 as it will be used by a subsequent
patch to fix a bug.
As specified by RFC 7540, multiple cookie headers are merged into a
single entry before being passed to an HTTP/1.1 connection. This step
is implemented during headers parsing in the h2 module.
Extract this code into the generic http_htx module. This will allow it
to be quickly reused for the HTTP/3 implementation, which has the same
requirement for cookie headers.
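For example, as specified by RFC 7540 #8.1.2.5, two received entries
such as:

    cookie: a=b
    cookie: c=d

are concatenated with "; " into a single entry before being passed to
the HTTP/1.1 side:

    cookie: a=b; c=d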
MUX notification on TX has been adjusted recently: the MUX will be
notified only when sending its own data, and not for example on
retransmission by the quic-conn layer. This is the subject of the
patch:
b29a1dc2f4
BUG/MINOR: quic: do not notify MUX on frame retransmit
A new flag QUIC_FL_CONN_RETRANS_LOST_DATA has been introduced to
differentiate qc_send_app_pkts invocations by the MUX from those made
directly by the quic-conn layer in quic_conn_app_io_cb(). However,
this is a first problem as internal quic-conn layer usage is not
limited to retransmission, for example for NEW_CONNECTION_ID emission.
Another, more important problem is that the send functions are also
called through quic_conn_io_cb() which has not been protected from MUX
notification. This could probably result in a crash when trying to
notify the MUX.
To fix both problems, the quic-conn flagging has been inverted: when
used by the MUX, the quic-conn is flagged with
QUIC_FL_CONN_TX_MUX_CONTEXT. To improve the API, the MUX must now use
qc_send_mux(), which ensures the flag is set. qc_send_app_pkts is now
static and can only be used by the quic-conn layer.
This must be backported wherever the previously mentioned patch is.
On STREAM emission, the quic-conn notifies the MUX through a callback
named qcc_streams_sent_done(). This also happens on retransmission: in
this case offsets are examined and the notification is ignored if
already seen.
However, this behavior has slightly changed since
e53b489826
BUG/MEDIUM: mux-quic: fix server chunked encoding response
Indeed, if the offset diff is null, the frame is not ignored anymore.
This is to support FIN notification with a final empty STREAM frame. A
side-effect of this is that if the last stream frame is retransmitted,
it won't be ignored in qcc_streams_sent_done().
In most cases, this side-effect is harmless as the qcs instance will
soon be freed after being closed. But if the qcs is still alive, this
will cause a BUG_ON crash as it is considered as locally closed.
This bug depends on delay conditions and seems to be extremely rare.
But it might be the reason for a crash seen on interop with the s2n
client on the http3 test case:
FATAL: bug condition "qcs->st == QC_SS_CLO" matched at src/mux_quic.c:372
call trace(16):
| 0x558228912b0d [b8 01 00 00 00 c6 00 00]: main-0x1c7878
| 0x558228917a70 [48 8b 55 d8 48 8b 45 e0]: qcc_streams_sent_done+0xcf/0x355
| 0x558228906ff1 [e9 29 05 00 00 48 8b 05]: main-0x1d3394
| 0x558228907cd9 [48 83 c4 10 85 c0 0f 85]: main-0x1d26ac
| 0x5582289089c1 [48 83 c4 50 85 c0 75 12]: main-0x1d19c4
| 0x5582288f8d2a [48 83 c4 40 48 89 45 a0]: main-0x1e165b
| 0x5582288fc4cc [89 45 b4 83 7d b4 ff 74]: qc_send_app_pkts+0xc6/0x1f0
| 0x5582288fd311 [85 c0 74 12 eb 01 90 48]: main-0x1dd074
| 0x558228b2e4c1 [48 c7 c0 d0 60 ff ff 64]: run_tasks_from_lists+0x4e6/0x98e
| 0x558228b2f13f [8b 55 80 29 c2 89 d0 89]: process_runnable_tasks+0x7d6/0x84c
| 0x558228ad9aa9 [8b 05 75 16 4b 00 83 f8]: run_poll_loop+0x80/0x48c
| 0x558228ada12f [48 8b 05 aa c5 20 00 48]: main-0x256
| 0x7ff01ed2e609 [64 48 89 04 25 30 06 00]: libpthread:+0x8609
| 0x7ff01e8ca163 [48 89 c7 b8 3c 00 00 00]: libc:clone+0x43/0x5e
To reproduce it locally, the code was artificially patched to produce
retransmissions and avoid qcs release.
In order to fix this and avoid future classes of similar problems, the
best way is to not call qcc_streams_sent_done() to notify the MUX for
retransmission. To implement this, we test if any of
QUIC_FL_CONN_RETRANS_OLD_DATA or the new flag
QUIC_FL_CONN_RETRANS_LOST_DATA is set. A new wrapper
qc_send_app_retransmit() has been added to set the new flag, as a
complement to the already existing qc_send_app_probing().
This must be backported up to 2.6.
Adjust the qc_send_app_pkts function: remove the <old_data> arg and
provide a new wrapper function qc_send_app_probing() which should be
used instead when probing with old data.
This simplifies the interface of the default function, most notably
for the MUX which does not interfere with retransmission.
The QUIC_FL_CONN_RETRANS_OLD_DATA flag is set/unset directly in the
wrapper qc_send_app_probing().
At the same time, the function documentation has been updated to
clarify arguments and return values.
This commit will be useful for the next patch to differentiate MUX and
retransmission send context. As a consequence, the current patch should
be backported wherever the next one will be.
Emit STREAM_LIMIT_ERROR if a client tries to open a unidirectional
stream with an ID greater than the value specified by our flow-control
limit. The code is similar to the bidirectional stream opening.
MAX_STREAMS_UNI emission is not implemented for the moment and is left
as a TODO. This should not be too urgent: in HTTP/3, a client has only
a limited use for unidirectional streams (H3 control stream + 2 QPACK
streams). This is covered by the value provided by haproxy in the
transport parameters.
This patch has been tagged with BUG as it should have prevented the
last crash reported on github issue #1808, when opening a new
unidirectional stream with an invalid ID. However, it is probably not
the main cause of the bug, contrary to the patch
commit 11a6f4007b
BUG/MINOR: quic: Wrong status returned by qc_pkt_decrypt()
This must be backported up to 2.6.
As specified by RFC 9204, encoder and decoder streams must not be
closed. If the peer behaves incorrectly and closes one of them, emit a
H3_CLOSED_CRITICAL_STREAM connection error.
To implement this, the QPACK stream decoding API has been slightly
adjusted. Firstly, a fin parameter is passed to notify about the
STREAM FIN bit. Secondly, the qcs instance is passed via the
previously unused void* context. This allows the qcc_emit_cc_app()
function to be used to report a CONNECTION_CLOSE error.
This function is responsible for all calls to pool_alloc(trash), whose
total size can be huge. As such it's quite a pain that it doesn't provide
more hints about its users. However, since the function is tiny, it fully
makes sense to inline it, the code is less than 0.1% larger with this.
This way we can now detect where the callers are via "show profiling",
e.g.:
0 1953671 0 32071463136| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
0 1 0 16416| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
1953672 0 32071479552 0| 0x599561 main+0x1066c1 p_alloc(16416) [pool=trash]
0 976835 0 16035723360| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
0 1 0 16416| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
976835 0 16035723360 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
1 0 16416 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
Storing the pointer to the pool along with the stats is quite useful as
it allows to report the name. That's what we're doing here. We could
store it in place of another field but that's not convenient as it would
require to change all functions that manipulate counters. Thus here we
store one extra field, as well as some padding because the struct turns
56 bytes long, thus better go to 64 directly. Example of output from
"show profiling memory":
2 0 48 0| 0x4bfb2c ha_quic_set_encryption_secrets+0xcc/0xb5e p_alloc(24) [pool=quic_tls_iv]
0 55252 0 10608384| 0x4bed32 main+0x2beb2 free(-192)
15 0 2760 0| 0x4be855 main+0x2b9d5 p_alloc(184) [pool=quic_frame]
1 0 1048 0| 0x4be266 ha_quic_add_handshake_data+0x2b6/0x66d p_alloc(1048) [pool=quic_crypto]
3 0 552 0| 0x4be142 ha_quic_add_handshake_data+0x192/0x66d p_alloc(184) [pool=quic_frame]
31276 0 6755616 0| 0x4bb8f9 quic_sock_fd_iocb+0x689/0x69b p_alloc(216) [pool=quic_dgram]
0 31424 0 6787584| 0x4bb7f3 quic_sock_fd_iocb+0x583/0x69b p_free(-216) [pool=quic_dgram]
152 0 32832 0| 0x4bb4d9 quic_sock_fd_iocb+0x269/0x69b p_alloc(216) [pool=quic_dgram]
Pools are being used so well that it becomes difficult to profile their
usage via the regular memory profiling. Let's add new entries for pools
there, named "p_alloc" and "p_free" that correspond to pool_alloc() and
pool_free(). Ideally it would be nice to only report those that fail
cache lookups but that's complicated, particularly on the free() path
since free lists are released in clusters to the shared pools.
It's worth noting that the alloc_tot/free_tot fields can easily be
determined by multiplying alloc_calls/free_calls by the pool's size, and
could be better used to store a pointer to the pool itself. However it
would require significant changes down the code that sorts output.
If this were to cause a measurable slowdown, an alternate approach could
consist in using a different value of USE_MEMORY_PROFILING to enable pools
profiling. Also, this profiler doesn't depend on intercepting regular malloc
functions, so we could also imagine enabling it alone or the other one alone
or both.
Tests show that the CPU overhead on QUIC (which is already an extremely
intensive user of pools) jumps from ~7% to ~10%. This is quite acceptable
in most deployments.
Right now it's not possible to feed memory profiling info from outside
activity.c, so let's export the function and move the enum and struct
to the include file.
This mmaps a file which will serve as the backing-store for the ring's
contents. The idea is to provide a way to retrieve sensitive information
(last logs, debugging traces) even after the process stops and even after
a possible crash. Right now this was possible by connecting to the CLI
and dumping the contents of the ring live, but this is not handy and
consumes quite a bit of resources before it is needed.
With a backing file, the ring is effectively a RAM-mapped file, so that
contents stored there are the same as those found in the file (the OS
doesn't guarantee immediate sync but if the process dies it will be OK).
Note that doing that on a filesystem backed by a physical device is a
bad idea, as it will induce slowdowns at high loads. It's really
important that the device is RAM-based.
Also, this may have security implications: if the file is corrupted by
another process, the storage area could be corrupted, causing haproxy
to crash or to overwrite its own memory. As such this should only be
used for debugging.
Instead of allocating two parts, one for the ring struct itself and
one for the storage area, ring_make_from_area() will arrange the two
inside the same memory area, with the storage starting immediately
after the struct. This will allow to store a complete ring state in
shared memory areas for example.
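A sketch of the idea, assuming struct ring embeds its storage as a
struct buffer (field details may differ):

    struct ring *ring_make_from_area(void *area, size_t size)
    {
        struct ring *ring = area;

        /* the storage area starts right after the struct itself */
        ring->buf = b_make((char *)(ring + 1), size - sizeof(*ring), 0, 0);
        return ring;
    }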
This commit was not complete:
"BUG/MEDIUM: quic: Possible use of uninitialized <odcid>
variable in qc_lstnr_params_init()"
<token_odcid> should have been directly passed to qc_lstnr_params_init()
without dereferencing it, to prevent haproxy from having new chances to
crash!
Must be backported to 2.6.
When receiving a token into a client Initial packet without a cluster secret defined
by configuration, the <odcid> variable used to parse the ODCID from the token
could be used without having been initialized. Such a packet must be dropped. So
the sufficient part of this patch is this check:
+ }
+ else if (!global.cluster_secret && token_len) {
+ /* Impossible case: a token was received without configured
+ * cluster secret.
+ */
+ TRACE_PROTO("Packet dropped", QUIC_EV_CONN_LPKT,
+ NULL, NULL, NULL, qv);
+ goto drop;
}
Take the opportunity of this patch to rework this part of the code
where such a packet must be dropped, making it more readable and
removing the <check_token> variable.
When an ODCID is parsed from a token, a new <token_odcid> pointer
variable is set to the address of the parsed ODCID. This way, if it is
not set but used, it will crash haproxy. This was not always the case
with an uninitialized local variable.
Adapt the API to use such a pointer variable: the <token> boolean
variable is removed from the qc_lstnr_params_init() prototype.
This must be backported to 2.6.
Add new traces to help debugging on the QUIC MUX. Most notably, the
following functions are now traced:
* qcc_emit_cc
* qcs_free
* qcs_consume
* qcc_decode_qcs
* qcc_emit_cc_app
* qcc_install_app_ops
* qcc_release_remote_stream
* qcc_streams_sent_done
* qc_init
This lock was there to be able to handle the RX packets for a
connection from several threads. This is no longer needed since a QUIC
connection is always handled by the same thread.
May be backported to 2.6
Add, as much as possible, TRACE_ENTER() and TRACE_LEAVE() calls to
every function. Note that some functions do not have any access to a
quic_conn argument when receiving or parsing a datagram at a very low
level.
While testing the fix for the previous issue related to reloads with
hard_stop_after, I've met another one which could spuriously produce:
FATAL: bug condition "t->tid >= 0 && t->tid != tid" matched at include/haproxy/task.h:266
In 2.3-dev2, we've added more consistency checks for a number of bug-
inducing programming errors related to the tasks, via commit e5d79bccc
("MINOR: tasks/debug: add a few BUG_ON() to detect use of wrong timer
queue"), and this check comes from there.
The problem that happens here is that when hard-stop-after is set, we
can abort the current thread even if there are still ongoing checks
(or connections in fact). In this case some tasks are present in a
thread's wait queue and are thus bound exclusively to this thread.
During deinit(), the collect and cleanup of all memory areas also
stops servers and kills their check tasks. And calling task_destroy()
does in turn call task_unlink_wq()... except that it's called from
thread 0 which doesn't match the initially planned thread number.
Several approaches are possible. One of them would consist in letting
threads perform their own cleanup (tasks, pools, FDs, etc). This would
possibly be even faster since done in parallel, but some corner cases
might be way more complicated (e.g. who will kill a check's task, or
what to do with a task found in a local wait queue or run queue, and
what about other consistency checks this could violate?).
Thus for now this patch takes an easier and more conservative
approach consisting in admitting that when the process is stopping,
this rule is not necessarily valid, and to let thread 0 collect all
other threads' garbage.
As such this patch can be backported to 2.4.
In the first prototype version of QUIC, emission was multithreaded. To
support this, a custom thread-safe ring buffer was implemented with
qring/cbuf.
Now the thread model has been adjusted: a quic-conn is always used on
the same thread and emission is not multithreaded. Thus, the
qring/cbuf usage can be replaced by a standard struct buffer.
The code has been simplified even more as for now the buffer is always
drained after a prepare/send invocation. This is the case since a
datagram is always considered as sent, even on sendto() error. BUG_ON
guards are here to ensure that this model always holds. Thus, the code
to handle data wrapping and to consume a too-small contiguous space
with a 0-length datagram is removed.
Right now the free() call is not intercepted since all this is done
using macros and that would break a lot of stuff. Instead a __free()
macro was provided but never used. In addition it used to only report
a zero size, which is not very convenient.
With this patch comes a better solution. Instead it provides a new
will_free() macro that can be prepended before a call to free(). It
only keeps the counters up to date, and also supports being passed a
size. The pool_free_area() command now uses it, which finally allows
the stats to look correct:
pool-os.h:38 MALLOC size: 5802127832 calls: 3868044 size/call: 1500
pool-os.h:47 FREE size: 5800041576 calls: 3867444 size/call: 1499
The few other places directly calling free() could now be instrumented to
use this and to pass the correct sizeof() when known.
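For the record, such a macro could look roughly like this (a hedged
sketch assuming the mem_stats entries carry file/line and calls/size
counters as described around here; the real macro may differ):
  /* account for an upcoming free() of <y> bytes without performing it */
  #define will_free(x, y) ({ \
          static struct mem_stats _ __attribute__((used,section("mem_stats"))) = { \
                  .file = __FILE__, .line = __LINE__, \
          }; \
          _HA_ATOMIC_INC(&_.calls); \
          _HA_ATOMIC_ADD(&_.size, (y)); \
          (x); \
  })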
The calling function name is now stored in the structure, and it's
reported when the "all" argument is passed. The first column is
significantly enlarged because some names are really wide :-(
Not specifying the alignment will let the linker choose it, and it turns
out that it will not necessarily be the same size as the one chosen for
struct mem_stats, as can be seen if any new fields are added there. Let's
enforce an alignment to void* both for the section and for the structure.
MAX_THREADS was not changed when setting MAX_TGROUPS, which still limits
some possibilities. Let's preset it to 4 * LONGBITS when MAX_TGROUPS is
larger than 1, or LONGBITS when it's set to 1. This means that the new
default value is 256 threads.
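A minimal sketch of the preset (the exact preprocessor logic in the
patch may differ):
  #ifndef MAX_THREADS
  # if MAX_TGROUPS > 1
  #  define MAX_THREADS (4 * LONGBITS)  /* 256 on 64-bit systems */
  # else
  #  define MAX_THREADS LONGBITS        /* 64 on 64-bit systems */
  # endif
  #endif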
The rationale behind this is that the main use of thread groups is
mostly to address NUMA issues and that we don't necessarily need large
thread counts when using many groups, and 256 threads is already plenty
even on quite large systems.
For now it's important not to go too far because some internal structs
are arrays of MAX_THREADS entries, for example accept_queue_ring, which
is around 8kB per thread. Such structures will need to become dynamic
before defaulting to large thread counts (at 4096 threads max the
accept queues would require 32 MB RAM alone).
qc_snd_buf() returned a size_t, which means that it was never negative
despite its documentation. Thus the caller who checked for this was
never informed of a sendto error.
Clean this up by changing the return value of qc_snd_buf() to an
integer. A 0 is returned on success. Any other value is considered an
error.
This commit should be backported up to 2.6. Note that to not cause
malfunctions, it must be backported after the previous patch :
906b058954
MINOR: quic: explicitely ignore sendto error
This is to ensure that a sendto error does not cause the send to be
interrupted, which may cause a stalled transfer without a proper retry
mechanism.
The impact of this bug seems null as the caller explicitly ignores
sendto errors. However this part of the code seems to be subject to
strange issues and it may fix them in part. It may be of interest for
github issue #1808.
Implement the http-request timeout for QUIC MUX. It is used when the
connection is opened and is triggered if no HTTP request is received in
time. By HTTP request we mean at least a QUIC stream with a full header
section. Then the qcs instance is attached to a sedesc and the upper
layer is then responsible for waiting for the rest of the request.
This timeout is also used when new QUIC streams are opened during the
connection lifetime to wait for a full HTTP request on them. As it's
possible to demux multiple streams in parallel with QUIC, each waiting
stream is registered in a list <opening_list> stored in the qcc, with
<start> as the stream-opening timestamp in the qcs. Once a qcs is
attached to a sedesc, it is removed from <opening_list>. When refreshing
the MUX timeout, if <opening_list> is not empty, the first waiting
stream is used to set the MUX timeout.
This is efficient as streams are stored in the list in their creation
order so CPU usage is minimal. Also, the size of the list is
automatically restricted by flow control limitation so it should not
grow too much.
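As an illustration, the refresh could look roughly like this (a hedged
sketch; field and helper names such as <el_opening> and timeout.httpreq
are assumptions based on the description above):
  /* base the MUX timeout on the oldest stream awaiting its request */
  if (!LIST_ISEMPTY(&qcc->opening_list)) {
          struct qcs *qcs = LIST_ELEM(qcc->opening_list.n,
                                      struct qcs *, el_opening);
          qcc->task->expire = tick_add_ifset(qcs->start,
                                             qcc->proxy->timeout.httpreq);
  }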
Streams are inserted in <opening_list> by the application protocol
layer. This is because only the application protocol can differentiate
streams for HTTP messaging from internal usage. A function
qcs_wait_http_req() has been added to let the app layer register a
request stream. QUIC MUX can then remove it from the list in
qc_attach_sc().
As a side-note, it was necessary to implement the attach qcc_app_ops
callback on the hq-interop module to be able to insert a stream in the
waiting list. Without this, a BUG_ON statement would be triggered when
trying to remove the stream on sedesc attach. This is to ensure that
every request stream is registered for the http-request timeout.
The MUX timeout is explicitly refreshed on MAX_STREAM_DATA and
STOP_SENDING frame parsing to schedule the http-request timeout if a new
stream has been instantiated. It was already done on STREAM parsing due
to a previous patch.
Store the current step of the HTTP message in the h3s stream. This
reports whether we are parsing the headers, content or trailers section.
A new enum h3s_st_req is defined for this.
This field is stored in the h3s struct but only used for request
streams. It is left undefined for other streams (control or QPACK
streams).
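The states could look like this (a sketch; the exact enum values are
illustrative):
  enum h3s_st_req {
          H3S_ST_REQ_BEFORE = 0, /* nothing received yet */
          H3S_ST_REQ_HEADERS,    /* header section in progress */
          H3S_ST_REQ_DATA,       /* content (DATA frames) in progress */
          H3S_ST_REQ_TRAILERS,   /* trailer section in progress */
  };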
h3_is_frame_valid() has been extended to take this state information
into account. A connection error H3_FRAME_UNEXPECTED is reported if a
frame which is invalid according to the current state is received; for
example a DATA frame at the beginning of a stream.
The decrement was missing in quic_pktns_tx_pkts_release(), called each
time a packet number space is discarded. It is not certain that this bug
could have an impact during handshakes. This counter is used to cancel
the timer used both for packet loss detection and PTO when its value is
null. So there could be retransmissions or probing which could be
triggered for nothing.
Must be backported to 2.6.
Complete the QUIC MUX timeout refresh function by using the
http-keep-alive timeout. It is used when the connection is idle after
having handled at least one request.
To implement this, a new member <idle_start> has been defined in the qcc
structure. This is used as the timestamp for when the connection became
idle and serves as the base time for the http-keep-alive timeout.
Add a new qcc member named <nb_hreq>. Its purpose is close to <nb_sc>
which represents the number of attached stream connectors. Both are
incremented inside qc_attach_sc().
The difference is on the decrement operation. While <nb_sc> is
decremented on the sedesc detach callback, <nb_hreq> is decremented when
the qcs is locally closed.
In most cases, <nb_hreq> will be decremented before <nb_sc>. However, it
will be the reverse if a stream must be kept alive after the detach
callback.
The main purpose of this field is to implement http-keep-alive timeout.
Both <nb_sc> and <nb_hreq> must be null to activate the http-keep-alive
timeout.
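A sketch of the resulting condition (illustrative, assuming the fields
described above):
  /* the keep-alive timeout applies only when fully idle */
  if (!qcc->nb_sc && !qcc->nb_hreq)
          qcc->task->expire = tick_add_ifset(qcc->idle_start,
                                             qcc->proxy->timeout.httpka);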
Store a reference to the proxy in the qcc structure. This will be useful
to access proxy members outside of qcc_init().
Most notably, this change is required to implement timeout refreshing by
using the various timeouts configured at the proxy level.
Timeout in QUIC MUX has evolved from the simple first implementation. At
the beginning, a connection was considered dead unless bidirectional
streams were opened. This was abstracted through an app callback
is_active().
Now this paradigm has been reversed and a connection is considered alive
by default, unless an error has been reported or a timeout has already
been fired. The callback is_active() is thus not used anymore and can be
safely removed to simplify qcc_is_dead().
This commit should be backported to 2.6.
This function is designed to enlarge the scope of a lookup performed
by a caller via ebmb_lookup_longest() that was not satisfied with the
result. It will first visit next duplicates, and if none are found,
it will go up in the tree to visit similar keys with shorter prefixes
and will return them if they match. We only use the starting point's
value to perform the comparison since it was expected to be valid for
the looked up key, hence it has all bits in common with its own length.
The algorithm is a bit complex because when going up we may visit nodes
that are located beneath the level we just came from. However it is
guaranteed that keys having a shorter prefix will be present above the
current location, though they may be attached to the left branch of a
cover node, so we just visit all nodes as long as their prefix is too
large, possibly go down along the left branch on cover nodes, and stop
when either there's a match or the prefix doesn't match anymore.
The following tricky case now works fine and properly finds 10.0.0.0/7
when looking up 11.0.0.1 from tree version 1 though both belong to
different sub-trees:
prepare map #1
add map @1 #1 10.0.0.0/7 10.0.0.0/7
add map @1 #1 10.0.0.0/7 10.0.0.0/7
commit map @1 #1
prepare map #1
add map @2 #1 11.0.0.0/8 11.0.0.0/8
add map @2 #1 11.0.0.0/8 11.0.0.0/8
prepare map #1
add map @1 #1 10.0.0.0/7 10.0.0.0/7
commit map @1 #1
prepare map #1
add map @2 #1 10.0.0.0/7 10.0.0.0/7
add map @2 #1 11.0.0.0/8 11.0.0.0/8
add map @2 #1 11.0.0.0/8 11.0.0.0/8
It's convenient for debugging IP trees. However we're not dumping the
full keys; for the sake of simplicity, only the first 4 bytes are dumped
as a u32 hex value. In practice this is sufficient for debugging. As a
reminder, since it seems difficult to recover the command each time it's
needed, the output is converted to an image using dot from Graphviz:
dot -o a.png -Tpng dump.txt
The plock code hasn't been updated since 2017 and didn't benefit
from the exponential back-off improvements that were added in 2018.
Simply updating the file shows a massive performance gain on large
thread count (>=48) with dequeuing going from 113k RPS to 300k RPS and
round robin from 229k RPS to 1020k RPS. It was about time to update.
In addition, some recent improvements to the code will be useful with
thread groups.
An interesting improvement concerns EPYC CPUs. This one alone increased
fairness and was sufficient to avoid crashes in process_srv_queue() there,
when hammering two servers with maxconn 200 under 1k connections.
As it could be interesting to be able to choose the QUIC congestion
control algorithm to be used per listener, add a new "quic-cc-algo"
keyword to do so. Update the documentation accordingly.
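For example (an illustrative snippet; the certificate path and values
are placeholders):
  frontend fe_quic
      bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3 quic-cc-algo newreno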
Must be backported to 2.6.
Cubic is the congestion control algorithm used by default by the Linux
kernel since version 2.6.15. This algorithm is supposed to achieve good
scalability and fairness between flows using the same network path; it
should also be used by QUIC by default. This patch implements this
algorithm and selects it as the default algorithm for the congestion
control.
Must be backported to 2.6.
Ease the integration of new congestion control algorithms to come.
Move the congestion controller state to a private array of uint32_t
to stop using a union. We do not want to continue using such long
paths as cc->algo_state.<algo>.<var> to modify the internal state
variables for each algorithm.
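The result could look like this (a sketch; the array size is
illustrative), with each algorithm casting <priv> to its own state
structure instead of going through a shared union:
  /* per-connection congestion control context */
  struct quic_cc {
          struct quic_cc_algo *algo;
          uint32_t priv[16]; /* private, per-algorithm state */
  };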
Must be backported to 2.6
Since the API is still a bit young, let's make sure nobody tries to
assign an FD to a group outside the strict 1..MAX_TGROUPS range, as that
would indicate a bug.
Note: some of these might be relaxed to BUG_ON_HOT() in the future
When a new fd is inserted in the fdtab array, its state is initialized. The
"newstate" variable is used to compute the right state (0 by default, but
FD_ET_POSSIBLE flag is set if edge-triggered is supported for the fd).
However, this variable was never used and the fd state was always set to
0. Now, the fd state is initialized with the "newstate" variable.
This bug was introduced by commit ddedc1662 ("MEDIUM: fd: make
fd_insert/fd_delete atomically update fd.tgid"). No backport needed.
This function was added by commit 84ebfabf7 ("MINOR: tools: add
statistical_prng_range() to get a random number over a range") but it
contains a bug on the range: since mul32hi() covers the whole input
range, we must pass it range-1. For now it didn't have any impact, but
if used to find an array's index it will cause trouble.
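The fixed helper might look like this (a sketch consistent with the
description; the guard against a zero range is an assumption):
  /* return a random number between 0 and range-1 */
  static inline uint statistical_prng_range(uint range)
  {
          return mul32hi(statistical_prng(), range ? range - 1 : 0);
  }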
This should be backported to 2.4.
The first approach in commit 288dc1d8e ("BUG/MEDIUM: tools: avoid calling
dlsym() in static builds") relied on dlopen() but on certain configs (at
least gcc-4.8+ld-2.27+glibc-2.17) it used to catch situations where it
ought not fail.
Let's have a second try on this using dladdr() instead. The variable was
renamed "build_is_static" as it's exactly what's being detected there.
We could even take it for reporting in -vv though that doesn't seem very
useful. At least the variable was made global to ease inspection via the
debugger, or in case it's useful later.
Now it properly detects a static build even with gcc-4.4+glibc-2.11.1 and
doesn't crash anymore.
This will allow nbtgroups > 1 to be declared in the config without
recompiling. The theoretical limit is 64, though we'd rather not push
it too far for now as some structures might be enlarged to be indexed
per group. Let's start with 16 groups max, allowing to experiment with
dual-socket machines suffering from up to 8 loosely coupled L3 caches.
It's a good start and doesn't engage us too far.
Since these are not used anymore, let's now remove them. Given the
number of places where we're using ti->ltid_bit, maybe an equivalent
might be useful though.
At boot the pollers are allocated for each thread and they need to
reprogram updates for all FDs they will manage. This code is not
trivial, especially when trying to respect thread groups, so we'd
rather avoid duplicating it.
Let's centralize this into fd.c with this function. It avoids closed
FDs, those whose thread mask doesn't match the requested one or whose
thread group doesn't match the requested one, and performs the update
if required under thread-group protection.
These functions need to set/reset the FD's tgid but when they're called
there may still be wakeups on other threads that discover late updates
and have to touch the tgid at the same time. As such, it is not possible
to just read/write the tgid there. It must only be done using operations
that are compatible with what other threads may be doing.
As we're using inc/dec on the refcount, it's safe to AND the area to zero
the lower part when resetting the value. However, in order to set the
value, there's no other choice but fd_claim_tgid() which will assign it
only if possible (via a CAS). This is convenient in the end because it
protects the FD's masks from being modified by late threads, so while
we hold this refcount we can safely reset the thread_mask and a few other
elements. A debug test for non-null masks was added to fd_insert() as it
must not be possible to face this situation thanks to the protection
offered by the tgid.
With the change that was started on other masks, the thread mask was
still not fully converted, sometimes being used as a global mask and
sometimes as a local one. This finishes the code modifications so that
the mask is always considered as a group-local mask. This doesn't
change anything as long as there's a single group, but is necessary
for groups 2 and above since it's used against running_mask and so on.
It's an AND, so it destroys information, and due to this there's a call
place where we have to perform two reads to know the previous value and
then to change it. With a fetch-and-and instead, we can know in a single
operation whether the bit was previously present, which is more
efficient.
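In generic terms (a sketch using C11 atomics; haproxy uses its own
atomic macros):
  #include <stdatomic.h>

  /* clear <bit> in <mask> and report whether it was previously set */
  static inline int clear_and_test(_Atomic unsigned long *mask,
                                   unsigned long bit)
  {
          unsigned long old = atomic_fetch_and(mask, ~bit);
          return !!(old & bit);
  }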
From now on, the FD's running_mask only refers to local thread IDs. However,
there remains a limitation, in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation, in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
The running mask is only valid if the tgid is the expected one. This
function takes a reference on the tgid before reading the running mask,
so that both are checked at once. It returns either the mask or zero if
the tgid differs, thus providing a simple way for a caller to check if
it still holds the FD.
The FD's tgid is refcounted and must be atomically manipulated. Function
fd_grab_tgid() will increase the refcount but only if the tgid matches the
one in argument (likely the current one). fd_claim_tgid() will be used to
self-assign the tgid after waiting for its refcount to reach zero.
fd_drop_tgid() will be used to drop a temporarily held tgid. All of these
are needed to prevent an FD from being reassigned to another group, either
when inspecting/modifying the running_mask, or when checking for updates,
in order to be certain that the mask being seen corresponds to the desired
group. Note that once at least one bit is set in the running mask of an
active FD, it cannot be closed, thus not migrated, thus the reference does
not need to be held long.
The file descriptors will need to know the thread group ID in addition
to the mask. This extends fd_insert() to take the tgid, and will store
it into the FD.
In the FD, the tgid is stored as a combination of the tgid on the lower
16 bits and a refcount on the higher 16 bits. This allows to know when
it's really possible to trust the tgid and the running mask. If the
refcount is higher than 1, it indicates that another thread might be in
the process of updating these values.
Since a closed FD must necessarily have a zero refcount, a test was
added to fd_insert() to make sure that it is the case.
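A sketch of this packing and of the grab operation (illustrative names,
C11 atomics; the real code uses haproxy's atomic macros):
  #include <stdatomic.h>

  /* tgid in the lower 16 bits, refcount in the upper 16 bits */
  static inline int fd_grab_tgid_sketch(_Atomic unsigned int *refc_tgid,
                                        unsigned int tgid)
  {
          unsigned int cur = atomic_load(refc_tgid);

          while ((cur & 0xffff) == tgid) {
                  /* take one reference while the tgid still matches */
                  if (atomic_compare_exchange_weak(refc_tgid, &cur,
                                                   cur + 0x10000))
                          return 1;
          }
          return 0; /* tgid changed under us: FD migrated or closing */
  }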
It's a bit ugly to see that half of the callers of fd_insert() have to
apply all_threads_mask themselves to the bit field they're passing,
because usually it comes from a listener that may have other bits set.
Let's make the function apply the mask itself.
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it, even if the FD's polling changes, the entry remains valid. It will
be needed to check the tgid though.
Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
This one is only used as a hint to improve scheduling latency, so there
is no more point in keeping it global since each thread group handles
its own run queue.
Their migration was postponed for convenience only but now's time for
having the shared wait queues per thread group and not just per process,
otherwise the WQ lock uses a huge amount of CPU alone.
Since commit d2494e048 ("BUG/MEDIUM: peers/config: properly set the
thread mask") there must not remain any single case of a receiver that
is bound nowhere, so there's no need anymore for thread_mask().
We're adding a test in fd_insert() to make sure this doesn't happen by
accident though, but the function was removed and its rare uses were
replaced with the original value of the bind_thread mask.
The principle remains the same, but instead of having a single process
and ignoring extra ones, now we set the affinity masks for the respective
threads of all groups.
The doc was updated with a few extra examples.
Since we have to use masks to verify owners/waiters, we have no other
option but to have them per group. This definitely inflates the size
of the locks, but this is only used for extreme debugging anyway so
that's not dramatic.
Thus as of now, all masks in the lock stats are local bit masks, derived
from ti->ltid_bit. Since at boot ltid_bit might not be set, we just take
care of this situation (since some structs are initialized under lock
during boot), and use bit 0 from group 0 only.
They were initially made to deal with both the cache and the update list
but there's no cache anymore and keeping them for the update list adds a
lot of obfuscation that is really not desired. Let's get rid of them now.
Their purpose was simply to get a pointer to fdtab[fd].update.{,next,prev}
in order to perform atomic tests and modifications. The offset passed in
argument to the functions (fd_add_to_fd_list() and fd_rm_from_fd_list())
was the offset of the ->update field in fdtab, and as it's not used anymore
it was removed. This also removes a number of casts, though those used by
the atomic ops have to remain since only scalars are supported.
The "ctx" and "st2" parts in the appctx were marked for removal in 2.7
and were emulated using memcpy/memset etc for possible external code.
Let's remove this now.
Adjust qcc_emit_cc_app() to allow delaying the emission of a
CONNECTION_CLOSE. This will only set the error code, but the quic-conn
layer is not flagged for immediate close. The quic-conn will be
responsible for shutting down the connection when deemed suitable.
This change will allow implementing an application graceful shutdown,
such as HTTP/3 with GOAWAY emission. This will allow emitting closing
frames on MUX release. Once all work is done at the lower layer, the
quic-conn should emit a CONNECTION_CLOSE with the registered error code.
Define a new structure quic_err to abstract a QUIC error type. This
allows to easily differentiate a transport and an application error
code. This simplifies error transmission from QUIC MUX and H3 layers.
This new type is defined in quic_frame module. It is used to replace
<err_code> field in <quic_conn>. QUIC_FL_CONN_APP_ALERT flag is removed
as it is now useless.
Utility functions are defined to be able to quickly instantiate
transport, tls and application errors.
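The type and its helpers could look like this (a sketch; field names
are illustrative):
  /* abstract QUIC error, transport or application level */
  struct quic_err {
          uint64_t code; /* code reported via CONNECTION_CLOSE */
          int app;       /* non-zero for application errors */
  };

  static inline struct quic_err quic_err_transport(uint64_t code)
  {
          return (struct quic_err){ .code = code, .app = 0 };
  }

  static inline struct quic_err quic_err_app(uint64_t code)
  {
          return (struct quic_err){ .code = code, .app = 1 };
  }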
quic_frame-t.h and xprt_quic-t.h include each other. This may cause
some trouble later.
In fact, xprt_quic does not need to include quic_frame, so remove this.
And as quic_frame is a generic source file which is included in multiple
places, it is useful to also remove the xprt_quic include from it. Use a
forward declaration for this.
Implement support for STOP_SENDING frame parsing. The stream is reset
as specified by RFC 9000. This will automatically interrupt all future
send operations in qc_send(). A RESET_STREAM will be sent with the code
extracted from the original STOP_SENDING frame.
Implement functions to be able to reset a stream via RESET_STREAM.
If needed, a qcs instance is flagged with QC_SF_TO_RESET to schedule a
stream reset. This will interrupt all future send operations.
On stream emission, if a stream is flagged with QC_SF_TO_RESET, a
RESET_STREAM frame is generated and emitted to the transport layer. If
this operation succeeds, the stream is locally closed. If the upper
layer is instantiated, an error flag is set on it.
Implement a basic state machine to represent the stream lifecycle. By
default a stream is idle. It is marked as open when the first data is
sent or received on it.
Bidirectional streams have two states to represent the closing of both
the receive and send channels. This distinction does not exist for
unidirectional streams, which pass automatically from the open to the
close state.
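Such a state machine could be represented like this (a sketch; names
are illustrative):
  /* QCS stream lifecycle */
  enum qcs_state {
          QC_SS_IDLE = 0, /* no data exchanged yet */
          QC_SS_OPEN,     /* first data sent or received */
          QC_SS_HLOC,     /* half-closed, local (send side closed) */
          QC_SS_HREM,     /* half-closed, remote (receive side closed) */
          QC_SS_CLO,      /* fully closed */
  };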
This patch is mostly internal and has a limited visible impact. Some
behaviors are slightly updated:
* closed streams are garbage collected at the start of the io handler
* send operations are interrupted if a stream is closed locally
Outside of this, there is no functional change. However, some additional
BUG_ON guards are implemented to ensure that we do not conduct invalid
operations on a stream. This should strengthen the code safety. Also,
stream states are displayed in traces, which should help debugging.
Rename both qcc_open_stream_local/remote() functions to
qcc_init_stream_local/remote(). This change is purely cosmetic. It will
reduce the ambiguity with the soon to be implemented OPEN states for
QCS instances.
In 2.6, 8a0fd3a36 ("BUILD: debug: work around gcc-12 excessive
-Warray-bounds warnings") disabled some warnings that were reported
around the BUG() statement. But the -Wnull-dereference warning isn't
known to gcc-5, it only arrived in gcc-6, hence gcc-5 complains loudly
that it doesn't know this directive. Let's just condition this one to
gcc-6.
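Conceptually, the guard becomes (a sketch of the idea; the exact form
in the code may differ):
  #if defined(__GNUC__) && (__GNUC__ >= 6)
  # pragma GCC diagnostic ignored "-Wnull-dereference"
  #endif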
http_is_default_port() can be used to test if a port is a default
HTTP/HTTPS port. A scheme may be specified. In this case, it is used to
detect default ports, 80 for "http://" and 443 for "https://".
Otherwise, with no scheme, both are considered as default ports.
The http_get_host_port() function can be used to get the port part of a
host. It will be used to get the port of a uri authority or a host
header value. This function only looks for a port starting from the end
of the host. It is the caller's responsibility to call it with a valid
host value. An indirect string is returned.
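For instance (a sketch assuming ist-based signatures, consistent with
the indirect string mentioned above):
  struct ist host = ist("www.example.com:8080");
  struct ist port = http_get_host_port(host); /* points to "8080" */

  if (http_is_default_port(ist("https://"), port))
          /* only true for port 443 here */;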
Rename QC_SF_FIN_RECV to the more generic name QC_SF_SIZE_KNOWN. This
better aligns with QUIC RFC 9000, which uses the "Size Known" state
definition. This change is purely cosmetic.
Review the whole API used to access/instantiate qcs.
A public function qcc_open_stream_local() is available to the
application protocol layer. It allows easily opening a local stream.
The ID is automatically attributed to the next available one.
For remote streams, qcc_open_stream_remote() has been implemented. It
will automatically take care of allocating streams in a linear way
according to the ID. This function is called via qcc_get_qcs(), which
can be used for each qcc_recv*() operation. For the moment, it is only
used for STREAM frames via qcc_recv(), but soon it will be implemented
for other frame types which can also be used to open a new stream.
qcs_new() and qcs_free() have been restricted to the MUX QUIC only as
they are now reserved for internal usage.
This change is a pure refactoring and should not have any noticeable
impact. It clarifies the developer intent and helps to ensure that a
stream is not automatically opened when not desired.
<qcc.cl_bidi_r> is used to implement STREAM ID flow control enforcement.
Move it with all fields related to this operation, separated from the
MAX_STREAM_DATA calculation.
This patch adds two BUG_ON checks on fd_insert() into the fdtab,
checking that the fd has been correctly re-initialized in the fdtab
before a new insert.
It will raise a BUG if we try to insert the same fd multiple times
without an intermediate fd_delete().
The first one checks that the owner for this fd in fdtab was reset to
NULL. The second one checks that the state flags for this fd in fdtab
were reset to 0.
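Concretely (a sketch of the two checks, using the usual fdtab fields):
  BUG_ON(fdtab[fd].owner != NULL); /* still owned: missing fd_delete() */
  BUG_ON(fdtab[fd].state != 0);    /* stale state flags left behind */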
This patch could be backported to versions >= 2.4.
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
thread_isolate() and thread_isolate_full() were relying on a set of thread
masks for all threads in different states (rdv, harmless, idle). This cannot
work anymore when the number of threads increases beyond LONGBITS so we need
to change the mechanism.
What is done here is to have a counter of requesters and the number of the
current isolated thread. Threads which want to isolate themselves increment
the request counter and wait for all threads to be marked harmless (or idle)
by scanning all groups and watching the respective masks. This is possible
because threads cannot escape once they discover this counter, unless they
also want to isolate and possibly pass first. Once all threads are harmless,
the requesting thread tries to self-assign the isolated thread number, and
if it fails it loops back to checking all threads. If it wins it's guaranteed
to be alone, and can drop its harmless bit, so that other competing threads
go back to the loop waiting for all threads to be harmless. The benefit of
proceeding this way is that there's very little write contention on the
thread number (none during work), hence no cache line moves between caches,
thus frozen threads do not slow down the isolated one.
Once it's done, the isolated thread resets the thread number (hence lets
another thread take the place) and decrements the requester count, thus
possibly releasing all harmless threads.
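A stripped-down sketch of the scheme (illustrative names, C11 atomics;
the real implementation scans the per-group harmless/idle masks):
  #include <stdatomic.h>

  static _Atomic unsigned int rdv_requests;           /* isolation requesters */
  static _Atomic unsigned int isolated_thread = ~0u;  /* ~0: nobody isolated */

  extern void wait_for_all_threads_harmless(void);    /* scans group masks */

  static void thread_isolate_sketch(unsigned int me)
  {
          atomic_fetch_add(&rdv_requests, 1);
          for (;;) {
                  unsigned int none = ~0u;

                  wait_for_all_threads_harmless();
                  /* try to self-assign the isolated thread number */
                  if (atomic_compare_exchange_strong(&isolated_thread,
                                                     &none, me))
                          break; /* now guaranteed to be alone */
          }
  }

  static void thread_release_sketch(void)
  {
          atomic_store(&isolated_thread, ~0u);
          atomic_fetch_sub(&rdv_requests, 1); /* releases harmless threads */
  }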
With this change there's no more need for any global mask to synchronize
any thread, and we only need to loop over a number of groups to check
64 threads at a time per iteration. As such, tinfo's threads_want_rdv
could be dropped.
This was tested with 64 threads spread into 2 groups, running 64 tasks
(from the debug dev command), 20 "show sess" (thread_isolate()), 20
"add server blah/blah" (thread_isolate()), and 20 "del server blah/blah"
(thread_isolate_full()). The load remained very low (limited by external
socat forks) and no stuck nor starved thread was found.
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure what thread is the first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.
Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).
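The new structure could look like this (a sketch; exact members and
alignment are illustrative):
  /* aggregate, writable state of all threads of one group */
  struct tgroup_ctx {
          unsigned long threads_want_rdv; /* threads requesting isolation */
          unsigned long threads_harmless; /* threads in harmless state */
          unsigned long threads_idle;     /* threads idle in the poller */
  } __attribute__((aligned(64)));         /* cache-line aligned */

  extern struct tgroup_ctx ha_tgroup_ctx[]; /* indexed on the group id */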
Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.
There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
This function was added in 2.0 when reworking the thread isolation
mechanism to make it more reliable. However it is fundamentally
incompatible with the full isolation mechanism provided by
thread_isolate_full() since that one will wait for all threads to
become idle while the former will wait for all threads to finish
waiting, causing a deadlock.
Given that it's not used, let's just drop it entirely before it gets
used by accident.
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity to call it threads_enabled
and to make clear which ones are counted there (in case in the future we
allow some to be stopped).
Now that the tgid is accessible from the thread, it's pointless to have
it in the group, and it was only set but never used. However we'll soon
frequently need the mask corresponding to the group ID, and the risk of
getting it wrong with the +1, or of shifting 1 instead of 1UL, is
significant, so let's store the tgid_bit there.
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
frontend test
bind :8888 maxconn 10 thread 1-2
rate-limit sessions 5
Alternately, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
Right now when an inter-thread wakeup happens, we preliminarily check
if the thread was asleep, and if so we wake the poller up and remove its
bit from the sleeping mask. That's not very clean since the sleeping
mask cannot be entirely trusted: a thread that's about to wake up will
already have its sleeping bit removed.
This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that
a thread was notified to wake up. It's cleared last, before checking the
task lists, so that new wakeups can be considered again (since
wake_thread() is only used to notify about task wakeups and FD polling
changes). This way we do not need to modify a remote thread's sleeping
mask anymore. As such wake_thread() now only tests and sets the
TH_FL_NOTIFIED flag but doesn't clear the sleeping bit anymore.
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use atomic ops only to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
The thread flags were once believed to be local to the thread, but as
it stands, even the STUCK flag is shared since it's looked at by the
watchdog. As such we'll need to use atomic ops to manipulate them, and
likely to move them into the shared area.
This patch only moves the flag into the shared area so that we can later
decide whether it's best to leave them there or to move them back to the
local area. Interestingly, some tests have shown a 3% better performance
on dequeuing with this, while they're not used by other threads yet, so
there are definitely alignment effects that might change over time.
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used for
that. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
Since we don't mix tasks from different threads in the run queues
anymore, we don't need to use the eb32sc_ trees and we can switch
to the regular eb32 ones. This uses cheaper lookup and insert code,
and a 16-thread test on the queues shows a performance increase
from 570k RPS to 585k RPS.
This bit field used to be a per-thread cache of the result of the last
lookup of the presence of a task for each thread in the shared cache.
Since we now know that each thread has its own shared cache, a test of
emptiness is now sufficient to decide whether or not the shared tree
has a task for the current thread. Let's just remove this mask.
grq_total was only used to know how many tasks were being queued in the
global runqueue for stats purposes, and that was transferred to the per
thread rq_total counter once assigned. We don't need this anymore since
we know where they are, so let's just directly update rq_total and drop
that one.
Since we only use the shared runqueue to put tasks only assigned to
known threads, let's move that runqueue to each of these threads. The
goal will be to arrange an N*(N-1) mesh instead of a central contention
point.
The global_rqueue_ticks had to be dropped (for good) since we'll now
use the per-thread rqueue_ticks counter for both trees.
A few points to note:
- the rq_lock still remains the global one for now so there should not
be any gain in doing this, but should this trigger any regression, it
is important to detect whether it's related to the lock or to the tree.
- there's no more reason for using the scope-based version of the ebtree
now, we could switch back to the regular eb32_tree.
- it's worth checking if we still need TASK_GLOBAL (probably only to
delete a task in one's own shared queue maybe).
The runqueue ticks counter is per-thread and wasn't initially meant to
be shared. We'll soon have to share it so let's make it atomic. It's
only updated when waking up a task, and no performance difference was
observed. It was moved in the thread_ctx struct so that it doesn't
pollute the local cache line when it's later updated by other threads.
This function stopped being used before 2.4 because either the task is
dequeued by the scheduler itself and it knows where to find it, or it's
killed by any thread, and task_kill() must be used for this as only this
one is safe.
It's difficult to say whether task_unlink_rq() is still safe, but once
the lock moves to a thread declared in the task itself, it will be even
more difficult to keep it safe.
Let's just remove it now before someone reuses it and causes trouble.
TASK_SHARED_WQ was set upon task creation and never changed afterwards.
Thus if a task was created to run anywhere (e.g. a check or a Lua task),
all its timers would always pass through the shared timers queue with a
lock. Now we know that tid<0 indicates a shared task, so we can use that
to decide whether or not to use the shared queue. The task might be
migrated using task_set_affinity() but it's always dequeued first so
the check will still be valid.
Not only this removes a flag that's difficult to keep synchronized with
the thread ID, but it should significantly lower the load on systems with
many checks. A quick test with 5000 servers and fast checks that were
saturating the CPU shows that the check rate increased by 20% (hence the
CPU usage dropped by 17%). It's worth noting that run_task_lists() almost
no longer appears in perf top now.
As previously advertised in comments, the mask-based task_new() is now
gone. The low-level function now is task_new_on(), which takes a thread
number or a negative value for "any thread", the latter being turned to
zero for thread-less builds since there's no shared WQ in this case. The
task_new_here() and task_new_anywhere() functions were adjusted
accordingly.
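The resulting entry points might look like this (a sketch; prototypes
are assumptions based on the description):
  /* create a task to run on thread <thr>, or on any thread if <thr> < 0 */
  struct task *task_new_on(int thr);

  /* convenience wrappers built on top of it */
  struct task *task_new_here(void);     /* bound to the calling thread */
  struct task *task_new_anywhere(void); /* may run on any thread */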