This removes all the hard-coded 8-bit and 256 entries to use a pair of
macros instead so that we can more easily experiment with larger table
sizes if needed.
There used to be one tid for tasklets and a thread_mask for tasks. Since
2.7, both tasks and tasklets now use a tid (albeit with a very slight
semantic difference for the negative value), so in order to limit code
duplication and ease debugging it makes sense to move tid into the
common part. One limitation is that it will leave a hole in the structure,
but we now have the wake_date that is always present and can move there as
well to plug the hole.
This results in something overall pretty clean (and cleaner than before),
with the low-level stuff (state,tid,process,context) appearing first, then
the caller stuff (caller,wake_date,calls,debug) next, and finally the
type-specific stuff (rq/wq/expire/nice).
Instead of storing an index that's swapped at every call, let's use the
two pointers as a shifting history. Now we have a permanent "caller"
field that records the last caller, and an optional prev_caller in the
debug section enabled by DEBUG_TASK that keeps a copy of the previous
caller. This way, not only is it much easier to follow what's
happening during debugging, but it saves 8 bytes in the struct task in
debug mode and still keeps it under 2 cache lines in nominal mode, and
this will finally be usable everywhere and later in profiling.
The caller_idx was also used as a hint that the entry was freed, in order
to detect wakeup-after-free. This was changed by setting caller to -1
instead and preserving its value in caller[1].
Finally, the operations were made atomic. That's not critical but since
it's used for debugging and race conditions represent a significant part
of the issues in multi-threaded mode, it seems wise to at least eliminate
some possible factors of faulty analysis.
appctx_wakeup() relies on task_wakeup(), but since it calls it from a
function, the calling place is always appctx_wakeup() itself, which is
not very useful.
Let's turn it to a macro so that we can log the location of the caller
instead. As an example, the cli_io_handler() which used to be seen as
this:
(gdb) p *appctx->t.debug.caller[0]
$10 = {
func = 0x9ffb78 <__func__.37996> "appctx_wakeup",
file = 0x9b336a "include/haproxy/applet.h",
line = 110,
what = 1 '\001',
arg8 = 0 '\000',
arg32 = 0
}
Now shows the more useful:
(gdb) p *appctx->t.debug.caller[0]
$6 = {
func = 0x9ffe80 <__func__.38641> "sc_app_chk_snd_applet",
file = 0xa00320 "src/stconn.c",
line = 996,
what = 6 '\006',
arg8 = 0 '\000',
arg32 = 0
}
This reduces the task struct by 8 bytes, reduces the code size a little
bit by simplifying the calling convention (one argument dropped), and
as a bonus provides the function name in the caller.
The memstats code currently defines its own file/function/line number,
type and extra pointer. We don't need to keep them separate and we can
easily replace them all with just a struct ha_caller. Note that the
extra pointer could be converted to a pool ID stored into arg8 or
arg32 and be dropped as well, but this would first require defining
IDs for pools (which we currently do not have).
The purpose of this structure is to assemble all constant parts of a
generic calling point for a specific event. These ones are created by
the compiler as a static const element outside of the code path, so
they cost nothing in terms of CPU, and a pointer to that descriptor
can be passed to the place that needs it. This is very similar to what
is being done for the mem_stat stuff.
This will be useful to simplify and improve DEBUG_TASK.
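As an illustration, here is a minimal sketch of such a descriptor, with field
names matching the gdb dumps quoted earlier (func, file, line, what, arg8,
arg32); the exact types and the macro emitting it are assumptions:

  struct ha_caller {
          const char *func;    /* calling function name */
          const char *file;    /* source file name */
          int line;            /* source line */
          unsigned char what;  /* type of event at this call place */
          unsigned char arg8;  /* optional 8-bit argument */
          unsigned int arg32;  /* optional 32-bit argument */
  };

  /* a macro would typically emit one such constant per call place, e.g.: */
  #define HA_CALLER_HERE() ({                                            \
          static const struct ha_caller __here = {                       \
                  .func = __func__, .file = __FILE__, .line = __LINE__,  \
          };                                                             \
          &__here;                                                       \
  })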
There are a few places where it's convenient to hash a pointer to compute
a statistics bucket. Here we're basically reusing the hash that was used
by memory profiling, with a minor update: the multiplier was corrected
to be prime and to stand by its promise of having equal numbers of ones and
zeros, and 32-bit platforms won't lose range anymore.
A two-pointer variant was also added.
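For illustration only (the multiplier and names below are hypothetical, not
the actual ones), the idea boils down to multiplying the pointer by a
carefully chosen odd constant and keeping the top bits as the bucket index:

  static inline unsigned int ptr_hash_bits(const void *ptr, unsigned int bits)
  {
          unsigned long long x = (unsigned long)ptr;

          x *= 0x9e3779b97f4a7c15ULL; /* hypothetical 64-bit multiplier */
          /* keep only the top <bits> bits as the bucket index */
          return bits ? (unsigned int)(x >> (64 - bits)) : 0;
  }

A two-pointer variant can simply combine both pointers (e.g. by XOR) before
applying the same multiplication.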
It was a mistake to put these two fields in the struct task. This
was added in 1.9 via commit 9efd7456e ("MEDIUM: tasks: collect per-task
CPU time and latency"). These fields are used solely by streams in
order to report the measurements via the lat_ns* and cpu_ns* sample
fetch functions when task profiling is enabled. For the rest of the
tasks, this is pure CPU waste when profiling is enabled, and memory
waste 100% of the time, as the point where these latencies and usages
are measured is in the profiling array.
Let's move the fields to the stream instead, and have process_stream()
retrieve the relevant info from the thread's context.
The struct task is now back to 120 bytes, i.e. almost two cache lines,
with 32 bits still available.
The profile entry that corresponds to the current task/tasklet being
profiled is now stored into the thread's context. This will allow it
to be accessed from the tasks themselves. This is needed for an upcoming
fix.
When task profiling is enabled, the scheduler can measure and report
the cumulated time spent in each task and their respective latencies. But
this was wrong for tasks with few wakeups as well as for self-waking ones,
because the call date needed to measure how long it takes to process the
task is retrieved in the task itself (->wake_date was turned to the call
date), and we could face two conditions:
- a new wakeup while the task is executing would reset the ->wake_date
field before returning and make abnormally low values being reported;
that was likely the case for task_run_applet for self-waking applets;
- when the task dies, NULL is returned and the call date couldn't be
retrieved, so that CPU time was not being accounted for. This was
particularly visible with process_stream() which is usually called
only twice per request, and whose time was systematically halved.
The cleanest solution here is to keep in mind that the scheduler already
uses quite a bit of local context in th_ctx, and place the intermediary
values there so that they cannot vanish. The wake_date has to be reset
immediately once read, and only its copy is used along the function. Note
that this must be done both for tasks and tasklets, and that until recently
tasklets were also able to report wrong values due to their sole dependency
on TH_FL_TASK_PROFILING between tests.
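A minimal sketch of the idea (field and helper names are assumptions, not the
exact code): the scheduler snapshots the wake date into the thread context
before running the task, so the value survives self-wakeups and even the death
of the task:

  uint32_t wake_date = t->wake_date;

  t->wake_date = 0;                          /* reset immediately once read */
  th_ctx->sched_wake_date = wake_date;       /* copy kept in the thread ctx */
  th_ctx->sched_call_date = now_mono_time(); /* call date, helper assumed   */

  t->process(t, t->context, state);
  /* latency and CPU time are accounted from th_ctx even if <t> was freed */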
One nice benefit for future improvements is that such information will now
be available from the task without having to be stored into the task itself
anymore.
Since the tasklet part was computed on wrapping 32-bit arithmetic and
the task one was on 64-bit, the values were now consistently moved to
32-bit as it's already largely sufficient (4s spent in a task is more
than twice what the watchdog would tolerate). Some further cleanups might
be necessary, but the patch aimed at staying minimal.
Task profiling output after 1 million HTTP requests previously looked like
this:
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 2012338 4.850s 2.410us 12.91s 6.417us
process_stream 2000136 9.594s 4.796us 34.26s 17.13us
sc_conn_io_cb 2000135 1.973s 986.0ns 30.24s 15.12us
h1_timeout_task 137 - - 2.649ms 19.34us
accept_queue_process 49 152.3us 3.107us 321.7yr 6.564yr
main+0x146430 7 5.250us 750.0ns 25.92us 3.702us
srv_cleanup_idle_conns 1 559.0ns 559.0ns 918.0ns 918.0ns
task_run_applet 1 - - 2.162us 2.162us
Now it looks like this:
Tasks activity:
function calls cpu_tot cpu_avg lat_tot lat_avg
h1_io_cb 2014194 4.794s 2.380us 13.75s 6.826us
process_stream 2000151 20.01s 10.00us 36.04s 18.02us
sc_conn_io_cb 2000148 2.167s 1.083us 32.27s 16.13us
h1_timeout_task 198 54.24us 273.0ns 3.487ms 17.61us
accept_queue_process 52 158.3us 3.044us 409.9us 7.882us
main+0x1466e0 18 16.77us 931.0ns 63.98us 3.554us
srv_cleanup_toremove_conns 8 282.1us 35.26us 546.8us 68.35us
srv_cleanup_idle_conns 3 149.2us 49.73us 8.131us 2.710us
task_run_applet 3 268.1us 89.38us 11.61us 3.871us
Note the two-fold difference on process_stream().
This feature is essentially used for debugging so it has extremely limited
impact. However it's used quite a bit more in bug reports and it would be
desirable that at least 2.6 gets this fix backported. It depends on at least
these two previous patches which will then also have to be backported:
MINOR: task: permanently enable latency measurement on tasklets
CLEANUP: task: rename ->call_date to ->wake_date
This field is misnamed because its real and important content is the
date the task was woken up, not the date it was called. It temporarily
holds the call date during execution but this remains confusing. In
fact before the latency measurements were possible it was indeed a call
date. Thus it will now be called wake_date.
This change is necessary because a subsequent fix will require the
introduction of the real call date in the thread ctx.
When tasklet latency measurement was enabled in 2.4 with commit b2285de04
("MINOR: tasks: also compute the tasklet latency when DEBUG_TASK is set"),
the feature was conditioned on DEBUG_TASK because the field would add 8
bytes to the struct tasklet.
This approach was not a very good idea because the struct ends on an int
anyway, thus it finishes with a 32-bit hole regardless of the presence
of this field. What is true however is that adding it turned a 64-byte
struct to 72-byte when caller debugging is enabled.
This patch revisits this with a minor change. Now only the lowest 32
bits of the call date are stored, so they always fit in the remaining
hole, and this allows removing the dependency on DEBUG_TASK. With
debugging off, we're now seeing a 48-byte struct, and with debugging
on it's exactly 64 bytes, thus still exactly one cache line. 32 bits
allow a latency of 4 seconds on a tasklet, which already indicates a
completely dead process, so there's no point storing the upper bits at
all. And even in the event it would happen once in a while, the lost
upper bits do not really add any value to the debug reports. Also, now
one tasklet wakeup every 4 billion will not be sampled due to the test
on the value itself. Similarly we just don't care, it's statistics and
the measurements are not 9-digit accurate anyway.
There's a subtle (harmless) bug in task_instant_wakeup(). As it uses
some tasklet code instead of some task code, the debug part also acts
on the tasklet equivalent, and the call_date is only set when DEBUG_TASK
is set instead of unconditionally like with tasks. As such, without this
debugging macro, call dates are not updated for tasks woken this way.
There isn't any impact yet because this function was introduced in 2.6 to
solve certain classes of issues and is not used yet, and in the worst case
it would only affect the reported latency time.
This may be backported to 2.6 in case a future fix would depend on it but
currently will not fix existing code.
The tasklet's call date was not reset, so if profiling was enabled while
some tasklets were in the run queue, their initial random value could be
used to preload a bogus initial latency value into the task profiling bin.
Let's just zero the initial value.
This should be backported to 2.4 as it was brought with initial commit
b2285de04 ("MINOR: tasks: also compute the tasklet latency when DEBUG_TASK
is set"). The impact is very low though.
To work, quic_pin_cid_to_tid() must set cid[0] to a value with <target_id>
as <global.nbthread> modulo. For each integer n, (n - (n % m)) + d always has
d as its modulo m (with d < m).
So, this statement seemed correct:
cid[0] = cid[0] - (cid[0] % global.nbthread) + target_tid;
except when n wraps or when another modulo is applied to the addition result.
Here, with 8-bit modulo arithmetic, if m does not divide 256, this cannot
work for values which wrap when we increment them by d.
For instance with n=255, m=3 and d=1, the formula's result is 0 (it should be d).
To fix this, we first limit cid[0] to 255 - <target_id> to prevent cid[0] from wrapping.
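For illustration, a sketch of the fix as described (not necessarily the exact
patch):

  /* cap cid[0] first so that adding target_tid cannot wrap in 8 bits */
  if (cid[0] > 255 - target_tid)
          cid[0] = 255 - target_tid;
  cid[0] = cid[0] - (cid[0] % global.nbthread) + target_tid;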
Thank you to @esb for having reported this issue in GH #1855.
Must be backported to 2.6
LibreSSL does not implement EVP_chacha20_poly1305() with EVP_CIPHER but
uses the EVP_AEAD API instead:
https://man.openbsd.org/EVP_AEAD_CTX_init
This patch disables this cipher for LibreSSL for now.
When building HAProxy with USE_QUIC and libressl 3.6.0, the
ssl_sock_switchtx_cbk symbol is not found because libressl does not
implement the client_hello_cb.
An ssl_sock_switchtx_cbk version for the servername callback is available
but wasn't exported correctly.
This helper will be called for muxes that provide it and will be used
to let the mux provide extra information about the stream attached to
a stream descriptor. A line prefix is passed in argument so that the
mux is free to break long lines without breaking indent. No prefix
means no line breaks should be produced (e.g. for short dumps).
Some recent traces started to show confusing stream pointers ending with
0xe. The reason was that the stream's obj_type was almost unused in the
past and was stuffed in a hole in the structure. But now it's present in
all "show sess all" outputs and having to mentally match this value against
another one that's 0x17e lower is painful. The solution here is to move the
obj_type at the top, like in almost every other structure, but without
breaking the efficient layout.
This patch moves a few fields around and manages to both plug some holes
(16 bytes saved, 976 to 960) and avoid channels needlessly crossing cache
boundaries (res was spread over 3 lines vs 2 now).
Nothing else was changed. It would be desirable to backport this to 2.6
since it's where dumps are currently being processed the most.
As outlined in commit f7ebe584d7 ("BUILD: debug: Add braces to if
statement calling only CHECK_IF()"), the BUG_ON() family of macros
is incorrectly defined to be empty when debugging is disabled, and
that can lead to trouble. Make sure they always fall back to the
usual "do { } while (0)". This may be backported to 2.6 if needed,
though no such issue was met there to date.
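For illustration (simplified, hypothetical definitions), the point is that the
disabled form must remain a single well-formed statement, so that constructs
such as "if (x) BUG_ON(y);" keep compiling cleanly with no empty-body warning:

  #ifdef DEBUG_STRICT
  # define BUG_ON(cond)  do { if (cond) abort(); } while (0)  /* simplified */
  #else
  # define BUG_ON(cond)  do { } while (0)  /* never just empty */
  #endif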
This bug arrived with this commit:
"MINOR: quic: Add reusable cipher contexts for header protection"
haproxy could crash because of missing cipher context initializations for
the header protection and draft-v2 Initial secrets. This was due to the fact
that these initializations, both for RX and TX secrets, were done outside of
qc_new_isecs(). The role of this function is precisely to initialize these
cipher contexts in addition to the derived secrets. Indeed this function is called
by qc_new_conn() which initializes the connection, but also by qc_conn_finalize()
which also calls qc_new_isecs() in case a different QUIC version was negotiated
by the peers from the one used by the client for its first Initial packet.
This was reported by "v2" QUIC interop test with at least picoquic as client.
Must be backported to 2.6.
Shut the connect() warning of resolvers_finalize_config() when the
configuration was not emitted manually.
This shuts the warning for the "default" resolvers which is created
automatically for the httpclient.
Must be backported in 2.6.
It is only a real problem for agent-checks when there is no agent string to
send. The condition to disable TCP_QUICKACK was only based on the action
type following the connect one. But it is not always accurate. Indeed, for
agent-checks, there is always a SEND action. But if there is no "agent-send"
string defined, nothing is sent. In this case, this adds 200ms of latency
with no reason.
To fix the bug, a flag is now used on the CONNECT action to indicate that
there is data to be sent after the connect. For health-checks, this flag
is set if the action following the connect is a SEND action. For
agent-checks, it is set if an "agent-send" string is defined.
This patch should fix the issue #1836. It must be backported as far as 2.2.
Replace the ->rx.pqpkts quic_enc_level struct member MT_LIST by a LIST.
Same thing for the ->list quic_rx_packet struct member MT_LIST.
Update the code accordingly. This was a remnant of the multithreading
support (several threads per connection).
Must be backported to 2.6
Some clients send CONNECTION_CLOSE frame without acknowledging the STREAM
data haproxy has sent. In this case, when closing the connection if
there were remaining data in QUIC stream buffers, they were not released.
Add a <closing> boolean option to qc_stream_desc_free() to force the
stream buffer memory releasing upon closing connection.
Thank you to Tristan for having reported such a memory leak issue in GH #1801.
Must be backported to 2.6.
In ticket #1805 a user is impacted by the size limitation of the CLI
buffer when updating a ca-file.
This patch allows a user to append new certificates to a ca-file instead
of trying to put them all with "set ssl ca-file".
The implementation uses a new function ssl_store_dup_cafile_entry() which
duplicates a cafile_entry and its X509_STORE.
ssl_store_load_ca_from_buf() was modified to take an append parameter so
we could share the function for "set" and "add".
In order to be able to append new CAs to a cafile_entry,
ssl_store_load_ca_from_buf() was reworked and an "append" parameter was
added.
The function is able to keep the previous X509_STORE which was already
present in the cafile_entry.
Implement quic_tls_rx_hp_ctx_init() and quic_tls_tx_hp_ctx_init() to initialize
such header protection cipher contexts for the RX and TX parts and for each
packet number space, only once per connection.
Make qc_new_isecs() call these two functions to initialize the cipher contexts
of the Initial secrets. Same thing for ha_quic_set_encryption_secrets() to
initialize the cipher contexts of the subsequent derived secrets (0RTT, 1RTT,
Handshake).
Modify qc_do_rm_hp() and quic_apply_header_protection() to reuse these
cipher contexts.
Note that there is no need to modify the key update for the header protection.
The header protection secrets are never updated.
The CLI needs to reset the svcctx between commands, and there was nothing
done to handle this. Let's add appctx_reset_svcctx() to do that, it's the
closing equivalent of appctx_reserve_svcctx().
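A minimal sketch of what such a helper can look like, assuming svcctx is a
plain pointer inside the appctx (the real helper may do a bit more):

  static inline void appctx_reset_svcctx(struct appctx *appctx)
  {
          appctx->svcctx = NULL;  /* forget the previous command's context */
  }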
This will have to be backported to 2.6 as it will be used by a subsequent
patch to fix a bug.
As specified by RFC 7540, multiple cookie headers are merged in a single
entry before passing it to a HTTP/1.1 connection. This step is
implemented during headers parsing in h2 module.
Extract this code into the generic http_htx module. This will allow
reusing it quickly for the HTTP/3 implementation which has the same
requirement for cookie headers.
MUX notification on TX has been changed recently: the MUX is now notified
only when sending its own data, and not for example on retransmission by
the quic-conn layer. This is the subject of the following patch:
b29a1dc2f4
BUG/MINOR: quic: do not notify MUX on frame retransmit
A new flag QUIC_FL_CONN_RETRANS_LOST_DATA has been introduced to
differentiate qc_send_app_pkts invocation by MUX and directly by the
quic-conn layer in quic_conn_app_io_cb(). However, this is a first
problem as internal quic-conn layer usage is not limited to
retransmission. For example for NEW_CONNECTION_ID emission.
Another, more important problem is that send functions are also called
through quic_conn_io_cb() which has not been protected from MUX
notification. This could probably result in a crash when trying to notify
the MUX.
To fix both problems, quic-conn flagging has been inverted : when used
by the MUX, quic-conn is flagged with QUIC_FL_CONN_TX_MUX_CONTEXT. To
improve the API, the MUX must now use qc_send_mux() which ensures the flag is
set. qc_send_app_pkts is now static and can only be used by the
quic-conn layer.
This must be backported wherever the previously mentioned patch is.
On STREAM emission, quic-conn notifies MUX through a callback named
qcc_streams_sent_done(). This also happens on retransmission: in this
case offsets are examined and the notification is ignored if already seen.
However, this behavior has slightly changed since
e53b489826
BUG/MEDIUM: mux-quic: fix server chunked encoding response
Indeed, if the offset diff is null, the frame is now not ignored. This is to
support FIN notification with a final empty STREAM frame. A side-effect
of this is that if the last stream frame is retransmitted, it won't be
ignored in qcc_streams_sent_done().
In most cases, this side-effect is harmless as qcs instance will soon be
freed after being closed. But if qcs is still alive, this will cause a
BUG_ON crash as it is considered as locally closed.
This bug depends on delay condition and seems to be extremely rare. But
it might be the reason for a crash seen on interop with s2n client on
http3 testcase :
FATAL: bug condition "qcs->st == QC_SS_CLO" matched at src/mux_quic.c:372
call trace(16):
| 0x558228912b0d [b8 01 00 00 00 c6 00 00]: main-0x1c7878
| 0x558228917a70 [48 8b 55 d8 48 8b 45 e0]: qcc_streams_sent_done+0xcf/0x355
| 0x558228906ff1 [e9 29 05 00 00 48 8b 05]: main-0x1d3394
| 0x558228907cd9 [48 83 c4 10 85 c0 0f 85]: main-0x1d26ac
| 0x5582289089c1 [48 83 c4 50 85 c0 75 12]: main-0x1d19c4
| 0x5582288f8d2a [48 83 c4 40 48 89 45 a0]: main-0x1e165b
| 0x5582288fc4cc [89 45 b4 83 7d b4 ff 74]: qc_send_app_pkts+0xc6/0x1f0
| 0x5582288fd311 [85 c0 74 12 eb 01 90 48]: main-0x1dd074
| 0x558228b2e4c1 [48 c7 c0 d0 60 ff ff 64]: run_tasks_from_lists+0x4e6/0x98e
| 0x558228b2f13f [8b 55 80 29 c2 89 d0 89]: process_runnable_tasks+0x7d6/0x84c
| 0x558228ad9aa9 [8b 05 75 16 4b 00 83 f8]: run_poll_loop+0x80/0x48c
| 0x558228ada12f [48 8b 05 aa c5 20 00 48]: main-0x256
| 0x7ff01ed2e609 [64 48 89 04 25 30 06 00]: libpthread:+0x8609
| 0x7ff01e8ca163 [48 89 c7 b8 3c 00 00 00]: libc:clone+0x43/0x5e
To reproduce it locally, code was artificially patched to produce
retransmissions and avoid freeing the qcs.
In order to fix this and avoid future classes of similar problems, the best
way is to not call qcc_streams_sent_done() to notify the MUX on
retransmission. To implement this, we test if any of
QUIC_FL_CONN_RETRANS_OLD_DATA or the new flag
QUIC_FL_CONN_RETRANS_LOST_DATA is set. A new wrapper
qc_send_app_retransmit() has been added to set the new flag as a
complement to the already existing qc_send_app_probing().
This must be backported up to 2.6.
Adjust qc_send_app_pkts function : remove <old_data> arg and provide a
new wrapper function qc_send_app_probing() which should be used instead
when probing with old data.
This simplifies the interface of the default function, most notably for
the MUX which does not interfere with retransmission.
QUIC_FL_CONN_RETRANS_OLD_DATA flag is set/unset directly in the wrapper
qc_send_app_probing().
At the same time, function documentation has been updated to clarify
arguments and return values.
This commit will be useful for the next patch to differentiate MUX and
retransmission send context. As a consequence, the current patch should
be backported wherever the next one will be.
Emit STREAM_LIMIT_ERROR if a client tries to open an unidirectional
stream with an ID greater than the value specified by our flow-control
limit. The code is similar to the bidirectional stream opening.
MAX_STREAMS_UNI emission is not implemented for the moment and is left as
a TODO. This should not be too urgent for the moment : in HTTP/3, a
client has only a limited use for unidirectional streams (H3 control
stream + 2 QPACK streams). This is covered by the value provided by
haproxy in transport parameters.
This patch has been tagged with BUG as it should have prevented last
crash reported on github issue #1808 when opening a new unidirectional
stream with an invalid ID. However, it is probably not the main cause
of the bug contrary to the patch
commit 11a6f4007b
BUG/MINOR: quic: Wrong status returned by qc_pkt_decrypt()
This must be backported up to 2.6.
As specified by RFC 9204, encoder and decoder streams must not be
closed. If the peer behaves incorrectly and closes one of them, emit a
H3_CLOSED_CRITICAL_STREAM connection error.
To implement this, QPACK stream decoding API has been slightly adjusted.
Firstly, a fin parameter is passed to notify about the FIN STREAM bit.
Secondly, the qcs instance is passed via the unused void* context. This allows
the qcc_emit_cc_app() function to be used to report a CONNECTION_CLOSE error.
This function is responsible for all calls to pool_alloc(trash), whose
total size can be huge. As such it's quite a pain that it doesn't provide
more hints about its users. However, since the function is tiny, it fully
makes sense to inline it, the code is less than 0.1% larger with this.
This way we can now detect where the callers are via "show profiling",
e.g.:
0 1953671 0 32071463136| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
0 1 0 16416| 0x59960f main+0x10676f p_free(-16416) [pool=trash]
1953672 0 32071479552 0| 0x599561 main+0x1066c1 p_alloc(16416) [pool=trash]
0 976835 0 16035723360| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
0 1 0 16416| 0x576ca7 http_reply_to_htx+0x447/0x920 p_free(-16416) [pool=trash]
976835 0 16035723360 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
1 0 16416 0| 0x576a5d http_reply_to_htx+0x1fd/0x920 p_alloc(16416) [pool=trash]
Storing the pointer to the pool along with the stats is quite useful as
it allows reporting the name. That's what we're doing here. We could
store it in place of another field but that's not convenient as it would
require changing all functions that manipulate counters. Thus here we
store one extra field, as well as some padding, because the struct grows to
56 bytes, so better round it up to 64 directly. Example of output from
"show profiling memory":
2 0 48 0| 0x4bfb2c ha_quic_set_encryption_secrets+0xcc/0xb5e p_alloc(24) [pool=quic_tls_iv]
0 55252 0 10608384| 0x4bed32 main+0x2beb2 free(-192)
15 0 2760 0| 0x4be855 main+0x2b9d5 p_alloc(184) [pool=quic_frame]
1 0 1048 0| 0x4be266 ha_quic_add_handshake_data+0x2b6/0x66d p_alloc(1048) [pool=quic_crypto]
3 0 552 0| 0x4be142 ha_quic_add_handshake_data+0x192/0x66d p_alloc(184) [pool=quic_frame]
31276 0 6755616 0| 0x4bb8f9 quic_sock_fd_iocb+0x689/0x69b p_alloc(216) [pool=quic_dgram]
0 31424 0 6787584| 0x4bb7f3 quic_sock_fd_iocb+0x583/0x69b p_free(-216) [pool=quic_dgram]
152 0 32832 0| 0x4bb4d9 quic_sock_fd_iocb+0x269/0x69b p_alloc(216) [pool=quic_dgram]
Pools are being used so well that it becomes difficult to profile their
usage via the regular memory profiling. Let's add new entries for pools
there, named "p_alloc" and "p_free" that correspond to pool_alloc() and
pool_free(). Ideally it would be nice to only report those that fail
cache lookups but that's complicated, particularly on the free() path
since free lists are released in clusters to the shared pools.
It's worth noting that the alloc_tot/free_tot fields can easily be
determined by multiplying alloc_calls/free_calls by the pool's size, and
could be better used to store a pointer to the pool itself. However it
would require significant changes down the code that sorts output.
If this were to cause a measurable slowdown, an alternate approach could
consist in using a different value of USE_MEMORY_PROFILING to enable pools
profiling. Also, this profiler doesn't depend on intercepting regular malloc
functions, so we could also imagine enabling it alone or the other one alone
or both.
Tests show that the CPU overhead on QUIC (which is already an extremely
intensive user of pools) jumps from ~7% to ~10%. This is quite acceptable
in most deployments.
Right now it's not possible to feed memory profiling info from outside
activity.c, so let's export the function and move the enum and struct
to the include file.
This mmaps a file which will serve as the backing-store for the ring's
contents. The idea is to provide a way to retrieve sensitive information
(last logs, debugging traces) even after the process stops and even after
a possible crash. Right now this was possible by connecting to the CLI
and dumping the contents of the ring live, but this is not handy and
consumes quite a bit of resources before it is needed.
With a backing file, the ring is effectively a RAM-mapped file, so that
contents stored there are the same as those found in the file (the OS
doesn't guarantee immediate sync but if the process dies it will be OK).
Note that doing that on a filesystem backed by a physical device is a
bad idea, as it will induce slowdowns at high loads. It's really
important that the device is RAM-based.
Also, this may have security implications: if the file is corrupted by
another process, the storage area could be corrupted, causing haproxy
to crash or to overwrite its own memory. As such this should only be
used for debugging.
Instead of allocating two parts, one for the ring struct itself and
one for the storage area, ring_make_from_area() will arrange the two
inside the same memory area, with the storage starting immediately
after the struct. This will allow storing a complete ring state in
shared memory areas for example.
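An illustrative sketch of the single-area layout (names assumed):

  struct ring *ring  = area;                         /* struct at the start */
  char *storage      = (char *)area + sizeof(*ring); /* messages follow it  */
  size_t storage_len = size - sizeof(*ring);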
This commit was not complete:
"BUG/MEDIUM: quic: Possible use of uninitialized <odcid>
variable in qc_lstnr_params_init()"
<token_odcid> should have been directly passed to qc_lstnr_params_init()
without dereferencing it to prevent haproxy from having new chances to crash!
Must be backported to 2.6.
When receiving a token into a client Initial packet without a cluster secret defined
by configuration, the <odcid> variable used to parse the ODCID from the token
could be used without having been initialized. Such a packet must be dropped. So
the sufficient part of this patch is this check:
+ }
+ else if (!global.cluster_secret && token_len) {
+ /* Impossible case: a token was received without configured
+ * cluster secret.
+ */
+ TRACE_PROTO("Packet dropped", QUIC_EV_CONN_LPKT,
+ NULL, NULL, NULL, qv);
+ goto drop;
}
Take the opportunity of this patch to rework this part of the code where such
a packet must be dropped, making it more readable and removing the <check_token>
variable.
When an ODCID is parsed from a token, a new <token_odcid> pointer variable
is set to the address of the parsed ODCID. This way, if it is used without
having been set, it will make haproxy crash, which was not always the case
with an uninitialized local variable.
Adapt the API to use such a pointer variable: the <token> boolean variable is
removed from the qc_lstnr_params_init() prototype.
This must be backported to 2.6.
Add new traces to help debugging on QUIC MUX. Most notably, the
following functions are now traced :
* qcc_emit_cc
* qcs_free
* qcs_consume
* qcc_decode_qcs
* qcc_emit_cc_app
* qcc_install_app_ops
* qcc_release_remote_stream
* qcc_streams_sent_done
* qc_init
This lock was there to be able to handle the RX packets for a connection
from several threads. This is no longer needed since a QUIC connection
is always handled by the same thread.
May be backported to 2.6
Add as many TRACE_ENTER() and TRACE_LEAVE() calls as possible to every
function. Note that some functions do not have any access to a quic_conn
argument when receiving or parsing a datagram at a very low level.
While testing the fix for the previous issue related to reloads with
hard_stop_after, I've met another one which could spuriously produce:
FATAL: bug condition "t->tid >= 0 && t->tid != tid" matched at include/haproxy/task.h:266
In 2.3-dev2, we've added more consistency checks for a number of bug-
inducing programming errors related to the tasks, via commit e5d79bccc
("MINOR: tasks/debug: add a few BUG_ON() to detect use of wrong timer
queue"), and this check comes from there.
The problem that happens here is that when hard-stop-after is set, we
can abort the current thread even if there are still ongoing checks
(or connections in fact). In this case some tasks are present in a
thread's wait queue and are thus bound exclusively to this thread.
During deinit(), the collect and cleanup of all memory areas also
stops servers and kills their check tasks. And calling task_destroy()
does in turn call task_unlink_wq()... except that it's called from
thread 0 which doesn't match the initially planned thread number.
Several approaches are possible. One of them would consist in letting
threads perform their own cleanup (tasks, pools, FDs, etc). This would
possibly be even faster since done in parallel, but some corner cases
might be way more complicated (e.g. who will kill a check's task, or
what to do with a task found in a local wait queue or run queue, and
what about other consistency checks this could violate?).
Thus for now this patch takes an easier and more conservative
approach consisting in admitting that when the process is stopping,
this rule is not necessarily valid, and to let thread 0 collect all
other threads' garbage.
As such this patch can be backported to 2.4.
On first prototype version of QUIC, emission was multithreaded. To
support this, a custom thread-safe ring-buffer has been implemented with
qring/cbuf.
Now the thread model has been adjusted : a quic-conn is always used on
the same thread and emission is not multi-threaded. Thus, qring/cbuf
usage can be replaced by a standard struct buffer.
The code has been simplified even more as for now the buffer is always
drained after a prepare/send invocation. This is the case since a
datagram is always considered as sent, even on sendto() error. BUG_ON
statements are here to ensure that this model is always valid.
Thus, the code to handle data wrapping and to consume a too small contiguous
space with a 0-length datagram is removed.
Right now the free() call is not intercepted since all this is done
using macros and that would break a lot of stuff. Instead a __free()
macro was provided but never used. In addition it used to only report
a zero size, which is not very convenient.
With this patch comes a better solution. Instead it provides a new
will_free() macro that can be prepended before a call to free(). It
only keeps the counters up to date, and also supports being passed a
size. The pool_free_area() command now uses it, which finally allows
the stats to look correct:
pool-os.h:38 MALLOC size: 5802127832 calls: 3868044 size/call: 1500
pool-os.h:47 FREE size: 5800041576 calls: 3867444 size/call: 1499
The few other places directly calling free() could now be instrumented to
use this and to pass the correct sizeof() when known.
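An illustrative usage sketch (argument order assumed):

  /* account for the release in the memstats counters, then really free */
  will_free(area, size);
  free(area);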
The calling function name is now stored in the structure, and it's
reported when the "all" argument is passed. The first column is
significantly enlarged because some names are really wide :-(
Not specifying the alignment will let the linker choose it, and it turns
out that it will not necessarily be the same size as the one chosen for
struct mem_stats, as can be seen if any new fields are added there. Let's
enforce an alignment to void* both for the section and for the structure.
MAX_THREADS was not changed when setting MAX_TGROUPS, which still limits
some possibilities. Let's preset it to 4 * LONGBITS when MAX_TGROUPS is
larger than 1, or LONGBITS when it's set to 1. This means that the new
default value is 256 threads.
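An illustrative sketch of the new default (exact guards assumed):

  #ifndef MAX_THREADS
  # if MAX_TGROUPS > 1
  #  define MAX_THREADS (4 * LONGBITS)  /* 256 on 64-bit platforms */
  # else
  #  define MAX_THREADS LONGBITS
  # endif
  #endif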
The rationale behind this is that the main use of thread groups is
mostly to address NUMA issues and that we don't necessarily need large
thread counts when using many groups, and 256 threads is already plenty
even on quite large systems.
For now it's important not to go too far because some internal structs
are arrays of MAX_THREADS entries, for example accept_queue_ring, which
is around 8kB per thread. Such structures will need to become dynamic
before defaulting to large thread counts (at 4096 threads max the
accept queues would require 32 MB RAM alone).
qc_snd_buf returned a size_t which means that it was never negative
despite its documentation. Thus the caller who checked for this was
never informed of a sendto error.
Clean this by changing the return value of qc_snd_buf() to an integer.
A 0 is returned on success. Any other value is considered an
error.
This commit should be backported up to 2.6. Note that to not cause
malfunctions, it must be backported after the previous patch :
906b058954
MINOR: quic: explicitely ignore sendto error
This is to ensure that a sendto error does not cause send to be
interrupted which may cause a stalled transfer without a proper retry
mechanism.
The impact of this bug seems null as the caller explicitly ignores the sendto
error. However this part of code seems to be subject to strange issues
and it may fix them in part. It may be of interest for github issue #1808.
Implement http-request timeout for QUIC MUX. It is used when the
connection is opened and is triggered if no HTTP request is received in
time. By HTTP request we mean at least a QUIC stream with a full header
section. Then the qcs instance is attached to a sedesc and the upper layer
is then responsible for waiting for the rest of the request.
This timeout is also used when new QUIC streams are opened during the
connection lifetime to wait for full HTTP request on them. As it's
possible to demux multiple streams in parallel with QUIC, each waiting
stream is registered in a list <opening_list> stored in qcc with <start>
as timestamp in qcs for the stream opening. Once a qcs is attached to a
sedesc, it is removed from <opening_list>. When refreshing MUX timeout,
if <opening_list> is not empty, the first waiting stream is used to set
MUX timeout.
This is efficient as streams are stored in the list in their creation
order so CPU usage is minimal. Also, the size of the list is
automatically restricted by flow control limitation so it should not
grow too much.
Streams are inserted in <opening_list> by the application protocol layer. This
is because only the application protocol can differentiate streams used for HTTP
messaging from those for internal usage. A function qcs_wait_http_req() has been
added to register a request stream by app layer. QUIC MUX can then
remove it from the list in qc_attach_sc().
As a side-note, it was necessary to implement the attach qcc_app_ops
callback on the hq-interop module to be able to insert a stream in the
waiting list. Without this, a BUG_ON statement would be triggered when trying to
remove the stream on sedesc attach. This is to ensure that every
request stream is registered for the http-request timeout.
MUX timeout is explicitly refreshed on MAX_STREAM_DATA and STOP_SENDING
frame parsing to schedule http-request timeout if a new stream has been
instantiated. It was already done on STREAM parsing due to a previous
patch.
Store the current step of HTTP message in h3s stream. This reports if we
are in the parsing of headers, content or trailers section. A new enum
h3s_st_req is defined for this.
This field is stored in h3s struct but only used for request stream. It
is left undefined for other streams (control or QPACK streams).
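An illustrative sketch of the enum (value names are assumptions):

  enum h3s_st_req {
          H3S_ST_REQ_BEFORE = 0, /* initial state, nothing parsed yet */
          H3S_ST_REQ_HEADERS,    /* parsing the headers section       */
          H3S_ST_REQ_DATA,       /* parsing the content               */
          H3S_ST_REQ_TRAILERS,   /* parsing the trailers section      */
  };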
h3_is_frame_valid() has been extended to take into account this state
information. A connection error H3_FRAME_UNEXPECTED is reported if an
invalid frame according to the current state is received; for example a
DATA frame at the beginning of a stream.
The decrement was missing in quic_pktns_tx_pkts_release() called each time a
packet number space is discarded. It is not sure whether this bug could have an impact
during handshakes. This counter is used to cancel the timer used both for packet
loss detection and PTO, setting its value to null. So there could be retransmissions
or probings which could be triggered for nothing.
Must be backported to 2.6.
Complete QUIC MUX timeout refresh function by using http-keep-alive
timeout. It is used when the connection is idle after having handled at
least one request.
To implement this a new member <idle_start> has been defined in qcc
structure. This is used as a timestamp for when the connection became idle
and serves as the base time for the http-keep-alive timeout.
Add a new qcc member named <nb_hreq>. Its purpose is close to <nb_sc>
which represents the number of attached stream connectors. Both are
incremented inside qc_attach_sc().
The difference is on the decrement operation. While <nb_sc> is
decremented on sedesc detach callback, <nb_hreq> is decremented when the
qcs is locally closed.
In most cases, <nb_hreq> will be decremented before <nb_sc>. However, it
will be the reverse if a stream must be kept alive after detach callback.
The main purpose of this field is to implement http-keep-alive timeout.
Both <nb_sc> and <nb_hreq> must be null to activate the http-keep-alive
timeout.
Store a reference to proxy in the qcc structure. This will be useful to
access proxy members outside of qcc_init().
Most notably, this change is required to implement timeout refreshing by
using the various timeouts configured at the proxy level.
Timeout in QUIC MUX has evolved from the simple first implementation. At
the beginning, a connection was considered dead unless bidirectional
streams were opened. This was abstracted through an app callback
is_active().
Now this paradigm has been reversed and a connection is considered alive
by default, unless an error has been reported or a timeout has already
been fired. The callback is_active() is thus not used anymore and can be
safely removed to simplify qcc_is_dead().
This commit should be backported to 2.6.
This function is designed to enlarge the scope of a lookup performed
by a caller via ebmb_lookup_longest() that was not satisfied with the
result. It will first visit next duplicates, and if none are found,
it will go up in the tree to visit similar keys with shorter prefixes
and will return them if they match. We only use the starting point's
value to perform the comparison since it was expected to be valid for
the looked up key, hence it has all bits in common with its own length.
The algorithm is a bit complex because when going up we may visit nodes
that are located beneath the level we just come from. However it is
guaranteed that keys having a shorter prefix will be present above the
current location, though they may be attached to the left branch of a
cover node, so we just visit all nodes as long as their prefix is too
large, possibly go down along the left branch on cover nodes, and stop
when either there's a match, or there's a non-matching prefix anymore.
The following tricky case now works fine and properly finds 10.0.0.0/7
when looking up 11.0.0.1 from tree version 1 though both belong to
different sub-trees:
prepare map #1
add map @1 #1 10.0.0.0/7 10.0.0.0/7
add map @1 #1 10.0.0.0/7 10.0.0.0/7
commit map @1 #1
prepare map #1
add map @2 #1 11.0.0.0/8 11.0.0.0/8
add map @2 #1 11.0.0.0/8 11.0.0.0/8
prepare map #1
add map @1 #1 10.0.0.0/7 10.0.0.0/7
commit map @1 #1
prepare map #1
add map @2 #1 10.0.0.0/7 10.0.0.0/7
add map @2 #1 11.0.0.0/8 11.0.0.0/8
add map @2 #1 11.0.0.0/8 11.0.0.0/8
It's convenient for debugging IP trees. However we're not dumping the
full keys; for the sake of simplicity, only the first 4 bytes are dumped
as a u32 hex value. In practice this is sufficient for debugging. As a
reminder since it seems difficult to recover the command each time it's
needed, the output is converted to an image using dot from Graphviz:
dot -o a.png -Tpng dump.txt
The plock code hasn't been updated since 2017 and didn't benefit
from the exponential back-off improvements that were added in 2018.
Simply updating the file shows a massive performance gain on large
thread count (>=48) with dequeuing going from 113k RPS to 300k RPS and
round robin from 229k RPS to 1020k RPS. It was about time to update.
In addition, some recent improvements to the code will be useful with
thread groups.
An interesting improvement concerns EPYC CPUs. This one alone increased
fairness and was sufficient to avoid crashes in process_srv_queue() there,
when hammering two servers with maxconn 200 under 1k connections.
As it could be interesting to be able to choose the QUIC congestion control
algorithm to be used by a listener, add a new "quic-cc-algo" keyword to do so.
Update the documentation accordingly.
Must be backported to 2.6.
Cubic is the congestion control algorithm used by default by the Linux kernel
since version 2.6.15. This algorithm is supposed to achieve good scalability and
fairness between flows using the same network path, so it should also be used by
QUIC by default. This patch implements this algorithm and selects it as the default
algorithm for the congestion control.
Must be backported to 2.6.
Ease the integration of new congestion control algorithms to come.
Move the congestion controller state to a private array of uint32_t
to stop using a union. We do not want to continue using such long
paths cc->algo_state.<algo>.<var> to modify the internal state variable
for each algorithm.
Must be backported to 2.6
Since the API is still a bit young, let's make sure nobody tries to
assign an FD to a group not strictly within 1..MAX_TGROUPS as that would
indicate a bug.
Note: some of these might be relaxed to BUG_ON_HOT() in the future
When a new fd is inserted in the fdtab array, its state is initialized. The
"newstate" variable is used to compute the right state (0 by default, but
FD_ET_POSSIBLE flag is set if edge-triggered is supported for the fd).
However, this variable is never used and the fd state is always set to 0.
Now, the fd state is initialized with the "newstate" variable.
This bug was introduced by commit ddedc1662 ("MEDIUM: fd: make
fd_insert/fd_delete atomically update fd.tgid"). No backport needed.
This function was added by commit 84ebfabf7 ("MINOR: tools: add
statistical_prng_range() to get a random number over a range") but it
contains a bug on the range, since mul32hi() covers the whole input
range, we must pass it range-1. For now it didn't have any impact, but
if used to find an array's index it will cause trouble.
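A sketch of the fixed helper as described above (close to, but not necessarily
exactly, the real code):

  static inline unsigned int statistical_prng_range(unsigned int range)
  {
          /* mul32hi() spans the whole input range, so pass range-1 to get
           * an index within [0..range-1].
           */
          return mul32hi(statistical_prng(), range ? range - 1 : 0);
  }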
This should be backported to 2.4.
The first approach in commit 288dc1d8e ("BUG/MEDIUM: tools: avoid calling
dlsym() in static builds") relied on dlopen() but on certain configs (at
least gcc-4.8+ld-2.27+glibc-2.17) it used to catch situations where it
ought not fail.
Let's have a second try on this using dladdr() instead. The variable was
renamed "build_is_static" as it's exactly what's being detected there.
We could even take it for reporting in -vv though that doesn't seem very
useful. At least the variable was made global to ease inspection via the
debugger, or in case it's useful later.
Now it properly detects a static build even with gcc-4.4+glibc-2.11.1 and
doesn't crash anymore.
This will allow nbtgroups > 1 to be declared in the config without
recompiling. The theoretical limit is 64, though we'd rather not push
it too far for now as some structures might be enlarged to be indexed
per group. Let's start with 16 groups max, allowing to experiment with
dual-socket machines suffering from up to 8 loosely coupled L3 caches.
It's a good start and doesn't engage us too far.
Since these are not used anymore, let's now remove them. Given the
number of places where we're using ti->ltid_bit, maybe an equivalent
might be useful though.
At boot the pollers are allocated for each thread and they need to
reprogram updates for all FDs they will manage. This code is not
trivial, especially when trying to respect thread groups, so we'd
rather avoid duplicating it.
Let's centralize this into fd.c with this function. It avoids closed
FDs, those whose thread mask doesn't match the requested one or whose
thread group doesn't match the requested one, and performs the update
if required under thread-group protection.
These functions need to set/reset the FD's tgid but when they're called
there may still be wakeups on other threads that discover late updates
and have to touch the tgid at the same time. As such, it is not possible
to just read/write the tgid there. It must only be done using operations
that are compatible with what other threads may be doing.
As we're using inc/dec on the refcount, it's safe to AND the area to zero
the lower part when resetting the value. However, in order to set the
value, there's no other choice but fd_claim_tgid() which will assign it
only if possible (via a CAS). This is convenient in the end because it
protects the FD's masks from being modified by late threads, so while
we hold this refcount we can safely reset the thread_mask and a few other
elements. A debug test for non-null masks was added to fd_insert() as it
must not be possible to face this situation thanks to the protection
offered by the tgid.
With the change that was started on other masks, the thread mask was
still not fully converted, sometimes being used as a global mask and
sometimes as a local one. This finishes the code modifications so that
the mask is always considered as a group-local mask. This doesn't
change anything as long as there's a single group, but is necessary
for groups 2 and above since it's used against running_mask and so on.
It's an AND so it destroys information, and due to this there's a calling
place where we have to perform two reads to know the previous value
and then change it. With a fetch-and-and instead, in a single operation
we can know if the bit was previously present, which is more efficient.
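Illustrated with the plain GCC builtin (the in-tree atomic wrapper name is
left out on purpose):

  static inline int clear_bit_and_test(unsigned long *mask, unsigned long bit)
  {
          /* single atomic op: clear the bit and learn whether it was set */
          unsigned long old = __atomic_fetch_and(mask, ~bit, __ATOMIC_SEQ_CST);

          return !!(old & bit);
  }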
From now on, the FD's running_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
The running mask is only valid if the tgid is the expected one. This
function takes a reference on the tgid before reading the running mask,
so that both are checked at once. It returns either the mask or zero if
the tgid differs, thus providing a simple way for a caller to check if
it still holds the FD.
The FD's tgid is refcounted and must be atomically manipulated. Function
fd_grab_tgid() will increase the refcount but only if the tgid matches the
one in argument (likely the current one). fd_claim_tgid() will be used to
self-assign the tgid after waiting for its refcount to reach zero.
fd_drop_tgid() will be used to drop a temporarily held tgid. All of these
are needed to prevent an FD from being reassigned to another group, either
when inspecting/modifying the running_mask, or when checking for updates,
in order to be certain that the mask being seen corresponds to the desired
group. Note that once at least one bit is set in the running mask of an
active FD, it cannot be closed, thus not migrated, thus the reference does
not need to be held long.
The file descriptors will need to know the thread group ID in addition
to the mask. This extends fd_insert() to take the tgid, and will store
it into the FD.
In the FD, the tgid is stored as a combination of tgid on the lower 16
bits and a refcount on the higher 16 bits. This allows to know when it's
really possible to trust the tgid and the running mask. If a refcount is
higher than 1 it indeed indicates that another thread might be in the
process of updating these values.
Since a closed FD must necessarily have a zero refcount, a test was
added to fd_insert() to make sure that it is the case.
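As an illustration of the encoding (the field name is an assumption):

  unsigned int combined = fdtab[fd].refc_tgid; /* atomic load in real code */
  unsigned int tgid     = combined & 0xffff;   /* group id, lower 16 bits  */
  unsigned int refcnt   = combined >> 16;      /* current reference count  */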
It's a bit ugly to see that half of the callers of fd_insert() have to
apply all_threads_mask themselves to the bit field they're passing,
because usually it comes from a listener that may have other bits set.
Let's make the function apply the mask itself.
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it, even if the FD's polling changes, the entry remains valid. It will
be needed to check the tgid though.
Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
This one is only used as a hint to improve scheduling latency, so there
is no more point in keeping it global since each thread group handles
its own run queue.
Their migration was postponed for convenience only but now's time for
having the shared wait queues per thread group and not just per process,
otherwise the WQ lock uses a huge amount of CPU alone.
Since commit d2494e048 ("BUG/MEDIUM: peers/config: properly set the
thread mask") there must not remain any single case of a receiver that
is bound nowhere, so there's no need anymore for thread_mask().
We're adding a test in fd_insert() to make sure this doesn't happen by
accident though, but the function was removed and its rare uses were
replaced with the original value of the bind_thread mask.
The principle remains the same, but instead of having a single process
and ignoring extra ones, now we set the affinity masks for the respective
threads of all groups.
The doc was updated with a few extra examples.
Since we have to use masks to verify owners/waiters, we have no other
option but to have them per group. This definitely inflates the size
of the locks, but this is only used for extreme debugging anyway so
that's not dramatic.
Thus as of now, all masks in the lock stats are local bit masks, derived
from ti->ltid_bit. Since at boot ltid_bit might not be set, we just take
care of this situation (since some structs are initialized under lock
during boot), and use bit 0 from group 0 only.
They were initially made to deal with both the cache and the update list
but there's no cache anymore and keeping them for the update list adds a
lot of obfuscation that is really not desired. Let's get rid of them now.
Their purpose was simply to get a pointer to fdtab[fd].update.{,next,prev}
in order to perform atomic tests and modifications. The offset passed in
argument to the functions (fd_add_to_fd_list() and fd_rm_from_fd_list())
was the offset of the ->update field in fdtab, and as it's not used anymore
it was removed. This also removes a number of casts, though those used by
the atomic ops have to remain since only scalars are supported.
The "ctx" and "st2" parts in the appctx were marked for removal in 2.7
and were emulated using memcpy/memset etc for possible external code.
Let's remove this now.
Adjust qcc_emit_cc_app() to allow delaying the emission of a
CONNECTION_CLOSE. This will only set the error code but the quic-conn
layer is not flagged for immediate close. The quic-conn will be
responsible to shut the connection when deemed suitable.
This change will allow implementing application graceful shutdown, such
as HTTP/3 with GOAWAY emission. This will allow to emit closing frames
on MUX release. Once all work is done at the lower layer, the quic-conn
should emit a CONNECTION_CLOSE with the registered error code.
Define a new structure quic_err to abstract a QUIC error type. This
allows to easily differentiate a transport and an application error
code. This simplifies error transmission from QUIC MUX and H3 layers.
This new type is defined in quic_frame module. It is used to replace
<err_code> field in <quic_conn>. QUIC_FL_CONN_APP_ALERT flag is removed
as it is now useless.
Utility functions are defined to be able to quickly instantiate
transport, tls and application errors.
quic_frame-t.h and xprt_quic-t.h include each other. This may
cause some trouble later.
In fact, xprt_quic does not need to include quic_frame so remove this.
And as quic_frame is a generic source file which is included in multiple
places, it is useful to also remove the xprt_quic include in it. Use
forward declaration for this.
Implement support for STOP_SENDING frame parsing. The stream is reset
as specified by RFC 9000. This will automatically interrupt all future
send operations in qc_send(). A RESET_STREAM will be sent with the code
extracted from the original STOP_SENDING frame.
Implement functions to be able to reset a stream via RESET_STREAM.
If needed, a qcs instance is flagged with QC_SF_TO_RESET to schedule a
stream reset. This will interrupt all future send operations.
On stream emission, if a stream is flagged with QC_SF_TO_RESET, a
RESET_STREAM frame is generated and emitted to the transport layer. If
this operation succeeds, the stream is locally closed. If upper layer is
instantiated, error flag is set on it.
Implement a basic state machine to represent stream lifecycle. By
default a stream is idle. It is marked as open when sending or receiving
the first data on a stream.
Bidirectional streams have two states to represent the closing of both
the receive and send channels. This distinction does not exist for
unidirectional streams which pass automatically from the open to the close
state.
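An illustrative sketch of the states (QC_SS_CLO appears in the BUG_ON trace
quoted in a previous commit message; the other names are assumptions):

  enum qcs_state {
          QC_SS_IDLE = 0, /* initial state                                  */
          QC_SS_OPEN,     /* first data sent or received                    */
          QC_SS_HLOC,     /* bidi only: sending side closed locally         */
          QC_SS_HREM,     /* bidi only: receiving side closed by the remote */
          QC_SS_CLO,      /* fully closed                                   */
  };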
This patch is mostly internal and has a limited visible impact. Some
behaviors are slightly updated:
* closed streams are garbage collected at the start of the io handler
* send operations are interrupted if a stream is closed locally
Outside of this, there is no functional change. However, some additional
BUG_ON guards are implemented to ensure that we do not conduct invalid
operation on a stream. This should strengthen the code safety. Also,
stream states are displayed on trace which should help debugging.
Rename both qcc_open_stream_local/remote() functions to
qcc_init_stream_local/remote(). This change is purely cosmetic. It will
reduce the ambiguity with the soon-to-be-implemented OPEN states for
QCS instances.
In 2.6, 8a0fd3a36 ("BUILD: debug: work around gcc-12 excessive
-Warray-bounds warnings") disabled some warnings that were reported
around the BUG() statement. But the -Wnull-dereference warning
isn't known to gcc-5, it only arrived in gcc-6, hence it makes gcc-5
complain loudly that it doesn't know this directive. Let's just make
this one conditional on gcc-6 and above.
http_is_default_port() can be used to test if a port is a default HTTP/HTTPS
port. A scheme may be specified. In this case, it is used to detect default
ports, 80 for "http://" and 443 for "https://". Otherwise, with no scheme, both
are considered as default ports.
http_get_host_port() can be used to get the port part of a host. It
will be used to get the port of a URI authority or a Host header
value. This function only looks for a port starting from the end of the
host. It is the caller's responsibility to call it with a valid host value. An
indirect string is returned.
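For illustration, usage may look like the sketch below (the exact prototypes
are assumed from the description above, with both functions working on
indirect strings):

    struct ist host = ist("www.example.com:8080");
    struct ist port = http_get_host_port(host);        /* -> "8080" */

    if (http_is_default_port(ist("https://"), port)) {
        /* true for "443" with "https://", for "80" with "http://",
         * and for both when no scheme is passed
         */
    }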
Rename QC_SF_FIN_RECV to the more generic name QC_SF_SIZE_KNOWN. This
aligns better with QUIC RFC 9000, which uses the "Size Known" state
definition. This change is purely cosmetic.
Review the whole API used to access/instantiate qcs.
A public function qcc_open_stream_local() is available to the
application protocol layer. It allows easily opening a local stream.
The next available ID is automatically attributed.
For remote streams, qcc_open_stream_remote() has been implemented. It
will automatically take care of allocating streams in a linear way
according to the ID. This function is called via qcc_get_qcs() which can
be used for each qcc_recv*() operation. For the moment, it is only used
for STREAM frames via qcc_recv(), but soon it will be implemented for
other frame types which can also be used to open a new stream.
qcs_new() and qcs_free() have been restricted to the QUIC MUX only as
they are now reserved for internal usage.
This change is a pure refactoring and should not have any noticeable
impact. It clarifies the developer intent and helps to ensure that a
stream is not automatically opened when not desired.
<qcc.cl_bidi_r> is used to implement STREAM ID flow control enforcement.
Move it together with all the fields related to this operation, separate
from the MAX STREAM DATA calculation.
This patch adds two BUG_ON checks in fd_insert() verifying that the fd
has been correctly re-initialized in the fdtab before a new insert.
It will raise a BUG if we try to insert the same fd multiple times
without an intermediate fd_delete().
The first one checks that the owner for this fd in fdtab was reset to NULL.
The second one checks that the state flags for this fd in fdtab were reset
to 0.
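For illustration, the added checks boil down to something like this (a
sketch, not the literal patch):

    /* at the start of fd_insert() (sketch): the fd must have gone through
     * fd_delete() before being inserted again
     */
    BUG_ON(fdtab[fd].owner != NULL);
    BUG_ON(fdtab[fd].state != 0);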
This patch could be backported to versions >= 2.4
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
thread_isolate() and thread_isolate_full() were relying on a set of thread
masks for all threads in different states (rdv, harmless, idle). This cannot
work anymore when the number of threads increases beyond LONGBITS so we need
to change the mechanism.
What is done here is to have a counter of requesters and the number of the
current isolated thread. Threads which want to isolate themselves increment
the request counter and wait for all threads to be marked harmless (or idle)
by scanning all groups and watching the respective masks. This is possible
because threads cannot escape once they discover this counter, unless they
also want to isolate and possibly pass first. Once all threads are harmless,
the requesting thread tries to self-assign the isolated thread number, and
if it fails it loops back to checking all threads. If it wins it's guaranteed
to be alone, and can drop its harmless bit, so that other competing threads
go back to the loop waiting for all threads to be harmless. The benefit of
proceeding this way is that there's very little write contention on the
thread number (none during work), hence no cache line moves between caches,
thus frozen threads do not slow down the isolated one.
Once it's done, the isolated thread resets the thread number (hence lets
another thread take the place) and decrements the requester count, thus
possibly releasing all harmless threads.
With this change there's no more need for any global mask to synchronize
any thread, and we only need to loop over a number of groups to check
64 threads at a time per iteration. As such, tinfo's threads_want_rdv
could be dropped.
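As an illustration, the principle can be sketched as follows (variable and
helper names such as rdv_requests, isolated_thread and wait_for_all_harmless()
are illustrative, not necessarily the real ones):

    void thread_isolate(void)
    {
        unsigned int expected;

        /* declare our intent, and become harmless ourselves */
        HA_ATOMIC_INC(&rdv_requests);
        HA_ATOMIC_OR(&tg_ctx->threads_harmless, ti->ltid_bit);

        while (1) {
            /* scan all groups until every thread is harmless or idle */
            wait_for_all_harmless();

            /* try to self-assign the isolated thread number */
            expected = ~0U;
            if (HA_ATOMIC_CAS(&isolated_thread, &expected, tid))
                break;
        }
        /* we won: drop our harmless bit, we're now guaranteed to be alone */
        HA_ATOMIC_AND(&tg_ctx->threads_harmless, ~ti->ltid_bit);
    }

    void thread_release(void)
    {
        /* let another requester take the place, then release the others */
        HA_ATOMIC_STORE(&isolated_thread, ~0U);
        HA_ATOMIC_DEC(&rdv_requests);
    }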
This was tested with 64 threads spread into 2 groups, running 64 tasks
(from the debug dev command), 20 "show sess" (thread_isolate()), 20
"add server blah/blah" (thread_isolate()), and 20 "del server blah/blah"
(thread_isolate_full()). The load remained very low (limited by external
socat forks) and no stuck nor starved thread was found.
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure what thread is the first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.
Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).
Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.
There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
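For illustration, the new per-group context may be sketched like this (the
field names follow the description; the exact layout is an assumption):

    /* one per thread group, cache-line aligned, indexed by group id - 1 */
    struct tgroup_ctx {
        ulong threads_want_rdv;   /* threads requesting a rendez-vous */
        ulong threads_harmless;   /* threads currently harmless       */
        ulong threads_idle;       /* threads currently idle           */
    } __attribute__((aligned(64)));

    extern struct tgroup_ctx ha_tgroup_ctx[MAX_TGROUPS];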
This function was added in 2.0 when reworking the thread isolation
mechanism to make it more reliable. However it is fundamentally
incompatible with the full isolation mechanism provided by
thread_isolate_full() since that one will wait for all threads to
become idle while the former will wait for all threads to finish
waiting, causing a deadlock.
Given that it's not used, let's just drop it entirely before it gets
used by accident.
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity for calling it threads_enabled
and make sure which ones are counted there (in case in the future we allow
to stop some).
Now that the tgid is accessible from the thread, it's pointless to have
it in the group, and it was only set but never used. However we'll soon
frequently need the mask corresponding to the group ID, and the risk of
getting it wrong (with the +1, or shifting 1 instead of 1UL) is significant,
so let's store the tgid_bit there.
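For the record, the stored value is simply the following, where both the -1
(group IDs are 1-based) and the 1UL (to avoid a 32-bit shift) are the easy
mistakes to make:

    /* mask corresponding to a 1-based group ID */
    tgid_bit = 1UL << (tgid - 1);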
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.
The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.
This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():
    frontend test
        bind :8888 maxconn 10 thread 1-2
        rate-limit sessions 5
Alternately, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
Right now when an inter-thread wakeup happens, we preliminary check if the
thread was asleep, and if so we wake the poller up and remove its bit from
the sleeping mask. That's not very clean since the sleeping mask cannot be
entirely trusted since a thread that's about to wake up will already have
its sleeping bit removed.
This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that a
thread was notified to wake up. It's cleared before checking the task lists
last, so that new wakeups can be considered again (since wake_thread() is
only used to notify about task wakeups and FD polling changes). This way
we do not need to modify a remote thread's sleeping mask anymore. As such
wake_thread() now only tests and sets the TH_FL_NOTIFIED flag but doesn't
clear sleeping anymore.
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
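A sketch of what such a centralized function may contain, assuming the clock
and thread helpers named below (the exact body is an assumption):

    /* common sequence executed when returning from the polling syscall */
    static inline void fd_leaving_poll(int wait_time, int status)
    {
        clock_leaving_poll(wait_time, status);  /* wall clock / idle accounting */
        thread_harmless_end();                  /* no longer harmless           */
        thread_idle_end();                      /* not idle anymore             */
        /* ... plus the sleeping state update for the current thread */
    }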
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use atomic ops only to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
The thread flags were once believed to be local to the thread, but as
it stands, even the STUCK flag is shared since it's looked at by the
watchdog. As such we'll need to use atomic ops to manipulate them, and
likely to move them into the shared area.
This patch only moves the flag into the shared area so that we can later
decide whether it's best to leave them there or to move them back to the
local area. Interestingly, some tests have shown a 3% better performance
on dequeuing with this, while they're not used by other threads yet, so
there are definitely alignment effects that might change over time.
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used for
there. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
Since we don't mix tasks from different threads in the run queues
anymore, we don't need to use the eb32sc_ trees and we can switch
to the regular eb32 ones. This uses cheaper lookup and insert code,
and a 16-thread test on the queues shows a performance increase
from 570k RPS to 585k RPS.
This bit field used to be a per-thread cache of the result of the last
lookup of the presence of a task for each thread in the shared cache.
Since we now know that each thread has its own shared cache, a test of
emptiness is now sufficient to decide whether or not the shared tree
has a task for the current thread. Let's just remove this mask.
grq_total was only used to know how many tasks were being queued in the
global runqueue for stats purposes, and that was transferred to the per
thread rq_total counter once assigned. We don't need this anymore since
we know where they are, so let's just directly update rq_total and drop
that one.
Since we only use the shared runqueue to put tasks only assigned to
known threads, let's move that runqueue to each of these threads. The
goal will be to arrange an N*(N-1) mesh instead of a central contention
point.
The global_rqueue_ticks had to be dropped (for good) since we'll now
use the per-thread rqueue_ticks counter for both trees.
A few points to note:
- the rq_lock still remains the global one for now so there should not
be any gain in doing this, but should this trigger any regression, it
is important to detect whether it's related to the lock or to the tree.
- there's no more reason for using the scope-based version of the ebtree
now, we could switch back to the regular eb32_tree.
- it's worth checking if we still need TASK_GLOBAL (probably only to
delete a task in one's own shared queue maybe).
The runqueue ticks counter is per-thread and wasn't initially meant to
be shared. We'll soon have to share it so let's make it atomic. It's
only updated when waking up a task, and no performance difference was
observed. It was moved in the thread_ctx struct so that it doesn't
pollute the local cache line when it's later updated by other threads.
This function stopped being used before 2.4 because either the task is
dequeued by the scheduler itself and it knows where to find it, or it's
killed by any thread, and task_kill() must be used for this as only this
one is safe.
It's difficult to say whether task_unlink_rq() is still safe, but once
the lock moves to a thread declared in the task itself, it will be even
more difficult to keep it safe.
Let's just remove it now before someone reuses it and causes trouble.
TASK_SHARED_WQ was set upon task creation and never changed afterwards.
Thus if a task was created to run anywhere (e.g. a check or a Lua task),
all its timers would always pass through the shared timers queue with a
lock. Now we know that tid<0 indicates a shared task, so we can use that
to decide whether or not to use the shared queue. The task might be
migrated using task_set_affinity() but it's always dequeued first so
the check will still be valid.
Not only this removes a flag that's difficult to keep synchronized with
the thread ID, but it should significantly lower the load on systems with
many checks. A quick test with 5000 servers and fast checks that were
saturating the CPU shows that the check rate increased by 20% (hence the
CPU usage dropped by 17%). It's worth noting that run_task_lists() almost
no longer appears in perf top now.
As previously advertised in comments, the mask-based task_new() is now
gone. The low-level function now is task_new_on() which takes a thread
number or a negative value for "any thread", which is turned to zero
for thread-less builds since there's no shared WQ in this case. The
task_new_here() and task_new_anywhere() functions were adjusted
accordingly.
This removes the mask-based variant so that from now on the low-level
function becomes appctx_new_on() and it takes either a thread number or
a negative value for "any thread". This way we can use task_new_on() and
task_new_anywhere() instead of task_new() which will soon disappear.
At several places we need to figure the ID of the first thread allowed
to run a task. Till now this was performed using my_ffsl(t->thread_mask)
but since we now have the thread ID stored into the task, let's use it
instead. This is tagged major because it starts to assume that tid<0 is
strictly equivalent to atleast2(thread_mask), and that, as such, the
current thread is among the allowed ones.
The tasks currently rely on a mask but do not have an assigned thread ID,
contrary to tasklets. However, in practice they're either running on a
single thread or on any thread, so that it will be worth simplifying all
this in order to ease the transition to the thread groups.
This patch introduces a "tid" field in the task struct, that's either
the number of the thread the task is attached to, or a negative value
if the task is not bound to a thread (i.e. its mask is all_threads_mask).
The new ID is only set and updated but not used yet.
For now we still set tid_bit to (1UL << tid) because FDs will not
work with more than one group without this, but once FDs start to
adopt local masks this must change to thr->ltid_bit.
Some FDs might be offered to some external code (external libraries)
which will deal with them until they close them. As such we must not
close them upon fd_delete() but we need to delete them anyway so that
they do not appear anymore in the fdtab. This used to be handled by
fd_remove() before 2.3 but we don't have this anymore.
This patch introduces a new flag FD_DISOWN to let fd_delete() know that
the core doesn't own the fd and it must not be closed upon removal from
the fd_tab. This way it's totally unregistered from the poller but still
open.
This patch must be backported to branches >= 2.3 because it will be
needed to fix a bug affecting SSL async. It should be adapted on 2.3
because state flags were stored in a different way (via bits in the
structure).
Implement a new status function for ncbuf. It allows to quickly report
if a buffer contains data in a fragmented way, i.e. with gaps in between
or at start of the buffer.
To summarize, a buffer is considered as non-fragmented in the following
cases:
- a null or empty buffer
- a full buffer
- a buffer containing exactly one data block at the beginning, followed
by a gap until the end.
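A sketch of how such a check may be expressed with the usual ncbuf accessors
(names are indicative, not necessarily the real implementation):

    /* returns true when the buffer content is not fragmented, i.e. null or
     * empty, full, or made of a single data block starting at the head
     */
    static int ncb_not_fragmented(const struct ncbuf *buf)
    {
        if (ncb_is_null(buf) || ncb_is_empty(buf) || ncb_is_full(buf))
            return 1;

        /* a single block at the head means the contiguous data found at
         * offset 0 account for all the data stored in the buffer
         */
        return ncb_data(buf, 0) == ncb_total_data(buf);
    }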
With ~1500 byte QUIC datagrams, we can handle less than 200 datagrams,
which is less than the default maxpollevents value. This should reduce
the chances of filling the connections' RX buffers as reported by
Tristan in GH #1737.
Must be backported to 2.6.
Remove the call to qc_list_all_rx_pkts() which prints messages on stderr
during RX buffer overruns, and add a new counter for the number of packets
dropped because of such events.
Must be backported to 2.6.
We want to be able to schedule a tasklet onto a thread after the current tasklet
is done. What we have to do is to insert this tasklet at the head of the thread's
task list. Furthermore, we would like to serialize the tasklets: they must be
run in the same order as the order in which they have been scheduled. This is
implemented by passing a list of tasklets as parameter (see the <head> parameter)
which must be reused for subsequent calls.
_tasklet_wakeup_after_on() is implemented to accomplish this job.
tasklet_wakeup_after_on() and tasklet_wakeup_after() are only wrapper macros around
_tasklet_wakeup_after_on(). tasklet_wakeup_after_on() does exactly the same thing
as _tasklet_wakeup_after_on() without having to pass the filename and line in the
file as parameters (useful when DEBUG_TASK is enabled).
tasklet_wakeup_after() also hides the usage of the thread parameter, which is
the <tl> tasklet's thread ID.
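For illustration, usage may look like this (a sketch assuming the wrapper
returns the updated list head that must be passed back on the next call, as
described above):

    struct list *head = NULL;

    /* t1 then t2 will run, in this order, right after the current tasklet */
    head = tasklet_wakeup_after(head, t1);
    head = tasklet_wakeup_after(head, t2);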
In GH #1760 (which is marked as being a feature), there were compilation
errors on macOS which could be reproduced on Linux when building 32-bit code
(-m32 gcc option). Most of them were due to mixed variable types in the
QUIC_MIN macro or to using the size_t type in place of the uint64_t type.
Must be backported to 2.6.
Sometimes using "debug dev memstats" can be frustrating because all
pool allocations are reported through pool-os.h and that's all.
But in practice there's nothing wrong with also intercepting pool_alloc,
pool_free and pool_zalloc and report their call counts and locations,
so that's what this patch does. It only uses an alternate set of macros
for these 3 calls when DEBUG_MEM_STATS is defined. The outputs are
reported as P_ALLOC (for both pool_malloc() and pool_zalloc()) and
P_FREE (for pool_free()).
freq_ctr_overshoot_period() function may be used to retrieve the excess of
events over the current period for a given frequency counter, ignoring the
history. It is a way to compare the "current rate" (the number of events over
the current period) to a given rate and estimate the excess of events.
It may be used to safely add new events, especially at the beginning of the
current period for a frequency counter with a large period. This way, it is
possible to smoothly add events during the whole period without quickly
consuming all the quota at the beginning of the period and waiting for the
next one to be able to add new events.
Sometimes we need to be able to signal one thread among a mask, without
caring much about which bit will be picked. At the moment we use ffsl()
for this but this sometimes results in imbalance at certain specific
places where the same first thread in a set is always the same one that
is selected.
Another approach would consist in using the rank finding function but it
requires a popcount and a setup phase, and possibly a modulo operation
depending on the popcount, which starts to be very expensive.
Here we take a different approach. The idea is that an input bit value is
passed, from 0 to LONGBITS-1, and that as much as possible we try to
pick the bit matching it if it is set. Otherwise we look at a mirror
position based on a decreasing power of two, and jump to the side
that still has bits left. In 6 iterations it ends up spotting one bit
among 64 and the operations are very cheap and optimizable. This method
has the benefit that we don't care where the holes are located in the
mask, thus it shows a good distribution of output bits based on the
input ones. A long-time test shows an average of 16 cycles, or ~4ns
per lookup at 3.8 GHz, which is about twice as fast as using the rank
finding function.
Just like for that one, the code was stored into tools.c since we don't
have a C file for intops.
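A self-contained sketch of the idea (the function name and exact code are
illustrative; the real helper may differ):

    #define LONGBITS ((unsigned int)(8 * sizeof(long)))

    /* returns the position of one set bit of <mask> (which must not be 0),
     * trying to stay close to the hint <v> (0..LONGBITS-1). At each step the
     * current window is halved; if the half containing <v> holds no set bit,
     * <v> is mirrored into the other half.
     */
    static unsigned int pick_bit_among(unsigned long mask, unsigned int v)
    {
        unsigned int width, half, base;
        unsigned long side;

        for (width = LONGBITS; width > 1; width = half) {
            half = width / 2;
            base = v & ~(width - 1);              /* start of the window     */
            side = ((1UL << half) - 1) << (base + (v & half));
            if (!(mask & side))
                v ^= half;                        /* jump to the other half  */
        }
        return v;
    }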
Implement quic_tp_version_info_dump() to dump such a transport parameter (remote only).
Call it from quic_transport_params_dump() which dumps all the transport parameters.
Can be backported to 2.6 as it's useful for debugging.
This counter must be incremented only once per connection and decremented
as soon as the handshake has failed or succeeded. This is a gauge. Under certain
conditions this counter could be decremented twice. For instance
after having received a TLS alert, then upon SSL_do_handshake() failure.
To stop having to deal with all the current combinations which can lead to such a
situation (and those to come), add a connection flag to denote if this counter
has already been decremented for a connection. So, this counter must be decremented
only if this flag has not already been set.
Must be backported up to 2.6.
In order to better detect the danger caused by extra shared libraries
which replace some symbols, upon dlopen() we now compare a few critical
symbols such as malloc(), free(), and some OpenSSL symbols, to see if
the loaded library comes with its own version. If this happens, a
warning is emitted and TAINTED_REDEFINITION is set. This is important
because some external libs might be linked against different libraries
than the ones haproxy was linked with, and most often this will end
very badly (e.g. an OpenSSL object is allocated by haproxy and freed
by such libs).
Since the main source of dlopen() calls is the Lua lib, via a "require"
statement, it's worth trying to show a Lua call trace when detecting a
symbol redefinition during dlopen(). As such we emit a Lua backtrace if
Lua is detected as being in use.
Several bug reports were caused by shared libraries being loaded by other
libraries or some Lua code. Such libraries could define alternate symbols
or include dependencies to alternate versions of a library used by haproxy,
making it very hard to understand backtraces and analyze the issue.
Let's intercept dlopen() and set a new TAINTED_SHARED_LIBS flag when it
succeeds, so that "show info" makes it visible that some external libs
were added.
The redefinition is based on the definition of RTLD_DEFAULT or RTLD_NEXT
that were previously used to detect that dlsym() is usable, since we need
it as well. This should be sufficient to catch most situations.
This function may be used to try to show where some Lua code is currently
being executed. It tries hard to detect the initialization phase, both for
the global and the per-thread states, and for runtime states. This intends
to be used by error handlers to provide the users with indications about
what Lua code was being executed when the error triggered.
When the protocol is changed for a client connection at the stream level
(from TCP to H1/H2), there are two cases. The stream may be reused or
not. The first case, when the stream is reused, is working. The second one is
buggy since the conn-stream refactoring and leads to a crash.
In this case, the new mux doesn't reuse the stream. It must be silently
aborted. However, its front stream connector is still referencing the
connection. So it must be detached. But it must be performed in two stages,
to be sure not to lose the context for the upgrade and to be able to
roll back on error. So now, before the upgrade, we prepare to detach the
stconn and it is finally detached if the upgrade succeeds. There is a trick
here: we pretend the stconn is detached but its state is preserved.
This patch must be backported to 2.6.
Until now haproxy supported only the incompatible version negotiation feature, which
consists in sending a Version Negotiation packet after having received a long packet
without a compatible value in its version field. This version value is the version
used to build the current packet. This patch does not modify this behavior.
This patch adds the support for the compatible version negotiation feature, which
allows endpoints to negotiate the QUIC version to use for the connection during
the first flight of packets sent by the client (or after the first flight).
This is done thanks to the "version_information" parameter sent by both endpoints.
To be short, the client offers a list of supported versions in preference order.
The server (or haproxy listener) chooses the first version it also supports as
the negotiated version.
This implementation has an impact on the transport parameters handling (in both
directions). Indeed, the server must send its version information, but only
after having received and parsed the client transport parameters. So we cannot
encode these parameters at the same time we instantiate a new connection.
Add the new QUIC_TP_DRAFT_VERSION_INFORMATION(0xff73db) transport parameter.
Add a new tp_version_information C struct to handle this new parameter.
Implement quic_transport_param_enc_version_info() (resp.
quic_transport_param_dec_version_info()) to encode (resp. decode) this
parameter.
Add qc_conn_finalize() which encodes the transport parameters and configure
the TLS stack to send them.
Add a new ->negotiated_ictx member to the quic_conn C struct to store the
Initial QUIC TLS context for the negotiated version. The Initial secrets
derivation is version dependent.
Rename ->version to ->original_version and add ->negotiated_version to
this C struct to reflect the QUIC-VN RFC denomination.
Modify most of the QUIC TLS API functions to pass a version as parameter.
Export the QUIC version definitions to be reused at least from quic_tp.c
(transport parameters).
Move the token check after the QUIC connection lookup. As this is the original
version which is sent into a Retry packet, and because this original version is
stored into the connection, we must check the token after having retrieved this
connection.
Add packet version to traces.
See https://datatracker.ietf.org/doc/html/draft-ietf-quic-version-negotiation-08
for more information about this new feature.
It is becoming difficult to handle the QUIC TLS related definitions
which arrive with a QUIC version (draft or not). So, here we add a
quic_version C struct which does not only define the QUIC version number,
but also the QUIC TLS definitions which depend on a QUIC version.
Modify consequently all the QUIC TLS API to reuse these definitions
through the new quic_version C struct.
Implement quic_pkt_type() function which returns a packet type (0 up to 3)
depending on the QUIC version number.
Stop hard-coding the Retry packet first byte in send_retry(): this is no longer
possible because the packet type field depends on the QUIC version.
Also modify quic_build_packet_long_header() for the same reason: the packet
type depends on the QUIC version.
Add a quic_version C struct member to quic_conn C struct.
Modify qc_lstnr_pkt_rcv() to set this member asap.
Remove the version member from the quic_rx_packet C struct: a packet is attached
asap to a connection (or dropped), and the connection is the unique object which
should store the QUIC version.
Modify qc_pkt_is_supported_version() to return a supported quic_version C
struct from a version number.
Add Initial salt for QUIC v2 draft (initial_salt_v2_draft).
Due to missing brackets around the ternary C operator, quic_pto() could return zero
at the first run, before the QUIC connection was completely initialized. This led
the idle timeout task to be executed before this initialization completed. Then
this connection could be immediately released.
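The class of bug is the classical precedence pitfall below (a generic
illustration, not the actual code):

    int a = 1, cond = 0, y = 2, z = 3, x;

    /* the ternary operator binds less tightly than most other operators */
    x = a + cond ? y : z;    /* parsed as: x = (a + cond) ? y : z  -> 2 */
    x = a + (cond ? y : z);  /* what was intended                  -> 4 */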
This bug was revealed by the multi_packet_client_hello QUIC tracker test.
Must be backported to 2.6.
The nonce and keys used to cipher the Retry tag depend on the QUIC version.
Add these definitions for 0xff00001d (draft-29) and v2 QUIC version. At least
draft-29 is useful for QUIC tracker tests with "quic-force-retry" enabled
on haproxy side.
Validated with -v 0xff00001d ngtcp2 option.
Could not validate the v2 nonce and key at this time because they are not yet supported.
Ensure that we never insert too many entries in a headers input list.
On the decoding side, a new error QPACK_ERR_TOO_LARGE is reported in
this case.
This prevents a crash if the number of headers on an H3 request or response
exceeds the tune.http.maxhdr config value. Previously, a crash would
occur in the QPACK decoding function.
Note that the process still crashes later with ABORT_NOW() because error
reporting on frame parsing is not implemented for now. It should be
treated with a RESET_STREAM frame in most cases.
This can be backported up to 2.6.
Clean up QPACK decoder API by removing dependencies on ncbuf and
MUX-QUIC. This reduces include statements. It will also help to
implement a standalone QPACK decoder.
Add comments on the decoding function to facilitate code analysis.
Also remove the qpack_debug_hexdump() which prints the whole left buffer
on each header parsing. With large HEADERS frame payload, QPACK traces
are complicated to debug with this statement.
This macro was used both for binding and for lookups. When binding tasks
or FDs, using all_threads_mask instead is better as it will later be per
group. For lookups, ~0UL always does the job. Thus in practice the macro
was already almost not used anymore since the rest of the code could run
fine with a constant of all ones there.
Each thread has its own local thread id and its own global thread id,
in addition to the masks corresponding to each. Once the global thread
ID can go beyond 64 it will not be possible to have a global thread ID
bit anymore, so better start to remove it and use only the local one
from the struct thread_info.
Instead of having a global mask of all the profiled threads, let's have
one flag per thread in each thread's flags. They are never accessed more
than one at a time and are better located inside the threads' contexts for
both performance and scalability.
The flow control enforced at connection level is incorrectly calculated.
There is a risk of exceeding the limit. In most cases, this results in a
segfault produced by a BUG_ON which is here to catch this kind of error.
If not compiled with DEBUG_STRICT, this should generate a connection
closed by the client due to the flow control overflow.
The problem is encountered when the transferred payload is big enough to fill
the transport congestion window. In this case, some data are rejected by
the transport layer and kept by the MUX to be reemitted later. However,
these preserved data are not counted on the connection flow control when
resubmitted, which gradually amplifies the gap between expected and real
consumed flow control.
To fix this, handle the flow-control at the connection level in the same
way as the stream level. A new field qcc.tx.offsets is incremented as
soon as data are transferred between stream TX buffers. The field
qcc.tx.sent_offsets is preserved to count bytes handled by the transport
layer and stop the MUX transfer if the limit is reached.
As already stated, this bug can occur during transfers with enough
emitted data, over multiple streams. When using a single stream, the
flow control at the stream level hides it.
The BUG_ON crash is reproduced systematically with the quiche client:
$ quiche-client --no-verify --http-version HTTP/3 -n 10000 https://127.0.0.1:20443/10K
This must be backported up to 2.6 when confirmed to work as expected.
This should fix github issue #1738.
Frame type has changed during HTTP/3 specification process. Adjust it to
reflect the latest RFC 9114 status.
Concretely, the types for the GOAWAY and MAX_PUSH_ID frames have been adjusted.
The impact of this bug is limited as currently these frames are not
handled by haproxy and are ignored.
This can be backported up to 2.6.
Benoit Dolez reported that gcc-4.4 emits several "may be used
uninitialized" warnings around places where there are BUG_ON()
or ABORT_NOW(). The reason is that __builtin_unreachable() was
introduced in gcc-4.5 thus older ones do not know that the code
after such statements is not reachable.
This patch solves the problem by replacing the statement with
an infinite loop on older versions. The compiler knows that the
code following it cannot be reached, and this is quite cheap
(2 to 4 bytes depending on architectures). It even reduces the
code size a little bit as the compiler doesn't have to optimize
for branches that do not exist.
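For illustration, the pattern boils down to something like this (the macro
name is indicative only):

    #if defined(__GNUC__) && ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
    #define my_unreachable() __builtin_unreachable()
    #else
    /* the compiler also knows the code after this one cannot be reached */
    #define my_unreachable() do { while (1); } while (0)
    #endif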
This may be backported to older versions.
Clean the API used by decode_qcs() and transcoder internal functions.
Parsing functions now return a ssize_t which represents the number of
consumed bytes or a negative error code. The total number of consumed
bytes is returned via decode_qcs().
The API is now unified and cleaner. The MUX can thus simply use the
return value of decode_qcs() instead of subtracting the data bytes in
the buffer before and after the call. Transcoder functions are no longer
obliged to remove consumed bytes from the buffer, which was not
obvious.
Slightly modify the decode_qcs function used by transcoders. The MUX now
gives a buffer instance on which each transcoder is free to work.
On return from the function, the MUX removes the consumed data from its
own buffer.
This reduces the number of invocations of qcs_consume at the end of a
full demuxing process. The API is also cleaner with the transcoders not
responsible for calling it, with the risk of having the input buffer
freed if empty.
As a mirror to qcc/qcs types, add a h3c pointer into h3s struct. This
should help to clean up the H3 code and avoid using qcs.qcc.ctx to retrieve
the h3c instance.
htx_get_head_blk() is used at plenty of places, many of which are known
to be safe, but the function checks for the presence of a first block
and returns NULL if it doesn't exist. While it's properly used, it makes
compilers complain at -Os on stream.c and mux_fcgi.c because they probably
don't propagate variables far enough to see that there's no risk.
Let's add an unchecked version for these use cases.
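A sketch of such an unchecked variant (assuming the checked one merely guards
the head lookup against an empty message):

    /* same as htx_get_head_blk() but the caller guarantees the message is
     * not empty, so no NULL check is performed
     */
    static inline struct htx_blk *__htx_get_head_blk(const struct htx *htx)
    {
        return htx_get_blk(htx, htx_get_head(htx));
    }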
Add a new ->inc_err_cnt callback to the qcc_app_ops struct which can
be called from the xprt to increment the application level error code counters.
It takes the application context as first parameter to be generic and support
new QUIC applications to come.
Add h3_stats.c module with counters for all the frame types and error codes.
Add new counters to count the number of packets dropped upon parsing error, lost
sent packets and the number of stateless reset packets sent.
Take the opportunity of this patch to rename CONN_OPENINGS to QUIC_ST_HALF_OPEN_CONN
(total number of half open connections) and QUIC_ST_HDSHK_FAILS to QUIC_ST_HDSHK_FAIL.
It is becoming difficult to distinguish the default values for
transport parameters which come with the RFC from our implementation's
default values when not set by configuration (tunable parameters).
Add a comment to distinguish them.
Prefix these default values with QUIC_TP_DFLT_ to distinguish them from
QUIC_DFLT_* values even if they are not numerous.
Furthermore ->max_udp_payload_size must first be initialized to
QUIC_TP_DFLT_MAX_UDP_PAYLOAD_SIZE, especially for the received value.
Add tunable "tune.quic.frontend.max_streams_bidi" setting for QUIC frontends
to set the "initial_max_streams_bidi" transport parameter.
Add some documentation for this new setting.
Add two tunable settings, for both backends and frontends, for the
"max_idle_timeout" QUIC transport parameter: "tune.quic.frontend.max-idle-timeout"
and "tune.quic.backend.max-idle-timeout" respectively.
cfg_parse_quic_time() has been implemented to parse a time value thanks
to parse_time_err(). It should be reused for any tunable time value to be
parsed.
Add the documentation for this tunable setting only for frontend.
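For illustration, these are global tunables and would be used like this
(values are arbitrary examples):

    global
        tune.quic.frontend.max-idle-timeout 30s
        tune.quic.backend.max-idle-timeout  30s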
Add quic_transport_params_dump() static inline function to do so for
a quic_transport_parameters struct as parameter.
We use the trace API to dump these transport parameters both
after they have been initialized (RX/local) or received (TX/remote).
Make the transport parameters standalone as much as possible as
this consists only in encoding/decoding data into/from buffers.
Reduce the size of xprt_quic.h. Unfortunately, I think we will
have to continue to include <xprt_quic-t.h> to use the trace API
in this module.
The only two places where it was used was to carefully preserve the
SE_FL_WILL_CONSUME flag (since others are irrelevant there and the
previous RXBLK* flags moved to the stconn). Now that the flag is
cleared by default there's no need to re-create a fresh new one
when replacing the descriptor, so we can eliminate that remaining
trick.
This flag was the only remaining one that was inverted as a blocking
condition, requiring special handling to preset it on sedesc allocation.
Let's flip it in its definition and accessors.
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. Both the core functions and the ones in the
resolvers files were updated.
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The HTTP analyser and the backend functions
were all updated after being reviewed. Function stream_update_both_cs()
was renamed to stream_update_both_sc()
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The "nb_cs" stream-connector counter was
renamed to "nb_sc" and qc_attach_cs() was renamed to qc_attach_sc().
Function arguments and local variables called "cs" were renamed to "sc"
to avoid future confusion. The change is huge (~580 lines), so extreme
care was given not to change anything else.
The check struct had a "cs" field renamed to "sc", which also required
a tiny update to a few functions using it to distinguish a check from
a stream (log.c, payload.c, ssl_sample.c, tcp_sample.c, tcpcheck.c,
connection.c).
Function arguments and local variables called "cs" were renamed to "sc".
The presence of one "cs=" in the debugging traces was also turned to
"sc=" for consistency.
There's no more reason for keeping the code and definitions in conn_stream,
let's move all that to stconn. The alphabetical ordering of include files
was adjusted.
This file contains all the stream-connector functions that are specific
to application layers of type stream. So let's name it accordingly so
that it's easier to figure what's located there.
The alphabetical ordering of include files was preserved.
An equivalent applet_need_more_data() was added as well since that function
is mostly used from applet code. It makes it much clearer that the applet
is waiting for data from the stream layer.
These ones are essentially for the stream endpoint, let's give them a
name that matches the intent. Equivalent versions were provided in the
applet namespace to ease code legibility.
These ones were used exactly once and together, in sc_is_send_allowed().
No need to give them confusing names, instead let's just put the flags,
they're way more explicit, and drop the two functions.
This flag indicates that the stream endpoint is willing to consume output
data from the stream. Its new name makes this more explicit. The function
names will be updated accordingly, which will remove the disturbing "get"
everywhere.
The following flags are not at all related to the endpoint but to the
connector itself:
- SE_FL_RXBLK_ROOM
- SE_FL_RXBLK_BUFF
- SE_FL_RXBLK_CHAN
As such they have no business staying in the endpoint descriptor and
they must move to the stream connector. They've also been renamed
accordingly to better match what they correspond to (the same name
as the function that sets them).
The rare occurrences of cs_rx_blocked() were replaced by an explicit
test on the list of flags. The reason is that cs_rx_blocked() used to
preserve some tests that are not needed at certain places since already
known. For the same reason SE_FL_RXBLK_ANY wasn't converted. As such it
will later be possible to carefully review these few locations and
eliminate the unneeded flags from the tests. No particular function
was made to test them since they're explicit enough.
It now looks like ci_putchk() and friends could very well place the flag
themselves on the connector when they detect a buffer full condition, as
this would significantly simplify the high-level API. But all usages must
first be reviewed before this simplification can be done. For now it
remains done by applet_put*() instead.
It's more explicit this way. The cs_rx_endp_ready() function could be
removed so that the flag is directly tested. In the future it should
be inverted and the few places where it's set (or preserved via
SE_FL_APP_MASK) could be dropped.
At plenty of places we combine multiple flags checks to determine if we
can receive (endp_ready, rx_blocked, cf_shutr etc). Let's group them
under a single function that is meant to replace existing tests.
Some tests were only checking the rxblk flags at the connection level,
so for now they were not converted, this requires a bit of auditing
first, and probably a test to determine whether or not to check for
cf_shutr (e.g. there is none if no stream is present).
The analysis of cs_rx_endp_more() showed that the purpose is for a stream
endpoint to inform the connector that it's ready to deliver more data to
that one, and conversely cs_rx_endp_done() that it's done delivering data
so it should not be bothered again for this.
This was modified two ways:
- the operation is no longer performed on the connector but on the
endpoint so that there is no more doubt when reading applet code
about what this rx refers to; it's the endpoint that has more or
no more data.
- an applet implementation is also provided and mostly used from
applet code since it saves the caller from having to access the
endpoint descriptor.
It's visible that the flag ought to be inverted because some places
have to set it by default for no reason.
These functions are used by the application layer to disable or enable
reading at the stream connector's level when the input buffer failed to
be allocated (or was finally allocated). The new names makes things
clearer.
These functions were used by the channel to inform the lower layer
whether reading was acceptable or not. Usually this directly mimics
the CF_DONT_READ flag from the channel, which may be set when it's
desired not to buffer incoming data that will not be processed, or
that the buffer wants to be flushed before starting to read again,
or that bandwidth limiting might be enforced, etc. It's always a
policy reason, not a purely resource-based one.
The new name more clearly indicates that a stream connector cannot make
any more progress because it needs room in the channel buffer, or that
it may be unblocked because the buffer now has more room available. The
testing function is sc_waiting_room(). This is mostly used by applets.
Note that the flags will change soon.
This makes SE_FL_APPLET_NEED_CONN autonomous, in that we check for it
everywhere we have a relevant cs_rx_blocked(), so that the flag doesn't
need to be covered by cs_rx_blocked() anymore. Indeed, this flag doesn't
really translate a receive blocking condition but rather a refusal to
wake up an applet that is waiting for a connection to finish to setup.
This also ensures we will not risk setting it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK, because the flag being
specific to the endpoint only and not to the connector, we don't
want to preserve it when replacing the endpoint.
It's possible that cs_chk_rcv() could later be further simplified if
we can demonstrate that the two tests in it can be merged.
This flag is exclusively used when a front applet needs to wait for the
other side to connect (or fail to). Let's give it a more explicit name
and remove the ambiguous function that was used only once.
This also ensures we will not risk setting it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK, because the flag being
specific to the endpoint only and not to the connector, we don't
want to preserve it when replacing the endpoint.
This flag is no more needed, it was only set on shut read to be tested
by cs_rx_blocked() which is now properly tested for shutr as well. The
cs_rx_blk_shut() calls were removed. Interestingly it allowed to remove
a special case in the L7 retry code.
This also ensures we will not risk setting it back on a new endpoint
after cs_reset_endp() via SE_FL_APP_MASK.
One flag (RXBLK_SHUT) is always set with CF_SHUTR, so in order to remove
it, we first need to make sure we always check for CF_SHUTR where
cs_rx_blocked() is being used.
When a shutdown(WR) is performed, send is no longer allowed, and that is
currently handled by the explicit cs_done_get() in the various shutw()
calls. That's a bit ugly and complicated for no reason, let's simply
integrate the test of SHUTW in sc_is_send_allowed().
Note that the test could also be added wherever sc_is_send_allowed() is
used but for now proceeding like this limits the changes.
sc_is_send_allowed() is now used everywhere instead of the combination
of cs_tx_endp_ready() && !cs_tx_blocked(). There's no place where we
need them individually thus it's simpler. The test was placed in cs_util
as we'll complete it later.
A number of functions in cs_utils.h are not usable from functions taking
a const because they're not declared as using const, despite never
modifying the stconn. Let's address this for the following ones:
sc_ic(), sc_oc(), sc_ib(), sc_ob(), sc_strm_task(),
cs_opposite(), sc_conn_ready(), cs_src(), cs_dst(),
First it applies to the stream endpoint and not the conn_stream, and
second it only tests and touches the flags so it makes sense to call
it se_fl_ like other functions which only manipulate the flags, as
it's just a special case of flags.
It returns an stconn from a connection and not the opposite, so the name
change was more appropriate. In addition it was moved to connection.h
which manipulates the connection stuff, and it happens that only
connection.c uses it.
The following functions which act on a connection-based stream connector
were renamed to sc_conn_* (~60 places):
cs_conn_drain_and_shut
cs_conn_process
cs_conn_read0
cs_conn_ready
cs_conn_recv
cs_conn_send
cs_conn_shut
cs_conn_shutr
cs_conn_shutw
The function doesn't return a pointer to the mux but to the mux stream
(h1s, h2s etc). Let's adjust its name to reflect this. It's rarely used,
the name can be enlarged a bit. And of course s/cs/sc to accommodate
the updated name.
These functions return the app-layer associated with an stconn, which
is a check, a stream or a stream's task. They're used a lot to access
channels, flags and for waking up tasks. Let's just name them
appropriately for the stream connector.
We're starting to propagate the stream connector's new name through the
API. Most call places of these functions that retrieve the channel or its
buffer are in applets. The local variable names are not changed in order
to keep the changes small and reviewable. There were ~92 uses of cs_ic(),
~96 of cs_oc() (due to co_get*() being less factorizable than ci_put*),
and ~5 accesses to the buffer itself.