The cache makes use of dates advertised by external components, such
as "last-modified" or "date". These are therefore wall-clock dates,
not internal dates. However, all comparisons are mistakenly made
against the internal monotonic date, which is designed to drift from
the wall-clock one in order to catch up with stolen time (which can
sometimes be significant in VMs). As a result, after some run time,
some objects may fail to validate or fail to expire depending on the
direction of the drift. This
is particularly visible when applying an offset to the internal time to
force it to wrap soon after startup, as it will be shifted up to 49.7
days in the future depending on the current date; in this case, as of
this commit, the reg-test "cache_expires.vtc" fails on the 3rd test by
returning stale contents from the cache.
It is really important that all external dates are compared against
"date" and not "now" for this reason.
This fix needs to be backported to all versions.
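As a minimal illustration of the rule above (hedged: these identifiers
are simplified stand-ins, not the actual cache code):

    #include <sys/time.h>

    extern struct timeval date; /* wall-clock date, follows the system clock */
    extern struct timeval now;  /* internal monotonic date, may drift from it */

    /* correct: an expiry derived from response headers is a wall-clock
     * date, so it must be compared against "date" */
    static int entry_is_expired(time_t expires_at)
    {
        return expires_at <= date.tv_sec;
    }
    /* comparing against now.tv_sec instead would make objects expire
     * too early or too late as the drift accumulates */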
We've had a start date even before the internal monotonic clock existed,
but once the monotonic clock was added, the start date was not updated
to distinguish the wall clock time units and the internal monotonic time
units. The distinction is important because both clocks do not necessarily
progress at the same speed. The rare uses of the wall-clock date are
essentially for human consumption and communication with third parties
(e.g. reporting the start date in "show info" for monitoring purposes).
However, this date is currently also used to measure the distance to
"now", i.e. the process' uptime. This is actually not correct. It
only works because for now the two dates are initialized at the exact
same instant at boot but could still be wrong if the system's date shows
a big jump backwards during startup for example. In addition the current
situation prevents us from enforcing an arbitrary offset at boot to reveal
some heisenbugs.
This patch adds a new "start_time" at boot that is set from "now" and is
used in uptime calculations. "start_date" instead is now set from "date"
and will always reflect the system date for human consumption (e.g. in
"show info"). This way we're now sure that any drift of the internal
clock relative to the system date will not impact the reported uptime.
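A minimal sketch of the resulting split (hedged: simplified
declarations, not the exact code):

    #include <sys/time.h>

    extern struct timeval now;  /* internal monotonic clock */
    extern struct timeval date; /* wall-clock date */

    struct timeval start_time;  /* set from "now" at boot */
    struct timeval start_date;  /* set from "date" at boot */

    /* uptime is a distance measured on the monotonic clock... */
    static long uptime_sec(void)
    {
        return now.tv_sec - start_time.tv_sec;
    }
    /* ...while start_date is only reported verbatim, e.g. in "show info" */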
This could possibly be backported though it's unlikely that anyone has
ever noticed the problem.
At some point, expired stick-table records stop being removed. This
happens when the internal time wraps around the 32-bit limit, i.e.
every 49.7 days. What precisely happens is that some elements collected
close to the end of the time window (2^32 minus the table's "expire"
setting) might have been updated and requeued further, at the beginning
of the next window. From there, three bad situations arise:
- the incorrect integer-based comparison, which is not aware of wrapping,
will cause the scan to restart from the freshly requeued element,
skipping all those at the end of the window. The net effect of this
is that at each wakeup of the expiration task, only one element from
the end of the window will be expired, and other ones will remain
there for a very long time, especially if they have to wait for all
the predecessors to be picked one at a time after slow wakeups due
to a long expiration ; this is what was observed in issue #2034
making the table fill up and appear as not expiring at all, and it
seems that issue #2024 reports the same problem at the same moment
(since such issues happen for everyone roughly at the same time
when the clock doesn't drift too much).
- the elements that were placed at the beginning of the next window
are skipped as well for as long as there are refreshed entries at
the end of the previous window, so these ones also participate in
filling the table. This is caused by the restart from the current,
updated node, which is generally placed after most other less recently
updated elements.
- once the last element at the end of the window is picked, suddenly
there is a large amount of expired entries at the beginning of the
next window that all have to be requeued. If the expiration delay
is large, the number can be big and it can take a long time, which
can very likely explain the periodic crashes reported in issue #2025.
Limiting the batch size as done in commit dfe79251d ("BUG/MEDIUM:
stick-table: limit the time spent purging old entries") would make
sense for process_table_expire() as well.
This patch addresses the incorrect tree scan algorithm to make sure that:
- there's always a next element to compare against: even when dealing
with the last one in the tree, the first one must be used ;
- time comparisons used to decide whether to restart from the current
element use tick_is_lt(), as it is the only case where we know the
current element will be placed before any other one (since the tree
respects insertion ordering for duplicates); a sketch of this
wrap-aware comparison follows.
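Hedged (this follows the spirit of tick_is_lt(); the exact in-tree
definition may differ slightly):

    /* non-zero if t1 is before t2, even across the 32-bit wrap */
    static inline int tick_is_lt(unsigned int t1, unsigned int t2)
    {
        return (int)(t1 - t2) < 0;
    }

    /* e.g. with t1 = 0xfffffff0 (end of a window) and t2 = 0x00000010
     * (beginning of the next one), a plain "t1 < t2" is false while
     * tick_is_lt(t1, t2) is true, so entries requeued across the wrap
     * are not skipped by the scan */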
In order to reproduce the issue, it was found that injecting traffic on
random keys spanning over half of the size of a table whose expiration
is set to 15s, while the date is going to wrap in 20s, does exhibit an
increase of the table's size 5s after startup, when entries start to be
pushed to the next window. It's more effective when a second load
generator constantly hammers the same key to be certain that it is
never ready to expire. This doesn't happen anymore after this patch.
This fix needs to be backported to all stable versions. The bug has been
there for as long as the stick tables were introduced in 1.4-dev7 with
commit 3bd697e07 ("[MEDIUM] Add stick table (persistence) management
functions and types"). A cleanup could consist in deduplicating that
code by having process_table_expire() call __stktable_trash_oldest(),
with that one improved to support an optional time check.
Display a warning when some text exists between the filename and the
options. This part is completely ignored, so if filters were put there,
they were never parsed.
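For example (illustrative crt-list line, not taken from a real config),
the warning now covers the stray words in:

    site.pem these-words-were-silently-dropped [ssl-min-ver TLSv1.2] www.example.com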
This could be backported to all versions. In older versions, the
parsing was done in ssl_sock_load_cert_list_file() in ssl_sock.c.
Aurélien reported that the BUG_ON(!new_ts.nbgrp) added in 2.8-dev3 by
commit 50440457e ("MEDIUM: config: restrict shards, not bind_conf to one
group each") can trigger on some invalid configs where the thread_set on
the "bind" line couldn't be resolved. The reason is that we still enter
the parsing loop (as it was done previously) and we possibly have no
group to work on (which was the purpose of this assertion). We thus
need to bypass this whole block in such a condition.
No backport is needed.
Aurélien reported a bug making a statement such as "thread 2-2" fail for
a config made of exactly 2 threads. What happens is that the parser for
the "thread" keyword scans a range of thread numbers from either 1..64
or 0,-1,-2 for special values, and presets the bit masks accordingly in
the thread set, except that due to the 1..64 range, the shift length must
be reduced by one. Not doing this produces an empty mask when a single
thread number is exactly equal to the number of threads in the group,
and makes the parsing fail.
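A hedged illustration of the off-by-one (standalone example, not the
actual parser code):

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 2;                 /* "thread 2-2" in a 2-thread group */
        unsigned long grp_mask = 0x3;       /* 2 threads -> bits 0 and 1 */
        unsigned long ok  = 1UL << (n - 1); /* thread n lives at bit n-1 */
        unsigned long bad = 1UL << n;       /* forgot to reduce the shift */

        /* prints "ok=0x2 bad=0": the unreduced shift falls outside
         * the group mask and yields an empty set */
        printf("ok=%#lx bad=%#lx\n", ok & grp_mask, bad & grp_mask);
        return 0;
    }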
No backport is needed as this was introduced in 2.8-dev3 by commit
bef43dfa6 ("MINOR: thread: add a simple thread_set API").
Due to multithreading concurrency, it is difficult at this time to figure
out how this counter may become negative. This simple patch merely
ensures this will never be the case.
This issue appeared with this commit:
"9969adbcdc MINOR: stats: add by HTTP version cumulated number of sessions and requests"
So, this patch should be backported when the latter has been backported.
When stats_putchk() fails to perform the dump because available data space in
htx is less than the number of bytes pending in the dump buffer, we wait
for more room in the htx (ie: sc_need_room()) to retry the dump attempt
on the next applet invocation.
To provide consistent output, we have to make sure that the stat ctx is not
updated (or at least correctly reverted) in case stats_putchk() fails so
that the new dumping attempt behaves just like the previous (failed) one.
STAT_STARTED does not follow this logic: the flag is set in
stats_dump_fields_json() as soon as some data is written to the output buffer.
It's done too early: we need to delay this step until stats_putchk() has
successfully returned if we want to correctly handle retry attempts.
Because of this, JSON output could suffer from extraneous ',' characters which
could make json parsers unhappy.
For example, this is the kind of error you could get when using
`python -m json.tool` on such badly formatted output:
"Expecting value: line 1 column 2 (char 1)"
Unfortunately, fixing this means that the flag needs to be enabled at
multiple places, which is what we're doing in this patch.
(in stats_dump_proxy_to_buffer(), where stats_dump_one_line() is involved
through the underlying stats_dump_{fe,li,sv,be} functions)
This incidentally raises the need for a cleanup to reduce code duplication
around the stats_dump_proxy_to_buffer() function and simplify things a bit.
It could be backported to 2.6 and 2.7
In ("MINOR: stats: introduce stats field ctx"), we forgot
to apply the patch to servers.
This prevents "BUG/MINOR: stats: fix show stat json buffer limitation"
from working with servers dump.
We're adding the missing part related to servers dump.
This commit should be backported with the aforementioned commits.
When ctx->field was introduced with ("MINOR: stats: introduce stats field ctx")
a mistake was made for the STAT_PX_ST_LI state in stats_dump_proxy_to_buffer():
current_field reset is placed after the for loop, ie: after multiple lines
are dumped. Instead it should be placed right after each li line is dumped.
This could cause some output inconsistencies (missing fields), especially when
http dump is used with JSON output and "socket-stats" option is enabled
on the proxy, because when the htx is full we restore ctx->field from
current_field (which contains an outdated value in this case).
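In pseudo-code, the fix amounts to this (hedged: simplified flow and
arguments, not the actual function):

    /* STAT_PX_ST_LI state of stats_dump_proxy_to_buffer(), simplified */
    list_for_each_entry(li, &px->conf.listeners, by_fe) {
        if (!stats_dump_one_line(/* ... */)) {  /* htx full */
            ctx->field = current_field;         /* revert for the retry */
            return 0;
        }
        current_field = 0; /* fix: reset right after EACH dumped line */
    }
    /* bug: the reset used to sit here, after the whole loop */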
This should be backported with ("MINOR: stats: introduce stats field ctx")
In ("BUG/MEDIUM: stats: Rely on a local trash buffer to dump the stats"),
we forgot to apply the patch in resolvers.c which provides the
stats_dump_resolvers() function that is involved when dumping with the
"resolvers" domain.
As a consequence, resolvers dump was broken because stats_dump_one_line(),
which is used in stats_dump_resolv_to_buffer(), implicitly uses trash_chunk
from stats.c to prepare the dump, and stats_putchk() is then called with
global trash (currently empty) as output data.
Given that the trash_dump variable is static and thus only available within
stats.c, we change the stats_putchk() function prototype so that it does not take
the output buffer as an argument. Instead, stats_putchk() will implicitly use
the local trash_dump variable declared in stats.c.
It will also prevent further mixups between stats_dump_* functions and
stats_putchk().
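The prototype change then looks like this (hedged reconstruction, the
exact argument list may differ):

    /* before: the output buffer had to be passed explicitly */
    int stats_putchk(struct channel *chn, struct htx *htx, struct buffer *chk);

    /* after: the static dump buffer local to stats.c is used implicitly */
    int stats_putchk(struct channel *chn, struct htx *htx);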
This needs to be backported with ("BUG/MEDIUM: stats: Rely on a local trash
buffer to dump the stats")
In ("BUG/MINOR: stats: use proper buffer size for http dump"),
we used trash.size as source buffer size before applying the htx
overhead computation.
It is safer to use res->buf.size instead since res_htx (which is the <htx> argument
passed to stats_putchk() in http context) is made from res->buf:
in http_stats_io_handler:
| res_htx = htx_from_buf(&res->buf);
This will prevent the hang bug from showing up again should res->buf.size ever
be less than trash.size (which is set according to tune.bufsize).
This should be backported with ("BUG/MINOR: stats: use proper buffer size for http dump")
Released version 2.8-dev3 with the following main changes :
- BUG/MINOR: sink: make sure to always properly unmap a file-backed ring
- DEV: haring: add a new option "-r" to automatically repair broken files
- BUG/MINOR: ssl: Fix leaks in 'update ssl ocsp-response' CLI command
- MINOR: ssl: Remove debug fprintf in 'update ssl ocsp-response' cli command
- MINOR: connection: add a BUG_ON() to detect destroying connection in idle list
- MINOR: mux-quic/h3: send SETTINGS as soon as transport is ready
- BUG/MINOR: h3: fix GOAWAY emission
- BUG/MEDIUM: mux-quic: fix crash on H3 SETTINGS emission
- BUG/MEDIUM: hpack: fix incorrect huffman decoding of some control chars
- BUG/MINOR: log: release global log servers on exit
- BUG/MINOR: ring: release the backing store name on exit
- BUG/MINOR: sink: free the forwarding task on exit
- CLEANUP: trace: remove the QUIC-specific ifdefs
- MINOR: trace: add a TRACE_ENABLED() macro to determine if a trace is active
- MINOR: trace: add a trace_no_cb() dummy callback for when to use no callback
- MINOR: trace: add the long awaited TRACE_PRINTF()
- MINOR: h2: add h2_phdr_to_ist() to make ISTs from pseudo headers
- MEDIUM: mux-h2/trace: add tracing support for headers
- CLEANUP: mux-h2/trace: shorten the name of the header enc/dec functions
- DEV: hpack: fix `trash` build regression
- MINOR: http_htx: add http_append_header() to append value to header
- MINOR: http_htx: add http_prepend_header() to prepend value to header
- MINOR: sample: add ARGC_OPT
- MINOR: proxy: introduce http only options
- MINOR: proxy/http_ext: introduce proxy forwarded option
- REGTEST: add ifnone-forwardfor test
- MINOR: proxy: move 'forwardfor' option to http_ext
- MINOR: proxy: move 'originalto' option to http_ext
- MINOR: http_ext: introduce http ext converters
- MINOR: http_ext: add rfc7239_is_valid converter
- MINOR: http_ext: add rfc7239_field converter
- MINOR: http_ext: add rfc7239_n2nn converter
- MINOR: http_ext: add rfc7239_n2np converter
- REGTEST: add RFC7239 forwarded header tests
- OPTIM: http_ext/7239: introduce c_mode to save some space
- MINOR: http_ext/7239: warn the user when fetch is not available
- MEDIUM: proxy/http_ext: implement dynamic http_ext
- MINOR: cfgparse/http_ext: move post-parsing http_ext steps to http_ext
- DOC: config: fix option spop-check proxy compatibility
- BUG/MINOR: fcgi-app: prevent 'use-fcgi-app' in default section
- DOC: config: 'http-send-name-header' option may be used in default section
- BUG/MINOR: mux-h2: Fix possible null pointer deref on h2c in _h2_trace_header()
- BUG/MINOR: http_ext/7239: ipv6 dumping relies on out of scope variables
- BUG/MEDIUM: h3: do not crash if no buf space for trailers
- OPTIM: h3: skip buf realign if no trailer to encode
- MINOR: mux-quic/h3: define stream close callback
- BUG/MEDIUM: h3: handle STOP_SENDING on control stream
- BUG/MINOR: h3: reject RESET_STREAM received for control stream
- MINOR: h3: add missing traces on closure
- BUG/MEDIUM: ssl: wrong eviction from the session cache tree
- BUG/MINOR: h3: fix crash due to h3 traces
- BUG/MINOR: h3: fix crash due to h3 traces
- BUG/MEDIUM: thread: consider secondary threads as idle+harmless during boot
- BUG/MINOR: stats: use proper buffer size for http dump
- BUILD: makefile: fix PCRE overriding specific lib path
- MINOR: quic: remove fin from quic_stream frame type
- MINOR: quic: ensure offset is properly set for STREAM frames
- MINOR: quic: define new functions for frame alloc
- MINOR: quic: refactor frame deallocation
- MEDIUM: quic: implement a retransmit limit per frame
- MINOR: quic: add config for retransmit limit
- OPTIM: htx: inline the most common memcpy(8)
- CLEANUP: quic: no need for atomics on packet refcnt
- MINOR: stats: add by HTTP version cumulated number of sessions and requests
- BUG/MINOR: quic: Possible stream truncations under heavy loss
- BUG/MINOR: quic: Too big PTO during handshakes
- MINOR: quic: Add a trace about variable states in qc_prep_fast_retrans()
- BUG/MINOR: quic: Do not ignore coalesced packets in qc_prep_fast_retrans()
- MINOR: quic: When probing Handshake packet number space, also probe the Initial one
- BUG/MAJOR: quic: Possible crash when processing 1-RTT during 0-RTT session
- MEDIUM: quic: Remove qc_conn_finalize() from the ClientHello TLS callbacks
- BUG/MINOR: quic: Unchecked source connection ID
- MEDIUM: listener: move the analysers mask to the bind_conf
- MINOR: listener: move maxseg and tcp_ut to bind_conf
- MINOR: listener: move maxaccept from listener to bind_conf
- MINOR: listener: move the backlog setting from listener to bind_conf
- MINOR: listener: move the maxconn parameter to the bind_conf
- MINOR: listener: move the ->accept callback to the bind_conf
- MINOR: listener: remove the useless ->default_target field
- MINOR: listener: move the nice field to the bind_conf
- MINOR: listener: move the NOLINGER option to the bind_conf
- MINOR: listener: move the NOQUICKACK option to the bind_conf
- MINOR: listener: move the DEF_ACCEPT option to the bind_conf
- MINOR: listener: move TCP_FO to bind_conf
- MINOR: listener: move the ACC_PROXY and ACC_CIP options to bind_conf
- MINOR: listener: move LI_O_UNLIMITED and LI_O_NOSTOP to bind_conf
- MINOR: listener: get rid of LI_O_TCP_L4_RULES and LI_O_TCP_L5_RULES
- CLEANUP: listener: remove the now unused options field
- MINOR: listener: remove the now useless LI_F_QUIC_LISTENER flag
- CLEANUP: config: remove test for impossible case regarding bind thread mask
- MINOR: thread: add a simple thread_set API
- MEDIUM: listener/config: make the "thread" parser rely on thread_sets
- CLEANUP: config: stop using bind_tgroup and bind_thread
- CLEANUP: listener/thread: remove now unused bind_conf's bind_tgroup/bind_thread
- CLEANUP: listener/config: remove the special case for shards==1
- MEDIUM: config: restrict shards, not bind_conf to one group each
- BUG/MEDIUM: quic: do not split STREAM frames if no space
- BUILD: thread: fix build warnings with older gcc compilers
When building STREAM frames in a packet buffer, if a frame is too large
it will be split in two. A shortened version will be used and the
original frame will be modified to represent the remaining data.
To ensure there is enough space to store the frame data length encoded
as a QUIC integer, we use the function max_available_room(). This
function can return 0 if only a small space is left, insufficient for
the frame header and the shortened data. Prior to this patch, this
wasn't checked and an unneeded empty STREAM frame was built and sent
for nothing.
Change this by checking the value returned by max_available_room(). If 0,
do not try to split this frame and continue to the next ones in the
packet.
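A hedged sketch of the added check (variable names and the surrounding
loop are illustrative):

    /* before splitting a too-large STREAM frame, make sure some room
     * remains for the header and a shortened payload */
    avail = max_available_room(/* ... */);
    if (!avail)
        continue; /* don't split: try the next frame of the packet */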
On 2.6, this patch serves as an optimization which will prevent the building
of unneeded empty STREAM frames.
On 2.7, this behavior has the side effect of triggering a BUG_ON()
statement in quic_build_stream_frame(). This BUG_ON() ensures that we do
not use a quic_frame with the OFF bit set if its offset is 0. This can
happen if the condition described above occurs for a STREAM frame at
offset 0: an unneeded empty frame is built as described, but the
original frame is modified with its OFF bit set even though the offset
is still 0.
This must be backported up to 2.6.
Now that we're using thread_sets, there's no need to restrict an entire
bind_conf to one group; since the real concern is the FD, we can move
that restriction to the shard only. This means that as long as we have enough
shards and that they're properly aligned on group boundaries (i.e. the
number of shards is an integer divisor of the number of threads), we
can support "bind" lines spanning more than one group.
The check that a shard does not span more than one group is still
performed, and an error is emitted when this happens. But at least now it becomes
possible to have this:
global
    nbthread 256

frontend foo
    bind :1111 shards 4
    bind :2222 shards by-thread
Let's now retrieve the first thread group and its mask from the
thread_set so that we don't need these fields in the bind_conf anymore.
For now we're still limited to the first group (like before) but that
allows us to get rid of these fields and to make sure that there's nothing
"special" being done there anymore.
Instead of reading and storing a single group and a single mask for a
"thread" directive on a bind line, we now store the complete range in
a thread set that's stored in the bind_conf. The bind_parse_thread()
function now just calls parse_thread_set() to complete the current set,
which starts empty, and thread_resolve_group_mask() was updated to
support retrieving thread group numbers or absolute thread numbers
directly from the pre-filled thread_set, and continue to feed bind_tgroup
and bind_thread. The CLI parsers which were pre-initialized to set the
bind_tgroup to 1 cannot do it anymore as it would prevent one from
restricting the thread set. Instead check_config_validity() now detects
the CLI frontend and passes the info down to thread_resolve_group_mask()
that will automatically use only the group 1's threads for these
listeners. The same is done for the peers listeners for now.
At this step it's already possible to start with all previous valid
configs as well as extended ones supporting comma-delimited thread
sets. In addition the parser already accepts large ranges spanning
multiple groups, but since the underlying listeners infrastructure is
not ready for this, for now we're maintaining a specific check against
it at the higher level of the config validity check.
The patch is a bit large because thread resolution is performed in
multiple steps, so we need to adjust all of them at once to preserve
functional and technical consistency.
The purpose is to be able to store large thread sets, defined by ranges
that may cross group boundaries, as well as define lists of groups and
masks. The thread_set struct implements the storage, and the parser is
in parse_thread_set(), with a focus on "bind" lines, though not exclusively.
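As a rough sketch of the storage (hedged: field names and sizes are
illustrative, not the exact struct):

    #define MAX_TGROUPS 16

    struct thread_set {
        unsigned long rel[MAX_TGROUPS]; /* per-group relative thread masks */
        unsigned long grps;             /* bit field of non-empty groups */
    };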
During 2.5 development, a fallback was implemented for bind "thread"
directives that would not map to existing threads, with commit e3f4d7496
("MEDIUM: config: resolve relative threads on bind lines to absolute ones").
The approach consisted in remapping the threads to other ones. But now
that relative threads, not absolute ones, are stored in this mask, this
case cannot happen anymore, so this confusing hack is no longer needed.
This flag is only used to tag a QUIC listener, which we now know by
its bind_conf's xprt as well. It's only used to decide whether or not
to perform an extra initialization step on the listener. Let's drop it
as well as the flags field.
With the various fields and options moved, the listener struct shrank
by 48 bytes in total.
All options that made sense were moved to the bind_conf, and remaining
ones were removed. This field isn't used at all anymore. The thr_idx
field was moved there to plug the hole.
LI_O_TCP_L4_RULES and LI_O_TCP_L5_RULES are only set from the proxy
based on the presence or absence of tcp_req l4/l5 rules. It's basically
as cheap to check the list as it is to check the flag, except that there
is no need to maintain a copy. Let's get rid of them, and this may ease
addition of more dynamic stuff later.
These two flags are entirely for internal use and are even per proxy
in practice since they're used for peers and CLI to indicate (for the
first one) that the listener(s) are not subject to connection limits,
and for the second that the listener(s) should not be stopped on
soft-stop. No need to keep them in the listeners, let's move them to
the bind_conf under names BC_O_UNLIMITED and BC_O_NOSTOP.
These are only set per bind line and used when creating a session,
we can move them to the bind_conf under the names BC_O_ACC_PROXY and
BC_O_ACC_CIP respectively.
It's set per bind line ("tfo") and only used in tcp_bind_listener() so
there's no point keeping the address family tests, let's just store the
flag in the bind_conf under the name BC_O_TCP_FO.
This option is set per bind line, and was only stored when the
address family is AF_INET4 or AF_INET6. That's pointless since it's
used only in tcp_bind_listener() which is only used for such families
as well, so it can now be moved to the bind_conf under the name
BC_O_DEF_ACCEPT.
It's currently declared per-frontend, though it would make sense to
support it per-line but in no case per-listener. Let's move the option
to a bind_conf option BC_O_NOLINGER.
This field is used by stream_new() to optionally set the applet the
stream will connect to for simple proxies like the CLI for example.
But it has never been configurable to anything and is always strictly
equal to the frontend's ->default_target. Let's just drop it and make
stream_new() only use the frontend's. It makes more sense anyway as
we don't want the proxy to work differently based on the "bind" line.
This idea was brought in 1.6 hoping that the h2 implementation would
use applets for decoding (which was dropped after the very first
attempt in 1.8).
The accept callback directly derives from the upper layer, generally
it's session_accept_fd(). As such it's also defined per bind line
so it makes sense to move it there.
The maxconn is set per bind line so let's move it there. This might
possibly even slightly reduce inter-thread contention since this one
is read-mostly and it was stored next to nbconn which changes for
each connection setup or teardown.
Like for previous values, maxaccept is really per-bind_conf, so let's
move it there. Some frontends (peers, log) set it to 1 so the assignment
was slightly moved.
These two arguments were only set and only used with tcpv4/tcpv6. Let's
just store them into the bind_conf instead of duplicating them for all
listeners since they're fixed per "bind" line.
When bind_conf were created, some elements such as the analysers mask
ought to have moved there but that wasn't the case. Now that it's
getting clearer that bind_conf provides all binding parameters and
the listener is essentially a listener on an address, it's starting
to get really confusing to keep such parameters in the listener, so
let's move the mask to the bind_conf. We also take this opportunity
for pre-setting the mask to the frontend's upon initialization. Now
several loops have one less argument to take care of.
The SCID (source connection ID) used by a peer (client or server) is sent into the
long header of a QUIC packet in clear. But it is also sent into the transport
parameters (initial_source_connection_id). As the latter are encrypted in the
packet, one must check that these two pieces of information do not differ
due to a packet header corruption. Furthermore, as such a connection is
unusable, it must be killed and must stop processing RX/TX packets as
soon as possible.
Implement qc_kill_con() to flag a connection as unusable and to kill it
asap, waking up the idle timer task to release the connection.
Add a check to quic_transport_params_store() to detect that the SCIDs do not
match and make it call qc_kill_con().
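A hedged sketch of that check (field names are illustrative,
qc_kill_con() is the function introduced above):

    /* the clear-text SCID from the long header must match the
     * encrypted initial_source_connection_id transport parameter */
    if (tp->initial_scid.len != qc->scid.len ||
        memcmp(tp->initial_scid.data, qc->scid.data, qc->scid.len) != 0) {
        qc_kill_con(qc); /* corrupted header: connection is unusable */
        return 0;
    }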
Add several checks for connections to be killed at critical locations,
especially in the TLS stack callbacks used to receive CRYPTO data or
derive secrets, and before preparing packets after having received others.
Must be backported to 2.6 and 2.7.
It is a bad idea to make the TLS ClientHello callback call qc_conn_finalize().
If the latter fails, this would generate a TLS alert and make the connection
send packets whereas it is not functional. But qc_conn_finalize()'s job was to
install the transport parameters sent by the QUIC listener. This installation
cannot be done at any time: it must be done after having possibly negotiated
the QUIC version and before sending the first Handshake packets. The best
moment to do that seems to be when the Handshake TX secrets are derived. This
has been found by inspecting the ngtcp2 code. Calling SSL_set_quic_transport_params()
too late would make the ServerHello be sent without the transport parameters.
The code for the connection update which was done from qc_conn_finalize() has
been moved to quic_transport_params_store(). So, this update is done as soon as
possible.
Add QUIC_FL_CONN_TX_TP_RECEIVED to flag the connection as having received the
peer transport parameters. Indeed this is required when the ClientHello message
is split across several packets.
Add QUIC_FL_CONN_FINALIZED to protect the connection from calling qc_conn_finalize()
more than once. The latter is called only when the connection has received
the transport parameters and after returning from SSL_do_handshake(), which
is the function that triggers the TLS ClientHello callback.
Remove the calls to qc_conn_finalize() from the TLS ClientHello callbacks.
Must be backported to 2.6 and 2.7.
This bug was revealed by some C1 interop tests (heavy handshake packet
corruption) when receiving 1-RTT packets with a key phase update.
This led such packets to be decrypted with the next key phase secrets,
but these latter are initialized only after the handshake is complete.
In fact, 1-RTT must never be processed before the handshake is complete.
Relying on the "qc->mux_state == QC_MUX_NULL" condition to check whether
the handshake is complete is wrong during 0-RTT sessions, where the mux
is initialized before the handshake is complete.
Must be backported to 2.7 and 2.6.
This is not really a bug fix but an improvement. When the Handshake packet
number space has been detected as needing to be probed, we should also try
to probe the Initial packet number space if there are still packets in
flight. Furthermore
we should also try to send up to two datagrams.
Must be backported to 2.6 and 2.7.
This function is called only when probing a single packet number space
(Handshake) or the same one twice (Application). So, there is no risk
of needlessly preparing the same frame twice because we wanted to
probe two packet number spaces. The condition "ignore the packets
which have been coalesced to another one" is not necessary. More
importantly, the bug occurs when we want to prepare an Application
packet which has been coalesced to a Handshake packet. This is always
the case when the first Application packet is sent: it is always
coalesced to a Handshake packet with an ACK frame. So, when lost, this
first Application packet was never resent. It contains the HANDSHAKE_DONE
frame which confirms the completion of the handshake to the client.
Must be backported to 2.6 and 2.7.
During the handshake, and as long as the handshake has not been confirmed,
the acknowledgement delays reported by the peer may be larger than
max_ack_delay. max_ack_delay SHOULD be ignored when computing the PTO
until the handshake is confirmed. But the current code considered the
wrong condition, "before the handshake is completed".
Replace the enum value QUIC_HS_ST_COMPLETED by QUIC_HS_ST_CONFIRMED to
fix this issue. In quic_loss.c, the parameter passed to quic_pto_pktns()
is renamed to avoid any possible confusion.
Must be backported to 2.7 and 2.6.
This may happen during retransmission of frames which can be split
(CRYPTO or STREAM frames). One may have to split a frame to be
retransmitted due to the QUIC protocol properties (packet size limitation
and packet field encoding sizes). The remaining part of a frame which
could not be retransmitted must be detached from the original frame it
was copied from. If not, when the part that was really sent is
acknowledged, the remaining part will be acknowledged too, even though
it was never sent!
Must be backported to 2.7 and 2.6.
Add cum_sess_ver[], a new array of counters to count the cumulated number
of HTTP sessions by version (h1, h2 or h3).
Implement proxy_inc_fe_cum_sess_ver_ctr() to increment these counters.
This function is called each time an HTTP mux is correctly initialized.
The QUIC code must first check that the application operations for the
mux are the h3 ones before calling proxy_inc_fe_cum_sess_ver_ctr().
The ST_F_SESS_OTHER stat field for the cumulated number of sessions other
than HTTP ones is deduced from the ->cum_sess counter (which covers all
sessions, not only HTTP ones), from which the HTTP session counters are
subtracted.
Add cum_req[], a new array of counters to count the cumulated number of
HTTP requests by version, as well as non-HTTP requests. This new member
replaces ->cum_req.
Modify proxy_inc_fe_req_ctr(), which increments these counters, to take
an HTTP version, the special value 0 meaning "other than an HTTP request".
This is for instance the case for syslog.c, from which proxy_inc_fe_req_ctr()
is called with 0 as the version parameter.
The computation of the ST_F_REQ_TOT stat field for the cumulated number
of requests is modified to sum all the cum_req[] counters.
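A hedged sketch of the counting scheme (array size and layout are
illustrative):

    /* cum_req[0] counts non-HTTP requests (e.g. syslog);
     * cum_req[1..3] count HTTP/1, HTTP/2 and HTTP/3 requests */
    unsigned long long cum_req[4];

    static void proxy_inc_fe_req_ctr(/* ... */ unsigned int http_ver)
    {
        cum_req[http_ver]++;
    }
    /* ST_F_REQ_TOT is then the sum of the four counters */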
As this patch is useful for QUIC, it must be backported to 2.7.
As a leftover from the implementation's history, the quic_rx_packet
and quic_tx_packet ref counts were still atomically
updated. It was found in perf top that the cost of the atomic inc
in quic_tx_packet_refinc() alone was responsible for 1% of the CPU
usage at 135 Gbps. Given that packets are only processed on their
assigned thread we don't need that anymore and this can be replaced
with regular non-atomic operations.
Doing this alone has reduced the CPU usage of qc_do_build_pkt()
from 3.6% to 2.5% and increased the overall bit rate by about 1%.
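As a hedged sketch of the change (the function name comes from the
message above, the body is a simplified reconstruction):

    static inline void quic_tx_packet_refinc(struct quic_tx_packet *pkt)
    {
        /* packets are only processed on their assigned thread,
         * so a plain increment is enough */
        pkt->refcnt++;
        /* was: HA_ATOMIC_ADD(&pkt->refcnt, 1); */
    }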