A segfault happens when receiving a CONNECTION_CLOSE during handshake.
This is because the mux is not initialized at this stage but the
transport layer dereferences it.
Fix this by ensuring that the MUX is initialized before it is dereferenced.
Thanks to Willy for his help on this one. Welcome to the QUIC-men team!
By passing a 3rd argument it's now possible to set a randomness level
according to which received packets will be replaced by one of the 20
previous ones. This happens in both directions and the buffer is common
so that it's possible to receive responses on requests and conversely,
which adds to the perturbation. E.g.:
./dev/udp/udp-perturb 127.0.0.4:9443 127.0.0.4:8443 10
This will add 10% randomness on forwarded packets between these two
ports.
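As an illustration, the replacement logic could look like the hedged sketch
below. The names (maybe_perturb, the pkt ring) and the structure are purely
hypothetical and only aim at showing the behaviour described above: a ring of
the 20 previous packets shared by both directions, and a percentage-based
replacement.

  #include <stdlib.h>
  #include <string.h>

  #define PKT_RING_SIZE 20
  #define PKT_MAX_LEN   2048

  struct pkt { char data[PKT_MAX_LEN]; size_t len; };

  static struct pkt ring[PKT_RING_SIZE];  /* shared by both directions */
  static unsigned int ring_count;

  /* Record the packet that was just received into the ring, and <rate>
   * percent of the time replace <buf> with one of the last packets seen.
   * Returns the length to forward.
   */
  static size_t maybe_perturb(char *buf, size_t len, int rate)
  {
          unsigned int idx = ring_count % PKT_RING_SIZE;
          unsigned int avail;

          ring[idx].len = len < PKT_MAX_LEN ? len : PKT_MAX_LEN;
          memcpy(ring[idx].data, buf, ring[idx].len);
          ring_count++;

          avail = ring_count < PKT_RING_SIZE ? ring_count : PKT_RING_SIZE;
          if (avail > 1 && (rand() % 100) < rate) {
                  unsigned int old = rand() % avail;

                  memcpy(buf, ring[old].data, ring[old].len);
                  return ring[old].len;
          }
          return len;
  }

The ring being common to both directions is what makes it possible to replay
a response in place of a request and conversely.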
QUIC really needs more traffic perturbation for the tests. Let's have
a tiny UDP proxy for this, mostly derived from the 'connect' test suite.
For now it only supports a single "connection" at once, but the code is
here to support more. A new "connection" will simply replace the previous
one. It doesn't yet cause traffic perturbations; this is still to be added.
Some of the setsockopt() calls are possibly unneeded. The error handling is
almost nonexistent, and polling for sends is not implemented at all (this
will cause losses). No stats are collected.
This is the pool equivalent of commit 97ea9c49f ("BUG/MEDIUM: fd: always
align fdtab[] to 64 bytes"). After a careful code review, it happens that
the pool heads are the remaining structures allocated with malloc/calloc that
claim an alignment larger than what the allocator can offer. While no issue
was reported on them, since no memset() is performed and none of their fields
has a type large enough to require it, this is a problem waiting to happen,
so better fix it. In
addition, it's relatively easy to do by storing the allocation address
inside the pool_head itself and use it at free() time. Finally, threads
might benefit from the fact that the caches will really be aligned and
that there will be no false sharing.
This should be backported to all versions where it applies easily.
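For illustration only, the principle is roughly the following minimal sketch;
the structure below is a simplified stand-in, not the real pool_head, and the
function names are hypothetical:

  #include <stdint.h>
  #include <stdlib.h>

  struct pool_head_sketch {
          void *alloc_addr;      /* address returned by malloc() */
          /* ... rest of the pool head, expected to be 64-byte aligned ... */
  };

  static struct pool_head_sketch *create_pool_sketch(void)
  {
          void *ptr = malloc(sizeof(struct pool_head_sketch) + 63);
          struct pool_head_sketch *pool;

          if (!ptr)
                  return NULL;
          /* round up to the next 64-byte boundary */
          pool = (struct pool_head_sketch *)(((uintptr_t)ptr + 63) & ~(uintptr_t)63);
          pool->alloc_addr = ptr;        /* used at free() time */
          return pool;
  }

  static void destroy_pool_sketch(struct pool_head_sketch *pool)
  {
          if (pool)
                  free(pool->alloc_addr);
  }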
When POSTing a request with a payload, and reusing the same httpclient
lua instance, one could encounter a spinning of the httpclient appctx.
Indeed the sent counter is not reset between 2 POSTs and the condition
for sending the EOM flag is never met.
Must fix issue #1593.
To be backported to 2.5.
Many inline functions involve some BUG_ON() calls, and because of the
extra complexity these add, the functions are not inlined anymore (e.g.
co_data()). The reason is that the expression instantiates the message,
its size, sometimes a counter, then the atomic OR to taint the process,
and the back trace. That can be a lot for an inline function and most
of it is always the same.
This commit modifies this by delegating the common parts to a dedicated
function "complain()" that takes care of updating the counter if needed,
writing the message and measuring its length, and tainting the process.
This way the caller only has to check a condition, pass a pointer to the
preset message, and the info about the type (bug or warn) for the tainting,
then decide whether to dump or crash. Note that this part could also be
moved to the function but resulted in complain() always being at the top
of the stack, which didn't seem like an improvement.
Thanks to these changes, the BUG_ON() calls do not result in uninlining
functions anymore and the overall code size was reduced by 60 to 120 kB
depending on the build options.
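To illustrate the split, the principle looks roughly like the sketch below;
the names and the signature are approximate and do not claim to match the
actual complain() prototype:

  #include <stdio.h>
  #include <stdlib.h>

  /* out-of-line helper doing the common work: update the counter if one is
   * provided, write the preset message, and taint the process (only hinted
   * at here).
   */
  static void complain_sketch(int *counter, const char *msg, int is_bug)
  {
          if (counter)
                  (*counter)++;
          fputs(msg, stderr);
          (void)is_bug;  /* would select the bug vs warn taint flag */
  }

  /* the inline caller only keeps the condition, a pointer to the preset
   * message, and the final decision to dump or crash
   */
  #define BUG_ON_SKETCH(cond) do {                                          \
          if (cond) {                                                       \
                  complain_sketch(NULL, "BUG: " #cond " violated\n", 1);    \
                  abort();                                                  \
          }                                                                 \
  } while (0)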
This one is referenced in initcalls by its pointer, it makes no sense
to declare it inline. At best it causes function duplication, at worst
it doesn't build on older compilers.
This one is referenced in initcalls by its pointer, it makes no sense
to declare it inline. At best it causes function duplication, at worst
it doesn't build on older compilers.
The 3 functions http_{req,res,after_res}_keywords_register() are
referenced in initcalls by their pointer, it makes no sense to declare
them inline. At best it causes function duplication, at worst it doesn't
build on older compilers.
This one is referenced in initcalls by its pointer, it makes no sense
to declare it inline. At best it causes function duplication, at worst
it doesn't build on older compilers.
gcc 6 continues its saga with excessive reports of null-deref warnings.
This time it was in the IS_HTX_CS() macro. Let's use __cs_conn() after
cs_conn() was checked.
Do not distinguish the direction (TX/RX) when setting TLS secrets flags.
There is no such distinction in RFC 9001.
Assemble them at the same level: at the upper context level.
This is required since this previous commit:
"MINOR: quic: Post handshake I/O callback switching"
If not, such packets remain endlessly in the RX buffer and cannot be parsed
by the new I/O callback used after the handshake has been confirmed.
Wake up the timer task as soon as possible when setting its timer in the past.
Also take the opportunity of this patch to simplify quic_pto_pktns():
calling tick_first() is useless here to compare <lpto> with <tmp_pto>.
We had several warnings when building haproxy 2.5.4 with
old openssl 1.0.1e. This version of openssl is the latest
available in EOL CentOS 6.
include/haproxy/openssl-compat.h:157:51: \
warning: "LIBRESSL_VERSION_NUMBER" is not defined
This patch fixes the build. It changes the #if condition,
as done in other similar parts of openssl-compat.h.
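The kind of change involved can be illustrated as follows; the version value
is illustrative and these are not the exact lines of openssl-compat.h:

  /* before: comparing an undefined macro triggers the warning above */
  #if LIBRESSL_VERSION_NUMBER >= 0x2070000fL
  /* ... */
  #endif

  /* after: the value is only compared once the macro is known to exist */
  #if defined(LIBRESSL_VERSION_NUMBER) && (LIBRESSL_VERSION_NUMBER >= 0x2070000fL)
  /* ... */
  #endif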
Since the recent refactoring of the conn-streams, a stream always has
defined frontend and backend conn-streams. Thus, in stream_dump(), there is
no reason to still test if these conn-streams are defined.
In addition, still in stream_dump(), get the stream-interfaces using the
conn-streams and not the opposite.
This patch should fix issues #1589 and #1590.
In the same way as for normalize_uri.vtc, a "Connection: close" header is
added to all responses to avoid any connection reuse. This should avoid any
"HTTP header incomplete" errors.
Reorganize the Rx path for STREAM frames on bidirectional streams. A new
function qcc_recv is implemented on the MUX. It will handle the STREAM
frames copy and offset calculation from transport to MUX.
Another function named qcc_decode_qcs from the MUX can be called by
the transport layer each time new STREAM data has been copied.
The architecture is now cleaner with the MUX layer in charge of parsing
the STREAM frames offsets. This is required to be able to implement the
flow-control on the MUX layer.
Note that for convenience, a STREAM frame is not partially copied to
the MUX buffer. This simplifies the implementation for the moment, but it
may change in the future to optimize the STREAM frames handling.
For the moment, only bidirectional streams benefit from this change. In
the future, it may be extended to unidirectional streams to unify the
STREAM frames processing.
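As a rough picture of the resulting layering, the transport layer now does
something along the lines of the sketch below for each STREAM frame received
on a bidirectional stream. The types, signatures and stub bodies are
deliberately simplified and hypothetical; the real qcc_recv() and
qcc_decode_qcs() prototypes differ.

  #include <stddef.h>

  struct qcs_sketch {
          unsigned long long rx_offset;   /* contiguous data already received */
          /* ... RX buffer, flags ... */
  };

  /* MUX side: copy <data> at <offset> into the stream RX buffer, handle the
   * offset bookkeeping and record FIN (stub body here).
   */
  static int qcc_recv_sketch(struct qcs_sketch *qcs, unsigned long long offset,
                             const unsigned char *data, size_t len, int fin)
  {
          (void)qcs; (void)offset; (void)data; (void)len; (void)fin;
          return 0;
  }

  /* MUX side: decode whatever contiguous data is now available (stub body). */
  static void qcc_decode_qcs_sketch(struct qcs_sketch *qcs)
  {
          (void)qcs;
  }

  /* transport side, once per received STREAM frame */
  static void on_stream_frame(struct qcs_sketch *qcs, unsigned long long offset,
                              const unsigned char *data, size_t len, int fin)
  {
          if (qcc_recv_sketch(qcs, offset, data, len, fin) == 0)
                  qcc_decode_qcs_sketch(qcs);
  }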
The FIN flag on a STREAM frame was not detected if the frame was previously
buffered on qcs.rx.frms before being handled.
To fix this, copy the fin field from the quic_stream instance to
quic_rx_strm_frm. This is required to properly notify the MUX layer of the
FIN flag in qc_treat_rx_strm_frms.
Without this fix, the request channel might be left open after the
last STREAM frame reception if there are out-of-order frames on the Rx
path.
This flag is set when the STREAM frame with FIN set has been received on
a qcs instance. For now, this is only used as a BUG_ON guard against
multiple frames with FIN set. It will also be useful when reorganizing
the RX path and moving some of its code into the mux.
Adjust the function to handle buffered STREAM frames. If the offset of
the frame was already fully received, discard the frame. If only
partially received, compute the difference and copy only the data
beyond the already received offset.
Before this change, a buffered frame representing a fully or partially
received offset caused the loop to be interrupted. The frame was
preserved, thus preventing frames with a greater offset from being handled.
This may fix some occurrences of stalled transfers on the request channel
if there are out-of-order STREAM frames on the Rx path.
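The offset arithmetic described above boils down to something like the
following sketch; the names are hypothetical and this is not the actual
qc_treat_rx_strm_frms() code:

  enum frm_action { FRM_DISCARD, FRM_KEEP, FRM_COPY };

  /* <rx_offset> is the amount of contiguous stream data already handled,
   * <frm_offset>/<frm_len> describe the buffered frame. On FRM_COPY, <*skip>
   * bytes must be skipped at the start of the frame payload and <*frm_len>
   * is updated accordingly.
   */
  static enum frm_action handle_buffered_frame(unsigned long long rx_offset,
                                               unsigned long long frm_offset,
                                               unsigned long long *frm_len,
                                               unsigned long long *skip)
  {
          *skip = 0;

          if (frm_offset + *frm_len <= rx_offset)
                  return FRM_DISCARD;     /* fully received already */

          if (frm_offset > rx_offset)
                  return FRM_KEEP;        /* gap: keep it buffered for later */

          if (frm_offset < rx_offset) {
                  /* partially received: copy only the new part */
                  *skip = rx_offset - frm_offset;
                  *frm_len -= *skip;
          }
          return FRM_COPY;
  }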
qc_strm_cpy can be simplified by using b_putblk(), which already
handles wrapping of the destination buffer. The function is kept to
update the frame length and offset fields.
This enumerates a few of the options that are expected to have an effect
on the process' self-checks at the expense of more or less performance,
and how to choose sets of options for different deployments.
With BUG_ON() being enabled by default it is more useful to use a BUG_ON()
instead of an effectively never-taken if, as any incorrect assumptions will
become much more visible.
see 488ee7fb6 ("BUG/MAJOR: proxy_protocol: Properly validate TLV lengths")
Transform the unreachability comment into a call to `my_unreachable()` to let
the compiler benefit from it.
see d1b15b6e9 ("MINOR: proxy_protocol: Ingest PP2_TYPE_UNIQUE_ID on incoming connections")
see 615f81eb5 ("MINOR: connection: Use a `struct ist` to store proxy_authority")
The new macro was introduced with commit 86bcc5308 ("DEBUG: implement 4
levels of choices between warn and crash.") but some older compilers can
complain that we test the value when the macro is not defined, despite
it having already been checked in a previous #if directive. Let's just
repeat the test for the definition.
693b23bb1 ("MEDIUM: tree-wide: Use unsafe conn-stream API when it is
relevant") introduced a regression in DEBUG_STRICT mode because some BUG_ON
conditions were inverted. It should be ok now.
In addition, ALREADY_CHECKED macro was removed from appctx_wakeup() function
because it is useless now.
There is no connection reuse to avoid race conditions in HTTP reg-tests. But
from time to time, normalize_uri.vtc still reports an "HTTP header incomplete"
error. It seems to be because HTTP keep-alive is still used at the session
level. Thus when the same server section is used to handle multiple requests
for the same client, via a "-repeat" statement, a new request for this client
may be handled by HAProxy before the server is restarted.
To avoid any trouble, HTTP keep-alive is disabled on the server side by
adding "Connection: close" header in responses. It seems to be ok now. We
let the CI decide.
In the htx_xfer_blks() function, when headers or trailers are partially
transferred, we roll back the copy by removing copied blocks. Internally, all
blocks between <dstref> and <dstblk> are removed. But if the transfer was
stopped because we failed to reserve a block, the variable <dstblk> is
NULL. Thus, we must not try to remove it. It is unexpected to call
htx_remove_blk() in this case.
htx_remove_blk() was updated to test the <blk> variable inside the existing
BUG_ON(). The block must be defined.
For now, this bug may only be encountered when H2 trailers are copied. On H2
headers, the destination buffer is empty. Thus a swap is performed.
This patch should fix the issue #1578. It must be backported as far as 2.4.
When an HTTP health-check is performed in FCGI, we must not rely on the SI
source and destination addresses to set default parameters
(REMOTE_ADDR/REMOTE_PORT and SERVER_NAME/SERVER_PORT) because the backend
conn-stream is not attached to a stream but to a health-check. Thus, there is
no stream-interface. In addition, there is no client connection because it
is an "internal" session.
Thus, for now, in this case, there is only the server connection that can be
used. So src/dst addresses are retrieved from the server connection when the
CS application is a health-check.
This patch should solve issue #1572. It must be backported to 2.5. Note that
the CS API has changed. Thus, on HAProxy 2.5, we should test the session's
origin instead:
const struct sockaddr_storage *src = (cs_check(fstrm->cs) ? ...);
const struct sockaddr_storage *dst = (cs_check(fstrm->cs) ? ...);
This way the si_*_recv() and si_*_snd() APIs are defined the same
way. si_sync_snd/si_sync_recv are both exported and defined in the C
file. And si_cs_send/si_cs_recv are private and only used by
stream-interface internals.
The unsafe conn-stream API (__cs_*) is now used when we are sure the right
endpoint or application is attached to the conn-stream. This avoids compiler
warnings about possible null derefs. It also simplifies the code and clears up
any ambiguity about the manipulated entities.
Depending on the context, we know the endpoint or the application attached
to the conn_stream is defined and we know its type. However, having
accessors testing the endpoint or the application may lead the compiler to
report possible null derefs here and there. The alternative is to add
useless tests or use the ALREADY_CHECKED/DISGUISE macros. It is tedious and
inelegant.
So now, similarly to the objt API, the safe API, which tests the
endpoint/application, relies on an unsafe one (same name prefixed with
'__'). This way, any caller may use the unsafe API when it is relevant.
In addition, there is no reason to test the conn-stream itself. It is the
caller's responsibility to be sure there is a conn-stream before getting its
endpoint or its application. And most of the time, we are sure to have a
conn-stream.
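To illustrate the pattern on a simplified stand-in (the real cs_conn() and
__cs_conn() work on the actual conn_stream endpoint, so the types, fields and
names below are only placeholders):

  #include <stddef.h>

  struct connection_sketch { int fd; };

  struct conn_stream_sketch {
          int end_is_conn;                 /* simplified endpoint type check */
          struct connection_sketch *end;
  };

  /* unsafe accessor: the caller guarantees the endpoint is a connection */
  static inline struct connection_sketch *
  cs_conn_unchecked(const struct conn_stream_sketch *cs)
  {
          return cs->end;
  }

  /* safe accessor: checks the endpoint type, then relies on the unsafe one */
  static inline struct connection_sketch *
  cs_conn_checked(const struct conn_stream_sketch *cs)
  {
          return cs->end_is_conn ? cs_conn_unchecked(cs) : NULL;
  }

A caller that has already tested the safe variant can then use the unsafe one,
which is exactly what avoids the spurious null-deref warnings mentioned above.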
A few functions such as c_adv(), c_rew(), co_set_data() or co_skip() got a
BUG_ON_HOT() to make sure they're not used to push more data than available
in the buffer. Note that with HTX the margin can be high and these checks will
trigger less easily, but the goal is to detect a misuse early enough.
co_data() should never be called with a wrong c->output. At least it never
happens in regtests, but we're adding a CHECK_IF_HOT() there to avoid crashing
while still reporting it if it ever happens when the hot-path checks are enabled.
The use of co_set_data() should be strictly limited to setting the amount
of existing data to be transmitted. It ought not be used to decrement the
output after the data have left the buffer, because doing so involves
performing incorrect calculations using co_data() that still comprises
data that are not in the buffer anymore. Let's use c_rew() for this, which
is made exactly for this purpose, i.e. decrement c->output by as much as
requested. This is cleaner, faster, and will permit stricter checks.
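The difference can be illustrated with a simplified stand-in for the output
accounting; this is neither the real struct channel nor the exact helpers:

  /* simplified: <output> counts the bytes still accounted as output */
  struct channel_sketch {
          unsigned int output;
  };

  /* co_data() equivalent: amount of output data currently accounted for */
  static inline unsigned int co_data_sketch(const struct channel_sketch *c)
  {
          return c->output;
  }

  /* c_rew() equivalent: decrement <output> by exactly <adv> bytes, the right
   * primitive once data have already left the buffer
   */
  static inline void c_rew_sketch(struct channel_sketch *c, unsigned int adv)
  {
          c->output -= adv;
  }

  /* co_set_data() equivalent: only meant to set the amount of existing data
   * to be transmitted; computing its argument from co_data_sketch() after
   * bytes have left the buffer would propagate a stale value
   */
  static inline void co_set_data_sketch(struct channel_sketch *c, unsigned int len)
  {
          c->output = len;
  }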
A number of tests are now performed in low-level buffer management
functions to verify that we're not appending data to a full buffer
for example, or that the buffer passed as an argument is consistent in
that its data do not exceed its size. The few functions that already
involve memcpy() or memmove() instead got a BUG_ON() that will always
be enabled, since the overhead remains minimal.
The buffer ring management functions br_* were all stuffed with BUG_ON()
statements that never triggered and that are on some fast paths (e.g. in
mux_h2). Let's turn them to BUG_ON_HOT() instead.
Two new BUG_ON variants, BUG_ON_HOT() and CHECK_IF_HOT() are introduced
to debug hot paths (such as low-level API functions). These ones must
not be enabled by default as they would significantly affect performance
but they may be enabled by setting DEBUG_STRICT to a value above 1. In
this case, DEBUG_STRICT_ACTION is mostly respected with a small change,
which is that the no_crash variant of BUG_ON() isn't turned to a regular
warning but to a one-time warning so as not to spam with warnings in a
hot path. It is for this reason that there is no WARN_ON_HOT().
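The gating principle can be sketched as below; these are not the exact
definitions from the debug header, only an illustration of how the _HOT
variants can compile to nothing unless DEBUG_STRICT is above 1:

  #include <stdio.h>
  #include <stdlib.h>

  #if defined(DEBUG_STRICT) && (DEBUG_STRICT > 1)
  # define BUG_ON_HOT_SKETCH(cond) do {                                      \
          if (cond) {                                                        \
                  fprintf(stderr, "FATAL: bug condition \"%s\"\n", #cond);   \
                  abort();                                                   \
          }                                                                  \
  } while (0)

  # define CHECK_IF_HOT_SKETCH(cond) do {                                    \
          static int warned;                                                 \
          if ((cond) && !warned++)       /* one-time warning only */         \
                  fprintf(stderr, "WARNING: check \"%s\" failed\n", #cond);  \
  } while (0)
  #else
  # define BUG_ON_HOT_SKETCH(cond)   do { } while (0)
  # define CHECK_IF_HOT_SKETCH(cond) do { } while (0)
  #endif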
We used to have DEBUG_STRICT_NOCRASH to disable crashes on BUG_ON().
Now we have other levels (WARN_ON(), CHECK_IF()) so we need something
finer-grained.
This patch introduces DEBUG_STRICT_ACTION which takes an integer value.
0 disables crashes and is the equivalent of DEBUG_STRICT_NOCRASH. 1 is
the default and only enables crashes on BUG_ON(). 2 also enables crashes
on WARN_ON(), and 3 also enables warnings on CHECK_IF(), and is suited
to developers and CI.
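For example, assuming the usual DEBUG Makefile variable is used to pass these
defines (the exact build invocation may differ), a developer or CI build could
look like:

  make TARGET=linux-glibc DEBUG="-DDEBUG_STRICT=2 -DDEBUG_STRICT_ACTION=3"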
Now we'll explicitly mention if the test was a bug/warn/check, and
"FATAL" is only displayed when the process crashes. The non-crashing
BUG_ON() also suggests to report to developers.
The only reason for warning once is to check if a condition really
happens. Let's use a term that better conveys the intent; that's
important when reading the code.