This is the h3 version of this previous fix:
BUG/MINOR: h2: reject more chars from the :path pseudo header
In addition to the current NUL/CR/LF, this will also reject all other
control chars, the space and '#' from the :path pseudo-header, to avoid
taking the '#' for a part of the path. It's still possible to fall back
to the previous behavior using "option accept-invalid-http-request".
Here the :path header value is scanned a second time to look for
forbidden chars because we don't know upfront if we're dealing with a
path header field or another one. This is no big deal anyway for now.
This should be progressively backported to 2.6, along with the
following commits it relies on (the same as for h2):
REGTESTS: http-rules: add accept-invalid-http-request for normalize-uri tests
REORG: http: move has_forbidden_char() from h2.c to http.h
MINOR: ist: add new function ist_find_range() to find a character range
MINOR: http: add new function http_path_has_forbidden_char()
This is the h2 version of this previous fix:
BUG/MINOR: h1: do not accept '#' as part of the URI component
In addition to the current NUL/CR/LF, this will also reject all other
control chars, the space and '#' from the :path pseudo-header, to avoid
taking the '#' for a part of the path. It's still possible to fall back
to the previous behavior using "option accept-invalid-http-request".
This patch modifies the request parser to change the ":path" pseudo header
validation function with a new one that rejects 0x00-0x1F (control chars),
space and '#'. This way such chars will be dropped early in the chain, and
the search for '#' doesn't incur a second pass over the header's value.
This should be progressively backported to stable versions, along with the
following commits it relies on:
REGTESTS: http-rules: add accept-invalid-http-request for normalize-uri tests
REORG: http: move has_forbidden_char() from h2.c to http.h
MINOR: ist: add new function ist_find_range() to find a character range
MINOR: http: add new function http_path_has_forbidden_char()
MINOR: h2: pass accept-invalid-http-request down the request parser
Seth Manesse and Paul Plasil reported that the "path" sample fetch
function incorrectly accepts '#' as part of the path component. This
can in some cases lead to misrouted requests for rules that would apply
on the suffix:
use_backend static if { path_end .png .jpg .gif .css .js }
Note that this behavior can be selectively configured using
"normalize-uri fragment-encode" and "normalize-uri fragment-strip".
The problem is that while the RFC says that this '#' must never be
emitted, as is often the case it doesn't suggest how servers should
handle it. A diminishing number of servers still accept it and trim it
silently, while others reject it, as indicated in the conversation below
with other implementers:
https://lists.w3.org/Archives/Public/ietf-http-wg/2023JulSep/0070.html
Looking at logs from publicly exposed servers, such requests appear at
a rate of roughly 1 per million and only come from attacks or poorly
written web crawlers incorrectly following links found on various pages.
Thus it looks like the best solution to this problem is to simply reject
such ambiguous requests by default, and include this in the list of
controls that can be disabled using "option accept-invalid-http-request".
We're already rejecting URIs containing any control char anyway, so we
should also reject '#'.
In the H1 parser for the H1_MSG_RQURI state, there is an accelerated
parser for bytes 0x21..0x7e that has been tightened to 0x24..0x7e (it
should not impact performance since 0x21..0x23 are not supposed to appear
in a URI anyway). This way '#' falls through to the fine-grained filter
and we can add the special case for it, also conditioned by a check on
the proxy's option "accept-invalid-http-request", with no overhead for
the vast majority of valid URIs. Here this information is available
through h1m->err_pos, which is set to -2 when the option is set (so we
don't need to change the API to expose the proxy). Example with a
trivial GET through netcat:
[08/Aug/2023:16:16:52.651] frontend layer1 (#2): invalid request
backend <NONE> (#-1), server <NONE> (#-1), event #0, src 127.0.0.1:50812
buffer starts at 0 (including 0 out), 16361 free,
len 23, wraps at 16336, error at position 7
H1 connection flags 0x00000000, H1 stream flags 0x00000810
H1 msg state MSG_RQURI(4), H1 msg flags 0x00001400
H1 chunk len 0 bytes, H1 body len 0 bytes :
00000 GET /aa#bb HTTP/1.0\r\n
00021 \r\n
This should be progressively backported to all stable versions along with
the following patch:
REGTESTS: http-rules: add accept-invalid-http-request for normalize-uri tests
Similar fixes for h2 and h3 will come in followup patches.
Thanks to Seth Manesse and Paul Plasil for reporting this problem with
detailed explanations.
We're adding a new argument "relaxed" to h2_make_htx_request() so that
we can control its level of acceptance of certain invalid requests at
the proxy level with "option accept-invalid-http-request". The goal
will be to add checks that can be deactivated but that are still
desirable to have by default. For now no test is subject to it.
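For illustration, the call site might derive this flag directly from the
proxy's options; a hedged sketch, assuming PR_O2_REQBUG_OK is the bit
backing "option accept-invalid-http-request" and that the function keeps
returning a negative value on error:

  /* pass the per-proxy tolerance down to the request parser */
  if (h2_make_htx_request(list, htx, &msgf, &body_len,
                          !!(h2c->proxy->options2 & PR_O2_REQBUG_OK)) < 0)
          goto fail;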
As its name implies, this function checks if a path component has any
forbidden characters starting at the designated location. The goal is
to resume from the result of a successful ist_find_range() and apply
more precise character checks. Here we're focusing on 0x00-0x1F, 0x20
and 0x23 to make sure we're not too strict at this point.
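A minimal sketch of such a predicate (the real function's exact signature
and its cooperation with ist_find_range() are not reproduced here):

  /* Return a pointer to the first forbidden path char in [start, end),
   * or NULL. Forbidden: 0x00-0x1F (control chars), 0x20 (space) and
   * 0x23 ('#').
   */
  static inline const char *path_has_forbidden_char(const char *start,
                                                    const char *end)
  {
          for (; start < end; start++) {
                  unsigned char c = (unsigned char)*start;

                  if (c <= 0x20 || c == 0x23)
                          return start;
          }
          return NULL;
  }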
This looks up the character range <min>..<max> in the input string and
returns a pointer to the first one found. It's essentially the equivalent
of ist_find_ctl() in that it searches by 32 or 64 bits at once, but deals
with a range.
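Ignoring the word-at-a-time optimization, its behavior matches this
byte-wise sketch (not the actual implementation):

  #include <import/ist.h>

  /* return a pointer to the first char of <ist> in the range
   * <min>..<max>, or NULL if none is present
   */
  static inline const char *find_range_slow(const struct ist ist,
                                            unsigned char min,
                                            unsigned char max)
  {
          const char *p;

          for (p = istptr(ist); p < istend(ist); p++)
                  if ((unsigned char)*p >= min && (unsigned char)*p <= max)
                          return p;
          return NULL;
  }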
Historically the parsing error used to apply only to too large headers,
so this is what has been reported in traces. But nowadays we can also
reject invalid characters, and when this happens the trace is a bit
misleading, so let's mention "or invalid".
In practice it's exactly the same for h3 as 54f53ef7c ("BUG/MAJOR: h2:
reject header values containing invalid chars") was for h2: we must
make sure never to accept NUL/CR/LF in any header value because this
may be used to construct split headers on the backend. Hence we
apply the same solution. Here pseudo-headers, headers and trailers are
checked separately, which explains why we have 3 locations instead of
2 for h2 (+1 for the response, which we don't have here).
This is marked major for consistency and due to the impact if abused,
but the reality is that at the time of writing, this problem is limited
by the scarcity of the tools which would permit building such a request
in the first place. But this may change over time.
This must be backported to 2.6. This depends on the following commit
that exposes the filtering function:
REORG: http: move has_forbidden_char() from h2.c to http.h
This function is not H2 specific but rather generic to HTTP. We'll
need it in H3 soon, so let's move it to HTTP and rename it to
http_header_has_forbidden_char().
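In spirit, the check boils down to the following sketch (the form and
name of the real helper may differ slightly):

  #include <stddef.h>

  /* return non-zero if the header value contains NUL, LF or CR */
  static inline int value_has_forbidden_char(const char *v, size_t len)
  {
          size_t i;

          for (i = 0; i < len; i++)
                  if (v[i] == 0x00 || v[i] == 0x0A || v[i] == 0x0D)
                          return 1;
          return 0;
  }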
If the "limited-quic" global option was not set, the QUIC listener
bindings were not bound. This is OK, but it was silently ignored.
Add a warning in this case to ask the user to explicitly enable the QUIC
bindings when building QUIC support against a TLS/SSL library without
QUIC support (OpenSSL).
Add a check to the QUIC packet handler running at application level (after the
handshake is complete) to release the quic_conn memory by calling
quic_conn_release(). This is done only if the mux is not started.
When the connection enters the "connection closing" state after having sent
a datagram with CONNECTION_CLOSE frames inside its packets, a lot of memory
may be freed from quic_conn objects (QUIC connection). This is done by
allocating a reduced-size object which keeps enough information to handle the
remaining incoming packets for the connection in "connection closing" state,
and to send again the previous datagram with CONNECTION_CLOSE frames inside
which has already been sent.
Define a new quic_cc_conn struct which represents the connection object after
entering the "connection close" state and after having released the quic_conn
connection object.
Define <pool_head_quic_cc_conn> new pool for these quic_cc_conn struct objects.
Define QUIC_CONN_COMMON structure which is shared between quic_conn struct object
(the connection before entering "connection close" state), and new quic_cc_conn
struct object (the connection after entering "connection close"). So, all the
members inside QUIC_CONN_COMMON may be indifferently dereferenced from a
quic_conn struct or a quic_cc_conn struct pointer.
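A hedged sketch of the pattern (the real field list differs):

  /* members shared by both connection representations; both structs
   * start with the exact same layout, so these fields may be accessed
   * through a pointer to either struct
   */
  #define QUIC_CONN_COMMON                            \
      struct {                                        \
          uint32_t flags;                             \
          const struct quic_version *version;         \
          struct quic_err err;                        \
      }

  struct quic_conn {
      QUIC_CONN_COMMON;
      /* ... full connection state ... */
  };

  struct quic_cc_conn {
      QUIC_CONN_COMMON;
      /* ... only what "connection close" handling needs ... */
  };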
Implement the qc_new_cc_conn() function to allocate such connections in
"connection close" state. This function is responsible for copying the
required information from the original connection (quic_conn) to the remaining
connection (quic_cc_conn). Among other initializations, it redefines the
QUIC packet handler task to quic_cc_conn_io_cb() and the idle timer task
to qc_cc_idle_timer_task(). quic_cc_conn_io_cb() drains the received packets
and resends the datagram with the CONNECTION_CLOSE frame inside which has
already been sent when entering "connection close" state.
qc_cc_idle_timer_task() only releases the remaining quic_cc_conn struct
object.
Modify quic_conn_release() to allocate quic_cc_conn struct objects from the
original connection passed as argument. It does nothing if this original
connection is not in closing state, or if the idle timer has already expired.
Implement quic_release_cc_conn() to release a "connection close" connection.
It is called when its timer expires or if an error occurred when sending
a packet from this connection when the peer is no longer reachable.
Add a new "quic_cids" pool to allocate the ->cids trees of quic_conn objects.
Replace the ->cids member of quic_conn objects by a pointer allocated from
"quic_cids" and adapt the code accordingly. Nothing special.
Add a new pool <pool_head_quic_cc_buf> for the buffers used when building
datagrams with CONNECTION_CLOSE frames inside, with QUIC_MIN_CC_PKTSIZE(128)
as minimum size.
Add ->cc_buf_area to quic_conn struct to store such buffers.
Add ->cc_dgram_len to store the size of the "connection close" datagrams
and ->cc_buf a buffer struct to be used with ->cc_buf_area as ->area member
value.
Implement qc_get_txb() to be called in place of qc_txb_alloc(); it returns
the "quic_cc_buf" buffer when the connection needs an immediate close, or a
regular buffer struct if not.
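A sketch of the selection logic (field and flag names are assumptions
based on the description above):

  static struct buffer *qc_get_txb(struct quic_conn *qc)
  {
          if (qc->flags & QUIC_FL_CONN_IMMEDIATE_CLOSE)
                  return &qc->cc_buf;     /* small dedicated CC buffer */

          return qc_txb_alloc(qc);        /* regular Tx buffer */
  }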
Modify qc_prep_hpkts() and qc_prep_app_pkts() to allow them to use such a
"quic_cc_buf" buffer when an immediate close is required.
Move the rx.bytes, tx.bytes and tx.prep_bytes quic_conn struct members into
a <bytes> anonymous struct (as the bytes.rx, bytes.tx and bytes.prep members
respectively). They are moved ahead of time so that this <bytes> anonymous
struct can be made common to a future struct to be defined.
Consequently adapt the code.
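The resulting layout looks like this (a sketch; surrounding members and
exact types are omitted or assumed):

  struct quic_conn {
      /* ... */
      struct {
          uint64_t rx;    /* was rx.bytes */
          uint64_t tx;    /* was tx.bytes */
          uint64_t prep;  /* was tx.prep_bytes */
      } bytes;
      /* ... */
  };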
Add a BUG_ON() to quic_peer_validated_addr() to check that the amplification
limit is respected when it returns false (0), i.e. when the connection is not
validated.
Implement quic_may_send_bytes() which returns the number of bytes which may be
sent when the connection has not already been validated, and call this function
at the several places where this is the case (after having called
quic_peer_validated_addr()).
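Under RFC 9000's 3x anti-amplification rule, the computation boils down
to something like this sketch (using the bytes.* members introduced
previously; the exact form is an assumption):

  /* Number of bytes which may still be prepared before the peer's
   * address is validated: at most 3 times what was received so far.
   * The BUG_ON() mentioned above guarantees this cannot underflow.
   */
  static inline size_t quic_may_send_bytes(struct quic_conn *qc)
  {
          return 3 * qc->bytes.rx - qc->bytes.prep;
  }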
Furthermore, this patch improves the code maintainability. Some patches to
come will have to rename ->[rt]x.bytes quic_conn struct members.
When a "replace-header" action is used, we loop on all headers in the
message to change the value of all headers matching a name. The new value is
placed in a trash buffer. However, there is a race here, because if the
message must be defragmented, another trash buffer is used. If several
defragmentations are performed because several headers must be updated at the
same time, the first trash buffer, used to store the new value, may be
crushed. Indeed, there are only 2 pre-allocated trash buffers used in
rotation, and the trash buffer storing the new value is never renewed. As a
consequence, random data may be inserted into the header value.
To fix the issue, we must take care to refresh the trash buffer each time we
evaluate a new header. This way, one trash buffer is used for the new value,
and eventually another one for the HTX defragmentation. But that's all.
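In other words, the fresh trash chunk must be taken inside the loop; a
sketch of the idea:

  struct http_hdr_ctx ctx = { .blk = NULL };

  while (http_find_header(htx, hdr_name, &ctx, 1)) {
          /* renewed on each iteration, so that a possible HTX
           * defragmentation (which also uses a rotating trash buffer)
           * cannot recycle the one holding the new value
           */
          struct buffer *replace = get_trash_chunk();

          /* build the new value into <replace>, then apply it */
  }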
Thanks to Christian Ruppert for his detailed report.
This patch must be backported to all stable versions. On 2.0, the patch must
be applied on src/proto_htx.c and the function is named
htx_transform_header_str().
An HTTP server may provide a complete response even prior to receiving the
full request. In this case, RFC 9114 allows the server to abort reading
with a STOP_SENDING frame carrying the error code H3_NO_ERROR.
This scenario was notably reproduced with haproxy and an inactive
server. If the client sends a POST request, haproxy may provide a full
HTTP 503 response before the end of the full request.
GCC warns about a possible NULL dereference when requeuing a datagram on
the connection socket. This happens due to a MT_LIST_POP used to retrieve
an rxbuf instance.
In fact, this can never be NULL as there are enough rxbufs allocated for
each thread. Once a thread has finished working with one, it must always
reappend it.
This issue was introduced with the following patch :
commit b34d353968
BUG/MEDIUM: quic: consume contig space on requeue datagram
As such, it must be backported in every version with the above commit.
This should fix the github CI compilation error.
A thread must always reappend the rxbuf instance after finishing
datagram reception treatment. This was not the case on one error code
path: when fake datagram allocation fails on datagram requeuing.
This issue was introduced with the following patch :
commit b34d353968
BUG/MEDIUM: quic: consume contig space on requeue datagram
As such, it must be backported in every version with the above commit.
Splicing is not available on all platforms. Thus a dedicated script is used
to check that we properly skip the payload for bodyless responses when
splicing is used. This way, we are still able to test the feature with the
original script on all platforms.
This patch fixes an issue on the CI introduced by commit ef2b15998
("BUG/MINOR: htx/mux-h1: Properly handle bodyless responses when splicing is
used"). It must be backported with the above commit.
Fields of the sedesc structure were documented in the comment about the
structure itself. It was not really convenient: hard to read, hard to
update. So comments about the fields are moved onto the corresponding field
lines, as usual.
In the http-client I/O handler, the HTX request and response are loaded from
the channels' buffers. Some changes are performed on these messages. So, we
must take care to commit the changes into the underlying buffer by calling
htx_to_buf().
It is especially important when the HTX message becomes empty, to be able to
quickly release the buffer.
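For reference, the usual pattern is (a minimal sketch):

  struct htx *htx = htxbuf(&chn->buf);  /* map the channel buffer as HTX */

  /* ... modify the HTX message ... */

  htx_to_buf(htx, &chn->buf);           /* commit; also resets the buffer
                                         * when the message is now empty */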
This patch should be backported as far as 2.6.
When handling UDP datagram reception, it is possible to receive a QUIC
packet for one connection to the socket attached to another connection.
To protect against this, an explicit comparison is done against the
packet DCID and the quic-conn CID. On no match, the datagram is requeued
and dispatched via rxbuf and will be treated as if it arrived on the
listener socket.
One reason for this wrong reception is explained by the small race
condition that exists between the bind() and connect() syscalls during
connection socket initialization. However, another reason which was not
initially considered is when clients reuse the same IP:PORT for
different connections. In this case the current FD attribution is not
optimal and this can cause a substantial amount of requeuing.
This situation has revealed a bug during requeuing. If the rxbuf contiguous
space is not big enough for the datagram, the incoming datagram was
dropped, even if there was space at the buffer origin. This could cause
several datagrams to be dropped in a series until eventually the buffer head
was moved when passing through the listener FD.
To fix this, allocate a fake datagram to consume the contiguous space. This
is similar to the handling of datagrams on the listener FD. This then allows
the datagram to requeue to be stored at the buffer head, and processing to
continue.
This can be reproduced by starting a lot of connections. To amplify the
phenomenon, POST requests are used to increase the number of dropped
datagrams:
$ while true; do curl -F "a=@~/50k" -k --http3-only -o /dev/null https://127.0.0.1:20443/; done
There is a mechanism in the H1 and H2 multiplexers to skip the payload when
a response returned to the client must not contain any payload (response to
a HEAD request or a 204/304 response). However, this does not work when
splicing is used. The H2 multiplexer does not support splicing, so there is
no issue there. But with the mux-h1, when data are sent using kernel
splicing, the mux on the server side is not aware that the client side
should skip the payload. And once the data are put in a pipe, there is no
way to stop the sending.
It is a defect of the current design. This case will be easier to deal with
when mux-to-mux forwarding is implemented. But for now, to fix the issue, we
add an HTX flag on the start-line to pass the info from the client side to
the server side and to be able to disable splicing when necessary.
The associated reg-test was improved to be sure it does not fail when the
splicing is configured.
This patch should be backported as far as 2.4.
When the stream expiration date is computed at the end of process_stream(),
if there is no longer any analyzer on the request channel, its analyse
expiration date is reset. The same is now performed on the response
channel. This way, we are sure not to inherit an orphan expired date.
This should prevent spinning loops on process_stream().
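The idea, sketched (field names per struct channel; the exact code may
differ):

  /* at the end of process_stream(), when computing the task expiration */
  if (!req->analysers)
          req->analyse_exp = TICK_ETERNITY;
  if (!res->analysers)                  /* now done on the response too */
          res->analyse_exp = TICK_ETERNITY;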
The bandwidth limitation filter sets the analyse expiration date on the
channel to restart the data forwarding and thus limit the bandwidth.
However, this expiration date is not reset on abort. So it is possible to
reuse the same expiration date to set the stream's one. If it expired before
the end of the stream, this leads to a spinning loop on process_stream()
because the task expiration date is always set in the past.
To fix the issue, when the analyse ends on a channel, the bandwidth
limitation filter resets the corresponding analyse expiration date.
This patch should fix the issue #2230. It must be backported as far as 2.7.
Surprisingly, commit 00e00fb42 ("REORG: cfgparse: extract curproxy as a
global variable") caused a build breakage on the CI but not on two
developers' machines. It looks like it's dependent on the linker version
used. What happens is that flt_spoe.c already has a curproxy struct which
already is a copy of the one passed by the parser because it also needed
it to be exported, so they now conflict. Let's just drop this unused copy.
The ->openssl_compat struct member of the QUIC connection object was not fully
initialized. This was done on purpose, believing that the ->write_level and
->read_level members were initialized by quic_tls_compat_keylog_callback() (the
keylog callback) before entering quic_tls_compat_msg_callback(), which
has to parse the TLS messages. In fact this is not the case at all:
quic_tls_compat_msg_callback() is called before quic_tls_compat_keylog_callback()
when receiving the first TLS ClientHello message.
->write_level and ->read_level were not initialized to <ssl_encryption_initial>
(= 0) as is implicitly done by the original nginx wrapper which calloc()s the
openssl compatibility structure. This could lead to a crash after
ssl_to_qel_addr() returns NULL when called by ha_quic_add_handshake_data().
This patch explicitly initializes ->write_level and ->read_level to
<ssl_encryption_initial> (= 0).
No need to backport.
When DATA frames are decoded for a QUIC stream, we take care not to exceed
the announced content-length, if any. To do so, we check that we don't
receive more data than expected, but also no less than announced. For the
last check, we rely on the FIN bit.
However, it is possible to have several DATA frames to decode at a time
while the end of the stream was received. In this case, we must take care to
handle the FIN bit only on the last frame. But because of a bug, the FIN bit
was handled too early, erroneously triggering an internal error.
This patch must be backported as far as 2.6.
If the buffer is completely full, the function chunk_appendf() would
write a zero past it, which can result in unexpected behavior.
Now we make a check before calling vsnprintf() and return the current
chunk size if no room is available.
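The guard boils down to this kind of check before the vsnprintf() call (a
sketch of the fix, not the literal code):

  /* refuse to append anything (including the trailing NUL) when the
   * chunk is already full
   */
  if (chk->data >= chk->size)
          return chk->data;

  ret = vsnprintf(chk->area + chk->data, chk->size - chk->data, fmt, args);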
This should be backported as far as 2.0.
Move the TX part of the code to quic_tx.c.
Add quic_tx-t.h and quic_tx.h headers for this TX part code.
The definition of the quic_tx_packet struct has been moved from quic_conn-t.h
to quic_tx-t.h.
Same as for the TX part:
Move the RX part of the code to quic_rx.c.
Add quic_rx-t.h and quic_rx.h headers for this RX part code.
The definition of the quic_rx_packet struct has been moved from quic_conn-t.h
to quic_rx-t.h.
Move the code which directly calls the functions of the OpenSSL QUIC API into
quic_ssl.c, a new C file.
Some code has been extracted from qc_conn_finalize() to implement only
the QUIC TLS part (see quic_tls_finalize()) in quic_tls.c.
qc_conn_finalize() has also been exported to be used from this new quic_ssl.c
C module.
To accelerate the compilation of quic_conn.c, move the code related to the
traces from quic_conn.c to quic_trace.c.
Also add some headers (quic_trace-t.h and quic_trace.h).
The memory allocated for the TLS cipher context used to encrypt/decrypt QUIC
v2 packets should not be released as soon as possible. Indeed, even if the
Initial TLS cipher context may be dropped after having received a client
Handshake packet, one often still has to use it to acknowledge Initial
packets.
No need to backport.