As stated in RFC 3986, for an absolute-form URI, an empty path should be
normalized to a path of "/". This is part of the scheme-based normalization
rules. This kind of normalization is already performed for default ports, so
we might as well deal with the case of an empty path.
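For example (illustrative), an absolute-form request line such as:

    GET http://www.example.com HTTP/1.1

is now normalized to carry an explicit root path:

    GET http://www.example.com/ HTTP/1.1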
The associated reg-tests were updated accordingly.
This patch should fix issue #2573. It may be backported as far as 2.4 if
necessary.
As stated in issue #2565, the checks on the request target during H1 message
parsing are not good enough. Invalid paths, not starting with a slash, are
in fact parsed as authorities. The same error is repeated at the
sample-fetch level. This last point is annoying because routing rules may be
fooled. It is also an issue when the URI or the Host header are updated.
Because the error is repeated in different places, it must be fixed. We
cannot be lax by arguing it is the server's job to accept or reject invalid
request targets. With this patch, we strengthen the checks performed on the
request target during H1 parsing. The idea is to reject invalid requests at
this step, to be sure it is safe to manipulate the path or the authority
elsewhere.
So now, the asterisk-form is only allowed for OPTIONS and OTHER methods. The
latter was added so as not to reject the H2 preface. In addition, we take
care to have only one asterisk and nothing more. For the CONNECT method, we
take care to have a valid authority-form. All other forms are rejected. The
authority-form is now only supported for the CONNECT method. No specific
check is performed on the origin-form (except for the CONNECT method). For
the absolute-form, we take care to have a scheme and a valid authority.
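To illustrate, under these rules the following hypothetical request targets
would be handled this way:

    OPTIONS * HTTP/1.1                         accepted (asterisk-form)
    CONNECT www.example.com:80 HTTP/1.1        accepted (authority-form)
    GET /path HTTP/1.1                         accepted (origin-form)
    GET http://www.example.com/path HTTP/1.1   accepted (absolute-form)
    GET www.example.com/path HTTP/1.1          rejected (neither a leading
                                               slash nor a scheme)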
These checks are not perfect but should be good enough to properly identify
each part of the request target for a relatively small cost. But it is a
breaking change: some requests are now rejected while they were not on older
versions. However, nowadays, it is most probably not an issue. If it turns
out to really be an issue for legitimate use cases, an option would be to
support these kinds of requests when the "accept-invalid-http-request"
option is set, with the consequence of seeing some sample fetches behave
unexpectedly.
This patch should fix issue #2665. It MUST NOT be backported. First, because
it is a breaking change. Second, because not backporting it keeps it
possible to relax the parsing with the "accept-invalid-http-request" option.
I've had this test sitting here uncommitted for the last 2.5 years; it works
fine and I didn't notice it was not part of the tree. It makes a server
return odd-sized chunked responses with short pauses between their two
halves, and verifies they're not truncated on the client. It may eventually
detect state machine breakages, so better commit it.
We now take care to properly handle the abortonclose option when it is set
on the backend, and to ignore it when it is only set on the frontend
(inherited from the defaults section).
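A minimal configuration sketch of the expected behavior (addresses are
hypothetical):

    defaults
        option abortonclose       # inherited by both proxies below

    frontend fe1                  # option inherited here, but ignored
        bind :8080
        default_backend be1

    backend be1                   # option inherited here, where it is honored
        server s1 192.0.2.1:8080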
Mux-to-mux fast-forwarding will be added. To avoid mixing it with splicing
and to simplify the commits, kernel splicing support is removed from the
stconn. The CF_KERN_SPLICING flag is removed and the support is no longer
tested in process_stream().
In the stconn part, the rcv_pipe() callback function is no longer called.
Reg-tests scripts testing kernel splicing are temporarily marked as broken.
The content-length header parser has its own dedicated function, in order to
take extreme care about invalid, unparsable, or conflicting values. But
there's a corner case in it, by which it stops comparing values when
reaching the end of the header. A side effect is that an empty value, or a
value ending with a comma, does not undergo further analysis, and the
parser acts as if the header was absent.
While this is not necessarily a problem for a value ending with a comma, as
it will cause header folding and will disappear, it is a problem for the
first isolated empty header, because this one will not be reconstructed when
the next ones are seen, and will be passed as-is to the backend server. A
vulnerable HTTP/1 server hosted behind haproxy that would just use this
first value as "0" and ignore the valid one would then not be protected by
haproxy and could be attacked this way, taking the payload for an extra
request.
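As an illustration, consider this hypothetical request:

    POST /url HTTP/1.1
    Content-Length:
    Content-Length: 23

haproxy would forward the empty first header untouched, and a vulnerable
server taking it as "0" would read the 23 bytes of payload as the beginning
of a second request.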
In the field, the risk depends on the server. Most commonly used servers
already have safe content-length parsers, but users relying on haproxy to
protect a known-vulnerable server might be at risk (and the risk of a bug
even in a reputable server should never be dismissed).
A configuration-based workaround consists in adding the following rule in
the frontend, to explicitly reject requests featuring an empty
content-length header that would not have been folded into an existing
one:
http-request deny if { hdr_len(content-length) 0 }
The real fix consists in adjusting the parser so that it always expects a
value at the beginning of the header or after a comma. It will now reject
requests and responses having empty values anywhere in the C-L header.
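For example, all of the following illustrative values are now rejected:

    Content-Length:
    Content-Length: 10,
    Content-Length: ,10
    Content-Length: 10,,10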
This needs to be backported to all supported versions. Note that the
modification was made to the functions h1_parse_cont_len_header() and
http_parse_cont_len_header(). Prior to 2.8 the latter was named
h2_parse_cont_len_header(). One day the two should be merged, but the
former is also used by Lua.
The HTTP messaging reg-tests were extended to cover these cases.
Thanks to Ben Kallus of Dartmouth College and Narf Industries for
reporting this! (this is in GH #2237).
Splicing is not available on all platforms. Thus a dedicated script is used
to check that we properly skip the payload for bodyless responses when
splicing is used. This way, we are still able to test the feature with the
original script on all platforms.
This patch fixes an issue on the CI introduced by commit ef2b15998
("BUG/MINOR: htx/mux-h1: Properly handle bodyless responses when splicing is
used"). It must be backported with the above commit.
There is a mechanism in the H1 and H2 multiplexers to skip the payload when
a response returned to the client must not contain any payload (response to
a HEAD request, or a 204/304 response). However, this does not work when
splicing is used. The H2 multiplexer does not support splicing, so there is
no issue there. But with the mux-h1, when data are sent using kernel
splicing, the mux on the server side is not aware that the client side
should skip the payload. And once the data are put in a pipe, there is no
way to stop the sending.
It is a defect of the current design. This case will be easier to deal with
when mux-to-mux forwarding is implemented. But for now, to fix the issue, we
add an HTX flag on the start-line to pass the info from the client side to
the server side and to be able to disable splicing when necessary.
The associated reg-test was improved to be sure it does not fail when
splicing is configured.
This patch should be backported as far as 2.4.
Depending on the timing, the log messages can occasionally be mixed: a
client can start and be fully handled by HAProxy (including its log message)
before the log message of the previous client is emitted or received. To fix
the issue, a barrier was added to be sure to evaluate the "expect" rules on
logs before starting the next client.
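A minimal vtest sketch of the approach (frontend/socket names and the regex
are illustrative):

    barrier b1 cond 2

    syslog S1 -level info {
        recv
        expect ~ ".* fe1 .* /c1 .*"
        barrier b1 sync
    } -start

    client c1 -connect ${h1_fe1_sock} {
        txreq -url "/c1"
        rxresp
    } -run

    client c2 -connect ${h1_fe1_sock} {
        barrier b1 sync    # wait for c1's log line before sending
        txreq -url "/c2"
        rxresp
    } -run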
Since commit f2b02cfd9 ("MAJOR: http-ana: Review error handling during HTTP
payload forwarding"), during payload forwarding, when we are analyzing one
side, we no longer test the opposite side. It means that when the HTTP
request forwarding analyzer is called, we no longer check the response side,
and vice versa.
Unfortunately, since then, HTTP tunneling is broken after a protocol
upgrade. The response side is switched to TUNNEL mode while the request side
remains in DONE state. As a consequence, data received from the server are
forwarded to the client, but data received from the client are not.
To fix the bug, when both sides are in DONE state, both are switched to
TUNNEL mode at the same time, if requested. This is performed the same way
in http_end_request() and http_end_response().
This patch should fix issue #2125. It is 2.8-specific. No backport needed.
If "-dF" command line argument is passed to haproxy to execute the script,
by sepcifying HAPROXY_ARGS variable, http_splicing.vtc is now skipped.
Without this patch, the script fails when the fast-forward is disabled.
A feature command was added to detect if infinite forward is disabled to be
able to skip the script. Unfortunately, it is no supported to evaluate such
expression. Thus remove it. For now, reg-tests must not be executed with
"-dF" option.
The -dF option can now be used to disable data fast-forwarding. It does the
same as the global option "tune.fast-forward off". Some reg-tests may rely
on this optimization. To detect the feature and skip such scripts, the
following vtest command must be used:
feature cmd "$HAPROXY_PROGRAM -cc '!(global.tune & GTUNE_NO_FAST_FWD)'"
The error handling in the HTTP payload forwarding is far from ideal, because
both sides (request and response) are tested each time. It is especially
ugly on the request side. To report a server error instead of a client
error, there are some workarounds to delay the error handling. The reason is
that the request analyzer is evaluated before the response one. In addition,
errors are tested before the data analysis. It means it is possible to
truncate data because errors may be handled too early.
So the error handling at this stage was totally reviewed. Aborts are now
handled after the data analysis. We also no longer finish the response on a
request error, or the opposite. As a side effect, the HTTP_MSG_ERROR state
is now useless. As another side effect, the termination flags are now set by
the HTTP analysers and not by process_stream().
When we wait for the request body, we are still in the request analysis, so
the SF_FINST_R flag must be reported in logs. Even if some data are already
received, at this stage, nothing is sent to the server.
This patch could be backported to all stable versions.
Depending on the timing, the log messages can occasionally be mixed: a
client can start and be fully handled by HAProxy (including its log message)
before the log message of the previous client is emitted or received. To fix
the issue, a barrier was added to be sure to evaluate the "expect" rules on
logs before starting the next client.
This patch should fix issue #1847. It may be backported to all branches
containing this reg-test.
From time to time, users complain about getting 400-Bad-Request responses
for totally valid CONNECT requests. After analysis, it is because the H1
parser performs an exact match between the authority and the Host header
value. For non-CONNECT requests, this is valid. But for CONNECT requests,
the authority must contain a port, while it is often omitted from the Host
header value (for default ports).
So, to be sure not to reject valid CONNECT requests, a basic authority
validation is now performed during message parsing. In addition, the Host
header value is normalized: the default port is removed if possible.
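For example, this kind of hypothetical request used to be rejected because
of the port mismatch, and is now accepted:

    CONNECT www.example.com:80 HTTP/1.1
    Host: www.example.com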
This patch should solve issue #1761. It must be backported to 2.6 and
probably as far as 2.4.
The default client timeout is too small to be sure to always wait for the
end of slow clients (the last two clients use a delay to send their
request). But it cannot be increased, because that would slow down the
regtest execution. So a dedicated frontend with a higher client timeout has
been added. This frontend is used by the "slow" clients; the other one is
used for normal requests.
Depending on the timing, the log message for the "/c4" request can
occasionally be received before the one for the "/c2" request. To
(hopefully) fix the issue, a barrier has been added to wait for the "/c2"
log message before sending the other requests.
Depending on the timing, the second client, which should be reported as a
client abort during the connection attempt ("CC--" termination state), is
sometimes logged as a server close ("SC--" termination state) instead. It
happens because the connection failure to the server s1 is sometimes
detected by haproxy before the client c2 aborts. There are no retries and
the connection timeout is set to 100ms, so, to work, the client abort must
be performed and detected by haproxy in less than 100ms.
To fix the issue, the c2 client is now routed to a backend with a connection
timeout set to 1 second and 10 retries. This should be large enough to
detect the client abort (~10s).
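Schematically, the dedicated backend looks like this (sketch, names are
illustrative):

    backend be_c2
        retries 10
        timeout connect 1s
        server s1 ${s1_addr}:${s1_port}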
In addition, there is another race condition when the script is started.
Sometimes, server s1 is not stopped yet when the first client sends its
request. So a barrier was added to be sure it is stopped before starting to
send requests. And we wait to be sure the server is detected as DOWN to
unblock the barrier. This is performed by a dedicated backend with a health
check on server s1.
This patch should solve issue #1664.
With the CI occasionally slowing down, we're starting to see again some
spurious failures despite the long 1-second timeouts. These report false
positives that are disturbing and don't provide as much value as they could.
Yet at this duration it is already a pain for developers to wait for the
tests to complete.
This commit adds support for the new environment variable
HAPROXY_TEST_TIMEOUT that will allow anyone to modify the connect,
client and server timeouts. It was set to 5 seconds by default, which
should be plenty for quite some time in the CI. All relevant values
that were 200ms or above were replaced by this one. A few larger
values were left as they are special. One test for the set-timeout
action that used to rely on a fixed 1-sec value was extended to a
fixed 5-sec, as the timeout is normally not reached, but it needs
to be known to compare the old and new values.
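In a reg-test the pattern typically looks like this (sketch):

    defaults
        timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"

so the 5-second default applies unless the variable is set in the
environment.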
Some tests expect a 503, typically those checking that a wrong CA/CRL will
not be accepted between a server and a frontend. But such tests tend to last
very long, simply because of the 1-second turn-around on connection retries
that happens during the failure. Let's properly set the retries count to
zero for these ones. One test purposely wants to exhaust the retries, so its
retries count was set to 1 instead.
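In the affected backends this is simply (sketch, server address is
hypothetical):

    backend be_expect_503
        retries 0          # no retry, so no 1s turn-around on the failure
        server s1 192.0.2.1:443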
Implement parsing for the server keyword 'ws'. This is used to configure
the protocol selection mode for websocket streams. The configuration
documentation has been updated.
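A configuration sketch (the value names match the documented ones: "auto",
"h1" and "h2"; addresses are hypothetical):

    backend be_ws
        server s1 192.0.2.1:443 ssl alpn h2,http/1.1 ws h2  # force Extended CONNECT
        server s2 192.0.2.2:80 ws h1                        # force HTTP/1.1 Upgrade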
A new regtest has been created to test the proper behavior of the
keyword.
RFC 8441 was not respected by haproxy with regard to server support for
Extended CONNECT. The Extended CONNECT method was used to convert an Upgrade
header stream even if no SETTINGS_ENABLE_CONNECT_PROTOCOL was received,
which is forbidden by RFC 8441. In this case, the behavior of the HTTP/2
server is unspecified.
Fix this by flagging the connection on reception of the RFC 8441 setting
SETTINGS_ENABLE_CONNECT_PROTOCOL. Extended CONNECT is thus only used if the
flag is present. Otherwise, the stream is immediately closed, as there is no
way to handle it in HTTP/2. This results in an HTTP/1.1 502 or an HTTP/2
RST_STREAM on the client side.
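Schematically:

    server SETTINGS with ENABLE_CONNECT_PROTOCOL = 1  => Extended CONNECT may be used
    server SETTINGS without the flag                  => upgrade stream closed
                                                         (502 on h1, RST_STREAM on h2)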
The protocol-upgrade regtest has been extended to test that haproxy does
not emit Extended CONNECT to servers without RFC 8441 support.
It must be backported up to 2.4.
Some changes were pushed to improve the parsing of the Transfer-Encoding
header and all related stuff. This new script adds some tests to validate
these changes.
When a message is parsed and copied into the channel buffer, in
h1_process_demux(), more space is requested if some pending data remain
after the parsing while the channel buffer is not empty. To do so, the
CS_FL_WANT_ROOM flag is set. It means the H1 parser needs more space in the
channel buffer to continue. In the stream-interface, when this flag is set,
the SI is considered blocked on the RX path. It is only unblocked when some
data are sent.
However, this is not accurate, because the parsing may have stopped because
there were not enough data to continue, for instance in the middle of a
chunk size. In this case, some data may already have been copied, but the
parser is blocked because it must receive more data to continue. If the
calling SI is blocked on RX at this stage while the stream is waiting for
the payload (because http-buffer-request is set, for instance), the stream
remains stuck indefinitely.
To fix the bug, we must request more space from the app layer only when it
is not possible to copy more data. In practice, this happens when data
remain in the input buffer while the H1 parser is in the MSG_DATA or
MSG_TUNNEL state, or when we are unable to copy headers or trailers into a
non-empty buffer.
The first condition is quite easy to handle. The second one requires an API
refactoring. The h1_parse_msg_hdrs() and h1_parse_msg_tlrs() functions have
been updated. Now it is possible to know when more space is needed in the
buffer to copy headers or trailers (-2 is returned). In the H1 mux, a new
H1S flag (H1S_F_RX_CONGESTED) is used to track this state inside
h1_process_demux().
This patch is part of a series related to issue #1362. It should be
backported as far as 2.0, probably with some adaptations. So be careful
during backports.
The abortonclose test was only expecting a close after all server retries
were exhausted; it didn't check for the pending 503, which fails with new
versions of vtest starting with commit 8d6c6bd ("Leak-plugging on
barriers").
This may be backported, but carefully in case older versions would
really close without responding.
This test ensures that h2 pseudo-headers are properly checked for invalid
characters and that the host header is ignored if :authority is present.
This is necessary to prevent h2 desync attacks as described here:
https://portswigger.net/research/http2
This test ensures that the http scheme-based normalization is properly
applied to the target URL and host header. It uses h2 clients, as it is not
possible to specify an absolute URL for h1 vtc clients.
Since commit 5e702fcad ("MINOR: http-ana: Use -1 status for client aborts
during queuing and connect"), a -1 status is reported in the log message
when the client aborts during queuing and connect. The
http_abortonclose.vtc script must be updated accordingly.
Some regtests involve multiple requests from multiple clients, which can
be dispatched as multiple requests to a server. It turns out that the
idle connection sharing works so well that very quickly few connections
are used, and regularly some of the remaining idle server connections
time out at the moment they were going to be reused, causing those random
"HTTP header incomplete" traces in the logs that make them fail often. In
the end this is only an artefact of the test environment.
And indeed, some tests like normalize-uri, which perform a lot of reuse,
fail very often, about 20-30% of the time in the CI, and 100% of the time
locally when running 1000 tests in a row. Others like ubase64,
sample_fetches or vary_* fail less often, but still a lot.
This patch addresses this by adding "tune.idle-pool.shared off" to all
tests which have at least twice as many requests as clients. It proves very
effective, as not a single error happens on normalize-uri anymore after
10000 tests. Also, 100 full runs of all tests yield no error anymore.
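i.e. each of these scripts now carries (sketch):

    global
        tune.idle-pool.shared off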
One test is tricky: http_abortonclose. It used to fail ~10 times per 1000
runs, and with this workaround it still fails once every 1000 runs. But the
test is complex and there's a warning in it mentioning a possible issue
when run in parallel due to port reuse.
This script tests the abortonclose option for HTTP/1 clients only. It may
be backported as far as 2.0. But on 2.2 and prior, the syslog part must be
adapted to catch the log messages emitted by the proxies during HAProxy
startup. The following lines must be added:
recv
expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy fe1 started."
recv
expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy fe2 started."