Now with fc_glitches and bc_glitches we can retrieve the number of
detected glitches on a front or back connection. On the backend it
can indicate a bug in a server that may induce frequent reconnections,
hence CPU time spent on TLS reconnections, and on the frontend it may
indicate an abusive client that may be trying to attack the stack
or to fingerprint it. Small non-zero values are definitely expected
and can be caused by network glitches for example, as well as rare
bugs in the other component (or maybe even in haproxy). These should
never be considered as alarming as long as they remain low (i.e.
much less than one per request). A reg-test is provided.
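A hedged sketch of a possible use (the threshold and the action are
illustrative, not part of this patch):

    frontend fe
        bind :8080
        # close abusive clients whose connection accumulated too many glitches
        http-request reject if { fc_glitches gt 100 }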
The new global keywords "http-err-codes" and "http-fail-codes" make it
possible to redefine which HTTP status codes indicate a client-induced
error or a server error, as tracked by stick-table counters. This is only
done globally, though everything was done so that it could easily be
extended to a per-proxy mechanism if there were a real need for it (but
it would eat quite a bit more RAM then).
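A minimal sketch of how the new keywords could be used (the exact
status-code ranges are illustrative):

    global
        # count 404 and 429 as client-induced errors
        http-err-codes 400-404,429
        # count any 5xx except 501 and 505 as server failures
        http-fail-codes 500,502-504,506-599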
A simple reg-test was added (http-err-fail.vtc).
Willy made me realize that tree-based matching may also suffer from
out-of-order mapfile loading, contrary to what is said in commit
b546bb6d ("BUG/MINOR: map: list-based matching potential ordering
regression") and the associated REGTEST.
Indeed, in case of duplicated keys, we want to be sure that only the key
that was first seen in the file will be returned (as long as it is not
removed). The above fix is still valid, and the list-based match regtest
will also prevent regressions for tree-based match since mapfile loading
logic is currently match-type agnostic.
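For illustration, a tiny map file with a duplicated key (file name and
values are made up); lookups must keep returning the value of the first
occurrence:

    # dup.map - "example.com" is duplicated; only "value1" may be returned
    example.com value1
    example.com value2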
But let's clarify that by making both the code comment and the regtest
more precise.
As shown in "BUG/MINOR: map: list-based matching potential ordering
regression", list-based matching types such as dom are affected by the
order in which elements are loaded from the map.
Since this is historical behavior and existing usages depend on it, we
add a test to prevent future regressions.
The previous patch fixed a regression which caused some configs with
attach-srv to be rejected if the rule was declared before the target
server itself. To better detect this kind of error, mix the declaration
order in the corresponding regtest.
Previously an expression like:
path,word(2,/) -m found
always returned `true`.
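For illustration, a hedged config snippet (the redirect target is made
up); before the fix the ACL below matched even for a path such as "/"
that has no second slash-delimited word:

    frontend fe
        bind :8080
        acl has_second_seg path,word(2,/) -m found
        http-request redirect location /default if !has_second_seg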
The bug has existed for as long as the `word` converter has, i.e. since
commit c9a0f6d023.
The same bug was previously fixed for the `field` converter in commit
4381d26edc.
The fix should be backported to 1.6+.
Mark the reverse HTTP feature as experimental. This will allow the
configuration mechanism to be adjusted along with future developments if
needed, without having to maintain backward compatibility.
Concretely, each config directive linked to it now requires the global
expose-experimental-directives directive to be specified first. This is
the case for the following directives:
- the rhttp@ prefix used on bind and server lines
- nbconn bind keyword
- attach-srv tcp rule
Each documentation section referring to these keywords is updated to
highlight this new requirement.
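A condensed, hedged sketch of what this now requires (names are
illustrative, and the active side with its rhttp@ bind and nbconn keyword
would normally live on a separate instance):

    global
        # now mandatory before any of the experimental keywords below
        # (rhttp@, nbconn, attach-srv)
        expose-experimental-directives

    frontend fe_edge
        bind :8081
        tcp-request session attach-srv be_reverse/dev

    backend be_reverse
        server dev rhttp@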
Note that this commit has duplicated in several places the code from the
global function check_kw_experimental(). This is because the latter only
works with the cfg_keyword type, which is not suited to the bind_kw or
action_kw types. This should be improved in a future patch.
http_reuse_be_transparent.vtc relies on "transparent" proxy option which
is guarded by the USE_TPROXY ifdef at multiple places in the code.
Hence, executing the above test when haproxy was compiled without the
USE_TPROXY feature (ie: generic target) results in this kind of error:
*** h1 debug|[NOTICE] (1189756) : haproxy version is 2.9-dev1-8fc21e-807
*** h1 debug|[NOTICE] (1189756) : path to executable is ./haproxy
*** h1 debug|[ALERT] (1189756) : config : parsing [/tmp/vtc.1189751.18665e7b/h1/cfg:11]: option 'transparent' is not supported due to build options.
*** h1 debug|[ALERT] (1189756) : config : Error(s) found in configuration file : /tmp/vtc.1189751.18665e7b/h1/cfg
Now we skip the regtest if TPROXY feature is missing.
After giving it some thought, it could pretty well happen that other
protocols benefit from the sticky algorithm, which some previously used
to emulate with tricks such as "stick-on int(0)". So better
rename it to "sticky" right now instead of having to keep that "log-"
prefix forever. It's still limited to logs, of course, only the algo
is renamed in the config.
I've had this test sitting here uncommitted for the last 2.5 years; it
works fine and I didn't notice it was not part of the tree. It makes a
server return odd-sized chunked responses with short pauses between each
half of them and verifies they're not truncated on the client. It may
eventually detect state machine breakages, so better commit it.
"log-balance" directive was recently introduced to configure the
balancing algorithm to use when in a log backend. However, it is
confusing and it causes issues when used in default section.
In this patch, we take another approach: first we remove the
"log-balance" directive, and instead we rely on existing "balance"
directive to configure log load balancing in log backend.
Some algorithms such as roundrobin can be used as-is in a log backend,
and log-only algorithms are exposed as "log-$name" values of the
"balance" directive.
The documentation was updated accordingly.
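A minimal sketch of the resulting configuration (server addresses are
illustrative):

    backend log_be
        mode log
        balance roundrobin      # "balance" now also drives log load-balancing
        server s1 udp@10.0.0.1:514
        server s2 udp@10.0.0.2:514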
Since the reload is now synchronous over the master CLI, try to reload
with it. This was a problem before with the signals because it wasn't
possible to wait for the end of the reload before sending the requests.
This re-activates the test; we will see whether it is more stable, or we
will deactivate it again.
We now take care to properly handle the abortonclose option when it is
set on the backend, and make sure to ignore it when it is set on the
frontend (inherited from the defaults section).
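As an illustration of the intended behavior (a sketch, not taken from the
patch itself):

    defaults
        # inherited by both frontends and backends; with this change it is
        # only honored on the backend side and ignored on the frontend side
        option abortonclose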
The current version of VTest tests the output of "haproxy -c" instead of
the return code. Since we no longer output anything when the
configuration is valid (a06f621), this broke the test.
This fixes the issue by adding -V when doing a -conf-OK, but this must
really be fixed in VTest.
To follow up on the implementation of the new set-proxy-v2-tlv-fmt
keyword on the server line, the connection is updated to use the
previously allocated TLVs. If no value was specified, we send out an
empty TLV.
As the feature is fully working with this commit, documentation and a
test for the server and default-server are added as well.
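A hedged example of what a configuration using it might look like (the
TLV type 0xE1 and the address are illustrative):

    backend be_pp
        # forward a custom TLV received on the frontend connection
        server app1 192.168.0.10:8080 send-proxy-v2 set-proxy-v2-tlv-fmt(0xE1) %[fc_pp_tlv(0xE1)]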
This new fetcher can be used to extract the list of cookie names from
the Cookie request header or from the Set-Cookie response headers,
depending on the stream direction. An optional argument can be used as
the delimiter (assumed to be the first character of the argument)
between cookie names. The default delimiter is a comma (,).
Note that we will treat the Cookie request header as a semi-colon
separated list of cookies and each Set-Cookie response header as
a single cookie and extract the cookie names accordingly.
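A sketch of a possible use, assuming the fetchers are exposed as
req.cook_names and res.cook_names (header and variable names are made
up):

    frontend fe
        bind :8080
        # store the comma-separated list of request cookie names
        http-request set-var(txn.cookie_names) req.cook_names
        http-after-response set-header X-Set-Cookie-Names %[res.cook_names]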
Signature algorithms allow us to select the right certificates when
using TLSv1.3. This patch updates the ssl_crt-list_filters.vtc regtest to
do more precise testing with TLSv1.3 in addition to TLSv1.2.
This allows us to correctly test bug #2300.
It could be backported to 2.8 with the previous fix for certificate
selection.
The method now returns the content of JSON arrays as a string when the
JSON path points to one. The start and end characters are square
brackets. Any complex object in the array is returned as JSON, so you
might get arrays of arrays or of objects. This is only recommended for
arrays of simple types (e.g., string or int), which will be returned as a
CSV string. Also updated the documentation, fixed an issue with
parentheses, and addressed other changes from review comments.
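A hedged example of what this enables (the body layout, variable name and
JSON path are illustrative):

    frontend fe
        bind :8080
        option http-buffer-request
        # with a body such as {"ids":[1,2,3]}, the array is returned as "1,2,3"
        http-request set-var(txn.ids) req.body,json_query('$.ids')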
This patch was discussed in issue #2281.
Signed-off-by: William Lallemand <wlallemand@haproxy.com>
The maxconn keyword is not used anymore for reverse HTTP binds. It has
recently been replaced by the new keyword nbconn. As its default value
is 1, it can safely be removed from the regtest without affecting its
behavior.
mux-to-mux fast-forwarding will be added. To avoid mixing it with splicing
and to simplify the commits, the kernel splicing support is removed from
the stconn. The CF_KERN_SPLICING flag is removed and the support is no
longer tested in process_stream().
In the stconn part, the rcv_pipe() callback function is no longer called.
Reg-tests scripts testing the kernel splicing are temporarily marked as
broken.
This regtest declares and uses 3 log backends, one of which has TCP syslog
servers declared in it and the other ones UDP syslog servers.
Some tests aim at testing log distribution reliability by leveraging the
log-balance hash algorithm with a key extracted from the request URL, and
the dummy vtest syslog servers ensure that messages are sent to the
correct endpoint. Overall this regtest covers essential parts of the log
message distribution and log-balancing logic involved with log backends.
It also leverages the log-forward section to perform the TCP->UDP
translation required to test the TCP endpoints, since the vtest syslog
servers only work in UDP mode.
Finally, we have some tests to ensure that the server queuing/dequeuing
and failover (backup) logic works properly.
This patch adds a hash of the Origin header to the cache's secondary key.
This makes it possible to store responses that have a "Vary: Origin"
header in the cache when vary processing is enabled.
This cannot be considered as a means to manage CORS requests though, it
only processes the Origin header and hashes the presented value without
any form of URI normalization.
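A minimal sketch of a cache setup that would benefit from this (names,
sizes and addresses are illustrative):

    cache mycache
        total-max-size 4
        # required so that Vary (now including Origin) is honored
        process-vary on

    backend be_app
        http-request cache-use mycache
        http-response cache-store mycache
        server app1 192.168.0.10:8080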
This need was expressed by Philipp Hossner in GitHub issue #251.
Co-Authored-by: Philipp Hossner <philipp.hossner@posteo.de>
In the random-forwarding.vtc script, adding a "Content-Length: 0" header
to the successful response to the CONNECT request is invalid, and it may
also lead to a wrong check on the response, as the "rxresp" directive
doesn't handle CONNECT responses. Thus "-no_obj" must be added instead, to
be sure the payload won't be retrieved or expected.
Added set-timeout for the frontend side of the session, so it can be used
to set custom per-client timeouts if needed. Also added cur_client_timeout
to fetch the current client timeout as a sample.
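A hedged example of how this could be used (host name and timeout values
are illustrative):

    frontend fe
        bind :8080
        timeout client 10s
        # give a specific client population a larger timeout
        http-request set-timeout client 30s if { req.hdr(host) -i slow.example.com }
        # expose the timeout currently applied to this client
        http-response set-header X-Client-Timeout %[cur_client_timeout]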
Prior to this commit, the "bytes" converter took only integer values as
arguments. After this commit, it can also take variable names as
arguments.
This allows us to dynamically determine the offset/length and capture
them in variables. These variables can then be used with the converter.
Example use case: parsing a token present in a request header.
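A sketch of such a use (header and variable names are made up, and the
offset/length headers are assumed to carry integer values):

    frontend fe
        bind :8080
        http-request set-var(txn.off) req.hdr(x-token-offset)
        http-request set-var(txn.len) req.hdr(x-token-length)
        # slice the token using the dynamically captured offset/length
        http-request set-header X-Token %[req.hdr(x-token),bytes(txn.off,txn.len)]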
This patch implements the 'curves' keyword on server lines as well as
the 'ssl-default-server-curves' keyword in the global section.
It also adds the keyword to the server line in the ssl_curves reg-test.
These keywords allow the configuration of the curves list for a server.
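A brief sketch of both keywords (the curve list and the address are
illustrative):

    global
        ssl-default-server-curves X25519:P-256

    backend be_ssl
        server app1 192.168.0.10:443 ssl verify none curves X25519:P-256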
Based on the new, generic allocation infrastructure, a new sample
fetch fc_pp_tlv is introduced. It is an abstraction for existing
PPv2 TLV sample fetches. It takes any valid TLV ID as argument and
returns the value as a string, similar to fc_pp_authority and
fc_pp_unique_id.
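A hedged example of how it might be used (the TLV type and header name
are illustrative):

    frontend fe
        bind :8080 accept-proxy
        # fetch an arbitrary PROXY protocol v2 TLV by its numeric type
        http-request set-header X-PP2-Custom %[fc_pp_tlv(0xE1)]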
The bug was introduced by commit 26654 ("MINOR: ssl: add "crt" in the
cert_exts array").
When looking for a .crt directly in the cert_exts array, the
ssl_sock_load_pem_into_ckch() function will be called with an argument
which does not have its ".crt" extension anymore.
If "ssl-load-extra-del-ext" is used this is not a problem since we try
to add the ".crt" when doing the lookup in the tree.
However, when directly using a ".crt" without this option, it will fail
to find the file in the tree.
The fix removes the "crt" entry from the array since it does not seem to
be really useful without a rework of all the lookups.
Should fix issue #2265
Must be backported as far as 2.6.
This test instantiates two haproxy instances:
* the first one uses a reverse server with two binds, pub and priv
* the second one uses a reverse bind to initiate a connection to the priv
  endpoint
On startup, only the first haproxy instance is up. A client sends a
request to the pub endpoint and should receive an HTTP 503 as no
connection is available on the reverse server.
The second haproxy instance is then started. A delay of 3 seconds is
inserted to wait for the connection between the two LBs. Then a client
retries the request and this time should receive an HTTP 200, reusing
the bootstrapped connection.
This regtest is similar to the previous one, except the optional name
argument is specified.
An extra haproxy instance is used as a gateway for clear/TLS as vtest
does not support TLS natively.
A first request is made with a name which does not match the idle
connection's SNI. This must result in an HTTP 503. Then the correct name
is used, which must result in a 200.
Test support for reverse servers. This can be tested without the opposite
haproxy reversal support through a combination of VTC clients used to
emit HTTP/2 responses after connecting.
This test first ensures that we get a 503 when connecting to a reverse
server with no idle connection. Then a dummy VTC client is connected to
act as a server. It is then expected that the same request succeeds
with a 200 this time.
Introduced in:
424981cde REGTEST: add ifnone-forwardfor test
b015b3eb1 REGTEST: add RFC7239 forwarded header tests
see also:
fbbbc33df REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+
Ben Kallus also noticed that we preserve leading zeroes on content-length
values. While this is totally valid, it would be safer to at least trim
them before passing the value, because a bogus server written to parse
using "strtol(value, NULL, 0)" could inadvertently take a leading zero
as a prefix for an octal value. While there is not much that can be done
to protect such servers in general (e.g. lack of check for overflows etc),
at least it's quite cheap to make sure the transmitted value is normalized
and not taken for an octal one.
This is not really a bug, rather a missed opportunity to sanitize the
input, but is marked as a bug so that we don't forget to backport it to
stable branches.
A combined regtest was added to h1or2_to_h1c which already validates
end-to-end syntax consistency on aggregate headers.
The content-length header parser has its dedicated function, in order
to take extreme care about invalid, unparsable, or conflicting values.
But there's a corner case in it, by which it stops comparing values
when reaching the end of the header. This has for a side effect that
an empty value or a value that ends with a comma does not deserve
further analysis, and it acts as if the header was absent.
While this is not necessarily a problem for a value ending with a
comma, as it will cause a header folding and will disappear, it is a
problem for the first isolated empty header because this one will not
be reconstructed when the next ones are seen, and will be passed as-is to the
backend server. A vulnerable HTTP/1 server hosted behind haproxy that
would just use this first value as "0" and ignore the valid one would
then not be protected by haproxy and could be attacked this way, taking
the payload for an extra request.
In the field, the risk depends on the server. Most commonly used servers
already have safe content-length parsers, but users relying on haproxy
to protect a known-vulnerable server might be at risk (and the risk of
a bug even in a reputable server should never be dismissed).
A configuration-based workaround consists in adding the following rule
in the frontend, to explicitly reject requests featuring an empty
content-length header that would not have been folded into an existing
one:
http-request deny if { hdr_len(content-length) 0 }
The real fix consists in adjusting the parser so that it always expects a
value at the beginning of the header or after a comma. It will now reject
requests and responses having empty values anywhere in the C-L header.
This needs to be backported to all supported versions. Note that the
modification was made to functions h1_parse_cont_len_header() and
http_parse_cont_len_header(). Prior to 2.8 the latter was in
h2_parse_cont_len_header(). One day the two should be fused, but the
former is also used by Lua.
The HTTP messaging reg-tests were completed to test these cases.
Thanks to Ben Kallus of Dartmouth College and Narf Industries for
reporting this! (this is in GH #2237).