Since mailers/healthcheckmail.vtc already requires Lua to emulate the
SMTP server for the test, force it to use the Lua mailers example script
to send email-alerts, so we no longer rely on the legacy tcpcheck
mailers implementation.
This is done by simply loading examples/mailers.lua (as a symlink) from
the haproxy config file.
As this feature has a dependency on resolvers being configured,
this test acts as good documentation as well.
This change also includes a spelling fix in a filename.
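For illustration, the resulting setup looks like this (a minimal sketch,
not the exact test config; addresses and names are illustrative):

    global
        lua-load ${testdir}/mailers.lua

    mailers mymailers
        mailer smtp1 127.0.0.1:1025

    backend be1
        email-alert mailers mymailers
        email-alert from haproxy@localhost
        email-alert to admin@localhost
        email-alert level notice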
This commit makes sure that if there is no "alpn", "npn" nor "no-alpn"
setting on a "bind" line which corresponds to an HTTPS or QUIC frontend,
we automatically turn on "h2,http/1.1" as an ALPN default for an HTTP
listener, and "h3" for a QUIC listener. This simplifies the configuration
for end users since they won't have to explicitly configure the ALPN
string to enable H2, considering that at the time of writing, HTTP/1.1
represents less than 7% of the traffic on large infrastructures. The
doc and regtests were updated. For more info, refer to the following
thread:
https://www.mail-archive.com/haproxy@formilux.org/msg43410.html
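For example (a sketch with an illustrative certificate path):

    frontend fe1
        # Previously, ALPN had to be set explicitly to enable H2 or H3:
        #   bind :443 ssl crt site.pem alpn h2,http/1.1
        #   bind quic4@:443 ssl crt site.pem alpn h3
        # Now these lines default to "h2,http/1.1" and "h3" respectively:
        bind :443 ssl crt site.pem
        bind quic4@:443 ssl crt site.pem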
Since commit f2b02cfd9 ("MAJOR: http-ana: Review error handling during
HTTP payload forwarding"), while we are analyzing one side during the
payload forwarding, we no longer test the opposite side. It means that when
the HTTP request forwarding analyzer is called, we no longer check the
response side, and vice versa.
Unfortunately, since then, HTTP tunneling is broken after a protocol
upgrade: only the response side is switched to TUNNEL mode, while the
request side remains in DONE state. As a consequence, data received from
the server are forwarded to the client, but data received from the client
are not forwarded to the server.
To fix the bug, when both sides are in DONE state, both are switched at the
same time to TUNNEL mode if it was requested. This is performed the same
way in http_end_request() and http_end_response().
This patch should fix issue #2125. It is 2.8-specific. No backport
needed.
This patch proposes to enumerate servers using the internal HAProxy list.
It also removes the SRV_F_NON_PURGEABLE flag, which made a server
non-purgeable each time Lua used it.
Removing reg-tests/cli_delete_server_lua.vtc since this test is no
longer relevant (we don't set the SRV_F_NON_PURGEABLE flag anymore)
and we already have a more generic test:
reg-tests/server/cli_delete_server.vtc
Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
In order to increase usability, the "show ssl ocsp-response" command also
takes a frontend certificate path as parameter. In such a case, it behaves
the same way as "show ssl cert foo.pem.ocsp".
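For instance, in a reg-test (a sketch; the certificate path and the
expected output are illustrative):

    haproxy h1 -cli {
        send "show ssl ocsp-response ${testdir}/foo.pem"
        expect ~ "OCSP Response Data:"
    }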
Instead of having a dedicated httpclient instance with its own code,
separate from the actual auto-update one, the "update ssl ocsp-response"
command will now use the update task in order to perform updates.
Since the CLI command allows updating responses that were never included
in the auto-update tree, a new flag was added to the certificate_ocsp
structure so that such an entry can be inserted into the tree "by hand",
without being reinserted into the tree after the update process is
performed. The 'update_once' flag "steals" a bit from the 'fail_count'
counter, since it is the counter least likely to reach UINT_MAX among the
OCSP counters of the certificate_ocsp structure.
This new logic requires that every certificate_ocsp entry contain all the
OCSP-related information at all times, since entries that are not supposed
to be updated automatically can still be updated through the CLI. The
logic of ssl_sock_load_ocsp was changed accordingly.
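A usage sketch on the CLI side (the certificate path is illustrative and
the output is not checked here):

    haproxy h1 -cli {
        # perform a one-shot update through the shared update task, even
        # for an entry that was never in the auto-update tree
        send "update ssl ocsp-response ${testdir}/foo.pem"
        expect ~ ".*"
    }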
This patch adds support for the PS algorithms (rsa-pss) when verifying JWT
signatures. They were not handled in the first implementation and
previously raised an "Unmanaged algorithm" error.
The tests use the same RSA signature as the plain RSA tests (RS256 ...),
and the implementation simply adds a call to EVP_PKEY_CTX_set_rsa_padding
in the function that manages RSA and ECDSA signatures.
The signatures in the reg-test were built thanks to the PyJWT python
library once again.
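As a usage illustration, verifying a PS256-signed token now works the same
way as for the other algorithms (a sketch; the public key path is
illustrative):

    http-request set-var(txn.bearer) http_auth_bearer
    http-request deny unless { var(txn.bearer),jwt_verify("PS256","/path/to/pubkey.pem") -m int 1 }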
When adding a new certificate through the CLI and appending it to a
crt-list with the 'ocsp-update' option set, the new certificate would
not be added to the OCSP response update list.
The only thing that was missing was the copy of the ocsp_update mode
from the ssl_bind_conf into the ckch_store's object.
An extra wakeup of the update task also needed to happen in case the
newly inserted entry needs to be updated before the next wakeup of the
task.
This patch does not need to be backported.
Add tests for the "show ssl ocsp-updates" cli command as well as the new
'base64' parameter that can be passed to the "show ssl ocsp-response"
command.
The options were placed after the filters, which does not work well and now
raises a warning. This did not break the regtest because the crt-lists were
not actually used by clients.
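In a crt-list, the ssl options between brackets must come right after the
certificate, before the SNI filters (names are illustrative):

    # correct: options first, then filters
    foo.pem [ocsp-update on] www.example.com
    # options after the filters: does not work well, now raises a warning
    foo.pem www.example.com [ocsp-update on]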
Added new testcases for all 4 branches of smp_fetch_hdr_ip():
- a plain IPv4 address
- an IPv4 address with a port number
- a plain IPv6 address
- an IPv6 address wrapped in [] brackets
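With a fetch like "req.hdr_ip(x-src)", this corresponds to the following
header values (illustrative):

    X-Src: 192.0.2.1
    X-Src: 192.0.2.1:8080
    X-Src: 2001:db8::1
    X-Src: [2001:db8::1]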
The 304 responses contain "Content-Length" or "Transfer-Encoding" headers.
The rxresp action expects to get a payload in this case, even though 304
responses must not have any payload. A workaround was added to remove these
headers from the 304 responses. However, a better solution is to only read
the response headers in clients, using the rxresphdrs action.
If a payload is erroneously added to these responses, the scripts will fail
the same way. So it is safe.
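The client side of these scripts thus looks like this (a sketch; the socket
name is illustrative):

    client c1 -connect ${h1_fe_sock} {
        txreq
        # only read the response headers, do not expect a payload
        rxresphdrs
        expect resp.status == 304
    } -run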
Since commit cc9bf2e5f ("MEDIUM: cache: Change caching conditions"),
responses that do not have an explicit expiration time are not cached
anymore. But this mechanism wrongly used the TX_CACHE_IGNORE flag instead
of the TX_CACHEABLE one. The effect was that a cacheable response
corresponding to a request carrying a "Cache-Control: no-cache" header,
for instance, would not be cached.
Contrary to what was said in the other commit message, the "checkcache"
option should not be impacted by the use of the TX_CACHEABLE flag
instead of the TX_CACHE_IGNORE one. The response is indeed considered as
not cacheable if it has no expiration time, regardless of the presence
of a cookie in the response.
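Concretely, an exchange like the following (illustrative) must be stored in
the cache, because the request's "no-cache" only forbids serving a stored
response without revalidation, not storing the response:

    GET /resource HTTP/1.1
    Cache-Control: no-cache

    HTTP/1.1 200 OK
    Cache-Control: max-age=60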
This should fix GitHub issue #2048.
This patch can be backported up to branch 2.4.
In these scripts, several clients perform a request and exit because an SSL
error is expected, and thus no response is sent. However, we must
explicitly wait for the connection close, via an "expect_close" statement.
Otherwise, depending on the timing, HAProxy may detect the client abort
before any connection attempt on the server side, and no SSL error is
reported, making the script fail.
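The client blocks thus end like this (a sketch; the socket name is
illustrative):

    client c1 -connect ${h1_fe_sock} {
        txreq
        # wait for HAProxy to close the connection on the SSL error
        # instead of exiting right after sending the request
        expect_close
    } -run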
If "-dF" command line argument is passed to haproxy to execute the script,
by sepcifying HAPROXY_ARGS variable, http_splicing.vtc is now skipped.
Without this patch, the script fails when the fast-forward is disabled.
A feature command was added to detect if infinite forward is disabled to be
able to skip the script. Unfortunately, it is no supported to evaluate such
expression. Thus remove it. For now, reg-tests must not be executed with
"-dF" option.
The -dF option can now be used to disable data fast-forward. It does the
same as the global option "tune.fast-forward off". Some reg-tests may rely
on this optimization. To detect the feature and skip such a script, the
following vtest command must be used:
feature cmd "$HAPROXY_PROGRAM -cc '!(global.tune & GTUNE_NO_FAST_FWD)'"
Add a new test to prevent any regression for the if-none parameter in
the "forwardfor" proxy option.
This will ensure upcoming refactors don't break reference behavior.
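The tested configuration is essentially (a sketch):

    defaults
        mode http
        # append an x-forwarded-for header only if none is already present
        option forwardfor if-none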
The wrong return value was checked, resulting in dead code and
potential bugs.
It should fix GitHub issue #2005.
This patch should be backported up to 2.5.
When the JWT token signature uses an ECDSA algorithm (ES256 for instance),
the signature is a direct concatenation of the R and S parameters, instead
of OpenSSL's DER format (see section 3.4 of RFC 7518).
The code that verified the signatures wrongly assumed that they came in
OpenSSL's format, so it did not actually work.
We now have the extra step of converting the signature into a complete
ECDSA_SIG that can be fed into OpenSSL's digest verification functions.
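For ES256, for instance, the two layouts are (schematically):

    JOSE/JWS (RFC 7518): R || S, two fixed-size 32-byte big-endian
                         integers, 64 bytes in total
    OpenSSL (DER):       SEQUENCE { INTEGER r, INTEGER s }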
The ECDSA signatures in the regtest had to be recalculated, and this was
done via the PyJWT python library so that we no longer check signatures
that we built ourselves.
This patch should fix GitHub issue #2001.
It should be backported up to branch 2.5.
The error handling in the HTTP payload forwarding is far from ideal,
because both sides (request and response) are tested each time. It is
especially ugly on the request side: to report a server error instead of a
client error, some workarounds are used to delay the error handling. The
reason is that the request analyzer is evaluated before the response one.
In addition, errors are tested before the data analysis. It means it is
possible to truncate data because errors may be handled too early.
So the error handling at these stages was totally reviewed. Aborts are now
handled after the data analysis. We also no longer finish the response on a
request error, or the opposite. As a side effect, the HTTP_MSG_ERROR state
is now useless. As another side effect, the termination flags are now set
by the HTTP analysers and no longer by process_stream().
When we wait for the request body, we are still in the request analysis, so
the SF_FINST_R flag must be reported in logs. Even if some data were
already received, at this stage, nothing has been sent to the server yet.
This patch could be backported to all stable versions.
Tests a subpart of the OCSP auto update feature. It mainly focuses on the
'auto' mode, since the 'on' one relies heavily on timers way too long to
be used in a regtest context.
testdir can be a very long path since it depends on the source directory
path; this can lead to failures during tests when the UNIX socket path
exceeds the maximum allowed length of 97 characters, as defined in
str2sa_range().
16:48:14 [ALERT] *** h1 debug| (10082) : config : parsing [/tmp/haregtests-2022-12-17_16-47-39.4RNzIN/vtc.4850.5d0d728a/h1/cfg:19] : 'bind' : socket path 'unix@/local/p4clients/pkgbuild-bB20r/workspace/build/HAProxy/HAProxy-2.7.x.68.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/HAProxy-2.7.x/src/reg-tests/lua/srv3' too long (max 97)
Also, it is not advisable to create UNIX sockets in the actual source
directory; a dedicated temporary directory created for test purposes
should be used instead.
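The sockets are thus bound under vtest's temporary directory instead, e.g.
(a sketch):

    # was: unix@${testdir}/srv3, which may exceed the 97-character limit
    bind unix@${tmpdir}/srv3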
This should be backported to 2.6
The test still needs more start conditions, like ulimit checks and less
strict value checks.
To be backported where it was activated (as far as 2.5)
When an error occurs during the request parsing, the H1 multiplexer is
responsible for sending a response to the client and for releasing the H1
stream and the H1 connection. In HTTP mode, this is not an issue because at
this stage the H1 connection is in an embryonic state. Thus it can be
released immediately.
However, it is a problem if the connection was first upgraded from a TCP
connection. In this case, a stream-connector is attached and the H1 stream
is not orphan. Thus it must not be released at this stage; it must be
detached first. Otherwise a BUG_ON() is triggered in h1s_destroy().
So now, the H1S is destroyed on early errors, but only if the H1C is in an
embryonic state.
This patch may be related to #1966. It must be backported to 2.7.
Check if USE_OBSOLETE_LINKER=1 was used, so this test can be run when ASAN
is not built, since ASAN requires this option.
For this test to work, the ulimit -n value must be big enough.
Could be backported at least to 2.5.
Change the expected maxconn from 10000 to 11000 in automatic_maxconn.vtc.
To be backported only if the test fails, as the value might be the right
one in previous versions.