Using curl for SSL tests can be a problem if it wasn't compiled against the
right SSL library or doesn't share any cipher with HAProxy. To
have more robust tests we now use HAProxy as an SSL client, so we are
sure that the client and the server share the same SSL requirements.
This patch also adds timeouts in the defaults section, enables logging on
stderr and fixes some indentation issues.
This reg-test tests the client auth feature of HAProxy for both the
frontend and backend sections, with a CRL list.
This reg-test uses 2 chained listeners because vtest does not handle SSL.
This way the frontend client auth and the backend side are tested at the
same time.
It sends 3 requests: one with a correct certificate, one with an expired one
and one with a revoked one. The client then checks that each request received
the expected response with the expected error.
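For reference, the kind of setup being exercised looks roughly like the
following sketch (addresses and certificate file names are illustrative, not
the ones shipped with the test):
    listen clear-lst
        bind 127.0.0.1:8080
        server ssl-srv 127.0.0.1:8443 ssl verify none crt client.pem
    listen ssl-lst
        bind 127.0.0.1:8443 ssl crt server.pem ca-file ca.crt verify required crl-file crl.pem
        server clear-srv 127.0.0.1:8000
The "crt" on the server line provides the client certificate for the backend
side, while "verify required" and "crl-file" on the bind line enforce client
auth on the frontend side.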
The certificates, CA and CRL expire in 2050, so they should be fine for
the CI.
This test could be backported as far as HAProxy 1.6
This reverts commit 1979943c30ef285ed04f07ecf829514de971d9b2.
Captures in comments were only used when a tcp-check expect rule based on a
negative regex match failed, to eventually report what was captured even
though it was not expected. It is a bit far-fetched to be usable IMHO. The
on-error and on-success log-format strings are far more usable. For now there
are few check sample fetches (in fact only one...). But it could be really
powerful to report info in logs.
Parse back-references in comments of tcp-check expect rules. If references
are made, capture groups in the match and replace the references to them
within the comment when logging the error. Both text and binary regexes can
capture groups and reference them in the expect rule comment.
[Cf: I slightly updated the patch. The exp_replace() function is used instead
of a custom one. And if the trash buffer is too small to contain the comment
during the substitution, the comment is ignored.]
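As an illustration (the rule and the wording are made up, not taken from an
actual test), a probe could now report what an unexpected response contained:
    tcp-check connect
    tcp-check expect comment "unexpected error \1 reported" !rstring "^ERROR (.+)"
When the negative rstring rule fails because the regex matched, the first
capture group is substituted for \1 in the logged comment.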
Some expect rules cannot be satisfied due to inherent ambiguity in the
received data: in the absence of a match, the current behavior is to
wait for either the end of the connection or a full buffer, whichever
comes first. Only then is the matching diagnostic considered
conclusive. For instance:
    tcp-check connect
    tcp-check expect !rstring "^error"
    tcp-check expect string "valid"
This check will only succeed if the connection is closed by the server before
the check timeout. Otherwise the first expect rule will wait for more data
until the "^error" regex matches or the check expires.
Allow the user to explicitly define an amount of data that will be
considered enough to determine the value of the check.
This allows negative rstring rules to succeed: previously, in the valid
case, no match happened and the matching was repeated until the end of
the connection, which could time out the check even though no error was
happening.
[Cf: I slightly updated the patch. The parameter was renamed and the value is
a signed integer to support -1 as the default value to ignore the parameter.]
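Applied to the example above, and assuming the parameter ended up being named
min-recv, the ambiguity can be lifted with something like:
    tcp-check connect
    tcp-check expect min-recv 6 !rstring "^error"
    tcp-check expect string "valid"
Here the first rule can be evaluated as soon as enough data has been received,
instead of waiting for a connection close or a full buffer.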
The 'http-check send' directive has been added to add headers and optionally
a payload to the request sent during HTTP healthchecks. The request line may
be customized by the "option httpchk" directive but there was no official way
to add extra headers. An old trick consisted in hiding these headers at the
end of the version string, on the "option httpchk" line. And it was impossible
to add an extra payload with an "http-check expect" directive because of the
"Connection: close" header appended to the request (see issue #16 for
details).
So to make things official and fully support payload additions, the
"http-check send" directive has been added:
    option httpchk POST /status HTTP/1.1
    http-check send hdr Content-Type "application/json;charset=UTF-8" \
        hdr X-test-1 value1 hdr X-test-2 value2 \
        body "{id: 1, field: \"value\"}"
When a payload is defined, the Content-Length header is automatically added,
so chunk-encoded requests are not supported yet. For now, there are no special
validity checks on the extra headers.
This patch is inspired by Kiran Gavali's work. It should fix issue #16 and,
as far as possible, it may be backported, at least as far as 1.8.
Regtest unique-id.vtc was added by commit 5fcec84c58 ("REGTEST: Add
unique-id reg-test") but it relies on the "uuid" sample fetch which
is only available in version 2.0 and above. Let's reflect that in
the REQUIRE_VERSION tag.
Regtest proxy_protocol_tlv_validation was added by commit 488ee7fb6e
("BUG/MAJOR: proxy_protocol: Properly validate TLV lengths") but it
relies on a trick involving http-after-response to append a header
after a 400-badreq response, which is not possible in earlier versions,
so make it depend on 2.2.
Add a sample fetch for the name of a bind. This can be useful to
make decisions when the PROXY protocol is used and we can't rely on dst,
as in the sample config below.
    defaults
        mode http

    listen bar
        bind 127.0.0.1:1111
        server s1 127.0.1.1:1234 send-proxy

    listen foo
        bind 127.0.1.1:1234 name foo accept-proxy
        http-request return status 200 hdr dst %[dst] if { dst 127.0.1.1 }
The abns_socket in seamless-reload regtest regularly fails in Travis-CI
on smaller machines only (typically the ppc64le and sometimes s390x).
The error always reports an incomplete HTTP header as seen from the
client. And this can occasionally be reproduced on the minicloud ppc64le
image when setting a huge file descriptors limit (1 million).
What happens in fact is the following: depending on the binding order,
some connections from the client might reach the TCP listener on the
old instance and be forwarded to the ABNS listener of the second
instance just being prepared to start up. But due to the huge number
of FDs, setting them up takes slightly more time and the 20ms server
timeout may expire before the new instance finishes its startup. This
can result in an occasional 504, except that since the client timeout
is the same as the server timeout, both sides are closed at the same
time and the client doesn't receive the 504.
In addition a second problem plugs onto this: by default http-reuse is
enabled. Some requests being forwarded to the older instance will be
sent over an already established connection. But the CPU used by the
starting process using many FDs will be taken away from the older
process, whose abns listener will not see a request for more than 20ms,
and will decide to kill the idle client connection. At the same moment
the TCP proxy forwards a request over this closing connection, it
detects the close and silently closes the other side to let the
client retry, which is detected by the vtest client as another case
of empty header. This is easier to reproduce in VMs with few CPUs
(2 or less) and some noisy neighbors such as a few spinning loops in
background.
Let's just increase this test's timeouts to avoid this. While a few
ms are close to the scheduler's granularity, this test is never
supposed to trigger the timeouts, so it's safe to go higher without
impacting the test execution time. At one second the problem seems
impossible to reproduce on the minicloud VMs.
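Concretely, the fix boils down to raising the timeouts in the test's
configuration to something like the following (the 1s value comes from the
observation above; the actual vtc layout is not reproduced here):
    defaults
        timeout client 1s
        timeout server 1s
        timeout connect 1s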
These are mostly comments in the code. A few error messages were fixed
and are of low enough importance not to deserve a backport. Some regtests
were also fixed.
This patch adds the `unique-id` option to `proxy-v2-options`. If this
option is set, a unique ID will be generated based on the `unique-id-format`
while sending the proxy protocol v2 header, and stored as the unique ID for
the first stream of the connection.
This feature is meant to be used in `tcp` mode. It works in HTTP mode, but
might result in inconsistent unique IDs for the first request on a keep-alive
connection, because the unique ID for the first stream is generated earlier
than the others.
Now that we can send unique IDs in `tcp` mode the `%ID` log variable is made
available in TCP mode.
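A minimal sketch of the intended usage (addresses and the format string are
illustrative):
    listen tcp-fwd
        mode tcp
        bind :4000
        unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts
        server s1 192.0.2.10:443 send-proxy-v2 proxy-v2-options unique-id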
This patch fixes PROXYv2 parsing when the payload of the TCP connection is
fused with the PROXYv2 header within a single recv() call.
Previously HAProxy ignored the PROXYv2 header length when attempting to
parse the TLV, possibly interpreting the first byte of the payload as a
TLV type.
This patch adds proper validation. It ensures that:
1. TLV parsing stops when the end of the PROXYv2 header is reached.
2. TLV lengths cannot exceed the PROXYv2 header length.
3. The PROXYv2 header ends together with the last TLV, not allowing for
"stray bytes" to be ignored.
A reg-test was added to ensure proper behavior.
This patch tries to find the sweet spot between a small and easily
backportable one, and a cleaner one that's more easily adaptable to
older versions, which is why it merges the "if" and "while" blocks,
causing a reindent of the whole block. It should be used as-is for
versions 1.9 to 2.1, the block about PP2_TYPE_AUTHORITY should be
dropped for 2.0 and the block about CRC32C should be dropped for 1.8.
This bug was introduced when TLV parsing was added. This happened in commit
b3e54fe387. This commit was first released
with HAProxy 1.6-dev1.
A similar issue was fixed in commit 7209c204bd.
This patch must be backported to HAProxy 1.6+.
If the monitor-uri starts with a slash ('/'), the matching is performed
against the request's path instead of the request's URI. It is a workaround
to let HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients are
encouraged to send absolute URIs only.
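For example, with a configuration along these lines (names and files are
illustrative), an HTTP/2 request for "https://example.com/haproxy_test" sent
in absolute-form now matches because only its path is compared:
    frontend h2-front
        mode http
        bind :8443 ssl crt site.pem alpn h2,http/1.1
        monitor-uri /haproxy_test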
This patch is not tagged as a bug, because the previous behavior matched exactly
what the doc describes. But it may surprise that HTTP/2 requests don't match the
monitor-uri.
This patch may be backported to 2.1 because HTTP/2 URIs are stored using the
absolute-form starting from this version. For previous versions, this patch
will only help explicitly absolute HTTP/1 requests (and only the HTX part,
because with the legacy HTTP code the whole URI is matched).
It should fix issue #509.
Ilya reported that the "which" utility is not that portable and is
absent from Fedora. "type -p" is not portable either, and the correct
solution appears to be "command -v", so let's use this for now, we can
change it again in the future in case of problems.
Link: https://www.mail-archive.com/haproxy@formilux.org/msg36332.html
A reg test has been added to ensure the evaluation of http-after-response
rules is functional for all kinds of responses (server, applet and internal
responses).
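For context, such a rule looks like the following (an illustrative directive,
not the test configuration itself):
    http-after-response set-header Strict-Transport-Security "max-age=31536000"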
2 reg tests have been added to ensure the HTTP return action is functional.
One reg test is about returning error files. The other one is about returning
default responses and responses based on string or file payloads.
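For context, the action under test is used like this (illustrative rules, not
the test configuration itself):
    http-request return status 200 content-type "text/plain" string "pong" if { path /ping }
    http-request return status 403 default-errorfiles if { path_beg /forbidden }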
2 reg tests are added. The first one ensures the declaration of errors in a
proxy is functional. It declares http-errors sections and declares error files
using the errorfile and errorfiles directives, both in the defaults section
and in the frontend sections. The second one ensures it is possible to use a
custom error file for an HTTP deny rule.
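The kind of configuration covered looks roughly like the sketch below (the
section name and file path are placeholders, and the exact deny syntax should
be double-checked against the documentation):
    http-errors site-errors
        errorfile 503 /etc/haproxy/errorfiles/site-503.http
    frontend fe
        errorfiles site-errors
        http-request deny status 503 errorfiles site-errors if { path /down }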
With this new reg test we ensure the strict rewriting mode of HTTP rules is
functional. The mode is tested for request and response rules. The default
mode (strict), the switch-off and the reset on a new ruleset are tested for
both.
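For context, the directive being exercised looks like this (illustrative
rules only):
    # rewrite failures are fatal by default (strict mode); relax it for this ruleset
    http-request strict-mode off
    http-response strict-mode off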
First, concat() is a converter, not a sample fetch. So use the str() sample
fetch with no string and call concat() on it. Then, the argument of the
set-uri rule must be a log-format string, so it must be inside %[] to be
evaluated.
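The corrected pattern therefore looks something like the following (the fetch
and variable names are illustrative, not the exact expression used by the
test):
    http-request set-var(txn.query) query
    http-request set-uri %[str(/new/path),concat(?,txn.query,)]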
This regtest validates all hashes that we support, on all input bytes from
0x00 to 0xFF. Those supporting avalanche are tested as well. It also tests
len(), hex() and base64(). It purposely does not enable sha2() because this
one relies on OpenSSL and there's no point in validating that OpenSSL knows
how to hash, what matters is that we can test our hashing functions in all
cases. However since the tests were written, they're still present and
commented out in case that helps.
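For context, these are the converters one would use in expressions such as
the following (illustrative only, not part of the test):
    http-request set-header X-UA-Hash %[req.fhdr(user-agent),djb2(1),hex]
    http-request set-header X-Path-CRC %[path,crc32c(1),hex]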
It may be backported to supported versions, possibly dropping a few algos
that were not supported (e.g. crc32c requires 1.9 minimum).
Note that this test will fail on crc32/djb2/sdbm/wt6 unless patches
"BUG/MINOR: stream: init variables when the list is empty" and
"BUG/MAJOR: hashes: fix the signedness of the hash inputs" are included.
This regtest tests issue #446 by starting 2 programs and checking that
they appear in the "show proc" output of the master CLI.
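A minimal configuration exercising this would look like the following
(program names and commands are placeholders):
    program dummy1
        command sleep 10
    program dummy2
        command sleep 10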
Should be backported as far as 2.0.
Implement #REQUIRE_BINARIES for vtc files.
The run-regtests.sh script will check whether the binary is available in the
environment; if not, it will disable the vtc.
Add a reg-test which tests the update of a certificate over the CLI. This
test requires socat and curl.
This commit also adds an ECDSA certificate in the ssl directory.
Instead of failing the conversion when an invalid number of bits is
given, the sha2 converter now fails with an appropriate error message
during startup.
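For context, a valid use of the converter and the kind of mistake now caught
look like this (the header name is illustrative):
    # valid: supported bit widths are 224, 256, 384 and 512
    http-request set-header X-Host-Digest %[req.hdr(host),sha2(256),hex]
    # sha2(123) used to fail the conversion at run time; it is now rejected at startup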
The sha2 converter was introduced in d437630237,
which is in 2.1 and higher.
This test checks that an HTTP message is properly processed when we fail to
add the HTX EOM block in an HTX message during parsing because the buffer is
full. Some space must be released in the buffer to make it possible. This
requires an extra pass in the H1 multiplexer. Here, we must be sure the mux is
called while there is no more incoming data.
It is a "devel" test because conditions to run the test successfully is highly
dependent on the implementation. So if it fail, it is not necessarily a bug. It
may be due of an internal change. It relies on internal HTX sample fetches.
The srv_name sample fetch can get the server name without iterating over
`core.backends["bk"].servers`.
Then we can get the Server class quickly via
`core.backends[txn.f:be_name()].servers[txn.f:srv_name()]`.
Issue #342
VTest can now enable mworker and mcli with separate flags, so let's
update the vtc files that need them. This also allows reverting the change
made with 1545a59c ("REGTESTS: make seamless-reload depend on 1.9
and above").
Since latest updates, vtest requires the master CLI when running in
master-worker mode, and this one is only available starting with 1.9.
The seamless reload test is the only one depending on this and now
fails on 1.8, so let's adjust it accordingly.
Previously an expression like:
path,field(2,/) -m found
always returned `true`.
The bug has existed since the `field` converter was introduced, that is
commit f399b0debf.
The fix should be backported to 1.6+.
This test launches an HAProxy process in master-worker mode with 'nbproc 4'.
It sends a "show info" to process 3 and verifies that the right
process replied.
This regtest depends on the support of the master CLI for VTest.