Only one Host header can be defined and some headers are automatically skipped
(Connection, Content-Length and Transfer-Encoding). In addition, a note about
the synchronisation of the Host header value and the request URI has been added.
Because in HTTP the Host header and the request authority, if any, must be
identical, we keep both synchronized. It means the right flags are set on the
HTX start-line by calling http_update_host(). There is no header when it happens,
but it is not an issue. Then, if a Host header is inserted,
http_update_authority() is called.
Note that for now, the Host header is not automatically added when required.
Connection, Content-Length and Transfer-Encoding headers are ignored for
http-check send rules. For now, keep-alive is not supported and the
"connection: close" header is always added to the request. The Content-Length
header is also added automatically.
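As an illustration, here is a minimal sketch of such a rule (the backend name,
URI, host value and body are only examples); there is no need to add the
Connection or Content-Length headers by hand since they are handled as
described above:

    backend be_app
        option httpchk
        # "connection: close" and Content-Length are appended automatically;
        # any Connection, Content-Length or Transfer-Encoding header set here
        # would be ignored anyway.
        http-check send meth GET uri /health ver HTTP/1.1 hdr Host www.example.com body "ping"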
cpu_calls, cpu_ns_avg, cpu_ns_tot, lat_ns_avg and lat_ns_tot depend on the
stream to find the current task and must check for it or they may cause a
crash if misused or used in a log-format string after commit 5f940703b3
("MINOR: log: Don't depends on a stream to process samples in log-format
string").
This must be backported as far as 1.9.
get_http_auth() expects a valid stream but this is not mentioned, though
fortunately it's always called from places which already check this.
smp_prefetch_htx() performs all the required checks and is the key to the
stability of almost all sample fetch functions, so let's make this clearer.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The unique-id sample fetch function, if called without a stream, will result
in a crash.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.7.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The http_first_req sample fetch function, if called without a stream, will
result in a crash.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The capture.req.hdr, capture.res.hdr, capture.req.method, capture.req.uri,
capture.req.ver and capture.res.ver sample fetches used to assume the
presence of a stream, which is not necessarily the case (especially after
the commit above) and would crash haproxy if incorrectly used. Let's make
sure they check for this stream.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The capture-req and capture-res converters were in this case and could
crash the process if misused.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
The documentation for check implies that without an application-level
check configured, it only enables simple TCP checks. What it
actually does is verify that the configured transport layer is available,
and that optional application-level checks succeed.
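For instance (an illustrative one-liner, the address is made up), assuming the
check reuses the server's transport settings as described above, a TLS-enabled
server is only reported UP once the TLS handshake itself succeeds:

    server srv1 10.0.0.1:443 ssl verify none check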
This reg-test tests the client auth feature of HAProxy for both the backend
and frontend sections, with a CRL list.
This reg-test uses 2 chained listeners because vtest does not handle SSL. It
tests the frontend client auth and the backend side at the same time.
It sends 3 requests: one with a correct certificate, one with an expired one
and one which was revoked. The client then checks that it gets the right
response or the right error for each of them.
Certificates, CA and CRL expire in 2050 so it should be fine for the CI.
This test could be backported as far as HAProxy 1.6.
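As a rough sketch of the chained setup (file names, addresses and ports are
illustrative only), the first listener accepts plain HTTP from vtest and
forwards it over TLS with a client certificate, while the second one enforces
client certificate verification against the CA and the CRL:

    listen clear-in
        bind :8080
        server ssl-out 127.0.0.1:8443 ssl crt client.pem verify none

    listen ssl-in
        bind :8443 ssl crt server.pem ca-file ca.crt crl-file crl.pem verify required
        server end 127.0.0.1:8000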
Mux-h1 currently heavily relies on the presence of an upper stream, even
when waiting for a new request after the previous one has finished, and it's
that upper stream that is in charge of the request and keep-alive timeouts for
now.
But since recent commit 493d9dc6ba ("MEDIUM: mux-h1: do not blindly wake
up the tasklet at end of request anymore") that assumption was broken as
the purpose of this change was to avoid initiating processing of a request
when there's no data in the buffer. The side effect is that there's no more
timeout to handle the front connection, resulting in dead front connections
stacking up as clients get kicked off the net.
This fix makes sure we always enable the timeout when there's no stream
attached to the connection. It doesn't do this for back connections since
they may purposely be left idle.
No backport is needed as this bug was introduced in 2.2-dev4.
A bug was introduced by commit 2edcd4cbd ("BUG/MINOR: checks: Avoid
incompatible cast when a binary string is parsed"). The length of the
destination buffer must be set before calling the parse_binary() function.
No backport needed.
It can sometimes be useful to measure the total time of a request as seen
from an end user, including TCP/TLS negotiation, server response time
and transfer time. "Tt" currently provides something close to that, but
it also takes client idle time into account, which is problematic for
keep-alive requests as idle time can be very long. "Ta" is also not
sufficient as it hides TCP/TLS negotiation time. To improve that, introduce
a "Tu" timer, which is "Tt" without the idle time. It roughly estimates the
time spent from the user's point of view (without DNS resolution
time), assuming network latency is the same in both directions.
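Assuming the new timer is exposed as the %Tu log-format variable, following the
usual %T* naming (the rest of the format line is only an example), it can be
logged next to the existing timers for comparison:

    log-format "%ci:%cp [%tr] %ft %b/%s %Tu/%Ta/%Tt %ST %B"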
When a tcp-check line is parsed, a warning may be reported if the keyword is
used for a frontend. The return value must be used to report it. But this info
is lost before the end of the function.
Partly fixes issue #600. No backport needed.
When an error is found during the parsing of an expect rule (tcp or http),
everything is released at the same place, at the end of the function.
Partly fixes issue #600. No backport needed.
The parse_binary() function must be called with a pointer to an integer. So
don't pass a pointer to a size_t element cast to a pointer to an integer.
Partly fixes issue #600. No backport needed.
The variable s, pointing to the check server, may be NULL when a connection is
opened. This happens for email alerts. To avoid ambiguities, its use is now more
explicit. Comments have been added in some places and tests on the variable have
been added elsewhere (useless but explicit).
Partly fixes issue #600.
If a message is not fully received from a MySQL server, depending on the
last_read value, an error must be reported or we must wait for more data. The
first if statement must not check last_read.
Partly fixes issue #600. No backport needed.
The version is partially parsed to set the flag HTX_SL_F_VER_11 on the HTX
message. But exactly 8 chars are expected. So if "HTTP/2" is specified, the flag
is not set. Thus, the version parsing has been updated to handle "HTTP/2" and
"HTTP/2.0" the same way.
For the PostgreSQL health check, there is a regex on the backend authentication
packet. It must match for the check to succeed. But there are 6 types of
authentication packets and the regex only matches the first one
(AuthenticationOK). This patch fixes the regex to match all authentication
packets.
No backport needed.
At the end of a tcp-check based health check, if there is still a connection
attached to the check, it must be closed. But it must be done before releasing
the session, because the session may still be referenced by the mux. For
instance, an h2 stream may still have a reference on the session.
No need to backport.
This bug was introduced by the commit 2444aa5b ("MEDIUM: sessions: Don't be
responsible for connections anymore."). In session_check_idle_conn(), when the
mux is destroyed, its context must be passed as argument instead of the
connection.
This is a 2.2-dev bug. No need to backport.
When a new connection is established, if an old connection is still attached to
the current check, it must be destroyed. When this happens, the old conn-stream
must be used to unsubscribe from events, not the new one.
No backport is needed.
The variable 'ret' must only be declared when HAProxy is compiled with SSL
support (more precisely, SSL_CTRL_SET_TLSEXT_HOSTNAME must be defined).
No backport needed.
Since tcp-check based health checks use the best multiplexer for a
connection, the mux-pt is no longer the only possible choice. So event
subscriptions and unsubscriptions must be done via the mux.
No backport needed.
It is now possible to match on a comma-separated list of status codes or ranges
of codes. In addition, instead of a string comparison to match the response's
status code, an integer comparison is performed. Here is an example:
http-check expect status 200,201,300-310
When default parameters are set in a request message, we get the client
connection using the session's origin. But it is not necessarily a
connection. For instance, for health checks, thanks to recent changes, it may
be a check object. At this step, the client connection may be NULL. Thus, we
must be sure to have a client connection before using it.
This patch must be backported to 2.1.
It is now possible to force the mux protocol for a tcp-check based health check
using the server keyword "check-proto". If set, this parameter overrides the
server's one.
In the same way, a "proto" parameter has been added for tcp-check and http-check
connect rules. If set, this mux protocol overrides all others for the current
connection.
First, when a server health check is initialized, it inherits the mux protocol
from the server if it is not already specified. Because there is no option to
specify the mux protocol for the checks, it is always inherited from the server
for now.
Then, if the connect rule is configured to use the server options, the mux
protocol of the check is used, if defined. Of course, if a mux protocol is
already defined for the connect rule, it takes precedence. But for now, this is
not possible.
Thus, if a server is configured to use, for instance, the h2 protocol, it is
possible to do the same for the health-checks.
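For example (a sketch only; the backend names, address and ssl options are
illustrative), the mux protocol can be forced either on the server for its
health check or on a single connect rule:

    backend be_h2
        server srv1 10.0.0.1:443 ssl verify none check check-proto h2

    backend be_tcp
        option tcp-check
        tcp-check connect port 443 ssl proto h2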
No backport needed.
The documentation about the comment argument for some tcp-check and http-check
directives was missing, as was the description of the "tcp-check comment" and
"http-check comment" directives.
This reverts commit 1979943c30ef285ed04f07ecf829514de971d9b2.
Captures in comments were only used when a tcp-check expect rule based on a
negative regex matching failed, to eventually report what was captured while it
was not expected. It is a bit far-fetched to be usable IMHO. on-error and
on-success log-format strings are far more usable. For now there are few check
sample fetches (in fact only one...). But it could be really powerful to report
info in logs.
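As a hedged illustration of that direction (the match and the message are made
up, and the option placement follows the expect rule grammar), an expect rule
can now carry its own report string instead of relying on captures in the
comment:

    tcp-check expect on-error "unexpected banner from server" rstring ^OK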
HTTP health-checks now use HTX multiplexers. So it is important to really send
the amount of outgoing data for such checks because the HTX buffers always
appear full.
No backport needed.
Since all tcp-check rulesets are globally stored, it is a problem to use a
list. For configurations with many backends, the lookups in the list may be
costly and slow down HAProxy startup. To solve this problem, tcp-check rulesets
are now stored in a tree.
When some data are scheduled to be sent, we must be sure to subscribe for sends
if nothing was sent. Because of a bug, when nothing was sent, connection errors
are checked. If no error is found, we exit, waiting for more data, without any
subscription on send events.
No need to backport.
Instead of directly accessing the ist fields, the ist API is now used to get
the length or the pointer of an ist, to release it or to duplicate it. It is
more readable this way.