It is now possible to add http-check expect rules matching HTTP header names and
values. Here is the format of these rules:
http-check expect header name [ -m <meth> ] <name> [log-format] \
[ value [ -m <meth> ] <value> [log-format] [full] ]
The name pattern (name ...) is mandatory but the value pattern (value ...) is
optional. If not specified, only the header presence is verified. <meth> is the
matching method, applied to the header name or the header value. Supported
matching methods are:
* "str" (exact match)
* "beg" (prefix match)
* "end" (suffix match)
* "sub" (substring match)
* "reg" (regex match)
If not specified, the exact matching method is used. If the "log-format" option
is used, the pattern (<name> or <value>) is evaluated as a log-format string.
This option cannot be used with the regex matching method. Finally, by default,
the header value is considered as a comma-separated list, and each part may be
tested. The "full" option may be used to test the full header line. Note that
matching is case-insensitive on header names.
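For illustration, a minimal sketch of such rules (the backend name, header
name and value below are arbitrary examples, not taken from the patch):

    backend be_example
        option httpchk GET /health
        # header must be present and its value must start with "application/"
        http-check expect header name "content-type" value -m beg "application/"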
The agent-check.vtc script fails from time to time because the 2nd CLI command
is sent too early. Waiting for the connection close in the s1 server should be
enough to be sure the server state is updated.
For an http-check ruleset, it should be allowed to set a chain of expect
rules. But an error is triggered during the post-parsing because of a wrong
test inherited from the evaluation mode used before the refactoring.
No need to backport.
The ok, error and timeout statuses of a TCP or HTTP expect rule are set to
HCHK_STATUS_UNKNOWN by default when not specified during configuration
parsing. This does not change the default statuses used for a terminal expect
rule (ok=L7OK, err=L7RSP and tout=L7TOUT), but this way it is possible to know
whether a specific status was forced by the configuration or not.
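For example, a rule may force its own statuses through the ok-status,
error-status and tout-status options of expect rules (a hedged sketch):

    http-check expect ok-status L7OK error-status L7RSP tout-status L7TOUT status 200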
It is now possible to use different matching methods to look for header names
in an HTTP message:
* The exact match. It is the default method. http_find_header() uses this
method. http_find_str_header() is an alias.
* The prefix match. It matches header names starting with a prefix.
http_find_pfx_header() must be called to use this method.
* The suffix match. It matches header names ending with a suffix.
http_find_sfx_header() must be called to use this method.
* The substring match. It matches header names containing a string.
http_find_sub_header() must be called to use this method.
* The regex match. It matches header names against a regular expression.
http_match_header() must be called to use this method.
HTTP sample fetches acting on the response can now be called from any sample
expression or log-format string in a tcp-check based ruleset. To avoid any
ambiguity, all these sample fetches are in the check scope, for instance
check.hdr() or check.cook().
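A hedged sketch of what this permits, using the on-error log-format option of
expect rules (X-Status is a hypothetical header name):

    backend be_example
        option tcp-check
        tcp-check connect
        tcp-check send GET\ /health\ HTTP/1.0\r\n
        tcp-check send \r\n
        tcp-check expect rstring (2..|3..) on-error "server said: %[check.hdr(X-Status)]"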
SSL sample fetches acting on the server connection can now be called from any
sample expression or log-format string in a tcp-check based ruleset. ssl_bc and
ssl_bc_* sample fetches are concerned.
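For instance, assuming the check is performed over SSL, something like this
should now be possible (a sketch, not taken from the patch):

    tcp-check connect ssl
    tcp-check expect string OK on-success "negotiated %[ssl_bc_protocol] with %[ssl_bc_cipher]"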
It is now possible to call be_id, be_name, srv_id and srv_name sample fetches
from any sample expression or log-format string in a tcp-check based ruleset.
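For example (a minimal sketch):

    tcp-check expect string OK on-error "check failed for %[be_name]/%[srv_name]"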
It is now possible to call check.payload(), check.payload_lv() and check.len()
sample fetches from any sample expression or log-format string in a tcp-check
based ruleset. In fact, check.payload() was already added. But instead of having
a specific function to handle this sample fetch, we now use the same one as
req.payload().
These sample fetches act on the check input buffer, containing data received
from the server. So they should be used as part of or after an expect rule,
but before any send rule, because the input buffer is cleared at that stage.
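A sketch of such a use, assuming check.payload() keeps the (offset, length)
arguments of req.payload():

    tcp-check connect
    tcp-check send STATUS\r\n
    tcp-check expect rstring ^UP on-error "unexpected reply: %[check.payload(0,16)]"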
Some HTTP sample fetches will be accessible from the context of an http-check
health check. Thus, the prefetch function responsible for returning the HTX
message has been updated to handle a check, in addition to a channel. Both
cannot be used at the same time, so there is no ambiguity.
Binary sample data can be converted, implicitly or explicitly, to a string by
cutting the buffer at the first null byte.
I guess this patch should be backported to all stable versions.
srv_cleanup_connections() is supposed to be static, so mark it as such.
This patch should be backported where commit 6318d33ce6
("BUG/MEDIUM: connections: force connections cleanup on server changes")
will be backported, that is to say v1.9 to v2.1.
Fixes: 6318d33ce6 ("BUG/MEDIUM: connections: force connections cleanup
on server changes")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
After we call SSL_SESSION_get_id(), the length of the id in bytes is
stored in "len", which was never checked. This could cause unexpected
behavior when using the "ssl_fc_session_id" or "ssl_bc_session_id"
fetchers (e.g. the result can be an empty value).
The issue was introduced with commit 105599c ("BUG/MEDIUM: ssl: fix
several bad pointer aliases in a few sample fetch functions").
This patch must be backported to 2.1, 2.0, and 1.9.
When only request headers are parsed, the host header should not be compared to
the request authority because no start-line was parsed. Thus there is no
authority.
Till now this bug was hidden because this parsing mode was only used for the
response in the FCGI multiplexer. Since the HTTP checks refactoring, the request
headers may now also be parsed without the start-line.
This patch fixes the issue #610. It must be backported to 2.1.
I've been trying to understand a change of behaviour between v2.2dev5 and
v2.2dev6. Indeed, our probe regularly tests adding and removing servers on a
given backend, such as:
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 263 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 0.0.0.0 0 1 256 256 0 15 3 0 14 0 0 0 - 0 -
-> curl on the corresponding frontend: reply from server:31255
# echo "set server be_foo/srv1 addr 10.236.139.34 port 31257" | sudo socat stdio /var/lib/haproxy/stats
IP changed from '0.0.0.0' to '10.236.139.34', port changed from '0' to '31257' by 'stats socket command'
# echo "set server be_foo/srv1 weight 256" | sudo socat stdio /var/lib/haproxy/stats
# echo "set server be_foo/srv1 check-port 8500" | sudo socat stdio /var/lib/haproxy/stats
health check port updated.
# echo "set server be_foo/srv1 state ready" | sudo socat stdio /var/lib/haproxy/stats
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 105 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 10.236.139.34 2 0 256 256 2319 15 3 2 6 0 0 0 - 31257 -
-> curl on the corresponding frontend: reply for server:31257
(notice the difference of weight)
# echo "set server be_foo/srv1 state maint" | sudo socat stdio /var/lib/haproxy/stats
# echo "set server be_foo/srv1 addr 0.0.0.0 port 0" | sudo socat stdio /var/lib/haproxy/stats
IP changed from '10.236.139.34' to '0.0.0.0', port changed from '31257' to '0' by 'stats socket command'
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 263 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 0.0.0.0 0 1 256 256 0 15 3 0 14 0 0 0 - 0 -
-> curl on the corresponding frontend: reply from server:31255
# echo "set server be_foo/srv1 addr 10.236.139.34 port 31256" | sudo socat stdio /var/lib/haproxy/stats
IP changed from '0.0.0.0' to '10.236.139.34', port changed from '0' to '31256' by 'stats socket command'
# echo "set server be_foo/srv1 weight 256" | sudo socat stdio /var/lib/haproxy/stats
# echo "set server be_foo/srv1 check-port 8500" | sudo socat stdio /var/lib/haproxy/stats
health check port updated.
# echo "set server be_foo/srv1 state ready" | sudo socat stdio /var/lib/haproxy/stats
# echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
113 be_foo 1 srv0 10.236.139.34 2 0 1 1 105 15 3 4 6 0 0 0 - 31255 -
113 be_foo 2 srv1 10.236.139.34 2 0 256 256 2319 15 3 2 6 0 0 0 - 31256 -
-> curl on the corresponding frontend: reply from server:31257 (!)
Here we would indeed expect to get an answer from server:31256. The issue
is highly linked to the usage of `pool-purge-delay`, with a value which
is higher than the duration of the test, 10s in our case.
A git bisect between dev5 and dev6 points to commit
079cb9af22 ("MEDIUM: connections: Revamp the way idle connections are killed")
as the origin of this new behaviour.
So if I understand the latter correctly, it seems it was more a matter of
chance that we did not see the issue earlier.
My patch proposes to force-clean idle connections in the following two
cases:
- we set a (still running) server to maintenance
- we change the ip/port of a server
This commit should be backported to 2.1, 2.0, and 1.9.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
When a stream is detached from its connection, we try to move the connection
into an idle list to keep it open, the session one or the server one. But this
must only be done if there is no connection error and if we want to keep the
connection open. This last statement is true if the FCGI_CF_KEEP_CONN flag is
set. But the test is inverted at this stage.
This patch must be backported to 2.1.
The fcgi_release() function is responsible for releasing an FCGI connection.
But the release of the connection itself is missing.
This patch must be backported to 2.1.
When the last stream is detached from an FCGI connection, if the server
doesn't add the connection to its idle list, the connection is destroyed. Thus
it is important to exit immediately from the detach function. A return
statement is missing here.
This bug was introduced in the commit 2444aa5b6 ("MEDIUM: sessions: Don't be
responsible for connections anymore.").
It is a 2.2-dev bug. No need to backport.
Now we very rarely catch spinning streams, and whenever we catch one it
seems a filter is involved, but we currently report no info about them.
Let's print the list of enabled filters on the stream with such a crash
to help with the reports. A typical output will now look like this:
[ALERT] 121/165908 (1110) : A bogus STREAM [0x7fcaf4016a60] is spinning at 2 calls per second and refuses to die, aborting now! Please report this error to developers [strm=0x7fcaf4016a60 src=127.0.0.1 fe=l1 be=l1 dst=<CACHE> rqf=6dc42000 rqa=48000 rpf=a0040223 rpa=24000000 sif=EST,10008 sib=DIS,80110 af=(nil),0 csf=0x7fcaf4023c00,10c000 ab=0x7fcaf40235f0,4 csb=(nil),0 cof=0x7fcaf4016610,1300:H1(0x7fcaf4016840)/RAW((nil))/tcpv4(29) cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0) filters={0x7fcaf4016fb0="cache store filter", 0x7fcaf4017080="compression filter"}]
This may be backported to 2.0.
Given that a "count" value of 32M was seen in _shctx_wait4lock(), it
is very important to prevent this from happening again. It's absolutely
essential to prevent the value from growing unbounded because with an
increase of the number of threads, the number of successive failed
attempts will necessarily grow.
Instead, now we're scanning all 2^p-1 values from 3 to 255 and are bounding
the count to 255 so that in the worst case each thread tries an
xchg every 255 failed read attempts. That's one every 4 on average per
thread when there are 64 threads, which corresponds to the initial count
of 4 for the first attempt so it seems like a reasonable value to keep a
low latency.
The bug was introduced with the shctx entries in 1.5 so the fix must
be backported to all versions. Before 1.8 the function was called
_shared_context_wait4lock() and was in shctx.c.
Jérôme reported an amazing crash in the spinlock version of
_shctx_wait4lock() with an extremely high <count> value of 32M! The
root cause is that the function cannot deal with contention on the lock
at all because it forgets to check if the lock's value has changed! As
such, every time it's called due to a contention, it waits twice as
long before trying again and lets the caller check for the contention
by itself.
The correct thing to do is to compare the value again at each loop.
This way it makes sure to mostly perform read accesses on the shared
cache line without writing too often, and to be ready fast enough to
try to grab the lock. And we must not increase the count on success
either!
Unfortunately I'd have expected to see a performance boost on the cache
with this but there was absolutely no change, so it's very likely that
these issues only happen once in a while and are sufficient to derail
the process when they strike, but not to have a permanent performance
impact.
The bug was introduced with the shctx entries in 1.5 so the fix must
be backported to all versions. Before 1.8 the function was called
_shared_context_wait4lock() and was in shctx.c.
I changed my mind twice on this one and pushed after the last test with
threads disabled, without re-enabling long long, causing this rightful
build warning.
This needs to be backported if the previous commit ff64d3b027 ("MINOR:
threads: export the POSIX thread ID in panic dumps") is backported as
well.
It is very difficult to map a panic dump against a gdb thread dump
because the thread numbers do not match. However gdb provides the
pthread ID but this one is supposed to be opaque and not to be cast
to a scalar.
This patch provides a function, ha_get_pthread_id(), which retrieves the
pthread ID of the indicated thread and casts it to an unsigned long long so
as to lose the least possible amount of information from it. This is done
cleanly using a union to maintain alignment, so that as long as these IDs are
stored on 1..8 bytes they will be properly reported. This ID is now presented
in the panic dumps, making it possible to map these threads. When threads are
disabled, zero is returned. For example, this is a panic dump:
Thread 1 is about to kill the process.
*>Thread 1 : id=0x7fe92b825180 act=0 glob=0 wq=1 rq=0 tl=0 tlsz=0 rqsz=0
stuck=1 prof=0 harmless=0 wantrdv=0
cpu_ns: poll=5119122 now=2009446995 diff=2004327873
curr_task=0xc99bf0 (task) calls=4 last=0
fct=0x592440(task_run_applet) ctx=0xca9c50(<CLI>)
strm=0xc996a0 src=unix fe=GLOBAL be=GLOBAL dst=<CLI>
rqf=848202 rqa=0 rpf=80048202 rpa=0 sif=EST,200008 sib=EST,204018
af=(nil),0 csf=0xc9ba40,8200
ab=0xca9c50,4 csb=(nil),0
cof=0xbf0e50,1300:PASS(0xc9cee0)/RAW((nil))/unix_stream(20)
cob=(nil),0:NONE((nil))/NONE((nil))/NONE(0)
call trace(20):
| 0x59e4cf [48 83 c4 10 5b 5d 41 5c]: wdt_handler+0xff/0x10c
| 0x7fe92c170690 [48 c7 c0 0f 00 00 00 0f]: libpthread:+0x13690
| 0x7ffce29519d9 [48 c1 e2 20 48 09 d0 48]: linux-vdso:+0x9d9
| 0x7ffce2951d54 [eb d9 f3 90 e9 1c ff ff]: linux-vdso:__vdso_gettimeofday+0x104/0x133
| 0x57b484 [48 89 e6 48 8d 7c 24 10]: main+0x157114
| 0x50ee6a [85 c0 75 76 48 8b 55 38]: main+0xeaafa
| 0x50f69c [48 63 54 24 20 85 c0 0f]: main+0xeb32c
| 0x59252c [48 c7 c6 d8 ff ff ff 44]: task_run_applet+0xec/0x88c
Thread 2 : id=0x7fe92b6e6700 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=0 prof=0 harmless=1 wantrdv=0
cpu_ns: poll=786738 now=1086955 diff=300217
curr_task=0
Thread 3 : id=0x7fe92aee5700 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=0 prof=0 harmless=1 wantrdv=0
cpu_ns: poll=828056 now=1129738 diff=301682
curr_task=0
Thread 4 : id=0x7fe92a6e4700 act=0 glob=0 wq=0 rq=0 tl=0 tlsz=0 rqsz=0
stuck=0 prof=0 harmless=1 wantrdv=0
cpu_ns: poll=818900 now=1153551 diff=334651
curr_task=0
And this is the gdb output:
(gdb) info thr
Id Target Id Frame
* 1 Thread 0x7fe92b825180 (LWP 15234) 0x00007fe92ba81d6b in raise () from /lib64/libc.so.6
2 Thread 0x7fe92b6e6700 (LWP 15235) 0x00007fe92bb56a56 in epoll_wait () from /lib64/libc.so.6
3 Thread 0x7fe92a6e4700 (LWP 15237) 0x00007fe92bb56a56 in epoll_wait () from /lib64/libc.so.6
4 Thread 0x7fe92aee5700 (LWP 15236) 0x00007fe92bb56a56 in epoll_wait () from /lib64/libc.so.6
We can clearly see that while threads 1 and 2 are the same, gdb's
threads 3 and 4 respectively are haproxy's threads 4 and 3.
This may be backported to 2.0 as it removes some confusion in github issues.
We tried hard to make sure we report threads as not stuck at various
crucial places, but one of them is special, it's the listener_accept()
function. The reason it is special is because it will loop a certain
number of times (default: 64) accepting incoming connections, allocating
resources, dispatching them to other threads or running L4 rules on them,
and while all of this is supposed to be extremely fast, when the machine
slows down or runs low on memory, the expectedly small delays in malloc()
caused by contention with other threads can quickly accumulate and suddenly
become critical to the point of triggering the watchdog. Furthermore, it
is technically possible to trigger this by pure configuration by setting
a huge tune.maxaccept value, which should not be possible.
Given that each operation isn't related to the same task but to a different
one each time, it is appropriate to mark the thread as not stuck each time
it accepts new work that possibly gets dispatched to other threads which
execute it.
This looks like it could be a good explanation for the problem reported in
issue #388.
This fix must be backported to 2.0.
Building without threads now shows this warning:
src/ssl_sock.c: In function 'cli_io_handler_commit_cert':
src/ssl_sock.c:12121:24: warning: unused variable 'bind_conf' [-Wunused-variable]
struct bind_conf *bind_conf = ckchi->bind_conf;
^~~~~~~~~
This is because the variable is needed only to unlock the structure, and
the unlock operation does nothing in this case. Let's mark the variable
__maybe_unused for this, but it would be convenient in the long term if
we could make the thread macros pretend they consume the argument so that
this remains less visible outside.
No backport is needed.
Improve the test by removing the curl command and using the same proxy
chaining technique as in commit 3ed722f ("REGTEST: ssl: remove curl from
the "add ssl crt-list" test").
A 3rd request was added which must fail, to ensure that the SNI was
effectively removed from HAProxy.
This patch also adds timeouts in the default section, logs on stderr and
fixes some indentation issues.
Using curl for SSL tests can be a problem if it wasn't compiled with the
right SSL library and if it didn't share any cipher with HAProxy. To
have more robust tests we now use HAProxy as an SSL client, so we are
sure that the client and the server share the same SSL requirements.
This patch also adds timeouts in the default section, logs on stderr and
fixes some indentation issues.
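The chaining technique looks roughly like this (an illustrative sketch with
made-up names and paths, not the literal reg-test):

    listen client-side
        bind "fd@${clearlst}"
        server s1 "${tmpdir}/ssl.sock" ssl verify none

    listen server-side
        bind "${tmpdir}/ssl.sock" ssl crt "${testdir}/common.pem"
        server s2 ${s1_addr}:${s1_port}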
Only one Host header can be defined and some headers are automatically skipped
(Connection, Content-Length and Transfer-Encoding). In addition, a note about
the synchronisation of the Host header value and the request uri has been added.
Because in HTTP the Host header and the request authority, if any, must be
identical, we keep both synchronized. It means the right flags are set on the
HTX start-line when calling http_update_host(). There is no header when this
happens, but it is not an issue. Then, if a Host header is inserted,
http_update_authority() is called.
Note that for now, the host header is not automatically added when required.
Connection, content-length and transfer-encoding headers are ignored for
http-check send rules. For now, keep-alive is not supported, so a
"Connection: close" header is always added to the request, and the
Content-Length header is automatically added.
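A hedged sketch of such a send rule (method, URI and header values are
arbitrary examples):

    backend be_example
        option httpchk
        http-check send meth GET uri /health ver HTTP/1.1 hdr Host www.example.com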
cpu_calls, cpu_ns_avg, cpu_ns_tot, lat_ns_avg and lat_ns_tot depend on the
stream to find the current task and must check for it or they may cause a
crash if misused or used in a log-format string after commit 5f940703b3
("MINOR: log: Don't depends on a stream to process samples in log-format
string").
This must be backported as far as 1.9.
get_http_auth() expects a valid stream but this is not mentioned, though
fortunately it's always called from places which already check this.
smp_prefetch_htx() performs all the required checks and is the key to the
stability of almost all sample fetch functions, so let's make this clearer.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The unique-id sample fetch function, if called without a stream, will result
in a crash.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.7.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The http_first_req sample fetch function, if called without a stream, will
result in a crash.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The capture.req.hdr, capture.res.hdr, capture.req.method, capture.req.uri,
capture.req.ver and capture.res.ver sample fetches used to assume the
presence of a stream, which is not necessarily the case (especially after
the commit above) and would crash haproxy if incorrectly used. Let's make
sure they check for this stream.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
Since commit 5f940703b3 ("MINOR: log: Don't depends on a stream to process
samples in log-format string") it has become quite obvious that a few sample
fetch functions and converters were still heavily dependent on the presence
of a stream without testing for it.
The capture-req and capture-res converters were in this case and could
crash the process if misused.
This fix adds a check for the stream's existence, and should be backported
to all stable versions up to 1.6.
The documentation for check implies that without an application
level check configured, it only enables simple tcp checks. What it
actually does is verify that the configured transport layer is available,
and that optional application level checks succeed.
This reg-test tests the client auth feature of HAProxy for both the
backend and frontend section with a CRL list.
This reg-test uses 2 chained listeners because vtest does not handle SSL. It
tests the frontend client auth and the backend side at the same time.
It sends 3 requests: one with a correct certificate, one with an expired one
and one which was revoked. The client then checks that it received the right
response, with the right error.
Certificates, CA and CRL expire in 2050 so it should be fine for the CI.
This test could be backported as far as HAProxy 1.6.
Mux-h1 currently heavily relies on the presence of an upper stream, even
when waiting for a new request after one is being finished, and it's that
upper stream that's in charge of request and keep-alive timeouts for now.
But since recent commit 493d9dc6ba ("MEDIUM: mux-h1: do not blindly wake
up the tasklet at end of request anymore") that assumption was broken as
the purpose of this change was to avoid initiating processing of a request
when there's no data in the buffer. The side effect is that there's no more
timeout to handle the front connection, resulting in dead front connections
stacking up as clients get kicked off the net.
This fix makes sure we always enable the timeout when there's no stream
attached to the connection. It doesn't do this for back connections since
they may purposely be left idle.
No backport is needed as this bug was introduced in 2.2-dev4.
A bug was introduced in the commit 2edcd4cbd ("BUG/MINOR: checks: Avoid
incompatible cast when a binary string is parsed"). The length of the
destination buffer must be set before calling the parse_binary() function.
No backport needed.
It can be sometimes useful to measure the total time of a request as seen
from an end user, including TCP/TLS negotiation, server response time and
transfer time. "Tt" currently provides something close to that, but it also
takes client idle time into account, which is problematic for keep-alive
requests as idle time can be very long. "Ta" is not sufficient either, as it
hides TCP/TLS negotiation time. To improve that, introduce a "Tu" timer,
which excludes idle time but includes everything else. It roughly estimates
the time spent from the user's point of view (without DNS resolution time),
assuming network latency is the same in both directions.
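As with the other timers, it would typically be consumed through the log
format (a sketch, assuming the usual %Tu variable naming for this timer):

    log-format "%ci:%cp [%tr] %ft %b/%s %Tu %ST %B"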
When a tcp-check line is parsed, a warning may be reported if the keyword is
used for a frontend. The return value must be used to report it. But this info
is lost before the end of the function.
Partly fixes issue #600. No backport needed.
When an error is found during the parsing of an expect rule (tcp or http),
everything is released at the same place, at the end of the function.
Partly fixes issue #600. No backport needed.