The fc_conn_err and fc_conn_err_str sample fetches give information
about the problem that made the connection fail. This information was
previously only available in the error log messages; thanks to these
fetches, it can now be included in a custom log format. The log strings
all come from the conn_err_code_str function.
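As an illustration, such a fetch can be referenced directly from a
log-format string; a minimal sketch (the surrounding format variables are
only examples):
    frontend fe
        bind :8080
        log global
        log-format "%ci:%cp [%t] %ft fc_err=%[fc_conn_err] fc_err_str=%[fc_conn_err_str]"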
The CO_ER_SSL_EARLY_FAILED and CO_ER_CIP_TIMEOUT connection error codes
were missing from the conn_err_code_str switch which converts the error
codes into strings.
This patch can be backported to all stable branches.
Clean up the mworker_cli_proxy_create() function by removing the manual
allocation and initialization of the proxy and replacing them with
alloc_new_proxy(). Do the same with the free_proxy() function.
This patch also moves the insertion to the end of the function.
Disable the output of the statistics of internal proxies (PR_CAP_INT),
so we don't rely only on px->uuid > 0. This will allow hiding the
internal proxies more cleanly in the stats.
This patch renames the proxy capability "LUA" to "INT" so it can be
used for any internal proxy.
Every proxy that is not user-defined should use this flag.
This part was fixed several times since commit aade4edc1 ("BUG/MEDIUM:
mux-h2: Don't handle pending read0 too early on streams") and there are
still some cases where a read0 event may be ignored because a partial frame
inhibits the event.
Here, we must take care to set the H2_CF_END_REACHED flag if a read0 was
received while only a partial frame header has been received or if the
padding length is missing.
To ease partial frame detection, H2_CF_DEM_SHORT_READ flag is introduced. It
is systematically removed when some data are received and is set when a
partial frame is found or when dbuf buffer is empty. At the end of the
demux, if the connection must be closed ASAP or if data are missing to move
forward, we may acknowledge the pending read0 event, if any. For now,
H2_CF_DEM_SHORT_READ is not part of H2_CF_DEM_BLOCK_ANY mask.
This patch should fix the issue #1328. It must be backported as far as 2.0.
Splicing does not work anymore because the H1 connection is not switched to
splice mode when the rcv_pipe() callback function is called. It is important
to set the H1C_F_WANT_SPLICE flag to inhibit data receipt via the buffer
API. Otherwise, because there are always data in the buffer, it is not
possible to use kernel splicing.
This bug was introduced by commit 2b861bf72 ("MINOR: mux-h1: clean up
conditions to enabled and disabled splicing").
The patch must be backported to 2.4.
If a connection is closed during the preface while no data were received and
the dontlognull option is set, no log message must be emitted. However, this
is still handled as a protocol error; only the log is omitted.
This patch should fix the issue #1336 for H2 sessions. It must be backported
to 2.4 and 2.3 at least, and probably as far as 2.0.
If an H1 connection is closed while no data were received and the dontlognull
option is set, no log message must be emitted. Because the H1 multiplexer
handles early errors, it must take care to obey this option. This applies to
400-Bad-Request, 408-Request-Time-out and 501-Not-Implemented
responses. 500-Internal-Server-Error responses are still logged.
This patch should fix the issue #1336 for H1 sessions. It must be backported
to 2.4.
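For reference, the option is simply enabled in a defaults or frontend
section; a minimal sketch:
    defaults
        mode http
        log global
        option httplog
        option dontlognull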
When doing a reload with a configuration which requires master-worker
mode, the configuration check will fail because the check is not done
with -W/-Ws.
Example:
wla@kikyo:~/haproxy$ ./haproxy -Ws -c -f haproxy.cfg
Configuration file is valid
wla@kikyo:~/haproxy$ ./haproxy -c -f haproxy.cfg
[NOTICE] (13153) : haproxy version is 2.5-dev2-4567b3-16
[NOTICE] (13153) : path to executable is ./haproxy
[ALERT] (13153) : config : Can't use a 'program' section without master worker mode.
[ALERT] (13153) : config : Fatal errors found in configuration.
This patch fixes the issue by adding -Ws to the check command line.
Must be backported to all stable branches. (The file was previously
located in contrib/systemd/haproxy.service.in.)
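For illustration, the resulting check line in the unit template should look
roughly like this (assuming the template's usual @SBINDIR@, $CONFIG and
$EXTRAOPTS placeholders):
    ExecStartPre=@SBINDIR@/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS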
Use the non-checked functions to retrieve the listener/server via
obj_type. This is safe because a previous obj_type function ensures that
the type is well known and the instance is not NULL.
Incidentally, this should silence the Coverity report from GitHub issue
#1335 which warns about a possible NULL dereference.
When we evaluate a DNS response item, it may be necessary to look for a
server with a hostname matching the item target into the named servers
tree. To do so, the item target is transformed to a lowercase string. It
must be a null-terminated string. Thus we must explicitly set the trailing
'\0' character.
For a specific resolution, the named servers tree contains all servers using
this resolution with a hostname loaded from a state file. Because of this
bug, the same entry may be duplicated because we are unable to find the right
server, thus assigning the item to a free server slot.
This patch should fix the issue #1333. It must be backported as far as 2.2.
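A generic sketch of the kind of copy involved (not the actual resolver code)
shows why the explicit terminator matters:
    #include <ctype.h>
    #include <stddef.h>

    /* Copy and lowercase a length-delimited name into dst. Without the final
     * '\0', later lookups in the named servers tree may compare trailing
     * garbage and miss the matching server. */
    static void copy_lowercase(char *dst, const char *src, size_t len)
    {
            size_t i;

            for (i = 0; i < len; i++)
                    dst[i] = tolower((unsigned char)src[i]);
            dst[len] = '\0';
    }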
Commit 048368ef6 ("MINOR: deinit: always deinit the init_mutex on
failed initialization") added the missing unlock but forgot to
condition it on USE_THREAD, resulting in a build failure. No
backport is needed.
This addresses oss-fuzz issue 36426.
A config like the below fails to validate because of a bogus test:
    backend b1
        tcp-check connect port 1234
        option tcp-check
        server s1 1.2.3.4 check

    [ALERT] (18887) : config : config: proxy 'b1': server 's1' has neither
    service port nor check port, and a tcp_check rule 'connect' with no port
    information.
A || instead of a && only validates the connect rule when both the
address *and* the port are set. A workaround is to set the rule like
this:
    tcp-check connect addr 0:1234 port 1234
This needs to be backported as far as 2.2 (2.0 is OK).
Agent stats were lost during the stats refactoring performed in 2.4 to
simplify the Prometheus exporter. The stats_fill_sv_stats() function must
fill the ST_F_AGENT_* and ST_F_LAST_AGT stats.
This patch should fix the issue #1331. It must be backported to 2.4.
Some SSL samples cause a segfault when the stream is not instantiated,
for example during an invalid HTTP request. A new check is added to
prevent dereferencing the stream when it is NULL.
This is the list of the affected samples:
- ssl_s_chain_der
- ssl_s_der
- ssl_s_i_dn
- ssl_s_key_alg
- ssl_s_notafter
- ssl_s_notbefore
- ssl_s_s_dn
- ssl_s_serial
- ssl_s_sha1
- ssl_s_sig_alg
- ssl_s_version
This bug can be reproduced easily by using one of these samples in a
log-format string. Emit an invalid HTTP request with an HTTP client to
trigger the crash.
This bug has been reported in redmine issue 3913.
This must be backported up to 2.2.
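A minimal reproduction sketch (addresses, names and logging setup are
illustrative): reference one of these fetches in a log-format string, then
send a malformed request to the frontend:
    frontend fe
        mode http
        bind :8080
        log stdout format raw local0
        log-format "srv_ssl_ver=%[ssl_s_version]"
        default_backend be

    backend be
        mode http
        server s1 192.0.2.1:443 ssl verify none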
This undocumented variable is only for internal use, and its sole
presence affects the process' behavior, as shown in bug #1324. It must
not be exported to workers, external checks, nor programs. Let's unset
it before forking programs and workers.
This should be backported as far as 1.8. The worker code might differ
a bit before 2.5 due to the recent removal of multi-process support.
The master-worker code registers an exit handler to deal with configuration
issues during reload, leading to a restart of the master process in wait
mode. But it shouldn't do that when it's expected that the program stops
during config parsing or condition checks, as the reload operation is
unexpectedly called and results in abnormal behavior and even crashes:
$ HAPROXY_MWORKER_REEXEC=1 ./haproxy -W -c -f /dev/null
Configuration file is valid
[NOTICE] (18418) : haproxy version is 2.5-dev2-ee2420-6
[NOTICE] (18418) : path to executable is ./haproxy
[WARNING] (18418) : config : Reexecuting Master process in waitpid mode
Segmentation fault
$ HAPROXY_MWORKER_REEXEC=1 ./haproxy -W -cc 1
[NOTICE] (18412) : haproxy version is 2.5-dev2-ee2420-6
[NOTICE] (18412) : path to executable is ./haproxy
[WARNING] (18412) : config : Reexecuting Master process in waitpid mode
[WARNING] (18412) : config : Reexecuting Master process
Note that the presence of this variable happens by accident when haproxy
is called from within its own programs (see issue #1324), but this should
be the object of a separate fix.
This patch fixes this by preventing the atexit registration in such
situations. This should be backported as far as 1.8. MODE_CHECK_CONDITION
has to be dropped for versions prior to 2.5.
Oss-fuzz reports in issue 36328 that we can recurse too far by passing
extremely deep expressions to the ".if" parser. I thought we were still
limited to the 1024 chars per line, that would be highly sufficient, but
we don't have any limit now :-/
Let's just pass a maximum recursion counter to the recursive parsers.
It's decremented for each call and the expression fails if it reaches
zero. On the most complex paths it can add 3 levels per parenthesis,
so with a limit of 1024, that's roughly 343 nested sub-expressions that
are supported in the worst case. That's more than sufficient, for just
a few kB of RAM.
No backport is needed.
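A generic sketch of the approach (not the actual cfgcond code):
    /* Each recursive parsing function receives the remaining depth budget,
     * passes a decremented value to its callees, and fails once the budget
     * reaches zero instead of exhausting the stack. */
    static int parse_expr(const char **text, int maxdepth)
    {
            if (maxdepth <= 0)
                    return -1;              /* too many nested expressions */

            if (**text == '(') {
                    (*text)++;              /* consume '(' */
                    if (parse_expr(text, maxdepth - 1) < 0)
                            return -1;
                    if (**text != ')')
                            return -1;      /* unbalanced parenthesis */
                    (*text)++;              /* consume ')' */
                    return 0;
            }

            /* a real parser would handle terms, negation and operators here */
            while (**text && **text != ')')
                    (*text)++;
            return 0;
    }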
The init_mutex was not unlocked in case an error is encountered during
a thread initialization, and the polling loop was aborted during startup.
In practice it does not have any observable effect since an explicit
exit() is placed there, but it could confuse some debugging tools or
some static analysers, so let's release it as expected.
This addresses issue #1326.
Since the last change on HTTP analysers (252412316 "MEDIUM: proxy: remove
long-broken 'option http_proxy'"), http_process_request() may only return
internal errors on failures. Thus the label used to handle bad requests may
be removed.
This patch should fix the issue #1330.
This option had always been broken in HTX, which means that the first
breakage appeared in 1.9, that it was broken by default in 2.0 and that
no workaround existed starting with 2.1. The way this option works is
particularly unfit for the rest of the configuration and for the internal
architecture. It had some uses when it was introduced 14 years ago but
nowadays it's possible to do much better and more reliable using a
set of "http-request set-dst" and "http-request set-uri" rules, which
additionally are compatible with DNS resolution (via do-resolve) and
are not exclusive to normal load balancing. The "option-http_proxy"
example config file was updated to reflect this.
The option is still parsed so that an error message gives hints about
what to look for.
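As a hedged sketch of the replacement approach (names are illustrative, a
resolvers section called "mydns" is assumed, port handling is left out, and
this is not the content of the bundled example file):
    frontend fe_proxy
        mode http
        bind :3128
        http-request do-resolve(txn.dstip,mydns) hdr(host),lower
        http-request set-dst var(txn.dstip)
        http-request set-uri %[pathq]
        default_backend be_out

    backend be_out
        mode http
        # 0.0.0.0:0 means "use the address already set on the stream"
        server out 0.0.0.0:0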
The cfg_free_cond_{term,and,expr}() functions used to take a pointer to
the pointer to be freed in order to replace it with a NULL once done.
But this doesn't cope well with freeing lists as it would require
recursion which the current code tried to avoid.
Let's just change the API to free the area and let the caller set the NULL.
This leak was reported by oss-fuzz (issue 36265).
While we do free the array containing the arguments, we do not free
allocated ones. Most of them are unresolved, but strings are allocated
and have to be freed as well. Note that for the sake of not breaking
the args resolution list that might have been set, we still refrain
from doing this if a resolution was already programmed, but for most
common cases (including the ones that can be found in config conditions
and at run time) we're safe.
This may be backported to stable branches, but it relies on the new
free_args() function that was introduced by commit ab213a5b6 ("MINOR:
arg: add a free_args() function to free an args array"), and which is
likely safe to backport as well.
This leak was reported by oss-fuzz (issue 36265).
Released version 2.5-dev2 with the following main changes :
- BUILD/MEDIUM: tcp: set-mark support for OpenBSD
- DOC: config: use CREATE USER for mysql-check
- BUG/MINOR: stick-table: fix several printf sign errors dumping tables
- BUG/MINOR: peers: fix data_type bit computation more than 32 data_types
- MINOR: stick-table: make skttable_data_cast to use only std types
- MEDIUM: stick-table: handle arrays of standard types into stick-tables
- MEDIUM: peers: handle arrays of std types in peers protocol
- DOC: stick-table: add missing documentation about gpt0 stored type
- MEDIUM: stick-table: add the new array of gpt data_type
- MEDIUM: stick-table: make the use of 'gpt' excluding the use of 'gpt0'
- MEDIUM: stick-table: add the new arrays of gpc and gpc_rate
- MEDIUM: stick-table: make the use of 'gpc' excluding the use of 'gpc0/1'
- BUG/MEDIUM: sock: make sure to never miss early connection failures
- BUG/MINOR: cli: fix server name output in "show fd"
- Revert "MINOR: tcp-act: Add set-src/set-src-port for "tcp-request content" rules"
- MEDIUM: stats: include disabled proxies that hold active sessions to stats
- BUILD: stick-table: shut up invalid "uninitialized" warning in gcc 8.3
- MINOR: http: implement http_get_scheme
- MEDIUM: http: implement scheme-based normalization
- MEDIUM: h1-htx: apply scheme-based normalization on h1 requests
- MEDIUM: h2: apply scheme-based normalization on h2 requests
- REGTESTS: add http scheme-based normalization test
- BUILD: http_htx: fix ci compilation error with isdigit for Windows
- MINOR: http: implement http uri parser
- MINOR: http: use http uri parser for scheme
- MINOR: http: use http uri parser for authority
- REORG: http_ana: split conditions for monitor-uri in wait for request
- MINOR: http: use http uri parser for path
- BUG/MEDIUM: http_ana: fix crash for http_proxy mode during uri rewrite
- MINOR: mux_h2: define config to disable h2 websocket support
- CLEANUP: applet: remove unused thread_mask
- BUG/MINOR: ssl: Default-server configuration ignored by server
- BUILD: add detection of missing important CFLAGS
- BUILD: lua: silence a build warning with TCC
- MINOR: srv: extract tracking server config function
- MINOR: srv: do not allow to track a dynamic server
- MEDIUM: server: support track keyword for dynamic servers
- REGTESTS: test track support for dynamic servers
- MINOR: init: verify that there is a single word on "-cc"
- MINOR: init: make -cc support environment variables expansion
- MINOR: arg: add a free_args() function to free an args array
- CLEANUP: config: use free_args() to release args array in cfg_eval_condition()
- CLEANUP: hlua: use free_args() to release args arrays
- REORG: config: move the condition preprocessing code to its own file
- MINOR: cfgcond: start to split the condition parser to introduce terms
- MEDIUM: cfgcond: report invalid trailing chars after expressions
- MINOR: cfgcond: remerge all arguments into a single line
- MINOR: cfgcond: support negating conditional expressions
- MINOR: cfgcond: make the conditional term parser automatically allocate nodes
- MINOR: cfgcond: insert an expression between the condition and the term
- MINOR: cfgcond: support terms made of parenthesis around expressions
- REGTEST: make check_condition.vtc fail as soon as possible
- REGTESTS: add more complex check conditions to check_conditions.vtc
- BUG/MEDIUM: init: restore behavior of command-line "-m" for memory limitation
The removal of the shared inter-process cache in commit 6fd0450b4
("CLEANUP: shctx: remove the different inter-process locking techniques")
accidentally removed the enforcement of rlimit_memmax_all which
corresponds to what is passed to the command-line "-m" argument.
Let's restore it.
Thanks to @nafets227 for spotting this. This fixes github issue #1319.
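For example (a sketch):
    $ ./haproxy -f haproxy.cfg -m 200
again limits the total allocatable memory to about 200 megabytes.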
Now that we support logic expressions, variables and parenthesis, let's
add a few more tests to check_conditions.vtc. The tests are conditioned
by the version being at least 2.5-dev2 so that it will not cause failures
during a possible later bisect session or if backported.
The test verifies that exported variables are seen, that operators precedence
works as expected, that parenthesis work at least through two levels, that an
empty condition is false while a negative number is true, and that extraneous
chars in an expression, or unfinished strings are properly caught.
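For illustration, expressions of the kind the test exercises can also be
tried by hand with "-cc" (variable and predicate names below are only
examples):
    $ ./haproxy -cc 'defined(FOO) && defined(BAR) || defined(BAZ)'    # && binds tighter than ||
    $ ./haproxy -cc '!(streq(${MYVAR},foo) || (defined(BAR)))'        # negation, nested parenthesis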
The test consists in a sequence of shell commands, but the shell is not
necessarily started with strict errors enabled, so only the last command
provides the verdict. Let's add "set -e" to make it fail on the first
test that fails.
Now it's possible to form a term using parenthesis around an expression.
This will soon allow building more complex expressions. For now they're
still pretty limited but parenthesis do work.
Now evaluating a condition will rely on an expression (or an empty string),
and this expression will support ORing a sub-expression with another
optional expression. The sub-expression ANDs a term with another optional
sub-expression. With this alone, precedence between && and || is respected,
and the following expression:
A && B && C || D || E && F || G
will naturally evaluate as:
(A && B && C) || D || (E && F) || G
It's not convenient to let the caller be responsible for node allocation;
better have the leaf function do that and implement the accompanying free
call. Now only a pointer is needed instead of a struct, and the leaf
function makes sure to leave the situation in a consistent way.
Till now we were dealing with single-word expressions but in order to
extend the configuration condition language a bit more, we'll need to
support slightly more complex expressions involving operators, and we
must absolutely support spaces around them to keep them readable.
As all arguments are pointers to the same line with spaces replaced by
zeroes, we can trivially rebuild the whole line before calling the
condition evaluator, and remove the test for extraneous argument. This
is what this patch does.
Random characters placed after a configuration predicate currently do
not report an error. This is a problem because extra parenthesis,
commas or even other random left-over chars may accidentally appear there.
Let's now report an error when this happens.
This is marked MEDIUM because it may break otherwise working configs
which are faulty.
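For example, a made-up snippet like the following is now rejected with an
error instead of the trailing characters being silently ignored:
    .if defined(FOO) some-trailing-garbage
    .endif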
The purpose is to build a recursive descent parser that will split conditions
into expressions made of terms. There are two phases, a parsing phase
and an evaluation phase. Strictly speaking it's not required to cut
that in two right now, but it's likely that in the future we won't want
certain predicates to be evaluated during the parsing (e.g. file system
checks or execution of some external commands).
The cfg_eval_condition() function is now much simpler, it just tries to
parse a single term, and if OK evaluates it, then returns the result.
Errors are unchanged and may still be reported during parsing or
evaluation.
It's worth noting that some invalid expressions such as streq(a,b)zzz
continue to parse correctly for now (what remains after the parenthesis
is simply ignored as not necessary).
The .if/.else/.endif and condition evaluation code is quite dirty and
was dumped into cfgparse.c because it was easy. But it should be tidied
quite a bit as it will need to evolve.
Let's move all that to cfgcond.{c,h}.
Argument arrays used in hlua_lua2arg_check() as well as in the functions
used to call sample fetches and converters were manually released, let's
use the cleaner and more reliable free_args() instead. The prototype of
hlua_lua2arg_check() was amended to mention that the function relies on
the final ARGT_STOP, which is already the case, and the pointless test
for this was removed.
make_arg_list() can create an array of arguments, some of which remain
to be resolved, but all users had to deal with their own roll back on
error. Let's add a free_args() function to release all the array's
elements and let the caller deal with the array itself (sometimes it's
allocated in the stack).
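A rough usage sketch under these assumptions (the header path is an
assumption and the array here is assumed to be heap-allocated):
    #include <stdlib.h>
    #include <haproxy/arg.h>   /* assumed location of struct arg / free_args() */

    /* error path: free_args() releases each element (e.g. allocated strings)
     * while the array itself remains the caller's responsibility */
    static void release_on_error(struct arg *args)
    {
            free_args(args);
            free(args);
    }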
I found myself a few times testing some condition examples from the doc
against the command line's "-cc" to see that they didn't work with
environment variable expansion. Since this is not documented as being on
purpose, it looks like an omission, so let's add PARSE_OPT_ENV and
PARSE_OPT_WORD_EXPAND to be able to
test for example -cc "streq(${WITH_SSL},yes)" to help debug expressions.
This adds the exact same restriction as commit 5546c8bdc ("MINOR:
cfgparse: Fail when encountering extra arguments in macro") but for
the "-cc" command line argument, for the sake of consistency.
Create a regtest for the 'track' keyword support by dynamic servers.
First, checks are executed to ensure that tracking cannot be activated on
a non-check server or on dynamic servers.
Then, 3 scenarios are written to ensure that the deletion of a dynamic
server with track is properly handled and that the other servers in the
track chain are properly maintained.
Allow the usage of the 'track' keyword for dynamic servers. On server
deletion, the server is properly removed from the tracking chain to
prevent NULL pointer dereferencing.
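A hedged usage sketch over the CLI (socket path and names are illustrative;
dynamic server commands still required experimental mode at this point):
    $ echo "experimental-mode on; add server be1/dyn1 192.0.2.10:80 track be1/static1" | \
        socat stdio /var/run/haproxy.sock
    $ echo "experimental-mode on; del server be1/dyn1" | \
        socat stdio /var/run/haproxy.sock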
Prevents the use of the "track" keyword for a dynamic server. This
simplifies the deletion of a dynamic server, without having to worry
about servers which might track it.
A BUG_ON is present in the dynamic server delete function to validate
this assertion.
TCC doesn't have the equivalent of __builtin_unreachable() and complains
that hlua_panic_ljmp() may return no value. Let's add a return 0 there.
All compilers that know that longjmp() doesn't return will see no change
and tcc will be happy.
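A generic illustration of the pattern (not the actual hlua code):
    #include <setjmp.h>

    static jmp_buf env;

    /* The function ends in longjmp(); the trailing return is dead code for
     * compilers that know longjmp() does not return, but it silences TCC's
     * missing-return warning. */
    static int panic_handler(void)
    {
            longjmp(env, 1);
            return 0;
    }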
Modern compilers love to break existing code, and some options detected
at build time (such as -fwrapv) are absolutely critical otherwise some
bad code can be generated.
Given that some users rely on packages that force CFLAGS without being
aware of this and can be hit by runtime bugs, we have to help packagers
figure out that they need to be careful about their build options.
The test here consists in detecting correct wrapping of signed integers.
Some of the old code relies on it, and modern compilers recently decided
to break it. It's normally addressed using -fwrapv which users will
rarely enforce in their own flags. Thus it is a good indicator of missing
critical CFLAGS, and it happens to be very easy to detect at run time.
Note that the test uses argc in order to have a variable. While gcc
ignores wrapping even for constants, clang only ignores it for variables.
The way the code is constructed doesn't result in code being emitted for
optimized builds thanks to value range propagation.
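An illustrative version of that kind of check (not the exact code from the
patch):
    #include <limits.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
            int x = argc;   /* a variable, so the compiler cannot fold the test */

            (void)argv;
            /* with -fwrapv the signed addition wraps and the branch is taken;
             * without it the compiler may assume overflow cannot happen and
             * drop the test entirely */
            if (x + INT_MAX < x)
                    return 0;
            printf("warning: build lacks -fwrapv or equivalent\n");
            return 1;
    }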
This should address GitHub issue #1315, and should be backported to all
stable versions. It may result in instantly breaking binaries that seemed
to work fine (typically the ones suddenly showing a busy loop after a few
weeks of uptime), and require packagers to fix their flags. The vast
majority of distro packages are fine and will not be affected though.