This file includes everything that must be guaranteed to be available to
any buildable file in the project (including the contrib/ subdirs). For
now it includes <haproxy/api-t.h> so that standard integer types and
compiler macros are known, <common/initcall.h> to ease dynamic registration
of init functions, and <common/tools.h> for a few MIN/MAX macros.
version.h should probably also be added, though at the moment it doesn't
bring much value.
All files which currently include the ones above should now switch to
haproxy/api.h or haproxy/api-t.h instead. This should also reduce build
time by having a single guard for several files at once.
This file is at the lowest level of the include tree. Its purpose is to
make sure that common types are known pretty much everywhere, particularly
in structure declarations. It will essentially cover integer types such as
uintXX_t via inttypes.h, "size_t" and "ptrdiff_t" via stddef.h, and various
type modifiers such as __maybe_unused or ALIGN() via compiler.h, compat.h
and defaults.h.
It could be enhanced later if required, for example if some macros used
to compute array sizes are needed.
The only leftovers were the unused compiler.h file and the LICENSE file
which is already mentioned in each and every ebtree file header.
A few build paths were updated in the contrib/ directory so that they no
longer reference this directory, and all its occurrences were dropped from
the main makefile. From now on, no include path other than include/ will
be needed to build any file.
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
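A minimal sketch of the guard pattern described above (the guard macro
names are illustrative):

  #ifndef _EB32TREE_H
  #define _EB32TREE_H
  /* define u32 only if eb32sctree.h has not already done it */
  #ifndef _EB32SCTREE_H
  typedef unsigned int u32;
  #endif
  /* ... rest of eb32tree.h ... */
  #endif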
No further cleanup was done yet in order to keep changes minimal.
As part of the include files cleanup, we're going to kill the ebtree
directory. For this we need to host its C files in a different location
and src/ is the right one.
Fix a trash buffer leak when we can't take the lock of the ckch, or when
"set ssl cert" is wrongly used.
The bug was mentioned in this thread:
https://www.mail-archive.com/haproxy@formilux.org/msg37539.html
The bug was introduced by commit bc6ca7c ("MINOR: ssl/cli: rework 'set
ssl cert' as 'set/commit'").
Must be backported to 2.1.
When HAProxy is started with a '--' option, all following parameters are
considered configuration files. You can't add new options after a '--'.
The current reload system of the master-worker adds extra options at the
end of the arguments list, which is a problem if HAProxy was started with
'--'.
This patch fixes the issue by copying the new options to the beginning of
the arguments list instead of appending them at the end.
This patch must be backported as far as 1.8.
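A minimal sketch of the idea, with hypothetical variable names (the real
code lives in the master-worker re-exec path):

  /* sketch: build the new argv with the added options first, so that
   * anything following "--" keeps its meaning as configuration files
   */
  int i;

  next_argv[0] = old_argv[0];           /* program name */
  next_argv[1] = "-sf";                 /* example of an option added on reload */
  next_argv[2] = oldpids_str;
  for (i = 1; old_argv[i]; i++)
      next_argv[i + 2] = old_argv[i];
  next_argv[i + 2] = NULL;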
There is no reason the -S option can't take an argument which starts with
a '-'. This limitation must only apply to options that take an open-ended
list of parameters (-sf/-st).
This can be backported only if the previous patch which fixes
copy_argv() is backported too.
Could be backported as far as 1.9.
There is no reason the -x option can't take an argument which starts with
a '-'. This limitation must only apply to options that take an open-ended
list of parameters (-sf/-st).
This can be backported only if the previous patch which fixes
copy_argv() is backported too.
Could be backported as far as 1.8.
The copy_argv() function, which is used to copy and remove some of the
arguments of the command line in order to re-exec() the master process,
is poorly implemented.
The function tries to remove the -x and the -sf/-st options but without
taking into account that some of the options could take a parameter
starting with a dash.
In issue #644, haproxy starts with "-L -xfoo" (a localpeer name which
happens to begin with a dash), which is perfectly correct. However, the
re-exec is done without "-xfoo" because the master tries to remove the
"-x" option. Indeed, the copy_argv() function does not know how many
arguments an option can have, and just assumes that everything starting
with a dash is an option. So haproxy is exec()'d with "-L" but without
its parameter, which is wrong and leads to the exit of the master, with
usage().
To fix this issue, copy_argv() must know how many parameters an option
takes, and copy or skip those parameters correctly.
This fix is a first step, but it should evolve to a cleaner way of
declaring the options, to avoid duplicating the parsing code and thus
avoid new bugs.
Should be backported with care as far as 1.8, removing the options that
do not exist in the previous versions.
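A minimal sketch of the approach (illustrative only; the real copy_argv()
in haproxy.c handles more options):

  /* sketch: i indexes the old argv, j the new one */
  int i, j = 0;

  for (i = 0; i < argc; i++) {
      if (strcmp(argv[i], "-x") == 0) {
          i++;                              /* drop "-x" and its parameter */
          continue;
      }
      if (strcmp(argv[i], "-sf") == 0 || strcmp(argv[i], "-st") == 0) {
          while (i + 1 < argc && argv[i + 1][0] != '-')
              i++;                          /* drop the whole PID list */
          continue;
      }
      if (strcmp(argv[i], "-L") == 0 && i + 1 < argc)
          newargv[j++] = argv[i++];         /* keep "-L"; its parameter is
                                             * copied below, even if it
                                             * starts with a dash */
      newargv[j++] = argv[i];
  }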
When the metrics are dumped, in the main function promex_dump_metrics(), the
appctx flags are set before entering a new scope, among other things to know
which metric names and descriptions to use. But those flags are not restored
when the dump is interrupted because of a full output buffer. If this happens
after the dump of global metrics, it may only lead to extra #TYPE and #HELP
lines. But if this happens during the dump of global metrics, the following
dumps of frontend, backend and server metrics use the names and descriptions
of the global ones, with mismatched indexes. This first leads to non-existent
metric names, for instance "haproxy_frontend_nbproc". But it also leads to
out-of-bounds accesses to the name and description arrays, because there are
more stats fields than info fields.
It is easy to reproduce the bug using small buffers, setting tune.bufsize to
8192 for instance.
This patch should fix the issue #666. It must be backported as far as 2.0.
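A minimal sketch of the fix pattern (the field and flag names below are
hypothetical, not the real promex code):

  unsigned int saved_flags = appctx->ctx.stats.flags;

  appctx->ctx.stats.flags |= PROMEX_FL_SCOPE_XXX;  /* scope-specific flags */
  if (!promex_dump_scope(appctx, htx)) {
      /* full buffer: restore the flags before yielding so the next
       * call resumes with a consistent state
       */
      appctx->ctx.stats.flags = saved_flags;
      goto full;
  }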
When checking the config validity of the http-check rulesets, the test on the
ruleset type is inverted. So a warning about an ignored directive is emitted
when the config is valid, and omitted when it should be reported.
No backport needed.
The pattern references lock must be held to perform set/add/del
operations. Unfortunately, this is not the case in the Lua functions
manipulating ACL and map files.
This patch should fix the issue #664. It must be backported as far as 1.8.
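A minimal sketch of the expected locking pattern (HA_SPIN_LOCK() and the
pat_ref lock exist in HAProxy; the exact call site and error handling are
illustrative):

  HA_SPIN_LOCK(PATREF_LOCK, &ref->lock);
  ret = pat_ref_set(ref, key, value, &err);   /* same for add/del operations */
  HA_SPIN_UNLOCK(PATREF_LOCK, &ref->lock);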
Before executing a lua action, the analyse expiration timeout of the
corresponding channel must be reset. Otherwise, when it expires, for instance
because of a call to core.msleep(), if the action yields, an expired timeout
will be used for the stream's task, leading to a loop.
This patch should fix the issue #661. It must be backported to all versions
supporting Lua.
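A minimal sketch of the reset (struct channel's analyse_exp and
TICK_ETERNITY are real HAProxy names; the placement is illustrative):

  /* reset the analyse expiration timeout before running the Lua action */
  s->req.analyse_exp = TICK_ETERNITY;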
By default, HAProxy is able to implicitly upgrade an H1 client connection to
an H2 connection if the first request it receives from a given HTTP connection
matches the HTTP/2 connection preface. This way, it is possible to support H1
and H2 clients on non-SSL connections. It could be a problem if, for any
reason, the H2 upgrade is not acceptable. "option disable-h2-upgrade" may now
be used to disable it, per proxy. The main purpose of this option is to let an
admin totally disable the H2 support for security reasons. Recently, a
critical issue in the HPACK decoder was fixed, forcing everyone to upgrade
their HAProxy version to fix the bug. It is possible to disable H2 for SSL
connections, but not on clear ones. This option would have been a viable
workaround.
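For example (illustrative configuration):

  frontend clear-in
    bind :8080
    option disable-h2-upgrade   # never upgrade H1 connections to H2
    default_backend app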
This reverts commit 4fed93eb72.
This commit was simplifying the certificate chain loading with
SSL_CTX_add_extra_chain_cert() which is available in all SSL libraries.
Unfortunately this function is not compatible with the
multi-certificate bundles: it concatenates the chains of all certificate
types instead of creating a separate chain for each type (RSA, ECDSA,
etc.).
Should fix issue #655.
The support for reqrep and friends was removed in 2.1, but the
chain_regex() function and the "action" field in the regex struct
were still there. This patch removes them.
One point worth mentioning though. There is a check_replace_string()
function whose purpose was to validate the replacement strings passed
to reqrep. It should also be used for other replacement regex, but is
never called. Callers of exp_replace() should be checked and a call to
this function should be added to detect the error early.
Network types were directly and mistakenly mapped on sample types.
This patch fixes the doc with the values effectively used, to keep backward
compatibility with existing implementations.
In addition it adds an internal/network mapping for key types to avoid
further mistakes when adding or modifying internal types.
This patch should be backported to all maintained branches,
particularly down to v1.8 included for the documentation part.
The recently added ring section post-processing added this benign
warning on 32-bit archs:
src/sink.c: In function 'cfg_post_parse_ring':
src/sink.c:994:15: warning: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'size_t {aka unsigned int}' [-Wformat=]
ha_warning("ring '%s' event max length '%u' exceeds size, forced to size '%lu'.\n",
^
Let's just cast b_size() to unsigned long here.
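The fix is thus a simple cast to match the "%lu" format (sketch; the
surrounding argument names are illustrative):

  ha_warning("ring '%s' event max length '%u' exceeds size, forced to size '%lu'.\n",
             id, maxlen, (unsigned long)b_size(buf));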
Since 8177ad9 ("MINOR: ssl: split config and runtime variable for
ssl-{min,max}-ver"), the dump for ssl-min-ver and ssl-max-ver is fixed,
so we can remove the comment.
Using ssl-max-ver without ssl-min-ver is ambiguous.
When the ssl-min-ver is not configured, and ssl-max-ver is set to a
value lower than the default ssl-min-ver (which is TLSv1.2 currently),
set the ssl-min-ver to the value of ssl-max-ver, and emit a warning.
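For example (illustrative bind line):

  bind :443 ssl crt site.pem ssl-max-ver TLSv1.1
  # without an explicit ssl-min-ver, the minimum is lowered to TLSv1.1
  # and a warning is emitted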
log-proto <logproto>
The "log-proto" specifies the protocol used to forward event messages to
a server configured in a ring section. Possible values are "legacy"
and "octet-count" corresponding respectively to "Non-transparent-framing"
and "Octet counting" in rfc6587. "legacy" is the default.
Note: a separate io_handler was created to avoid per-message tests
and to prepare the code to support different log protocols such as
request-response based ones.
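For example (illustrative; "log-proto" is set on a server line of a ring
section):

  ring myring
    server mysyslogsrv 127.0.0.1:6514 log-proto octet-count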
This patch adds the new "server" statement to the ring section, and the
related "timeout connect" and "timeout server" directives.
server <name> <address> [param*]
Used to configure a syslog TCP server to forward messages from the ring buffer.
All "server" parameters described in section 5.2 are supported; some of them
are irrelevant for "ring" sections.
timeout connect <timeout>
Set the maximum time to wait for a connection attempt to a server to succeed.
Arguments :
<timeout> is the timeout value specified in milliseconds by default, but
can be in any other unit if the number is suffixed by the unit,
as explained at the top of this document.
timeout server <timeout>
Set the maximum time for pending data staying in the output buffer.
Arguments :
<timeout> is the timeout value specified in milliseconds by default, but
can be in any other unit if the number is suffixed by the unit,
as explained at the top of this document.
Example:
  global
    log ring@myring local7

  ring myring
    description "My local buffer"
    format rfc3164
    maxlen 1200
    size 32764
    timeout connect 5s
    timeout server 10s
    server mysyslogsrv 127.0.0.1:6514
It just appeared that the tar.gz files we put online are not reproducible
because a timestamp is inserted by default into the archive. Passing "-n"
to gzip is sufficient to remove this timestamp, so let's do it, and
also make the gzip command configurable for more flexibility. Now
issuing the commands multiple times finally results in the same
archives being produced.
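The resulting pipeline is along these lines (illustrative, with a
hypothetical $VERSION):

  tar cf - haproxy-$VERSION/ | gzip -n -9 > haproxy-$VERSION.tar.gz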
This should be backported to supported stable branches.
Commit 04f5fe87d3 introduced an rwlock in the pools to deal with the risk
that pool_flush() dereferences an area being freed, and commit 899fb8abdc
turned it into a spinlock. The pools already contain a spinlock in case of
locked pools, so let's use the same and simplify the code by removing ifdefs.
At this point I'm really suspecting that if pool_flush() instead relied
on __pool_get_first() to pick entries from the pool, the concurrency
problem could never happen, since only one user would get a given entry
at once, thus it could not be freed by another user. It's not certain
this would be faster however, because of the number of atomic ops needed
to retrieve one entry compared to a locked batch.
Since HAProxy 1.8, the TLS default minimum version was set to TLSv1.0 to
avoid using the deprecated SSLv3.0. Since then, the standard changed and
the recommended TLS version is now TLSv1.2.
This patch changes the minimum default version to TLSv1.2 on bind lines.
If you need to use a prior TLS version, this is still possible by
using the ssl-min-ver keyword.
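For example (illustrative bind line re-allowing older versions):

  bind :443 ssl crt site.pem ssl-min-ver TLSv1.0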
When a tcpcheck ruleset is created, it is automatically inserted in a global
tree. Unfortunately, for applicative health checks (redis, mysql...), the
created ruleset is inserted a second time during the directive parsing. This
leads to an infinite loop when haproxy is stopped, when we try to scan the
tree to release all tcpcheck rulesets.
Now, only the function responsible for creating the tcpcheck ruleset inserts
it into the tree.
No backport needed.
In issue #657, Coverity found a bug in the "nameserver" parser for the
resolv.conf when "parse-resolv-conf" is set. What happens is that if an
unparsable address appears on a "nameserver" line, it will destroy the
previously allocated pointer before reporting the warning, then the next
"nameserver" line will dereference it again and wlil cause a crash. If
the faulty nameserver is the last one, it will only be a memory leak.
Let's just make sure we preserve the pointer when handling the error.
The patch also fixes a typo in the warning.
The bug was introduced in 1.9 with commit 44e609bfa ("MINOR: dns:
Implement `parse-resolv-conf` directive") so the fix needs to be backported
up to 1.9 or 2.0.
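A minimal sketch of the fix pattern (parse_nameserver() is a hypothetical
helper standing for the parsing code; the point is to only commit the
result on success):

  tmp = parse_nameserver(line);          /* parse into a temporary */
  if (!tmp) {
      ha_warning("unparsable address on a nameserver line, line ignored.\n");
      continue;                          /* previous pointer stays intact */
  }
  newnameserver = tmp;                   /* commit only on success */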
This patch removes all trailing LFs and Zeros from
log messages. Previously only the last LF was removed.
It's a regression from e8ea0ae6f6 "BUG/MINOR: logs:
prevent double line returns in some events."
This should fix github issue #654.
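A minimal sketch of the trimming loop (variable names are illustrative):

  /* strip all trailing LFs and NUL bytes, not only the last LF */
  while (size && (msg[size - 1] == '\n' || msg[size - 1] == '\0'))
      size--;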
In hlua_stktable_lookup(), the key length is never set so all
stktable:lookup("key") calls return nil from lua.
This patch must be backported as far as 1.9.
[Cf: I slightly updated the patch to use lua_tolstring() instead of
luaL_checkstring() + strlen()]
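A minimal sketch of the fix (standard Lua C API; the argument index and
the surrounding stick-table code are illustrative):

  size_t key_len;
  const char *key = lua_tolstring(L, 2, &key_len);  /* value and its length */

  /* the stick-table key must then be built using key_len instead of
   * leaving the length unset
   */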
Most of the code in the event_srv_chk_io() function is inherited from the
checks as they were before the recent refactoring. Now, it is enough to only
call wake_srv_chk(). Since the refactoring, the removed code is dead and never
called. wake_srv_chk() may only return 0 if tcpcheck_main() returns 0 and the
check status is unknown (CHK_RES_UNKNOWN). When this happens, nothing is
performed in event_srv_chk_io().
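In other words, the function now essentially boils down to this (sketch;
the exact signature is illustrative):

  static void event_srv_chk_io(struct conn_stream *cs)
  {
      wake_srv_chk(cs);
  }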
When a health check is waiting for a connection establishment, it subscribes
for receive or send events, depending on the next rule to be evaluated. For
subscriptions for send events, there is no problem, it works as expected. For
subscriptions for receive events, it only works for HTTP checks, because the
underlying multiplexer takes care to do a receive before subscribing again,
updating the fd state. For TCP checks, the PT multiplexer only forwards
subscriptions at the transport layer, so the check itself is woken up. This
leads to a subscribe/notify loop waiting for the connection establishment or
a timeout, uselessly eating CPU.
Thus, when a check is waiting for a connect, instead of blindly resubscribing
for receive events when it is woken up, we now try to receive data.
This patch should fix the issue #635. No backport needed.
HTTP_1XX, HTTP_3XX and HTTP_4XX message templates are no longer used. Only
HTTP_302 and HTTP_303 are used during configuration parsing by "errorloc" family
directives. So these templates are removed from the generic http code. And
HTTP_302 and HTTP_303 templates are moved as static strings in the function
parsing "errorloc" directives.
Now http-request auth rules are evaluated in a dedicated function and no longer
handled "in place" during the HTTP rules evaluation. Thus the action name
ACT_HTTP_REQ_AUTH is removed. In addition, http_reply_40x_unauthorized() is
also removed. This part is now handled in the new action_ptr callback function.
There is no reason to not use proxy's error replies to emit 401/407
responses. The function http_reply_40x_unauthorized(), responsible for
emitting those responses, is not really complex. It only adds a
WWW-Authenticate/Proxy-Authenticate header to a generic message.
So now, error replies can be defined for 401 and 407 status codes, using
errorfile or http-error directives. When an http-request auth rule is evaluated,
the corresponding error reply is used. For 401 responses, all occurrences of the
WWW-Authenticate header are removed and replaced by a new one with a basic
authentication challenge for the configured realm. For 407 responses, the same
is done on the Proxy-Authenticate header. If the error reply must not be
altered, "http-request return" rule must be used instead.
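For example (illustrative configuration, assuming a "mycreds" userlist):

  backend private
    errorfile 401 /etc/haproxy/errors/401.http
    http-request auth realm internal unless { http_auth(mycreds) }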
This reg-test checks that sending unique IDs via PPv2 works for servers
with the `alpn` option specified (issue #640). As a side effect it also
checks that PPv2 works with ALPN (issue #651).
It has been verified that the test fails without the following commits
applied and succeeds with them applied.
1f9a4ecea BUG/MEDIUM: backend: set the connection owner to the session when using alpn.
083fd42d5 BUG/MEDIUM: connection: Ignore PP2 unique ID for stream-less connections
eb9ba3cb2 BUG/MINOR: connection: Always get the stream when available to send PP2 line
Without the first two commits HAProxy crashes during execution of the
test. Without the last commit the test will fail, because no unique ID
is received.
During pool_free(), when the ->allocated value is 125% of needed_avg or
more, instead of putting the object back into the pool, it's immediately
freed using free(). By doing this we manage to significantly reduce the
amount of memory pinned in pools after transient traffic spikes.
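A minimal sketch of the decision (field names follow the description above;
this is not the real implementation):

  pool->used--;
  if (pool->allocated >= pool->needed_avg + pool->needed_avg / 4) {
      /* at 125% of the estimated need or more, return the object to
       * the system instead of keeping it pinned in the pool
       */
      pool->allocated--;
      free(ptr);
  }
  else {
      /* otherwise put it back into the pool's free list as before */
      *(void **)ptr = pool->free_list;
      pool->free_list = ptr;
  }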
During a test involving a constant load of 100 concurrent connections
each delivering 100 requests per second, the memory usage was a steady
21 MB RSS. Adding a 1 minute parallel load of 40k connections all looping
on 100kB objects made the memory usage climb to 938 MB before this patch.
With the patch it was only 660 MB. But when this parasitic load stopped,
before the patch the RSS would remain at 938 MB while with the patch,
it went down to 480 then 180 MB after a few seconds, to stabilize around
69 MB after about 20 seconds.
This can be particularly important to improve reloads where the memory
has to be shared between the old and new process.
Another improvement would be welcome: we ought to have a periodic task
to check pools usage and continue to free up unused objects regardless
of any call to pool_free(), because the needed_avg value depends on the
past and will not cover recently refilled objects.
This adds a sliding estimate of the pools' usage. The goal is to be able
to use this to start to more aggressively free memory instead of keeping
lots of unused objects in pools. The average is calculated as a sliding
average over the last 1024 consecutive measures of ->used during calls to
pool_free(), and is bumped up for 1/4 of its history from ->allocated when
allocation from the pool fails and results in a call to malloc().
The result is a floating value between ->used and ->allocated, that tries
to react fast to under-estimates that result in expensive malloc() but
still maintains itself well in case of stable usage, and progressively
goes down if usage shrinks over time.
This new metric is reported as "needed_avg" in "show pools".
Sadly due to yet another include dependency hell, we couldn't reuse the
functions from freq_ctr.h so they were temporarily duplicated into memory.h.
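A simplified, non-atomic sketch in the spirit of freq_ctr's swrate_add()
(the real helpers are atomic):

  /* add sample <v> to the sliding sum <sum> over <n> samples: the sum is
   * aged by 1/n, then <v> is added; the average is read back as sum/n.
   */
  static inline unsigned int swrate_add(unsigned int *sum, unsigned int n,
                                        unsigned int v)
  {
      return *sum = *sum - (*sum + n - 1) / n + v;
  }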
In connect_server(), if we can't create the mux immediately because we have
to wait until the ALPN is negotiated, store the session as the connection's
owner. conn_create_mux() expects it to be set, and provides it to the mux
init() method. Failure to do so will result in crashes later if the
connection is private, and even when it does not crash, it would prevent
connection reuse for private connections.
This should fix github issue #651.
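A minimal sketch of the fix (conn->owner and the delayed mux creation are
described above; the exact test is illustrative):

  if (!conn->mux) {
      /* mux creation is delayed until the ALPN is negotiated;
       * conn_create_mux() will need the session as the owner
       */
      conn->owner = sess;
  }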
When a PROXY protocol line must be sent, it is important to always get the
stream if it exists. It is mandatory to send a unique ID when the unique-id
option is enabled. In conn_si_send_proxy(), to get the stream, we first
retrieve the conn-stream attached to the backend connection. Then, if the
conn-stream data callback is si_conn_cb, it is possible to get the stream.
But for now, it only works for connections with a multiplexer. Thus, for
mux-less connections, the unique ID is never sent. This happens for all SSL
connections relying on the ALPN to choose the right multiplexer. But it is
possible to use the context of such connections to get the conn-stream.
The bug was introduced by the commit cf6e0c8a8 ("MEDIUM: proxy_protocol: Support
sending unique IDs using PPv2"). Thus, this patch must be backported to the same
versions as the commit above.
It is possible to send a unique ID when the PROXY protocol v2 is used. It relies
on the stream to do so. So we must be sure to have a stream. Locally initiated
connections may not be linked to a stream. For instance, outgoing connections
created by health checks have no stream. Moreover, the stream is not retrieved
for mux-less connections (this bug will be fixed in another commit).
Unfortunately, in the make_proxy_line_v2() function, the stream is not tested
before generating the unique-id. This bug leads to a segfault when a health
check is performed for a server with the PROXY protocol v2 and the unique-id
option enabled. It also crashes for servers using SSL connections with ALPN.
The bug was introduced by the commit cf6e0c8a8 ("MEDIUM: proxy_protocol:
Support sending unique IDs using PPv2").
This patch should fix the issue #640. It must be backported to the same versions
as the commit above.
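A minimal sketch of the guard (append_unique_id_tlv() is a hypothetical
helper standing for the TLV generation code):

  /* stream-less connections (e.g. health checks) simply skip the
   * unique-id TLV
   */
  if (strm)
      append_unique_id_tlv(buf, strm);   /* hypothetical helper */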
The spoa server fails to build when python3.8 is not available. If
"python3-config --embed" fails, the output of the command is registered in
check_python_config. However, when it is later used to define
PYTHON_DEFAULT_INC and PYTHON_DEFAULT_LIB, its content does not match
and the build falls back to python2.7.
Content of check_python_config when building with python3.6:
Usage: bin/python3-config --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir python3
As we are only looking for the return code, this commit ensures we always
ignore the output of the python3-config or hash commands.
During a health check execution, the conn-stream and the connection may only
be NULL before the evaluation of the first rule, which is always a connect, or
if the first conn-stream allocation failed. Thus, in tcpcheck_main(), useless
tests on the conn-stream or on the connection have been removed. A comment has
been added to make this clear.
No backport needed.