The domain option will be used to attach statistics to objects other
than proxies/listeners/servers. At the moment, only the PROXY
domain is available.
Add an argument 'domain' to the 'show stats' CLI command to specify the
domain. Only 'domain proxy' is available for now. If not specified, proxy
is considered the default domain.
For HTML output, only proxy statistics will be displayed.
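For example, assuming the stats socket is reachable at
/var/run/haproxy.sock (path only illustrative):

    $ echo "show stats domain proxy" | socat stdio /var/run/haproxy.sock

Since proxy is the default domain, this produces the same output as a
plain "show stats".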
Debug messages emitted in Lua using core.Debug() or core.log() are now only
displayed on stderr if HAProxy is started in debug mode (-d parameter on the
command line). There is no change for other message levels.
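For example (paths are only illustrative), a Lua action calling
core.Debug("entering action") will only show its message on stderr when
the process is started with:

    $ haproxy -d -f /etc/haproxy/haproxy.cfg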
This patch should fix the issue #879. It may be backported to all stable
versions.
Create a dedicated function to loop on proxies and dump them. This will
be clearer when other objects are dumped as well.
This patch is needed to extend stat support to components other than
proxy objects.
Create a dedicated function to dump a proxy as JSON content. This
patch will be needed when other types of objects become available for
JSON dump.
This patch is needed to extend stat support to components other than
proxy objects.
Use an opaque pointer to store the proxy instance. Regroup server/listener
into a single opaque pointer. This makes the structure easier to extend to
support statistics on other types of objects in the future.
This patch is needed to extend stat support to components other than
proxy objects.
The prometheus module has been adapted for these changes.
Make the stats size parametric in the CSV/JSON dump functions. This is
needed for a future patch which provides dynamic stats. For now, the
static value ST_F_TOTAL_FIELDS is provided.
Remove the unused parameter px from stats_dump_one_line.
This patch is needed to extend stat support to components other than
proxy objects.
Un-mark stats_dump_one_line and stats_putchk as static and export them
in the header file. These functions will be reusable by other components to
print their statistics.
This patch is needed to extend stat support to components other than
proxy objects.
There is some confusion between the HAProxy bundle and OpenSSL: OpenSSL
does not have "bundles" but stores multiple certificates in the same store.
Fix a comment in the crt-list code.
A crash reported in github issue #880 looks impossible unless
pendconn_cond_unlink() occasionally sees a null leaf_p when attempting
to remove an entry, which seems to be confirmed by the reporter. What
seems to be happening is that depending on compiler optimizations,
this pointer can appear as null while pointers are moved if one of
the node's parents is removed from or inserted into the tree. There's
no explicit null of the pointer during these operations but those
pointers are rewritten in multiple steps and nothing prevents this
situation from happening, and there are no particular barriers nor
atomic ops around this.
This test was used to avoid unnecessary locking for already deleted
entries, but looking at the code it appears that pendconn_free() already
resets s->pend_pos, which is used as <p> there, and that the other
reasons for calling it occur after an error where the connection will be
dropped as well. So we don't save anything by doing this test, and it
makes the code unsafe. The older code used to check for list emptiness there and
not inside pendconn_unlink(), which explains why the code has stayed
there. Let's just remove this now.
Thanks to @jaroslawr for reporting this issue in great details and for
testing the proposed fix.
This should be backported to 1.8, where the test on LIST_ISEMPTY should
be moved to pendconn_unlink() instead (inside the lock, just like 2.0+).
Update the documentation with the new bundle behavior, which does not use
the same OpenSSL certificate store anymore but loads each PEM separately,
as if multiple "crt" lines had been specified.
It should fix issue #872.
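For illustration (file names are hypothetical), a line such as:

    bind :443 ssl crt site.pem

now loads site.pem.rsa, site.pem.ecdsa and site.pem.dsa (when they exist)
as separate certificates, as if several "crt" statements had been written.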
Since the health-check refactoring in 2.2, checks through a SOCKS4 proxy
are broken. To fix this bug, the CO_FL_SOCKS4 flag must be set on the
connection before calling the connect() callback function, because this
flag is checked to select the right destination address. The same is done
for the CO_FL_SEND_PROXY flag for consistency.
A reg-test has been added to test the "check-via-socks4" directive.
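Such a check corresponds to a configuration like the following sketch
(addresses are only illustrative):

    backend be_app
        server srv1 192.0.2.10:80 check check-via-socks4 socks4 192.0.2.1:1080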
This patch must be backported to 2.2.
The warning is only emitted for HTTP frontends. The idea is to encourage the
use of "tcp-request session" rules to track counters that do not depend on
the request content. The documentation has been updated accordingly.
The warning is important because since the multiplexers were added in the
processing chain, the HTTP parsing is performed at a lower level. Thus parsing
errors are detected in the multiplexers, before the stream creation. In HTTP/2,
the error is reported by the multiplexer itself and the stream is never
created. This difference has a certain number of consequences, one of which
is that HTTP request counting in stick tables only works for valid H2
requests, and HTTP error tracking in stick tables never considers invalid H2
requests but only invalid H1 ones. The aim is to do the same with the mux-h1.
This change will not be done for 2.3 but for 2.4. In the end, H1 and H2
parsing errors will be caught by the multiplexers, at the session level.
Thus, tracking counters at the content level should be reserved for rules
using a key based on the request content or those using ACLs based on the
request content.
To be clear, a warning will be emitted for the following rules:

    tcp-request content track-sc0 src
    tcp-request content track-sc0 src if ! { src 10.0.0.0/24 }
    tcp-request content track-sc0 src if { ssl_fc }

But not for the following ones:

    tcp-request content track-sc0 req.hdr(host)
    tcp-request content track-sc0 src if { req.hdr(host) -m found }
Because the parsing of the HTTP message is now performed in the HTTP
multiplexers, the content is immediately available when "tcp-request
content" rules are evaluated for an HTTP frontend. So, it is a good idea to
make the documentation explicit on this point. In addition, because the
parsing is already performed in all cases, there is no reason to keep using
"tcp-request content" rules based on L7 matching, although it is still
valid. The recommended way is to use "http-request" rules instead. Again,
it is a good idea to update the documentation on this point.
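For example, an L7-based rule such as:

    tcp-request content track-sc0 req.hdr(host)

can usually be written as the following sketch instead:

    http-request track-sc0 req.hdr(host)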
We use chunk_initstr() to store the program name as the default log-tag.
If we use the log-tag directive in the config file, this chunk will be
destroyed and replaced. chunk_initstr() sets the chunk size to 0 so we
will free the chunk itself, but not its content.
This happens for a global section and also for a proxy.
We fix this by using chunk_initlen() instead of chunk_initstr().
We also check that the memory allocation was successful, otherwise we quit.
This fixes github issue #850.
It can be backported as far as 1.9, with minor adjustments to includes.
This condition is never true as we either break or goto error, so those
two lines can be removed in the current state of the code.
This fixes github issue #862.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Similar to the warning added in 2fd5bdb439 for the parsing of the regular
configuration file, this patch adds a warning to the parsing of a crt-list
if the file does not end with a newline (and thus might have been
truncated).
The logic essentially just was copied over. It might be good to refactor
this in the future, allowing easy re-use within all line-based config
parsers.
see https://github.com/haproxy/haproxy/issues/860#issuecomment-693422936
see 0354b658f0
This should be backported as a warning to 2.2.
Solaris 9 (released 2002) added support for closefrom().
I bumped the version in the comment to 10 as the default feature
flags already have event ports enabled, which were introduced in
Solaris 10.
Previous commit fa41cb679 ("MINOR: tools: support for word expansion
of environment in parse_line") introduced two new isspace() on a char
and broke the build on systems using an array disguised in a macro
instead of a function (like cygwin). Just use the usual cast.
Allow the syntax "${...[*]}" to expand an environment variable
containing several values separated by spaces as individual arguments. A
new flag PARSE_OPT_WORD_EXPAND has been added to toggle this feature on
parse_line invocation. In case of an invalid syntax, a new error
PARSE_ERR_WRONG_EXPAND will be triggered.
This feature was requested in github issue #165.
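A hypothetical configuration example, assuming the environment contains
TRUSTED_NETS="10.0.0.0/8 192.168.0.0/16":

    acl internal src "${TRUSTED_NETS[*]}"

This is parsed as if the two networks had been written as separate
arguments on the line.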
Sometimes it's desirable to append local version naming to packages,
and currently it can only be done using SUBVERS which is already set
by default to the git commit ID and patch count since last known tag,
making the addition a bit complicated.
Let's just add a new EXTRAVERSION field that is empty by default, and
systematically appended verbatim to the version string everywhere. This
way it becomes trivial to append some local strings, such as:
    make TARGET=foo EXTRAVERSION=+$(quilt applied|wc -l)
      -> 2.3-dev5-5018aa-15+1

or:

    make TARGET=foo EXTRAVERSION=-$(date +%F)
      -> 2.3-dev5-5018aa-15-20200110
Let's be careful not to add double quotes (used as the string delimiter)
nor spaces (which can confuse version parsers on the output). The extra
version is also used to name a tarball. It's always pre-initialized to an
empty string so that it's not accidentally inherited from the environment.
It's not reported in "make version" to avoid fooling tools (it would be
pointless anyway).
As a side effect it also becomes possible to force VERSION and SUBVERS
to an empty string and use EXTRAVERSION alone to force a specific version
(could possibly be useful when bisecting from patch queues outside of Git
for example).
For some algos (roundrobin, static-rr, leastconn, first) we know that
if there is any request queued in the backend, it's because a previous
attempt failed at finding a suitable server after trying all of them.
This alone is sufficient to decide that the next request will skip the
LB algo and directly reach the backend's queue. Doing this alone avoids
an O(N) lookup when load-balancing on a saturated farm of N servers,
which starts to be very expensive for hundreds of servers, especially
under the lbprm lock. This change alone has increased the request rate
from 110k to 148k RPS for 200 saturated servers on 8 threads, and
fwlc_reposition_srv() doesn't show up anymore in perf top. See github
issue #880 for more context.
It could have been the same for random, except that random is performed
using a consistent hash and it only considers a small set of servers (2
by default), so it may result in queueing at the backend despite having
some free slots on unknown servers. It's no big deal though since random()
only performs two attempts by default.
For hashing algorithms this is pointless since we don't queue at the
backend, except when there's no hash key found, which is the least of
our concerns here.
If random() returns a server whose maxconn is reached or whose queue is
already in use, instead of adding the request to that server's queue, it is
better to add it to the backend's queue so that it can be served by any
server (hence the fastest one).
We used to set it to ${h1_px_addr} but it randomly fails on certain
hosts (FreeBSD and OSX) where the address is surprisingly set to "::1"
while the Host field contains 127.0.0.1 (hence two different address
families). While this is likely a minor issue in vtest, we don't need
to depend on this and can easily hard-code 127.0.0.1 which is already
used in other tests.
We should not exit on error out of the crtlist_parse_line() function.
The cfgerr error must be checked with the ERR_CODE mask.
Must be backported to 2.2.
When starting to use the `new ssl cert` runtime API, it might become a bit
confusing for users to mix bundles and single certificates, especially when
it comes to using the commit command:
e.g.:
- start the process with `crt` loading a bundle
- use `set ssl cert my_cert.pem.ecdsa`: API detects it as a replacement
of a bundle.
- `commit` has to be done on the bundle: `commit ssl cert my_cert.pem`
however:
- add a new cert: `new ssl cert my_cert.pem.rsa`: added as a single
certificate
- `commit` has to be done on the certificate: `commit ssl cert
my_cert.pem.rsa`
This should resolve github issue #872.
This should probably be backported to >= 2.2 in order to encourage
people to move away from bundle certificate loading.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
`tcpcheck_agent_expect_reply` expects "fail" not "failed"
This should fix github issue #876
This can be backported to all maintained versions (i.e. >= 1.6) as of
today.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Update the OpenBSD target features being enabled.
I updated the list of features after noticing
"BUILD: makefile: disable threads by default on OpenBSD".
The Makefile using gcc(1) by default resulted in using our
legacy and obsolete compiler (GCC 4.2.1) instead of the
proper system compiler (Clang), which does support TLS. With
"BUILD: makefile: change default value of CC from gcc to cc"
that is resolved.
Released version 2.3-dev5 with the following main changes :
- DOC: Fix typo in iif() example
- CLEANUP: Update .gitignore
- BUILD: introduce possibility to define ABORT_NOW() conditionally
- CI: travis-ci: help Coverity to recognize abort()
- BUG/MINOR: Fix type passed of sizeof() for calloc()
- CLEANUP: Do not use a fixed type for 'sizeof' in 'calloc'
- CLEANUP: tree-wide: use VAR_ARRAY instead of [0] in various definitions
- BUILD: connection: fix build on clang after the VAR_ARRAY cleanup
- BUG/MINOR: ssl: verifyhost is case sensitive
- BUILD: makefile: change default value of CC from gcc to cc
- CI: travis-ci: split asan step out of running tests
- BUG/MINOR: server: report correct error message for invalid port on "socks4"
- BUG/MEDIUM: ssl: Don't call ssl_sock_io_cb() directly.
- BUG/MINOR: ssl/crt-list: crt-list could end without a \n
- BUG/MINOR: log-forward: fail on unknown keywords
- MEDIUM: log-forward: use "dgram-bind" instead of "bind" for the listener
- BUG/MEDIUM: log-forward: always quit on parsing errors
- MEDIUM: ssl: remove bundle support in crt-list and directories
- MEDIUM: ssl/cli: remove support for multi certificates bundle
- MINOR: ssl: crtlist_dup_ssl_conf() duplicates a ssl_bind_conf
- MINOR: ssl: crtlist_entry_dup() duplicates a crtlist_entry
- MEDIUM: ssl: emulates the multi-cert bundles in the crtlist
- MEDIUM: ssl: emulate multi-cert bundles loading in standard loading
- CLEANUP: ssl: remove test on "multi" variable in ckch functions
- CLEANUP: ssl/cli: remove test on 'multi' variable in CLI functions
- CLEANUP: ssl: remove utility functions for bundle
- DOC: explain bundle emulation in configuration.txt
- BUILD: fix build with openssl < 1.0.2 since bundle removal
- BUG/MINOR: log: gracefully handle the "udp@" address format for log servers
- BUG/MINOR: dns: gracefully handle the "udp@" address format for nameservers
- MINOR: listener: create a new struct "settings" in bind_conf
- MINOR: listener: move bind_proc and bind_thread to struct settings
- MINOR: listener: move the interface to the struct settings
- MINOR: listener: move the network namespace to the struct settings
- REORG: listener: create a new struct receiver
- REORG: listener: move the listening address to a struct receiver
- REORG: listener: move the receiving FD to struct receiver
- REORG: listener: move the listener's proto to the receiver
- MINOR: listener: make sock_find_compatible_fd() check the socket type
- REORG: listener: move the receiver part to a new file
- MINOR: receiver: link the receiver to its settings
- MINOR: receiver: link the receiver to its owner
- MINOR: listener: prefer to retrieve the socket's settings via the receiver
- MINOR: receiver: add a receiver-specific flag to indicate the socket is bound
- MINOR: listener: move the INHERITED flag down to the receiver
- MINOR: receiver: move the FOREIGN and V6ONLY options from listener to settings
- MINOR: sock: make sock_find_compatible_fd() only take a receiver
- MINOR: protocol: rename the ->bind field to ->listen
- MINOR: protocol: add a new ->bind() entry to bind the receiver
- MEDIUM: sock_inet: implement sock_inet_bind_receiver()
- MEDIUM: tcp: make use of sock_inet_bind_receiver()
- MEDIUM: udp: make use of sock_inet_bind_receiver()
- MEDIUM: sock_unix: implement sock_unix_bind_receiver()
- MEDIUM: uxst: make use of sock_unix_bind_receiver()
- MEDIUM: sockpair: implement sockpair_bind_receiver()
- MEDIUM: proto_sockpair: make use of sockpair_bind_receiver()
- MEDIUM: protocol: explicitly start the receiver before the listener
- MEDIUM: protocol: do not call proto->bind() anymore from bind_listener()
- MINOR: protocol: add a new proto_fam structure for protocol families
- MINOR: protocol: retrieve the family-specific fields from the family
- CLEANUP: protocol: remove family-specific fields from struct protocol
- MINOR: protocol: add a real family for existing FDs
- CLEANUP: tools: make str2sa_range() less awful for fd@ and sockpair@
- MINOR: tools: make str2sa_range() take more options than just resolve
- MINOR: tools: add several PA_O_PORT_* flags in str2sa_range() callers
- MEDIUM: tools: make str2sa_range() validate callers' port specifications
- MEDIUM: config: remove all checks for missing/invalid ports/ranges
- MINOR: tools: add several PA_O_* flags in str2sa_range() callers
- MINOR: listener: remove the inherited arg to create_listener()
- MINOR: tools: make str2sa_range() optionally return the fd
- MINOR: log: detect LOG_TARGET_FD from the fd and not from the syntax
- MEDIUM: tools: make str2sa_range() resolve pre-bound listeners
- MINOR: config: do not test an inherited socket again
- MEDIUM: tools: make str2sa_range() check for the sockpair's FD usability
- MINOR: tools: start to distinguish stream and dgram in str2sa_range()
- MEDIUM: tools: make str2sa_range() only report AF_CUST_UDP on listeners
- MINOR: tools: remove the central test for "udp" in str2sa_range()
- MINOR: cfgparse: add str2receiver() to parse dgram receivers
- MINOR: log-forward: use str2receiver() to parse the dgram-bind address
- MEDIUM: config: make str2listener() not accept datagram sockets anymore
- MINOR: listener: pass the chosen protocol to create_listeners()
- MINOR: tools: make str2sa_range() directly return the protocol
- MEDIUM: tools: make str2sa_range() check that the protocol has ->connect()
- MINOR: protocol: add the control layer type in the protocol struct
- MEDIUM: protocol: store the socket and control type in the protocol array
- MEDIUM: tools: make str2sa_range() use protocol_lookup()
- MEDIUM: proto_udp: replace last AF_CUST_UDP* with AF_INET*
- MINOR: tools: drop listener detection hack from str2sa_range()
- BUILD: sock_unix: add missing errno.h
- MINOR: sock_inet: report the errno string in binding errors
- MINOR: sock_unix: report the errno string in binding errors
- BUILD: sock_inet: include errno.h
- MINOR: h2/trace: also display the remaining frame length in traces
- BUG/MINOR: h2/trace: do not display "stream error" after a frame ACK
- BUG/MEDIUM: h2: report frame bits only for handled types
- BUG/MINOR: http-fetch: Don't set the sample type during the htx prefetch
- BUG/MINOR: Fix memory leaks cfg_parse_peers
- BUG/MINOR: config: Fix memory leak on config parse listen
- MINOR: backend: make the "whole" option of balance uri take only one bit
- MINOR: backend: add a new "path-only" option to "balance uri"
- REGTESTS: add a few load balancing tests
- BUG/MEDIUM: listeners: do not pause foreign listeners
- BUG/MINOR: listeners: properly close listener FDs
- BUILD: trace: include tools.h
If the TRACE option is used when compiling the haproxy source,
the following error occurs on debian 9.13:
src/calltrace.o: In function `make_line':
.../src/calltrace.c:204: undefined reference to `rdtsc'
src/calltrace.o: In function `calltrace':
.../src/calltrace.c:277: undefined reference to `rdtsc'
collect2: error: ld returned 1 exit status
Makefile:866: recipe for target 'haproxy' failed
The code dealing with zombie proxies in soft_stop() is bogus: it uses
close() instead of fd_delete(), leaving a live entry in the fdtab with
a dangling pointer to a freed memory location. The FD might be reassigned
for an outgoing connection for the time it takes the proxy to completely
stop, or could be dumped on the CLI's "show fd" command. In addition,
the listener's FD was not even reset, leaving doubts about whether or
not it will happen again in deinit().
And in deinit(), the loop in charge of closing zombie FDs is particularly
unsafe because it closes the fd then calls unbind_listener() then
delete_listener() hoping none of them will touch it again. Since it
requires some mental effort to figure out what's done there, let's correctly
reset the fd here as well and close it using fd_delete() to eliminate any
remaining doubts.
It's uncertain whether this should be backported. Zombie proxies are rare
and the situations capable of triggering such issues are not trivial to
set up. However it's easy to imagine how things could go wrong if backported
too far. Better to wait for a matching report, if any (this code has been
there since 1.8 without anybody noticing).
There's a nasty case with listeners that belong to foreign processes.
If a proxy is defined this way:
  global
      nbproc 2

  frontend f
      bind :1111 process 1
      bind :2222 process 2
and if stats expose-fd listeners is set, the listeners' FDs will not
be closed on the processes that don't use them. At this point it's not
a big deal, except that they're shared between processes and that a
"disable frontend f" issued on one process will pause all of them and
cause the other process to see accept() fail, turning its own listener
to state LI_LIMITED to try to leave it some time to recover. But it
will never recover, even after an enable.
The root cause of the issue is that the ZOMBIE state doesn't cover
this situation since it's only for a proxy being entirely bound to a
process.
What we do here to address this is that we refrain from pausing a
file descriptor that belongs to a foreign process in pause_listener().
This definitely solves the problem. A similar test is present in
resume_listener() and is the reason why the FD doesn't recover upon the
"enable" action by the way.
This ought to be backported to 1.8 where seamless reload was integrated.
The config above should be sufficient to validate that the fix works;
after a pair of "disable/enable frontend" no process will handle the
traffic to one of the ports anymore.
This adds "balance-rr" to test round robin, "balance-uri" to test the
default balance-uri method, and "balance-uri-path-only" which mixes H1
and H2 through "balance uri path-only" and verifies that they reach
the same server.
Note that for the latter, "proto h2" explicitly had to be placed on
the listening socket, otherwise it would time out. This may indicate an
issue in the H1->H2 upgrade, depending on how the H2 preface is sent.
Since we've fixed the way URIs are handled in 2.1, some users have started
to experience inconsistencies in "balance uri" between requests received
over H1 and the same ones received over H2. This is caused by the fact
that H1 rarely uses absolute URIs while H2 always uses them. Similar
issues were reported already around replace-uri etc, leading to "pathq"
recently being introduced, so this isn't new.
Here what this patch does is add a new option to "balance uri" to indicate
that the hashing should only start at the path and not cover the authority.
This makes H1 relative URIs and H2 absolute URIs hash equally again.
Some extra options could be added to normalize URIs by always hashing the
authority (or host) in front of them, which would make sure that both
absolute and relative requests provide the same hash. This is left for
later if needed.
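A minimal configuration sketch (names and addresses are only illustrative):

    backend be_app
        balance uri path-only
        server s1 192.0.2.10:80 check
        server s2 192.0.2.11:80 check

With this, an H1 request for "/img/logo.png" and an H2 request for
"https://example.com/img/logo.png" hash on the same key and reach the same
server.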
This memory leak happens if there are two or more defaults sections. When
the default proxy is reinitialized, the structure member containing the
config filename must be freed.
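For illustration, the leak is triggered by a configuration as simple as
(directives are only illustrative):

    defaults
        mode http

    defaults
        mode tcp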
Fix github issue #851.
Should be backported as far as 1.6.
When memory allocation fails in cfg_parse_peers or when an error occurs
while parsing a stick-table, the temporary table and its id must be freed.
This fixes github issue #854. It should be backported as far as 2.0.
A subtle bug was introduced by the commit a6d9879e6 ("BUG/MEDIUM: htx:
smp_prefetch_htx() must always validate the direction"), for the "method"
sample fetch only. The sample data type and the method id are always
overwritten because smp_prefetch_htx() function is called later in the
sample fetch evaluation. The bug is in the smp_prefetch_htx() function but
it is only visible for the "method" sample fetch, for an unknown method.
In fact, when smp_prefetch_htx() is called, the sample object is
altered. The data type is set to SMP_T_BOOL and, on success, the data value
is set to 1. Thus, if the caller has already set some info in the sample
object, it may be lost. AFAIK, there is no reason to do so. It is
inherited from the legacy HTTP code and I honestly don't know why it was
done this way. So, instead of fixing the "method" sample fetch to set useful
info after the call to the smp_prefetch_htx() function, I prefer not to
alter the sample object in smp_prefetch_htx().
This patch must be backported as far as 2.0. On the 2.0, only the HTX part
must be fixed.
As part of his GREASE experiments on Chromium, Bence Béky reported in
https://lists.w3.org/Archives/Public/ietf-http-wg/2020JulSep/0202.html
and https://bugs.chromium.org/p/chromium/issues/detail?id=1127060 that
a certain combination of frame type and frame flags was causing an error
on app.slack.com. It turns out that it's haproxy that is causing this
issue because the frame type is wrongly assumed to support padding, the
frame flags indicate padding is present, and the frame is too short for
this, resulting in an error.
The reason why only some frame types are affected is that the frame type
is used in a bit shift to match against a mask, and only the 5 lower bits
of the frame type are used to compute the frame bit.
If the resulting frame bit matches a DATA, HEADERS or PUSH_PROMISE frame
bit, then padding support is assumed and the test is enforced, resulting
in a PROTOCOL_ERROR or FRAME_SIZE_ERROR depending on the payload size.
We must never match any such bit for unsupported frame types so let's
add a check for this. This must be backported as far as 1.8.
Thanks to Cooper Bethea for providing enough context to help narrow the
issue down and to Bence Béky for creating a simple reproducer.
When sending a frame ACK, the parser state is not equal to H2_CS_FRAME_H
and we used to report it as an error, which is not true. In fact we should
only indicate when we skip remaining data.
This may be backported as far as 2.1.