A peer connection status must be considered valid only if an applet has been
instantiated for the connection to the peer. So, ->statuscode should be
considered as the last known peer connection status from the last connection
to this peer, if any. To reflect this, the "statuscode" field of the peer dump
is renamed to "last_statuscode".
This patch also adds an "active"/"inactive" field after the peer location type
("remote" or "local") to indicate whether or not an applet has been
instantiated for this peer connection.
Thank you to Emeric for having noticed this issue.
Must be backported to all versions >= 1.9.
This is one of the last issues I saw in this section while working on github
issue #872. It might be backported to all still supported versions.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Remove a variable declaration inside a for-loop. This was introduced by my
patch series implementing dynamic stats. It is not supported by older gcc
versions, notably in the FreeBSD environment of the CI.
Use the new stats module API to integrate the DNS counters into the standard
stats. This is done to avoid code duplication, keep the CLI-related code out
of the DNS code and use the full possibilities of the stats functions,
allowing DNS stats to be printed in CSV or JSON format.
Integrate the additional proxy stats on the HTML stats page. For each
module, a new column is displayed with the individual stats available as
a tooltip.
Add a boolean 'clearable' on the stats module structure. If set, it forces all
the counters to be reset on the 'clear counters' CLI command. If not, the
counters are reset only when 'clear counters all' is used.
This is executed on startup with the registered statistics modules. The
existing statistics have been merged into a list containing all statistics for
each domain. This is useful for printing all available statistics in a generic
way.
Allocate extra counters for all proxy/server/listener instances.
These counters are allocated with the counters from the stats modules
registered on startup.
Implement a small API to easily add extra counters inside a structure
instance. This will be used to implement dynamic statistics linked to every
type of object as needed.
The counters are stored in a dynamic array inside the relevant objects.
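As a rough sketch of the idea (names are hypothetical, not the actual HAProxy
API): each object carries a single heap block of extra counters, and every
module accesses its own counters through the offset it received when the block
was extended for it:

    #include <stdlib.h>
    #include <string.h>

    struct ex_extra_counters {
        char  *data;    /* single heap block holding all modules' counters */
        size_t size;    /* total size of the block */
    };

    /* Extend the block for one more module and return that module's offset,
     * or (size_t)-1 on allocation failure. The initial values are copied in.
     */
    static size_t ex_counters_add(struct ex_extra_counters *ctrs,
                                  const void *init, size_t len)
    {
        size_t off = ctrs->size;
        char *blk = realloc(ctrs->data, ctrs->size + len);

        if (!blk)
            return (size_t)-1;
        memcpy(blk + off, init, len);
        ctrs->data = blk;
        ctrs->size += len;
        return off;
    }

    /* a module then reads its counters through its stored offset */
    #define EX_COUNTERS_GET(ctrs, off, type) ((type *)((ctrs)->data + (off)))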
A stats module can be registered to quickly add new statistics to HAProxy. It
must be attached to one of the available stats domains. The registration must
be done using an INITCALL at the STG_REGISTER stage.
The stats module has a name which should be unique for each new module in a
domain. It also contains a list of statistics with their name/description and
a pointer to a function used to fill the stats from the module counters. The
module also provides the initial counter values used for the automatically
allocated counters. The offsets of these counters are stored in the module
structure.
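As an illustration only, such a registration could roughly take the shape
below; the field names are stand-ins chosen for this sketch, not the exact
HAProxy structure:

    #include <stddef.h>

    struct ex_stat_desc {
        const char *name;
        const char *desc;
    };

    struct ex_stats_module {
        const char *name;                  /* unique within its domain */
        int domain;                        /* stats domain it attaches to */
        const struct ex_stat_desc *stats;  /* name/description of each field */
        size_t stats_count;
        const void *counters_init;         /* initial counter values */
        size_t counters_size;
        size_t counters_off;               /* offset assigned at allocation time */
        void (*fill_stats)(const void *counters, void *out); /* counters -> fields */
    };

Such a structure would be added to the per-domain module list from an INITCALL
run at the STG_REGISTER stage.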
Use the character '-' to mark the end of the static statistics on the proxy
domain. After this marker, the order of the fields is not guaranteed and
should be parsed with care.
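For illustration, a consumer of the CSV output could handle the marker as
follows (hypothetical helper, not HAProxy code): columns seen before "-" may
be addressed by position, columns after it only by name:

    #include <stdio.h>
    #include <string.h>

    static void ex_handle_header(char *const names[], int nb)
    {
        int fixed = 1;

        for (int i = 0; i < nb; i++) {
            if (strcmp(names[i], "-") == 0) {
                fixed = 0;          /* end of the static part */
                continue;
            }
            printf("%s: matched by %s\n", names[i],
                   fixed ? "position" : "name");
        }
    }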
This flag can be used to determine for which types of proxy objects the
statistics are relevant. It will be useful when adding dynamic statistics.
Currently, this flag is not used.
The domain option will be used to have statistics attached to objects other
than proxies/listeners/servers. At the moment, only the PROXY domain is
available.
Add a 'domain' argument to the 'show stats' CLI command to specify the
domain. Only 'domain proxy' is available for now. If not specified, proxy will
be considered the default domain.
For HTML output, only proxy statistics will be displayed.
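For example, assuming the usual stats socket access (illustrative invocation;
check the management guide for the exact keyword spelling):

    echo "show stat domain proxy" | socat stdio /var/run/haproxy.sock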
Debug messages emitted in Lua using core.Debug() or core.log() are now only
displayed on stderr if HAProxy is started in debug mode (-d parameter on the
command line). There is no change for other message levels.
This patch should fix issue #879. It may be backported to all stable
versions.
Create a dedicated function to loop on proxies and dump them. This will be
clearer when other objects are dumped as well.
This patch is needed to extend stat support to components other than proxy
objects.
Create a dedicated function to dump a proxy as JSON content. This patch will
be needed when other types of objects become available for JSON dump.
This patch is needed to extend stat support to components other than proxy
objects.
Use an opaque pointer to store the proxy instance. Regroup server/listener
into a single opaque pointer. This has the benefit of making the structure
more flexible to support statistics on other types of objects in the future.
This patch is needed to extend stat support to components other than proxy
objects.
The prometheus module has been adapted for these changes.
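As a minimal sketch of the intent (hypothetical names, not the real HAProxy
types), the dump context only keeps generic pointers so that new object types
can be added later without changing the structure:

    struct ex_stats_ctx {
        void *obj1;   /* main object, e.g. a proxy */
        void *obj2;   /* secondary object, e.g. a server or a listener */
    };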
Make the stats size a parameter of the CSV/JSON dump functions. This is needed
for a future patch which provides dynamic stats. For now, the static value
ST_F_TOTAL_FIELDS is provided.
Remove the unused parameter px from stats_dump_one_line.
This patch is needed to extend stat support to components other than proxy
objects.
Un-mark stats_dump_one_line and stats_putchk as static and export them in the
header file. These functions will be reusable by other components to print
their statistics.
This patch is needed to extend stat support to components other than proxy
objects.
There is some confusion between the HAProxy bundle and OpenSSL. OpenSSL does
not have "bundles" but multiple certificates in the same store.
Fix a comment in the crt-list code.
A crash reported in github issue #880 looks impossible unless
pendconn_cond_unlink() occasionally sees a null leaf_p when attempting
to remove an entry, which seems to be confirmed by the reporter. What
seems to be happening is that depending on compiler optimizations,
this pointer can appear as null while pointers are moved if one of
the node's parents is removed from or inserted into the tree. There is no
explicit nulling of the pointer during these operations, but those pointers
are rewritten in multiple steps and nothing prevents this situation from
happening, and there are no particular barriers nor atomic ops around this.
This test was used to avoid unnecessary locking, for already deleted
entries, but looking at the code it appears that pendconn_free() already
resets s->pend_pos that's used as <p> there, and that the other call
reasons are after an error where the connection will be dropped as
well. So we don't save anything by doing this test, and it only makes the
code unsafe. The older code used to check for list emptiness there and
not inside pendconn_unlink(), which explains why the code has stayed
there. Let's just remove this now.
Thanks to @jaroslawr for reporting this issue in great details and for
testing the proposed fix.
This should be backported to 1.8, where the test on LIST_ISEMPTY should be
moved to pendconn_unlink() instead (inside the lock, just like in 2.0+).
Update the documentation with the new bundle behavior, which does not use the
same OpenSSL certificate store anymore but loads the PEMs separately as if
multiple "crt" lines were specified.
It should fix issue #872.
Since the health-check refactoring in 2.2, checks through a SOCKS4 proxy are
broken. To fix this bug, the CO_FL_SOCKS4 flag must be set on the connection
before calling the connect() callback function, because this flag is checked
to use the right destination address. The same is done for the CO_FL_SEND_PROXY
flag for consistency.
A reg-test has been added to test the "check-via-socks4" directive.
This patch must be backported to 2.2.
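The ordering restored by the fix can be sketched as follows (simplified, with
stand-in flag values and types rather than the real HAProxy ones): the flags
are set on the connection before the connect callback runs, since that
callback selects the destination address according to them:

    #include <stdint.h>

    #define EX_FL_SOCKS4     0x01
    #define EX_FL_SEND_PROXY 0x02

    struct ex_conn {
        uint32_t flags;
        int (*connect)(struct ex_conn *conn);
    };

    static int ex_start_check_connect(struct ex_conn *conn,
                                      int via_socks4, int send_proxy)
    {
        if (via_socks4)
            conn->flags |= EX_FL_SOCKS4;      /* must be set before connect() */
        if (send_proxy)
            conn->flags |= EX_FL_SEND_PROXY;  /* same, for consistency */
        return conn->connect(conn);
    }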
The warning is only emitted for HTTP frontends. The idea is to encourage the
use of "tcp-request session" rules to track counters that do not depend on the
request content. The documentation has been updated accordingly.
The warning is important because, since the multiplexers were added in the
processing chain, the HTTP parsing is performed at a lower level. Thus parsing
errors are detected in the multiplexers, before the stream creation. In HTTP/2,
the error is reported by the multiplexer itself and the stream is never
created. This difference has a certain number of consequences, one of which is
that HTTP request counting in stick tables only works for valid H2 requests,
and HTTP error tracking in stick tables never considers invalid H2 requests
but only invalid H1 ones. The aim is to do the same with the mux-h1. This
change will not be done for 2.3 but for 2.4. In the end, H1 and H2 parsing
errors will be caught by the multiplexers, at the session level. Thus, tracking
counters at the content level should be reserved for rules using a key based
on the request content or those using ACLs based on the request content.
To be clear, a warning will be emitted for the following rules:
tcp-request content track-sc0 src
tcp-request content track-sc0 src if ! { src 10.0.0.0/24 }
tcp-request content track-sc0 src if { ssl_fc }
But not for the following ones:
tcp-request content track-sc0 req.hdr(host)
tcp-request content track-sc0 src if { req.hdr(host) -m found }
Because the parsing of HTTP messages is now performed in the HTTP
multiplexers, the content is immediately available when "tcp-request content"
rules are evaluated for an HTTP frontend. So, it is a good idea to make the
documentation explicit on this point. In addition, because the parsing is
already performed in all cases, there is no reason to still use "tcp-request
content" rules based on L7 matching, although it is still valid. The
recommended way is to use "http-request" rules instead, for example as shown
below. Again, it is a good idea to update the documentation on this point.
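For example, an illustrative rule such as:

    tcp-request content track-sc0 req.hdr(host)

can simply be written as:

    http-request track-sc0 req.hdr(host)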
We use chunk_initstr() to store the program name as the default log-tag.
If we use the log-tag directive in the config file, this chunk will be
destroyed and replaced. chunk_initstr() sets the chunk size to 0 so we
will free the chunk itself, but not its content.
This happens for a global section and also for a proxy.
We fix this by using chunk_initlen() instead of chunk_initstr().
We also check that the memory allocation was successful, otherwise we quit.
This fixes github issue #850.
It can be backported as far as 1.9, with minor adjustments to includes.
This condition is never true as we either break or goto error, so those two
lines can be removed in the current state of the code.
This fixes github issue #862.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Similar to the warning during the parsing of the regular configuration file
that was added in 2fd5bdb439, this patch adds a warning to the parsing of a
crt-list if the file does not end in a newline (and thus might have been
truncated).
The logic was essentially just copied over. It might be good to refactor
this in the future, allowing easy re-use within all line-based config
parsers.
see https://github.com/haproxy/haproxy/issues/860#issuecomment-693422936
see 0354b658f0
This should be backported as a warning to 2.2.
Solaris 9 (released 2002) added support for closefrom().
I bumped the version in the comment to 10 as the default feature
flags already have event ports enabled, which were introduced in
Solaris 10.
Previous commit fa41cb679 ("MINOR: tools: support for word expansion
of environment in parse_line") introduced two new isspace() calls on a char
and broke the build on systems using an array disguised as a macro instead of
a function (like Cygwin). Just use the usual cast.
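The usual cast is the classic portable form, for example:

    #include <ctype.h>

    static inline int ex_is_space(char c)
    {
        /* isspace() expects EOF or an unsigned char value */
        return isspace((unsigned char)c);
    }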
Allow the syntax "${...[*]}" to expand an environment variable containing
several values separated by spaces into individual arguments. A new flag
PARSE_OPT_WORD_EXPAND has been added to toggle this feature on parse_line()
invocation. In case of invalid syntax, a new error PARSE_ERR_WRONG_EXPAND will
be triggered.
This feature was requested in github issue #165.
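For example (illustrative configuration, assuming the environment contains
OPTIONS="check inter 1000"):

    server srv1 192.168.0.1:80 "${OPTIONS[*]}"

With the word expansion enabled, the quoted argument is split into the three
words "check", "inter" and "1000" instead of being passed as a single
argument.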
Sometimes it's desirable to append local version naming to packages,
and currently it can only be done using SUBVERS which is already set
by default to the git commit ID and patch count since last known tag,
making the addition a bit complicated.
Let's just add a new EXTRAVERSION field that is empty by default, and
systematically appended verbatim to the version string everywhere. This
way it becomes trivial to append some local strings, such as:
make TARGET=foo EXTRAVERSION=+$(quilt applied|wc -l)
-> 2.3-dev5-5018aa-15+1
or:
make TARGET=foo EXTRAVERSION=-$(date +%F)
-> 2.3-dev5-5018aa-15-20200110
Let's be careful not to add double quotes (used as the string delimiter)
nor spaces (which can confuse version parsers on the output). The extra
version is also used to name a tarball. It's always pre-initialized to an
empty string so that it's not accidentally inherited from the environment.
It's not reported in "make version" to avoid fooling tools (it would be
pointless anyway).
As a side effect it also becomes possible to force VERSION and SUBVERS
to an empty string and use EXTRAVERSION alone to force a specific version
(could possibly be useful when bisecting from patch queues outside of Git
for example).
For some algos (roundrobin, static-rr, leastconn, first) we know that
if there is any request queued in the backend, it's because a previous
attempt failed at finding a suitable server after trying all of them.
This alone is sufficient to decide that the next request will skip the
LB algo and directly reach the backend's queue. Doing this alone avoids
an O(N) lookup when load-balancing on a saturated farm of N servers,
which starts to be very expensive for hundreds of servers, especially
under the lbprm lock. This change alone has increased the request rate
from 110k to 148k RPS for 200 saturated servers on 8 threads, and
fwlc_reposition_srv() doesn't show up anymore in perf top. See github
issue #880 for more context.
It could have been the same for random, except that random is performed
using a consistent hash and it only considers a small set of servers (2
by default), so it may result in queueing at the backend despite having
some free slots on unknown servers. It's no big deal though since random()
only performs two attempts by default.
For hashing algorithms this is pointless since we don't queue at the
backend, except when there's no hash key found, which is the least of
our concerns here.
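A simplified sketch of the shortcut (hypothetical types and names, only to
illustrate the decision described above):

    #include <stddef.h>

    enum ex_algo { EX_RR, EX_STATIC_RR, EX_LEASTCONN, EX_FIRST,
                   EX_RANDOM, EX_HASH };

    struct ex_backend {
        enum ex_algo algo;
        size_t nb_queued;   /* requests currently queued on the backend */
    };

    static int ex_skip_lb_lookup(const struct ex_backend *be)
    {
        switch (be->algo) {
        case EX_RR: case EX_STATIC_RR: case EX_LEASTCONN: case EX_FIRST:
            /* a previous full scan already failed, so new requests can
             * join the backend queue directly and skip the O(N) lookup
             */
            return be->nb_queued > 0;
        default:
            /* random/hash keep their normal path as explained above */
            return 0;
        }
    }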
If random() returns a server whose maxconn is reached or whose queue is in
use, instead of adding the request to the server's queue, it is better to add
it to the backend queue so that it can be served by any server (hence the
fastest one).
We used to set it to ${h1_px_addr} but it randomly fails on certain
hosts (FreeBSD and OSX) where the address is surprisingly set to "::1"
while the Host field contains 127.0.0.1 (hence two different address
families). While this is likely a minor issue in vtest, we don't need
to depend on this and can easily hard-code 127.0.0.1 which is already
used in other tests.
We should not exit on error out of the crtlist_parse_line() function. The
cfgerr error must be checked with the ERR_CODE mask.
Must be backported to 2.2.
Especially when starting to use the `new ssl cert` runtime API, it might
become a bit confusing for users to mix bundles and single certs, especially
when it comes to using the commit command:
e.g.:
- start the process with `crt` loading a bundle
- use `set ssl cert my_cert.pem.ecdsa`: the API detects it as a replacement
of a bundle.
- `commit` has to be done on the bundle: `commit ssl cert my_cert.pem`
however:
- add a new cert: `new ssl cert my_cert.pem.rsa`: added as a single
certificate
- `commit` has to be done on the certificate: `commit ssl cert
my_cert.pem.rsa`
This should resolve github issue #872.
This should probably be backported to >= 2.2 in order to encourage people to
move away from bundle certificate loading.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
`tcpcheck_agent_expect_reply` expects "fail", not "failed".
This should fix github issue #876
This can be backported to all maintained versions (i.e. >= 1.6) as of
today.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Update the OpenBSD target features being enabled.
I updated the list of features after noticing
"BUILD: makefile: disable threads by default on OpenBSD".
The Makefile using gcc(1) by default resulted in using our legacy and
obsolete compiler (GCC 4.2.1) instead of the proper system compiler (Clang),
which does support TLS. With "BUILD: makefile: change default value of CC
from gcc to cc" that is resolved.