The old message "No more space left available" was redundant in its "left
available" part. This commit rephrases that sentence and makes it more
explicit that we are talking about memory.
This patch must be backported as far as 2.0.
As per https://docs.python.org/3/c-api/none.html, None's reference count
must be incremented before it is returned, to prevent None from being
deallocated.
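For reference, a minimal sketch of the documented pattern (the function
name below is illustrative, not the one touched by this patch):
  #include <Python.h>

  static PyObject *spoa_noop(PyObject *self, PyObject *args)
  {
      Py_INCREF(Py_None);   /* take a reference before handing None back */
      return Py_None;       /* the caller now owns that reference */
  }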
This patch must be backported as far as 2.0.
This flag is now unused. It was used in the REQ_WAIT_HTTP analyser, when a
stream was waiting for a request, to set the keep-alive timeout or to avoid
sending HTTP errors to the client.
This tool monitors the HAProxy stable branches and calculates a proposed
release date for the next minor release based on the bug fixes that are in
the queue.
Print only:
./release-estimator.py --print
Send email:
./release-estimator.py --send-mail --from-email from@domain.local --to-email to@domain.local
See contrib/release-estimator/README.md for details.
This performs a connect(AF_UNSPEC) over an existing connection. This is
mainly for compatibility testing. At this point it only seems to work on
Linux for TCP sockets (both listening and established), while SO_LINGER
successfully resets established connections on FreeBSD and AIX.
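For reference, the call roughly looks like the sketch below (illustrative
only, not the exact code added here):
  #include <string.h>
  #include <sys/socket.h>

  /* dissolve the association of an already connected or listening TCP socket */
  static int sock_reset(int fd)
  {
      struct sockaddr sa;

      memset(&sa, 0, sizeof(sa));
      sa.sa_family = AF_UNSPEC;
      return connect(fd, &sa, sizeof(sa));
  }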
The remaining proxy states were only used to distinguish an enabled
proxy from a disabled one. Due to the initialization order, both
PR_STNEW and PR_STREADY were equivalent after startup, and they
would only differ from PR_STSTOPPED when the proxy is disabled or
shutdown (which is effectively another way to disable it).
Now we just have a "disabled" field which allows us to distinguish them.
It's becoming obvious that start_proxies() is now only used to print a
greeting message, which we'd rather get rid of. zombify_proxy() and
stop_proxy() should probably be merged once their differences move to the
right place.
Since v1.4 or so, it has been almost impossible to set this state. The
only exception is using the CLI to lower a frontend's maxconn setting
below its current usage. This case makes no sense, and for other cases it
doesn't make sense either because "full" is a vague concept when only
certain listeners are full and not all. Let's just remove this unused
state and make it clear that it's not reported. The "ready" or "open"
states will continue to be reported without being misleading, as they
will be opposed to "stop".
Use an opaque pointer to store the proxy instance. Regroup server/listener
into a single opaque pointer. This has the benefit of making the structure
easier to evolve to support statistics on other types of objects in the
future.
This patch is needed to extend stats support to components other than
proxy objects.
The prometheus module has been adapted for these changes.
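Roughly, the idea is the following (the field names are illustrative, not
necessarily the ones used in the patch):
  struct promex_ctx_sketch {
      void *obj1;   /* e.g. the current proxy */
      void *obj2;   /* e.g. the current server or listener, depending on the scope */
  };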
When we encounter a failure, all previously borrowed references should
be freed, especially if the program is not failing immediately.
This patch must be backported as far as 2.0.
The IP address references passed as arguments to ps_python are not freed
after they have been used, leading to a small chance of a memory leak if a
lot of IP addresses are passed around.
This patch must be backported as far as 2.0.
The result of the spoa evaluation of the user-provided Python code is
never passed back to the main spoa process nor freed.
The same goes for the keyword list passed as argument.
As a result, these elements are never freed by Python because their
reference count never goes down.
https://docs.python.org/3/extending/extending.html#reference-counting-in-python
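A minimal sketch of the expected pattern (the names below are illustrative,
not the actual ps_python code):
  PyObject *result = PyObject_CallObject(handler, args);  /* returns a new reference */
  if (result) {
      /* ... copy whatever the SPOA needs out of the result ... */
      Py_DECREF(result);   /* drop our reference so Python can free it */
  }
  Py_DECREF(args);         /* same for the argument list built for the call */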
This patch must be backported as far as 2.0.
From https://www.python.org/dev/peps/pep-0353/
"A new type Py_ssize_t is introduced, which has the same size as the
compiler's size_t type, but is signed. It will be a typedef for ssize_t
where available."
For integer types, the "z" length modifier causes printf to expect a
size_t-sized integer argument.
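In other words, a Py_ssize_t value can be passed to printf with the "%zd"
conversion; a minimal fragment (the variable names are illustrative):
  Py_ssize_t len = PyList_Size(list);   /* PyList_Size() returns a Py_ssize_t */
  printf("%zd entries\n", len);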
This should fix github issue #702
This should be backported to >= v2.2
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
A workaround for some difficulties encountered to anticipate end of
messages was addressed by commit 810df0614 ("MEDIUM: htx: Add a flag on
a HTX message when no more data are expected"), but there were 3 issues
in it (with minor impact):
- the flag was mistakenly set before an EOH in Lua, which would only
cause incomplete packets to be emitted for now but could cause
truncated responses in the future. It's not needed to add it on
the next EOM block as http_forward_proxy_resp() already does it.
- one was still missing in hlua_applet_http_fct(), possibly causing
delays on Lua services
- one was missing in the Prometheus exporter.
All this simply shows that this mechanism is still quite fragile and
not trivial to use, especially in order to deal with the impossibility
of appending the EOM, so we'll need to improve the solution in the future
and future backports should not be completely ruled out.
This fix must be backported where the patch above is backported,
typically 2.1 and later as it was required for a set of fixes.
The following metrics are now exported by the Prometheus exporter to reflect
recent changes in HAProxy:
* haproxy_process_failed_resolutions
* haproxy_process_bytes_out_total
* haproxy_process_spliced_bytes_out_total
* haproxy_process_bytes_out_rate
and
* haproxy_server_unsafe_idle_connections_current
* haproxy_server_safe_idle_connections_current
* haproxy_server_used_connections_current
* haproxy_server_need_connections_current
The dummy function takes care of doing a bit of work using a malloc()
to avoid returning a constant but it doesn't free the tested pointer,
which coverity noticed in issue #741. Let's free it before testing it
for the return value.
This may be backported but is not important since this code is only
present to allow building the device detection code, not to actually
run it.
Change the systemd service file to wait for the network to be completely
online. This solves two problems:
If haproxy is configured to bind to IP address(es) that are not yet
assigned, haproxy would previously fail. The workaround is to use
"option transparent".
If haproxy is configured to use a resolver to resolve servers via DNS,
haproxy would previously fail because the network was not fully online
yet. This is the most compelling reason for this patch.
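This is typically achieved with the following unit directives (shown as an
illustration of the intent, not as the exact diff):
  [Unit]
  Wants=network-online.target
  After=network-online.target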
Signed-off-by: Ryan O'Hara <rohara@redhat.com>
Acked-by: Lukas Tribus <lukas@ltri.eu>
Apparently Cygwin requires sys/types.h before netinet/tcp.h but doesn't
include it by itself, as shown here:
https://github.com/haproxy/haproxy/actions/runs/131943890
This patch makes sure it's always included where needed, namely in
server.c and the SPOA example.
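That is, the affected files now include the headers in this order:
  #include <sys/types.h>    /* required by Cygwin before netinet/tcp.h */
  #include <netinet/tcp.h>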
This patch fixes all the leftovers from the include cleanup campaign. There
were not that many (~400 entries in ~150 files) but it was definitely worth
doing it as it revealed a few duplicates.
Most of the files dealing with error reports have to include log.h in order
to access ha_alert(), ha_warning() etc. But while these functions don't
depend on anything, log.h depends on a lot of stuff because it deals with
log-formats and samples. As a result it's impossible to avoid dragging in
long dependencies when using ha_warning() or qfprintf().
This patch moves these low-level functions to errors.h, which already
defines the error codes used at the same places. About half of the users
of log.h could be adjusted, sometimes revealing other issues such as
missing tools.h. Interestingly the total preprocessed size shrunk by
4%.
There's no point splitting the file in two since only cfgparse uses the
types defined there. A few call places were updated and cleaned up. All
of them were in C files which register keywords.
There is nothing left in common/ now so this directory must not be used
anymore.
This one was not easy because it carried many includes with it, which
other files would automatically find. At least global.h, arg.h
and tools.h were identified. 93 total locations were identified, 8
additional includes had to be added.
In the rare files where it was possible to finalize the sorting of
includes by adjusting only one or two extra lines, it was done. But
all files would need to be rechecked and cleaned up now.
It was the last set of files in types/ and proto/ and these directories
must not be reused anymore.
extern struct dict server_name_dict was moved from the type file to the
main file. A handful of inlined functions were moved to the bottom of
the file. Call places were updated to use server-t.h when relevant, or
to simply drop the entry when not needed.
The files remained mostly unchanged since they were OK. However, half of
the users didn't need to include them, and about as many actually needed
them and used to find functions like srv_currently_usable() through
a long chain that broke when moving the file.
This one is particularly difficult to split because it provides all the
functions used to manipulate a proxy state and to retrieve names or IDs
for error reporting, and as such, it was included in 73 files (down to
68 after cleanup). It would deserve a small cleanup though the cut points
are not obvious at the moment given the number of structs involved in
the struct proxy itself.
The current state of the logging is a real mess. The main problem is
that almost all files include log.h just in order to have access to
the alert/warning functions like ha_alert() etc, and don't care about
logs. But log.h also deals with real logging as well as log-format and
depends on stream.h and various other things. As such it forces a few
heavy files like stream.h to be loaded early and to hide missing
dependencies depending on where it's loaded. Among the missing ones is
syslog.h which was often automatically included resulting in no less
than 3 users missing it.
Among 76 users, only 5 could be removed, and probably 70 don't need the
full set of dependencies.
A good approach would consist in splitting that file in 3 parts:
- one for error output ("errors" ?),
- one for log_format processing,
- and one for actual logging.
It was moved without any change, however many callers didn't need it at
all. This was a consequence of the split of proto_http.c into several
parts that resulted in many locations to still reference it.
Just some minor reordering, and the usual cleanup of call places for
those which didn't need it. We don't include the whole tools.h into
stats-t anymore but just tools-t.h.
The type file was slightly tidied. The cli-specific APPCTX_CLI_ST1_* flag
definitions were moved to cli.h. The type file was adjusted to include
buf-t.h and not the huge buf.h. A few call places were fixed because they
did not need this include.
The type file is becoming a mess: half of it is for the proxy protocol,
another good part describes conn_streams and mux ops, and it would deserve
being split again. At least it was reordered so that elements are easier
to find, with the PP-stuff left at the end. The MAX_SEND_FD macro was moved
to compat.h as it's said to be the value for Linux.
The TASK_IS_TASKLET() macro was moved to the proto file instead of the
type one. The proto part was a bit reordered to remove a number of ugly
forward declarations of static inline functions. About ten C and H
files had their dependency dropped since they were not using anything
from task.h.
global.h was one of the messiest files, it has accumulated tons of
implicit dependencies and declares many globals that make almost all
other file include it. It managed to silence a dependency loop between
server.h and proxy.h by being well placed to pre-define the required
structs, forcing struct proxy and struct server to be forward-declared
in a significant number of files.
It was split in two: one part with the global struct definition and the
few macros and flags, and the rest containing the function prototypes.
The UNIX_MAX_PATH definition was moved to compat.h.
This one is particularly tricky to move because everyone uses it
and it depends on a lot of other types. For example it cannot include
arg-t.h and must absolutely only rely on forward declarations to avoid
dependency loops between vars -> sample_data -> arg. In order to address
this one, it would be nice to split the sample_data part out of sample.h.
List.h was missing for LIST_ADDQ(). A few unneeded includes of action.h
were removed from certain files.
This one still relies on applet.h and stick-table.h.
A few includes had to be added, namely list-t.h in the type file and
types/proxy.h in the proto file. actions.h was including http-htx.h
but didn't need it so it was dropped.
And also rename standard.c to tools.c. The original split between
tools.h and standard.h dates from version 1.3-dev and was mostly an
accident. This patch moves the files back to what they were expected
to be, and takes care of not changing anything else. However this
time tools.h was split between functions and types, because it contains
a small number of commonly used macros and structures (e.g. name_desc)
which in turn cause the massive list of includes of tools.h to conflict
with the callers.
They remain the ugliest files of the whole project and definitely need
to be cleaned and split apart. A few types are defined there only for
functions provided there, and some parts are even OS-specific and should
move somewhere else, such as the symbol resolution code.
The various hpack files are self-contained, but hpack-tbl was one of
those showing difficulties when pools were added because that began
to add quite some dependencies. Now when built in standalone mode,
it still uses the bare minimum pool definitions and doesn't require
knowing the prototypes anymore when only the structures are needed.
Thus the files were moved verbatim except for hpack-tbl which was
split between types and prototypes.
Most of the file was a large set of HTX element manipulation functions
and a few types, so splitting them made it possible to further reduce
dependencies and shrink the build time. Doing so revealed that a few files (h2.c,
mux_pt.c) needed haproxy/buf.h and were previously getting it through
htx.h. They were fixed.
So the enums and structs were placed into http-t.h and the functions
into http.h. This revealed that several files were depending on http.h
but not including it, as it was silently inherited via other files.
The pretty confusing "buffer.h" was in fact not the place to look for
the definition of "struct buffer" but the one responsible for dynamic
buffer allocation. As such it defines the struct buffer_wait and the
few functions to allocate a buffer or wait for one.
This patch moves it, renaming it to dynbuf.h. The type definition was
moved to its own file since it's included in a number of other structs.
Doing this cleanup revealed that a significant number of files used to
rely on this one to inherit struct buffer through it but didn't need
anything from this file at all.
Now the file is ready to be stored into its final destination. A few
minor reorderings were performed to keep the file properly organized,
making the various sections more visible (cache & lockless).
In addition and to stay consistent, memory.c was renamed to pool.c.
This one is included almost everywhere and used to rely on a few other
.h that are not needed (unistd, stdlib, standard.h). It could possibly
make sense to split it into multiple parts to distinguish operations
performed on timers and the internal time accounting, but at this point
it does not appear to be very important.
Half of the users of this include only need the type definitions and
not the manipulation macros nor the inline functions. Moving the various
types into mini-clist-t.h makes the files cleaner. The other one had all
its includes grouped at the top. A few files continued to reference it
without using it and were cleaned.
In addition, it was about time to rename that file: it's not "mini"
anymore and contains a bit more than just circular lists.
Fortunately that file wasn't made dependent upon haproxy since it was
integrated; better to isolate it before it's too late. Its dependency on
api.h was the result of the change from config.h, which in turn wasn't
correct. It was changed back to stddef.h for size_t and sys/types.h for
ssize_t. The recently added reference to MAX() was changed as it was
placed only to avoid a zero length in the non-free-standing version and
was causing a build warning in the hpack encoder.
All files that were including one of the following include files have
been updated to only include haproxy/api.h or haproxy/api-t.h once instead:
- common/config.h
- common/compat.h
- common/compiler.h
- common/defaults.h
- common/initcall.h
- common/tools.h
The choice is simple: if the file only requires type definitions, it includes
api-t.h, otherwise it includes the full api.h.
In addition, in these files, explicit includes for inttypes.h and limits.h
were dropped since these are now covered by api.h and api-t.h.
No other change was performed, given that this patch is large and
affects 201 files. At least one (tools.h) was already freestanding and
didn't get the new one added.
The only leftovers were the unused compiler.h file and the LICENSE file
which is already mentioned in each and every ebtree file header.
A few build paths were updated in the contrib/ directory not to mention
this directory anymore, and all its occurrences were dropped from the
main makefile. From now on no other include path but include/ will be
needed anymore to build any file.
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
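Schematically, the added guard looks like this (the guard macro and typedef
are shown for illustration only; see the actual headers for the exact form):
  #ifndef _EB32SCTREE_H
  typedef unsigned int u32;
  #endif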
No further cleanup was done yet in order to keep changes minimal.
As part of the include files cleanup, we're going to kill the ebtree
directory. For this we need to host its C files in a different location
and src/ is the right one.
When the metrics are dumped, in the main function promex_dump_metrics(), the
appctx flags are set before entering a new scope, among other things to know
which metric names and descriptions to use. But those flags are not restored
when the dump is interrupted because of a full output buffer. If this happens
after the dump of global metrics, it may only lead to extra #TYPE and #HELP
lines. But if this happens during the dump of global metrics, the following
dumps of frontend, backend and server metrics use the names and descriptions
of the global ones with mismatched indexes. This first leads to nonexistent
metric names, for instance "haproxy_frontend_nbproc", but also to
out-of-bounds accesses to the name and description arrays because there are
more stats fields than info fields.
It is easy to reproduce the bug using small buffers, setting tune.bufsize to
8192 for instance.
This patch should fix issue #666. It must be backported as far as 2.0.
The spoa server fails to build when python3.8 is not available. If
"python3-config --embed" fails, the output of the command is registered in
check_python_config. However, when it is later used to define
PYTHON_DEFAULT_INC and PYTHON_DEFAULT_LIB, its content does not match
and we fall back to python2.7.
Content of check_python_config when building with python3.6:
Usage: bin/python3-config --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir python3
As we are only looking at the return code, this commit ensures we always
ignore the output of the python3-config or hash commands.
As reported in oss-fuzz issue 22388, the contrib/hpack code doesn't
build anymore for now. Thanks to the previous patch we can now just define
HPACK_STANDALONE in the few files which include hpack-tbl.h and we can
declare a dummy pool to hold the DHT size. This is enough to make the
code work again outside of the haproxy process. It was successfully
tested using:
$ echo 82 84 87 90 | ./decode
The TX_CLDENY, TX_CLALLOW, TX_SVDENY and TX_SVALLOW flags are unused. Only
TX_CLTARPIT is used, to make the difference between an HTTP deny rule and an
HTTP tarpit rule. So these unused flags are removed.
Expose the native cum_req metric for a server: so far it was calculated as a
sum of all responses. Rename it from "Cum. HTTP Responses" to "Cum. HTTP
Requests" to be consistent with Frontend and Backend.
The wireshark dissector could only be built within wireshark, which
means maintaining a wireshark binary just for this dissector. It was not
really convenient to update wireshark because of this.
This patch converts the dissector into a .so plugin which is built with
the .h found in distributions instead of the whole wireshark sources.
The url_decode() function used by the url_dec converter and a few other
call points is ambiguous on its processing of the '+' character which
itself isn't stable in the spec. This one belongs to the reserved
characters for the query string but not for the path nor the scheme,
in which it must be left as-is. It's only in argument strings that
follow the application/x-www-form-urlencoded encoding that it must be
turned into a space, that is, in query strings and POST arguments.
The problem is that the function is used to process full URLs and
paths in various configs, and to process query strings from the stats
page for example.
This patch updates the function to differentiate the situation where
it's parsing a path and a query string. A new argument indicates if a
query string should be assumed, otherwise it's only assumed after seeing
a question mark.
The various locations in the code making use of this function were
updated to take care of this (most call places were using it to decode
POST arguments).
The url_dec converter is usually called on path or url samples, so it
needs to remain compatible with this and will default to parsing a path
and turning the '+' to a space only after a question mark. However in
situations where it would explicitly be extracted from a POST or a
query string, it now becomes possible to enforce the decoding by passing
a non-null value in argument.
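The decision boils down to something like the sketch below (argument and
helper names are assumed here, this is not the actual haproxy function):
  /* in_form starts at 0 or 1 depending on what the caller requested */
  static char decode_plus(char c, int *in_form)
  {
      if (c == '?')
          *in_form = 1;        /* entering the query-string: form-encoding rules apply */
      if (c == '+' && *in_form)
          return ' ';          /* '+' is a space only in query strings and POST bodies */
      return c;                /* '%XX' sequences are handled as before */
  }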
It seems to be what was reported in issue #585. This fix may be
backported to older stable releases.
Ryan O'Hara reported that haproxy breaks on fedora-32 using gcc-10
(pre-release). It turns out that constructs such as:
while (item != head) {
item = LIST_ELEM(item.n);
}
loop forever, never matching <item> to <head> despite a printf there
showing them equal. In practice the problem is that the LIST_ELEM()
macro is wrong, it assigns the subtract of two pointers (an integer)
to another pointer through a cast to its pointer type. And GCC 10 now
considers that this cannot match a pointer and silently optimizes the
comparison away. A tested workaround for this is to build with
-fno-tree-pta. Note that older gcc versions even with -ftree-pta do
not exhibit this rather surprising behavior.
This patch changes the test to instead cast the null-based address to
an int to get the offset and subtract it from the pointer, and this
time it works. There were just a few places to adjust. Ideally
offsetof() should be used but the LIST_ELEM() API doesn't make this
trivial as it's commonly called with a typeof(ptr) and not typeof(ptr*)
thus it would require to completely change the whole API, which is not
something workable in the short term, especially for a backport.
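Schematically, the change amounts to the following (simplified macros, not
the exact haproxy ones):
  /* before: the difference of two pointers (an integer) cast back to a pointer */
  #define ELEM_OLD(lh, pt, el)  ((pt)(((void *)(lh)) - ((void *)&((pt)NULL)->el)))
  /* after: the member offset taken as an integer and subtracted from the pointer */
  #define ELEM_NEW(lh, pt, el)  ((pt)(((char *)(lh)) - ((size_t)&((pt)NULL)->el)))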
With this change, the emitted code is subtly different even on older
versions. A code size reduction of ~600 bytes and a total executable
size reduction of ~1kB are expected to be observed and should not be
taken as an anomaly. Typically this loop in dequeue_proxy_listeners() :
while ((listener = MT_LIST_POP(...)))
used to produce this code where the comparison is performed on RAX
while the new offset is assigned to RDI even though both are always
identical:
53ded8: 48 8d 78 c0 lea -0x40(%rax),%rdi
53dedc: 48 83 f8 40 cmp $0x40,%rax
53dee0: 74 39 je 53df1b <dequeue_proxy_listeners+0xab>
and now produces this one which is slightly more efficient as the
same register is used for both purposes:
53dd08: 48 83 ef 40 sub $0x40,%rdi
53dd0c: 74 2d je 53dd3b <dequeue_proxy_listeners+0x9b>
Similarly, retrieving the channel from a stream_interface using si_ic()
and si_oc() used to cause this (stream-int in rdi):
1cb7: c7 47 1c 00 02 00 00 movl $0x200,0x1c(%rdi)
1cbe: f6 47 04 10 testb $0x10,0x4(%rdi)
1cc2: 74 1c je 1ce0 <si_report_error+0x30>
1cc4: 48 81 ef 00 03 00 00 sub $0x300,%rdi
1ccb: 81 4f 10 00 08 00 00 orl $0x800,0x10(%rdi)
and now causes this:
1cb7: c7 47 1c 00 02 00 00 movl $0x200,0x1c(%rdi)
1cbe: f6 47 04 10 testb $0x10,0x4(%rdi)
1cc2: 74 1c je 1ce0 <si_report_error+0x30>
1cc4: 81 8f 10 fd ff ff 00 orl $0x800,-0x2f0(%rdi)
There is extremely little chance that this fix wakes up a dormant bug as
the emitted code effectively does what the source code intends.
This must be backported to all supported branches (dropping MT_LIST_ELEM
and the spoa_example parts as needed), since the bug is subtle and may
not always be visible even when compiling with gcc-10.
ST_F_CHECK_DURATION is now part of exported server metrics, named
haproxy_server_check_duration_seconds and expressed in seconds. For a given
server, this value is exported only if the healthcheck is finished (the status
is greater than or equal to HCHK_STATUS_CHECKED).
This patch fixes issue #519. It may be backported as far as 2.0.
Historically we used to require that the connections held the desired
polling states for the data layer and the socket layer. Then with muxes
these were more or less merged into the transport layer, and now it
happens that with all transport layers having their own state, the
"transport layer state" as we have it in the connection (XPRT_RD_ENA,
XPRT_WR_ENA) is only an exact copy of the underlying file descriptor
state, but with a delay. All of this is causing some difficulties at
many places in the code because there are still some locations which
use the conn_want_* API to remain clean and only rely on connection,
and count on a later collection call to conn_cond_update_polling(),
while others need an immediate action and directly use the FD updates.
Since our updates are now much cheaper, most of them being only an
atomic test-and-set operation, and since our I/O callbacks are deferred,
there's no benefit anymore in trying to "cache" the transient state
change in the connection flags hoping to cancel them before they
become an FD event. Better make such calls transparent indirections
to the FD layer instead and get rid of the deferred operations which
needlessly complicate the logic inside.
This removes flags CO_FL_XPRT_{RD,WR}_ENA and CO_FL_WILL_UPDATE.
A number of functions related to polling updates were either greatly
simplified or removed.
Two places were using CO_FL_XPRT_WR_ENA as a hint to know if more data
were expected to be sent after a PROXY protocol or SOCKSv4 header. These
ones were simply replaced with a check on the subscription which is
where we ought to get the authoritative information from.
Now the __conn_xprt_want_* and their conn_xprt_want_* counterparts
are the same. conn_stop_polling() and conn_xprt_stop_both() are the
same as well. conn_cond_update_polling() only causes errors to stop
polling. It also becomes way more obvious that muxes should not at
all employ conn_xprt_{want|stop}_{recv,send}(), and that the call
to __conn_xprt_stop_recv() in case a mux failed to allocate a buffer
is inappropriate, it ought to unsubscribe from reads instead. All of
this definitely requires a serious cleanup.
This is convenient when processing large dumps: it allows copy-pasting
values to inspect from one window to another, or directly transferring
a "show fd"/"show stream" output through sed. In order to do this, simply
pass "-" alone instead of the value and they will all be read one line at
a time from stdin. For example, in order to quickly print the different
set of connection flags from "show fd", this is sufficient:
sed -ne 's/^.* cflg=\([^ ]*\).*/\1/p' | contrib/debug/flags conn -
It's often convenient, for example to dump two channels or two stream-int
at once. Now all input values are decoded and the value is recalled before
the dump when there is more than one to display.
It's often confusing to have a whole dump on the screen while only
checking for a set of task or stream flags, and appending "|grep ^chn"
isn't very convenient to repeat the operation. Instead let's add the
ability to filter the output to certain types only by prepending their
name(s) before the value.
Commit 477902bd2e ("MEDIUM: connections: Get rid of the xprt_done
callback.") broke the master CLI for a very obscure reason. It happens
that short requests immediately terminated by a shutdown are properly
received, CS_FL_EOS is correctly set, but in si_cs_recv(), we refrain
from setting CF_SHUTR on the channel because CO_FL_CONNECTED was not
yet set on the connection since we've not passed again through
conn_fd_handler() and it was not done in conn_complete_session(). While
commit a8a415d31a ("BUG/MEDIUM: connections: Set CO_FL_CONNECTED in
conn_complete_session()") fixed the issue, such accident may happen
again as the root cause is deeper and actually comes down to the fact
that CO_FL_CONNECTED is lazily set at various check points in the code
but not every time we drop one wait bit. It is not the first time we
face this situation.
Originally this flag was used to detect the transition between WAIT_*
and CONNECTED in order to call ->wake() from the FD handler. But since
at least 1.8-dev1 with commit 7bf3fa3c23 ("BUG/MAJOR: connection: update
CO_FL_CONNECTED before calling the data layer"), CO_FL_CONNECTED is
always synchronized against the two others before being checked. Moreover,
with the I/Os moved to tasklets, the decision to call the ->wake() function
is performed after the I/Os in si_cs_process() and equivalent, which don't
care about this transition either.
So in essence, checking for CO_FL_CONNECTED has become a lazy way to
check for (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN), but that always
relies on someone else having synchronized it.
This patch addresses it once for all by killing this flag and only checking
the two others (for which a composite mask CO_FL_WAIT_L4L6 was added). This
revealed a number of inconsistencies that were purposely not addressed here
for the sake of bisectability:
- while most places do check both L4+L6 and HANDSHAKE at the same time,
some places like assign_server() or back_handle_st_con() and a few
sample fetches looking for proxy protocol do check for L4+L6 but
don't care about HANDSHAKE ; these ones will probably fail on TCP
request session rules if the handshake is not complete.
- some handshake handlers do validate that a connection is established
at L4 but didn't clear CO_FL_WAIT_L4_CONN
- the ->ctl method of mux_fcgi, mux_pt and mux_h1 only checks for L4+L6
before declaring the mux ready while the snd_buf function also checks
for the handshake's completion. Likely the former should validate the
handshake as well and we should get rid of these extra tests in snd_buf.
- raw_sock_from_buf() would directly set CO_FL_CONNECTED and would only
later clear CO_FL_WAIT_L4_CONN.
- xprt_handshake would set CO_FL_CONNECTED itself without actually
clearing CO_FL_WAIT_L4_CONN, which could apparently happen only if
waiting for a pure Rx handshake.
- most places in ssl_sock that were checking CO_FL_CONNECTED don't need
to include the L4 check as an L6 check is enough to decide whether to
wait for more info or not.
It also becomes obvious when reading the test in si_cs_recv() that caused
the failure mentioned above that once converted it doesn't make any sense
anymore: having CS_FL_EOS set while still waiting for L4 and L6 to complete
cannot happen since for CS_FL_EOS to be set, the other ones must have been
validated.
Some of these parts will still deserve further cleanup, and some of the
observations above may induce some backports of potential bug fixes once
totally analyzed in their context. The risk of breaking existing stuff
is too high to blindly backport everything.
The failed_secu counter is only used for the server stats. It is used to report
the number of denied responses. On proxies, the same info is stored in the
denied_resp counter. So, it is more consistent to use the same field for
servers.
ST_F_CHECK_STATUS and ST_F_CHECK_CODE are now part of exported server metrics:
* haproxy_server_check_status
* haproxy_server_check_code
The healthcheck status is an integer corresponding to the HCHK_STATUS value.
These ones used to serve as a set of switches between CO_FL_SOCK_* and
CO_FL_XPRT_*, and now that the SOCK layer is gone, they're always a
copy of the last known CO_FL_XPRT_* ones that is resynchronized before
I/O events by calling conn_refresh_polling_flags(), and that are pushed
back to FDs when detecting changes with conn_xprt_polling_changes().
While these functions are not particularly heavy, what they do is
totally redundant by now because the fd_want_*/fd_stop_*() actions
already perform test-and-set operations to decide to create an entry
or not, so they do the exact same thing that is done by
conn_xprt_polling_changes(). As such it is pointless to call that
one, and given that the only reason to keep CO_FL_CURR_* is to detect
changes there, we can now remove them.
Even if this does only save very few cycles, this removes a significant
complexity that has been responsible for many bugs in the past, including
the last one affecting FreeBSD.
All tests look good, and no performance regressions were observed.
We were decoding the whole substring and then parsing it; this could lead
to '&' and '=' characters in the decoded result being considered as
delimiters where they should not be.
This patch reverses the order by first parsing and then decoding each key
and value separately.
We also stop parsing after the number sign (#).
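Schematically, the parsing now goes like this (simplified sketch, not the
exact code; decode_in_place() stands for the URL-decoding helper):
  #include <string.h>

  extern void decode_in_place(char *s);

  static void parse_query_string(char *qs)
  {
      char *p, *next, *eq, *hash;

      hash = strchr(qs, '#');
      if (hash)
          *hash = 0;                   /* stop parsing at the number sign */

      for (p = qs; p && *p; p = next) {
          next = strchr(p, '&');
          if (next)
              *next++ = 0;             /* split parameters before decoding */
          eq = strchr(p, '=');
          if (eq)
              *eq++ = 0;               /* split key/value before decoding */
          decode_in_place(p);          /* decode the key ... */
          if (eq)
              decode_in_place(eq);     /* ... and the value, separately */
      }
  }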
This patch should be backported to 2.1 and 2.0
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
By passing the parameter "no-maint" in the query-string, it is now possible to
ignore servers in maintenance. It means that the metrics for servers in this
state will not be exported.
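For example (assuming the exporter is mapped on the /metrics URI, which is
illustrative):
  GET /metrics?no-maint HTTP/1.1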
Now, the prometheus exporter parses the HTTP query-string to filter or to adapt
the exported metrics. In this first version, it is only possible to select the
scopes of metrics to export. To do so, one or more parameters named "scope"
must be passed in the query-string, with one of these values: global,
frontend, backend, server or '*' (meaning all). A scope parameter with no value
means to filter out all scopes (nothing is returned). The scope parameters are
parsed in their order of appearance in the query-string. So an empty scope will
reset all scopes already parsed, but it can be overridden by the following
scope parameters in the query-string. By default everything is exported.
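For example, assuming the exporter is mapped on the /metrics URI (the path is
illustrative):
  /metrics?scope=global&scope=frontend   -> only global and frontend metrics
  /metrics?scope=                        -> empty scope: nothing selected so far
  /metrics?scope=*                       -> everything (the default)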
The filtering can also be done in the prometheus scraping configuration, but
the general aim is to optimise the source of data to improve load and scraping
time. This is particularly true for huge configurations with thousands of
backends and servers. Also note that this kind of filtering was possible with
the previous official haproxy exporter, but with even more parameters to
select the needed metrics. Here we thought it was sufficient to simply be able
to avoid a given type of metric. However, more filters are still possible.
Thanks to William Dauchy. This patch is based on his work.
This adds two extra metrics per server, one for the current number of idle
connections and one for the configured limit:
* haproxy_server_idle_connections_current
* haproxy_server_idle_connections_limit
The following metrics have been renamed without the "_http" part:
* http_queue_time_average_seconds => queue_time_average_seconds
* http_connect_time_average_seconds => connect_time_average_seconds
* http_response_time_average_seconds => response_time_average_seconds
* http_total_time_average_seconds => total_time_average_seconds
These metrics are reported per backend and per server and are not specific to
HTTP sessions.