It is possible to hit a fairness issue in the scheduler when a local
task runs for a long time (i.e. process_stream() returns running), and
a global task wants to run on the same thread and remains in the global
queue. What happens in this case is that the condition to extract tasks
from the global queue will rarely be satisfied for very low task counts,
since any non-zero queue size multiplied by a thread count greater than 1
is always larger than the small remaining number of tasks in the queue.
In theory another thread should pick the task, but we do have some
single-threaded tasks in the global queue as well during inter-thread
wakeups.
Note that this can only happen with task counts lower than the thread
counts, typically one task in each queue for more than two threads.
This patch works around the problem by allowing a very small unfairness,
making sure that we can always pick at least one task from the global
queue even if there is already one in the local queue.
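As a rough sketch of the idea (function and variable names below are
illustrative, not the scheduler's actual fields):

    static inline int may_pick_global(unsigned int gq_size, unsigned int lq_size,
                                      unsigned int nbthread,
                                      unsigned int picked_global)
    {
        if (!gq_size)
            return 0;
        /* strict fairness rule: starves the global queue when task
         * counts are lower than the thread count */
        if (gq_size * nbthread > lq_size)
            return 1;
        /* workaround: always accept at least one global task per
         * round, at the cost of a very small unfairness */
        return picked_global == 0;
    }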
A better approach will consist in scanning the two trees in parallel
and always picking the best task. This will be more complex and will
constitute a separate patch.
This fix must be backported to 1.9.
If the stream interface is not in state SI_ST_CON or SI_ST_EST, don't bother
trying to send/recv data: we can't do it anyway, and if we're in SI_ST_TAR,
it may lead to adding the SI_FL_ERR flag back on the stream_interface,
which we don't want.
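Conceptually, the added guard looks like this (a sketch only, the real
code checks more than this):

    /* don't perform any I/O nor touch the SI flags outside of the
     * connecting/established states */
    if (si->state != SI_ST_CON && si->state != SI_ST_EST)
        return;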
This should be backported to 1.9.
In process_stream(), only try again when there's the SI_FL_ERR flag and we're
in a connected state, otherwise we can loop forever.
It used to work because si_update_both() bogusly removed the SI_FL_ERR flag,
so it would never be set at this point. Now it can be, so take that into
account.
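A sketch of the resulting condition (names simplified; try_again() is a
hypothetical placeholder for the retry logic):

    /* only retry when an error was reported while connecting or
     * connected, otherwise we could loop forever */
    if ((si->flags & SI_FL_ERR) &&
        (si->state == SI_ST_CON || si->state == SI_ST_EST))
        try_again();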
Many, many thanks to Maciej Zdeb for reporting the problem, and helping
investigating it.
This should be backported to 1.9.
For LOG_FMT_TS (%Ts), the tm variable is not used, so save some cycles
by skipping the call to get_gmtime().
Backport: 1.9 1.8
Signed-off-by: Robin H. Johnson <rjohnson@digitalocean.com>
Pavlos Parissis reported an interesting case where some map identifiers
were not assigned (appearing as -1 in show map). It turns out that it
only happens for log-format expressions parsed in check_config_validity()
that involve maps (log-format, use_backend, unique-id-format), as in the
sample configuration below :
    frontend foo
        bind :8001
        unique-id-format %[src,map(addr.lst)]
        log-format %[src,map(addr.lst)]
        use_backend %[src,map(addr.lst)]
The reason stems from the initial introduction of unique IDs in 1.5 via
commit af5a29d5f ("MINOR: pattern: Each pattern is identified by unique
id.") : the unique_id assignment was done before calling
check_config_validity() so all maps loaded after this call are not
properly configured. From what the function does, it seems they will not
be able to use a cache, will not have a unique_id assigned and will not
be updatable from the CLI.
This fix must be backported to all supported versions.
Use cpuset_getaffinity() to implement thread_cpus_enabled() on FreeBSD, so
that we can know the number of CPUs available, and automatically launch that
many threads when nbthread isn't specified.
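A rough sketch of such an implementation (simplified error handling,
following the FreeBSD cpuset API):

    #include <sys/param.h>
    #include <sys/cpuset.h>

    static unsigned int thread_cpus_enabled(void)
    {
        cpuset_t cpuset;
        unsigned int ret = 0;
        int i;

        /* count the CPUs present in the process' affinity set */
        if (cpuset_getaffinity(CPU_LEVEL_CPUSET, CPU_WHICH_PID, -1,
                               sizeof(cpuset), &cpuset) == 0) {
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpuset))
                    ret++;
        }
        return ret ? ret : 1; /* at least one CPU */
    }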
When creating a new initcall, don't forget to define the symbols, as it may
not be done automatically and that would lead to undefined symbols.
This should be backported to 1.9.
In commit d7704b534, we introduced an expiration flag on the stream interface,
which is used for the connect, the queue and the turn around. Because the
turn around state isn't an error, the flag was reset in process_stream(), and
later, in commit cff6411f9 which introduced the SI_FL_ERR flag, the cleanup
of the flag at this place was erroneously generalized.
To fix this, the SI_FL_EXP flag is only cleared at the end of the turn around
state, and nobody should clear the stream interface flags anymore.
This should be backported to 1.9, it has no known impact on older versions.
As si_update_both() sets prev_state to state for each stream_interface, if
we want to check whether it changed, copy it before calling si_update_both().
This should be backported to 1.9.
Don't unconditionally remove the SI_FL_ERR flag in si_update_both(), which
is called at the end of process_stream(). Doing so was a bug that had been
there since the flag was introduced, because we were always setting si->flags
to SI_FL_NONE; however we don't want to lose that flag, except if we're going
to retry connecting, so only remove it in sess_update_st_cer().
This should be backported to 1.9.
It can happen in some cases that the last block of an H2 transfer over
HTX is truncated. This was tracked down to a leftover of an earlier
implementation of htx_xfer_blks() causing the computed size of a block
to be incorrectly calculated if a data block doesn't completely fit into
the target buffer. In practice it causes an attempt to emit the EOM block
with a wrong size, truncating the message. One way to
reproduce this is to chain two haproxy instances in h1->h2->h1 with
httpterm as the server and h2load as the client, making many requests
between 8 and 10kB over a single connection. Usually one of the very
first requests will fail.
This fix must be backported to 1.9.
Modify h2c_restart_reading() to take a new parameter telling it whether a
non-empty buffer should be taken into account when retrying to read, and
call h2c_restart_reading() with 0 from h2_process_demux().
If we're leaving h2_process_demux() with a non-empty buffer, it means the
frame is incomplete and we're waiting for more data; if we already
subscribed, we'll be woken up when more data are available.
Failing to do so means we'd be woken up in a loop until more data
arrive.
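A hedged sketch of the resulting helper (field names approximate):

    static inline void h2c_restart_reading(const struct h2c *h2c, int consider_buffer)
    {
        if (!h2_recv_allowed(h2c))
            return;
        /* with consider_buffer == 0, a non-empty demux buffer alone is
         * no longer a reason to wake the I/O handler up again */
        if ((!consider_buffer || !b_data(&h2c->dbuf)) &&
            (h2c->wait_event.events & SUB_RETRY_RECV))
            return;
        tasklet_wakeup(h2c->wait_event.task);
    }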
This should be backported to 1.9.
The deinit only took place in peer_session_release(), but in the case of a
previous call to peer_session_forceshutdown(), the session cursors
won't be reset, resulting in a bad state for new sessions of the same
peer. For instance, a table definition message could be dropped and
then all update messages would be dropped by the remote peer.
This patch moves the deinit processing directly into the force shutdown
function. A killed session remains in the "ST_END" state, but the ref on
the peer is reset to NULL and the deinit will be skipped by the session
release function. The session release function continues to ensure the
deinit for "active" sessions.
This patch should be backported to all stable versions since proto
peers v2.
To do so, the server does not send anything. Instead it waits 2ms before
closing. The client, on its side, will wait for a response. So it will be
blocked. Because the client timeout is set to 1ms, HAProxy should always close
the client connection because it times out.
Because HAProxy may decide to close 301 responses, like other internal
responses, it is safer to use a different client for these requests. It is
not the purpose of this test to verify keep-alive in such cases.
The way unexpected bodies are handled for responses to HEAD requests differs
between the legacy HTTP mode and HTX. While such a body is dropped with the
legacy HTTP, in HTX it is parsed as the response to the next request. So, in
HTX, a 502 error is returned to the client and the connection is closed.
This test has been modified to pass in both modes.
Size reported in logs may differ between legacy HTTP and HTX, at least for
now. So in the regtest http-capture/h00000.vtc, we need to relax the regex
matching the log message.
Because we try to infinitely forward the message body, when its state is set
to DONE, we must be sure to reset the to_forward value of the corresponding
channel. Otherwise, some errors can be erroneously triggered.
No need to backport this patch.
Display a prefix for every address in 'show cli sockets'.
It could be 'unix@', 'ipv4@', 'ipv6@', 'abns@' or 'sockpair@'.
Could be backported to 1.9 and 1.8.
The 'show cli sockets' command was not handling abns sockets. This is a
problem since they use the AF_UNIX family: nothing was displayed in
the path column because the path starts with \0.
Should be backported to 1.9 and 1.8.
This patch implements the external binary support in the master worker.
To configure an external process, you need to use the program section,
for example:
    program dataplane-api
        command ./dataplane_api
Those processes are launched at the same time as the workers.
During a reload of HAProxy, those processes go through the same
sequence as a worker:
- the master is re-executed
- the master sends a USR1 signal to the program
- the master launches a new instance of the program
During a stop, or restart, a SIGTERM is sent to the program.
The children variable is still used in haproxy, but it is not required
anymore since we have the information about the current workers in the
mworker_proc linked list.
The oldpids array is also replaced by this linked list when we
generate the arguments for the master re-exec.
The converter can be used to decrypt the raw byte input using the
AES-GCM algorithm, with the provided nonce, key and AEAD tag. This can
be useful to decrypt encrypted cookies for example and make decisions
based on the content.
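A hypothetical configuration sketch (assuming the converter is exposed
as aes_gcm_dec and that the nonce, key and tag were stored in variables
beforehand):

    http-request set-var(txn.dec) req.cook(secret),b64dec,aes_gcm_dec(128,txn.nonce,txn.key,txn.tag)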
AIX 5.1 is missing the following builtins used for atomic locking of the
shared inter-process cache :
.__sync_val_compare_and_swap_4
.__sync_lock_test_and_set_4
.__sync_sub_and_fetch_4
Let's simply use the private cache by default since nobody cares on
such old systems. No test was made on a more recent version.
AIX defines ip_v as ip_ff.ip_fv in netinet/ip.h using a macro, and
unfortunately we do have a local variable with such a name and which
uses the same header file. Let's rename the variable to ip_ver to fix
this.
I found on an (old) AIX 5.1 machine that stdint.h didn't exist, while
inttypes.h, which is expected to include it, does exist and provides the
desired functionalities.
As explained here, stdint being just a subset of inttypes for use in
freestanding environments, it's probably always OK to switch to inttypes
instead:
https://pubs.opengroup.org/onlinepubs/009696799/basedefs/stdint.h.html
Also it's even clearer here in the autoconf doc :
https://www.gnu.org/software/autoconf/manual/autoconf-2.61/html_node/Header-Portability.html
"The C99 standard says that inttypes.h includes stdint.h, so there's
no need to include stdint.h separately in a standard environment.
Some implementations have inttypes.h but not stdint.h (e.g., Solaris
7), but we don't know of any implementation that has stdint.h but not
inttypes.h"
The current initcall implementation relies on dedicated sections (one
section per init stage) to store the initcall descriptors. Then upon
startup, these sections are scanned from beginning to end and all items
found there are called in sequence.
On platforms like AIX or Cygwin it seems difficult to figure out the
beginning and end of sections as the linker doesn't seem to provide
the corresponding symbols. In order to replace this, this patch
simply implements an array of singly linked lists (one per init stage)
which are fed using constructors for each register call. These
constructors are declared static, with a name depending on their
line number in the file, in order to avoid name clashes. The final
effect is the same, except that the method is slightly more expensive
in that it explicitly produces code to register these initcalls :
    $ size haproxy.sections haproxy.constructor
       text    data     bss     dec     hex filename
    4060312  249176 1457652 5767140  57ffe4 haproxy.sections
    4062862  260408 1457652 5780922  5835ba haproxy.constructor
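A minimal standalone sketch of the approach (macro and variable names
are illustrative, not HAProxy's actual ones):

    #include <stdio.h>

    enum init_stage { STG_PREPARE, STG_REGISTER, STG_LAST };

    struct initcall {
        void (*fn)(void *arg);
        void *arg;
        struct initcall *next;
    };

    /* one singly linked list head per init stage */
    static struct initcall *initcall_heads[STG_LAST];

    /* two-level concatenation so that __LINE__ expands before pasting */
    #define INITCALL_CAT2(a, b) a##b
    #define INITCALL_CAT(a, b) INITCALL_CAT2(a, b)

    /* each use emits a static constructor whose name embeds the line
     * number to avoid clashes; it links its descriptor into the list
     * of the requested stage before main() starts */
    #define INITCALL(stage, func, argp)                               \
        static void __attribute__((constructor))                      \
        INITCALL_CAT(initcall_ctor_, __LINE__)(void)                  \
        {                                                             \
            static struct initcall ic;                                \
            ic.fn = (func);                                           \
            ic.arg = (argp);                                          \
            ic.next = initcall_heads[stage];                          \
            initcall_heads[stage] = &ic;                              \
        }

    static void say(void *arg) { printf("init: %s\n", (const char *)arg); }

    INITCALL(STG_PREPARE, say, "prepare")
    INITCALL(STG_REGISTER, say, "register")

    int main(void)
    {
        int s;
        struct initcall *ic;

        for (s = 0; s < STG_LAST; s++)
            for (ic = initcall_heads[s]; ic; ic = ic->next)
                ic->fn(ic->arg);
        return 0;
    }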
This mechanism is enabled as an alternative to the default one when
build option USE_OBSOLETE_LINKER is set. This option is currently
enabled by default only on AIX and Cygwin, and may be attempted for
any target which fails to build complaining about missing symbols
__start_init_* and/or __stop_init_*.
Once confirmed as a reliable fix, this will likely have to be backported
to 1.9 where AIX and Cygwin do not build anymore.
Older Solaris and AIX versions do not have unsetenv(). This adds a
fairly simple implementation which scans the environment, for use
with those systems. It simply requires passing the define in
the "DEFINE" macro at build time, like this :
DEFINE="-Dunsetenv=my_unsetenv"
Most modern platforms don't touch the output buffer when the size
argument is zero, but there exist a few old ones (like AIX 5 and
possibly Tru64) where the output will be dereferenced anyway, probably
to write the trailing null byte, crashing the process. memprintf() uses
this to measure the desired length.
There is a very simple workaround to this consisting in passing a pointer
to a character instead of a NULL pointer. It was confirmed to fix the issue
on AIX 5.1.
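A sketch of the workaround applied to a length-measuring call:

    #include <stdarg.h>
    #include <stdio.h>

    static int measure_len(const char *fmt, va_list args)
    {
        char dummy; /* scratch byte for implementations which write the
                     * trailing zero despite the zero size */

        /* pass a valid one-byte target instead of a NULL pointer */
        return vsnprintf(&dummy, 0, fmt, args);
    }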
The struct http_cache_applet was fully declared at the beginning
instead of just doing a forward declaration using an extern modifier.
Some linkers report warnings about a redefined symbol since these
really are two complete declarations.
The proper way to do this is to use extern on the first one and to
have a full declaration later. However it's not permitted to have
both static and extern so the change done in commit 0f2229943
("CLEANUP: cache: don't export http_cache_applet anymore") has to
be partially undone.
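The resulting pattern is the classic one (illustrative sketch, contents
of the definition elided):

    /* include/types/cache.h : only declare the symbol */
    extern struct applet http_cache_applet;

    /* src/cache.c : the single full definition */
    struct applet http_cache_applet = { /* ... */ };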
This should be backported to 1.9 for sanity but has no effect on
most platforms. However on 1.9 the extern keyword must also be
added to include/types/cache.h.
http_known_methods, HTTP_100 and HTTP_103 were not declared extern, and
as such were multiply defined since they were in http.h. There was
apparently no side effect so far, but it may depend on the platform and
the linker.
This needs to be backported to 1.9.
GNU make 3.80 has an issue with calls to functions inside an if block,
which is just what we recently introduced to simplify the targets
declaration. The fix is easy: it simply consists in assigning the
command to a variable inside the if block and evaluating this command
after the block. This also makes the code slightly more readable so we
can keep compatibility with 3.80 for now.
No backport is needed.