We start implementing some postparsing compatibility checks for log
backends.
Here we report a warning if the user tries to use tcp-{request,response}
rules with a log backend, and we properly ignore such rules when they are
inherited from the defaults section.
Add a proxy_cfg_ensure_no_log() function (similar to
proxy_cfg_ensure_no_http()) to ensure, at the end of proxy parsing, that
no log-exclusive options are found if the proxy is not in log mode.
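A minimal sketch of the idea (PR_MODE_SYSLOG is haproxy's log proxy mode;
the checked flag BE_LB_ALGO_LOG_STICKY is a hypothetical example of a
log-exclusive option, not the actual code):

    /* Sketch only: called at the end of proxy parsing, in the spirit
     * of proxy_cfg_ensure_no_http(). */
    static void proxy_cfg_ensure_no_log(struct proxy *curproxy)
    {
        if (curproxy->mode == PR_MODE_SYSLOG)
            return; /* log mode: log-exclusive options are legitimate */

        /* hypothetical flag marking a log-only balance algorithm */
        if ((curproxy->lbprm.algo & BE_LB_ALGO) == BE_LB_ALGO_LOG_STICKY)
            ha_warning("config: %s '%s': log-only 'balance' algorithm "
                       "ignored (proxy is not in log mode).\n",
                       proxy_type_str(curproxy), curproxy->id);
    }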
"log-balance" directive was recently introduced to configure the
balancing algorithm to use when in a log backend. However, it is
confusing and it causes issues when used in default section.
In this patch, we take another approach: first we remove the
"log-balance" directive, and instead we rely on existing "balance"
directive to configure log load balancing in log backend.
Some algorithms such as roundrobin can be used as-is in a log backend,
and log-only algorithms are implemented as "log-$name" inside
the "balance" directive.
The documentation was updated accordingly.
Make sure lbprm.algo can store 32 bits by declaring it as a uint32_t.
Then, use all 32 available bits to offer 4 extra bits for the BE_LB_NEED
inputs. This will allow new required inputs to be easily added (up to 4
new ones, plus one that wasn't used yet if we keep them exclusive).
This required some cleanup: all ALGO bitfields were rewritten in the
32-bit format and the high ones were shifted to make room for the
new BE_LB_NEED bits.
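For illustration, the resulting layout could look as below (the values
and the LOG input are illustrative assumptions, not the exact ones from
the tree):

    /* lbprm.algo is now a uint32_t; the high bits freed by shifting
     * the upper ALGO fields hold the required-input flags */
    #define BE_LB_NEED_NONE  0x00000000
    #define BE_LB_NEED_ADDR  0x10000000  /* needs the destination address */
    #define BE_LB_NEED_DATA  0x20000000  /* needs some request contents */
    #define BE_LB_NEED_HTTP  0x40000000  /* needs an HTTP request */
    #define BE_LB_NEED_LOG   0x80000000  /* needs a log message */
    #define BE_LB_NEED       0xf0000000  /* mask over all required inputs */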
BE_LB_HASH_RND was introduced with 760e81d35 ("MINOR: backend: implement
random-based load balancing") but has never been used since. Remove it
to regain an extra slot for future types.
In 1b8e68e ("MEDIUM: stick-table: Stop handling stick-tables as proxies.")
we forgot to free the table pointer, which is now dynamically allocated.
Let's take this opportunity to also fix a missing free in the table itself
(the table expire task wasn't properly destroyed).
This patch depends on:
- "MINOR: stktable: add stktable_deinit function"
It should be backported to all stable versions.
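A rough sketch of the intended cleanup, assuming an exp_task field holding
the table's expiration task (the real stktable_deinit() may differ):

    /* sketch: destroy what the stick-table allocated, including the
     * expire task that was previously leaked */
    void stktable_deinit(struct stktable *t)
    {
        if (!t)
            return;
        task_destroy(t->exp_task); /* the table expire task */
    }

    /* caller side, during proxy deinit, to fix the pointer leak: */
    stktable_deinit(px->table);
    ha_free(&px->table);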
This one reports streams considered "suspicious", i.e. those with
no expiration date or a date in the past, or those without a front
endpoint. More criteria could be added in the future.
It's often needed to be able to refine "show sess" output when debugging,
and very often the first step is a glance at old streams, but that's a
difficult task in large dumps, and it takes lots of resources to dump
everything.
This commit adds "older <age>" to "show sess" in order to specify the
minimum age of streams that will be dumped. This should simplify the
identification of blocked ones.
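The filter can be pictured as below (the dump context and stream fields
are assumptions; only the "older <age>" argument comes from this patch):

    /* sketch: inside the "show sess" dump loop, skip streams whose age
     * is below the minimum requested with "older <age>", in seconds */
    if (ctx->min_age) {
        uint32_t age_ms = now_ms - strm->logs.request_ts; /* assumed */

        if (age_ms < ctx->min_age * 1000U)
            continue; /* too young to be dumped */
    }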
Since 2.4-dev2 with commit 15e525f49 ("MINOR: stream: Don't retrieve
anymore timing info from the mux csinfo"), we don't replace the
tv_accept (now accept_ts) anymore with the current request's, so that
it properly reflects the session's accept date and not the request's
date. However, since then we failed to update "show sess" to make use
of the request's timestamp instead of the session's timestamp, resulting
in nonsensical values in the "age" field of "show sess" for the task.
Indeed, the session's age is displayed instead of the stream's, which
leads to great confusion when debugging, particularly when it comes to
multiplexed inter-proxy connections which are kept up forever.
Let's fix this now. This must be backported as far as 2.4. However,
for 2.7 and older, the field was named tv_request and was a timeval.
If fewer connections than threads are established on a reverse-http gateway
and these servers have a non-zero pool-min-conn, then conn_backend_get()
will refrain from picking available connections from other threads. But
this makes no sense for protocols for which there is no ->connect(),
since there's no way the current thread will manage to establish its own
connection. In such situations we should always accept using another
thread's connection. That's precisely what this patch does.
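Conceptually, the change amounts to something like this in
conn_backend_get() (the protocol lookup and variable are illustrative;
the point is the absence of a ->connect() method):

    /* sketch: never apply the pool-min-conn restriction when the
     * protocol cannot establish connections by itself (e.g.
     * reverse-http), since another thread's idle connection is then
     * the only possible source */
    const struct protocol *proto =
        protocol_lookup(srv->addr.ss_family, PROTO_TYPE_STREAM, 0);

    if (!proto || !proto->connect)
        may_use_other_threads = 1; /* hypothetical local variable */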
A dummy connect() function previously had to be installed for the log
server so that a reverse-http address could be referenced on a "server"
line, but after the recent rework of the server line parsing, this is
no longer needed, and this is actually annoying as it makes one believe
there is a way to connect outside, which is not true. Let's now get rid
of this function.
This is the equivalent of the previous "BUG/MEDIUM: mux-h1: fail earlier
on malloc in takeover()".
Connection takeover was implemented for fcgi in 2.2 by commit a41bb0b6c
("MEDIUM: mux_fcgi: Implement the takeover() method."). It does have one
corner case related to memory allocation failure: in case the task or
tasklet allocation fails, the connection gets released synchronously.
Unfortunately the situation is bad there, because the lower layers are
already switched to the new thread while the tasklet is either NULL or
still the old one, and calling fcgi_release() will also result in
touching the thread-local list of buffer waiters, calling unsubscribe(),
etc.
There are even code paths where the thread will try to grab the lock of
its own idle conns list, believing the connection is there while it has
no useful effect. However, if the owner thread was doing the same at the
same moment, and ended up trying to pick from the current thread (which
could happen if picking a connection for a different name), the two
could even deadlock.
No tests were made to try to reproduce the problem, but the description
above is sufficient to see that nothing guarantees it cannot happen.
This patch takes a simple but radically different approach. Instead of
starting to migrate the connection before risking allocation
failures, it first pre-allocates a new task and tasklet, then assigns
them to the connection if the migration succeeds, otherwise it just
frees them. This way it's no longer needed to manipulate the connection
until it's fully migrated, and as a bonus this means the connection will
continue to exist and the use-after-free condition is solved at the same
time.
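A simplified sketch of the new ordering (names follow the FCGI mux, the
migration itself is elided):

    /* sketch: pre-allocate before touching the connection, so that an
     * allocation failure leaves it fully usable on its original thread */
    static int fcgi_takeover(struct connection *conn, int orig_tid)
    {
        struct task *new_task = task_new_here();
        struct tasklet *new_tasklet = tasklet_new();

        if (!new_task || !new_tasklet)
            goto fail;

        /* ... migrate the lower layers, then swap the pre-allocated
         * task and tasklet into the connection, freeing the old ones ... */
        return 0;

      fail:
        task_destroy(new_task);   /* both tolerate NULL */
        tasklet_free(new_tasklet);
        return -1;
    }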
This should be backported to 2.2. Thanks to Fred for the initial analysis
of the problem!
This is the h1 equivalent of previous "BUG/MEDIUM: mux-h2: fail earlier
on malloc in takeover()".
Connection takeover was implemented for H1 in 2.2 by commit f12ca9f8f1
("MEDIUM: mux_h1: Implement the takeover() method."). It does have one
corner case related to memory allocation failure: in case the task or
tasklet allocation fails, the connection gets released synchronously.
Unfortunately the situation is bad there, because the lower layers are
already switched to the new thread while the tasklet is either NULL or
still the old one, and calling h1_release() will call some unsubscribe
and possibly other things whose safety is not guaranteed (and the
ambiguity here alone is sufficient to be careful). There are even code
paths where the thread will try to grab the lock of its own idle conns
list, believing the connection is there while it has no useful effect.
However, if the owner thread was doing the same at the same moment, and
ended up trying to pick from the current thread (which could happen if
picking a connection for a different name), the two could even deadlock.
Contrary to mux-h2, a few tests were not sufficient to crash the
process, but based on the description above, nothing indicates it
couldn't happen.
This patch takes a simple but radically different approach. Instead of
starting to migrate the connection before risking allocation
failures, it first pre-allocates a new task and tasklet, then assigns
them to the connection if the migration succeeds, otherwise it just
frees them. This way it's no longer needed to manipulate the connection
until it's fully migrated, and as a bonus this means the connection will
continue to exist and the use-after-free condition is solved at the same
time.
This should be backported to 2.2. Thanks to Fred for the initial analysis
of the problem!
Connection takeover was implemented for H2 in 2.2 by commit cd4159f03
("MEDIUM: mux_h2: Implement the takeover() method."). It does have one
corner case related to memory allocation failure: in case the task or
tasklet allocation fails, the connection gets released synchronously.
Unfortunately the situation is bad there, because the lower layers are
already switched to the new thread while the tasklet is either NULL or
still the old one, and calling h2_release() will also result in calls
to h2_process() and h2_process_demux() that may process any pending
frames. Even the session remains the old one on the old thread, so the
sess_log() calls performed when facing certain demux errors will be
associated with the previous thread, possibly accessing a number of
elements belonging to another thread.
the thread will try to grab the lock of its own idle conns list, believing
the connection is there while it has no useful effect. However, if the
owner thread was doing the same at the same moment, and ended up trying
to pick from the current thread (which could happen if picking a connection
for a different name), the two could even deadlock.
The risk is extremely low, but Fred managed to reproduce use-after-free
errors in conn_backend_get() after a takeover() failed by playing with
-dMfail, indicating that h2_release() had been successfully called. In
practice it's sufficient to have h2 on the server side with reuse-always
and to inject lots of requests on it with -dMfail.
This patch takes a simple but radically different approach. Instead of
starting to migrate the connection before risking allocation
failures, it first pre-allocates a new task and tasklet, then assigns
them to the connection if the migration succeeds, otherwise it just
frees them. This way it's no longer needed to manipulate the connection
until it's fully migrated, and as a bonus this means the connection will
continue to exist and the use-after-free condition is solved at the same
time.
This should be backported to 2.2. Thanks to Fred for the initial analysis
of the problem!
There was still a totally outdated comment speaking about issues
affecting Solaris on 1.1.8pre4 (April 2002, 21 years old)! This
proves that comments in headers are never read, so let's take this
opportunity to also remove the outdated one recommending to read
the "updated" RFC7230.
Document the "handshake" timeout new setting available one frontend side.
This should at least be helpful for QUIC client connections to prevent
an attacker from refreshing plenty of connections without completing
the handshake step, leading haproxy to consume memory for nothing.
Adapt session_accept_fd() called on accept() to set the handshake timeout from
"hanshake-timeout" setting if set by configuration. If not set, continue to use
the "client" timeout setting.
This bug arrived with this commit:
MINOR: quic: Avoid zeroing frame structures
Before the latter, the CONNECTION_CLOSE frame was zeroed, especially the
"reason phrase length" field.
Re-establish this behavior.
No need to backport.
This date is shared between the idle timer and the handshake timeout. So,
it is more useful to dump the expiration date of the idle timer task itself,
in place of the idle timer expiration date. This way, the handshake timeout
value will be visible during the handshake from the CLI "show quic full"
command.
The idle timer task may be used to trigger the client handshake timeout.
The handshake timeout expiration date (qc->hs_expire) is initialized when
the connection is allocated. Obviously, this timeout is taken into account
only during the handshake by qc_idle_timer_do_rearm(), whose job is to
rearm the idle timer.
The idle timer expiration date could be initialized only once, then never
updated until the handshake completes. But this only works if the handshake
timeout is smaller than the idle timer task timeout. If the handshake
timeout is set greater than the idle timeout, the latter may expire before
the handshake timeout.
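The rearming logic can be sketched as below (the handshake-completion test
and the way both dates are combined are assumptions; qc->hs_expire comes
from the text above):

    /* sketch: compute the idle deadline and, while the handshake is
     * still running, also take the fixed qc->hs_expire date into
     * account when rearming the idle timer task */
    static void qc_idle_timer_do_rearm(struct quic_conn *qc)
    {
        unsigned int expire = tick_add(now_ms, qc->max_idle_timeout);

        if (qc->state < QUIC_HS_ST_COMPLETE) /* assumed state check */
            expire = tick_first(expire, qc->hs_expire);

        qc->idle_timer_task->expire = expire;
        task_queue(qc->idle_timer_task);
    }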
This patch may have an impact on the L1/C1 interop tests (with heavy packet
loss or corruption). This is probably why some implementations with
handshake timeout support set a big timeout during this test. This is at
least the case for ngtcp2, which sets a 180s handshake timeout! haproxy
will certainly have to proceed the same way if it wants to have a chance
to pass this test as it did before this handshake timeout existed.
Add a new timeout for the handshake, on the frontend side only. Such a
timeout will typically be used for TLS handshakes, during client
connections to TLS/TCP or QUIC frontends.
Since the reload is now synchronous over the master CLI, try to reload
with it. This was a problem before with the signals because it wasn't
possible to wait for the end of the reload before sending the requests.
This activates the test again; we will see whether it's more stable now
or whether we will have to deactivate it again.
The shctx lock was changed from a SPINLOCK to a RWLOCK in commit ed35b94
"MEDIUM: cache: Switch shctx spinlock to rwlock and restrict its scope"
but a SPIN_INIT was left behind.
This patch does not need to be backported.
A partial send is an activity, not a full blocking. Thus a read activity
must be reported for non-independent streams. It is especially important
for very congested streams where full sends are uncommon.
This patch must be backported to 2.8.
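The intended behavior can be sketched as follows (the surrounding
<sent>/<to_send> variables are hypothetical; the helpers exist in the
stconn API):

    /* sketch: a partial send also proves the peer is consuming data,
     * so refresh the read activity of non-independent streams */
    if (sent > 0) {
        sc_ep_report_send_activity(sc);
        if (sent < to_send && !(sc->flags & SC_FL_INDEP_STR))
            sc_ep_report_read_activity(sc);
    }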
For applets and connections, when a send attempt is performed, we must be
sure not to report a send activity if there was no output data at all
before the attempt.
It is not important for the <fsb> date itself, but it is for the <lra>
date of non-independent streams.
This patch must be backported to 2.8.
Some channel functions are used to check if the channel's buffer is full,
not empty, or if there are input data. However, the functions used are not
HTX-aware, so the result is not accurate and may prevent some actions from
being performed (however, it is not certain there are real issues). Because
HTX-aware versions now exist, use them instead.
This patch may be backported as far as 2.2. It relies on
* "MINOR: channel: Add functions to get info on buffers and deal with HTX streams"
* "MINOR: htx: Use a macro for overhead induced by HTX"
Since HTX was introduced, streamer detection has been broken for HTX
streams because the HTX overhead was not counted in the test that sets
the CF_STREAMER and CF_STREAMER_FAST flags.
The consequence was that the consumer side was no longer able to send
more than tune.ssl.maxrecord bytes at a time over SSL.
To fix the issue, we now count the HTX overhead of HTX streams to be able to
set CF_STREAMER/CF_STREAMER_FAST flags on a channel.
This patch relies on the following commits:
* "MINOR: channel: Add functions to get info on buffers and deal with HTX streams"
* "MINOR: htx: Use a macro for overhead induced by HTX"
The series must be backported as far as 2.2.
This patch adds HTX-aware versions of the functions c_data(), ci_data()
and c_empty(). The channel_data() function returns the amount of data in
the channel, channel_input_data() returns the amount of input data and
channel_empty() returns true if the channel's buffer is empty. These
functions handle HTX buffers.
In addition, the channel_data_limit() function, also HTX-aware, can be
used to get the maximum absolute amount of data that can be copied into
a buffer, independently of the data already present in the buffer.
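As an example, channel_data() could conceptually look like this (the
actual implementation may differ):

    /* sketch: amount of data in the channel, handling HTX buffers */
    static inline size_t channel_data(const struct channel *chn)
    {
        if (IS_HTX_STRM(chn_strm(chn)))
            return htxbuf(&chn->buf)->data; /* payload bytes in the HTX */
        return c_data(chn);
    }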
The overhead induced by the HTX format was set to the HTX structure itself
plus two HTX blocks. It was set this way to optimize zero-copy during
transfers. This value may (and will) be used in different places, so we
now use a macro called HTX_BUF_OVERHEAD.
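A sketch of the macro as described above (expressed from the HTX types
for illustration; the exact definition may differ):

    /* overhead reserved for the HTX structure itself plus two blocks */
    #define HTX_BUF_OVERHEAD (sizeof(struct htx) + 2 * sizeof(struct htx_blk))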
The first-send-blocked date was originally designed to save the date of
the first send of a series where some data remained blocked. It was
relaxed recently (3083fd90e "BUG/MEDIUM: stconn: Report a send activity
everytime data were sent") to save the date of the first fully blocked
send. However, this is not accurate.
When all data are sent, the fsb value must be reset to TICK_ETERNITY. When
nothing is sent and if it is not already set, it must be set. But when data
are partially sent, the value must be updated and not reset. Otherwise the
write timeout may be ignored because fsb date is never set.
So, the changes brought by the patch above are reverted, and
sc_ep_report_blocked_send() was changed to know whether some data were
sent or not. This way we are able to update the fsb value.
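The resulting logic can be sketched as below (the signature follows the
text above, the body is assumed):

    /* sketch: <did_send> tells whether some output data were sent by
     * the attempt. A complete send resets fsb to TICK_ETERNITY
     * elsewhere; here a partial send refreshes the date while a fully
     * blocked send only sets it when it is not set yet. */
    static inline void sc_ep_report_blocked_send(struct stconn *sc,
                                                 int did_send)
    {
        if (did_send || !tick_isset(sc->sedesc->fsb))
            sc->sedesc->fsb = now_ms;
    }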
This patch must be backported to 2.8.
Some functions are built on the fact that the cache lock must already be
taken by the caller. This patch adds this information to the functions'
descriptions.
This global variable was used to avoid using locks on shared_contexts in
the unlikely case of nbthread==1. Since the locks do nothing when
USE_THREAD is not defined, it is more beneficial to simply remove this
variable and the systematic test on its value in the shared_context
locking functions.
A reference counter on the cache_entry was added in a previous commit.
Its value is atomically increased and decreased via the retain_entry and
release_entry functions.
This is needed because of the latest cache and shared_context
modifications, which introduced two separate locks instead of the
preexisting single shctx_lock.
With the new logic, we have two main blocks competing for the two locks:
- the one in the http_action_req_cache_use that performs a lookup in the
cache tree (locked by the cache lock) and then tries to remove the
corresponding blocks from the shared_context's 'avail' list until the
response is sent to the client by the cache applet,
- the shctx_row_reserve_hot that traverses the 'avail' list and gives
blocks back to the caller, while removing previous row heads from the
cache tree.
Those two blocks require the two locks, but one of them would take the
cache lock first and the other one the shctx_lock first, which would
end in a deadlock without the current patch.
The way this conflict is resolved in this patch is by ensuring that at
least one of those uses works without taking the two locks at the same
time.
The solution found was to keep taking the two locks in the cache_use
case. We first lock the cache to lookup for an entry and we then take
the shctx lock as well to detach the corresponding blocks from the
'avail' list. The subtlety is that between the cache lookup and the
actual locking of the shctx, another thread might have called the
reserve_hot function in which we only take the shctx lock.
In this function we traverse the 'avail' list to remove blocks that are
then given to the caller. If one of those blocks corresponds to a
previous row head, we call the 'free_block' callback that used to
delete the cache entry from the tree.
We now avoid deleting directly the cache entries in reserve_hot and we
rather set the cache entries 'complete' param to 0 so that no other
thread tries to work with this entry. This way, when we release the
shctx lock in reserve_hot, the first thread that had performed the cache
lookup and had found an entry that we just gave to another thread will
see that the 'complete' field is 0 and it won't try to work with this
response.
The actual removal of entries from the cache tree will now be performed
in the new 'reserve_finish' callback, called at the end of the
shctx_row_reserve_hot function. It will iterate over all the row heads
that were inserted in a dedicated list by the 'free_block' callback and
perform the actual deletion.
This patch adds a reserve_finish callback that can be defined by the
subsystems that require a shared_context. It is called at the end of
shctx_row_reserve_hot after the shared_context lock is released.
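The call sequence can be sketched as below (lock helpers and the exact
signature are assumptions; 'reserve_finish' and shctx_row_reserve_hot
come from the text):

    /* sketch: the optional callback runs once the lock is released */
    struct shared_block *shctx_row_reserve_hot(struct shared_context *shctx,
                                               struct shared_block *first,
                                               int data_len)
    {
        struct shared_block *ret = NULL;

        shctx_wrlock(shctx);
        /* ... walk the 'avail' list, calling shctx->free_block on the
         * recycled row heads, and build the reserved row ... */
        shctx_wrunlock(shctx);

        if (shctx->reserve_finish)
            shctx->reserve_finish(shctx); /* lock already released */
        return ret;
    }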
Descend the shctx_lock calls into shctx_row_reserve_hot() so that the
cases when we don't need to lock anything (enough space in the current
row or not enough space in the 'avail' list) do not take the lock at
all.
In sh_ssl_sess_new_cb the lock had to be descended into
sh_ssl_sess_store in order not to cover the shctx_row_reserve_hot call
anymore.
Add a reference counter on the cache_entry. Its value will be atomically
increased and decreased via the retain_entry and release_entry
functions.
The release_entry function has two distinct versions,
release_entry_locked and release_entry_unlocked, which should be called
when the cache lock is already taken in write mode or not
(respectively). In the unlocked case the cache lock will only be taken
in write mode for the last reference of the entry (before calling
delete_entry). This limits the number of times we need to take the
cache lock during a release operation.
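A sketch of the two flavours (function names from the text; the refcount
field and lock helpers are assumptions):

    /* cache lock already held in write mode by the caller */
    static void release_entry_locked(struct cache_tree *tree,
                                     struct cache_entry *entry)
    {
        if (HA_ATOMIC_SUB_FETCH(&entry->refcount, 1) == 0)
            delete_entry(entry);
    }

    /* caller holds no lock: only grab it for the last reference */
    static void release_entry_unlocked(struct cache_tree *tree,
                                       struct cache_entry *entry)
    {
        if (HA_ATOMIC_SUB_FETCH(&entry->refcount, 1) == 0) {
            cache_wrlock(tree);
            delete_entry(entry);
            cache_wrunlock(tree);
        }
    }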
Since a lock on the cache tree was added in the latest cache changes, we
do not need the shared_context's lock to protect more than purely
shared_context-related data anymore. This already existing lock will now
only cover the 'avail' list of the shared_context. It can then be changed
to a rwlock instead of a spinlock because we might sometimes want to only
run through the avail list.
Apart from changing the type of the shctx lock, the main modification
introduced by this patch is to limit the amount of code covered by the
shctx lock. This lock no longer needs to cover any code strictly related
to the cache tree.