Document the new "handshake" timeout setting available on the frontend side.
This should at least be helpful for QUIC client connections to prevent
an attacker from refreshing plenty of connections without completing
the handshake step, leading haproxy to consume memory for nothing.
Adapt session_accept_fd(), called on accept(), to set the handshake timeout from
the "handshake-timeout" setting if set by configuration. If not set, continue to
use the "client" timeout setting.
This bug arrived with this commit:
MINOR: quic: Avoid zeroing frame structures
Before the latter, the CONNECTION_CLOSE frame was zeroed, especially the "reason
phrase length" field.
Re-establish this behavior.
No need to backport.
This date is shared between the idle timer and the handshake timeout. So, it is
useful to dump the expiration date of the idle timer task itself, in place of the
idle timer expiration date. This way, the handshake timeout value will be visible
during the handshake from the CLI "show quic full" command.
The idle timer task may be used to trigger the client handshake timeout.
The handshake timeout expiration date (qc->hs_expire) is initialized when the
connection is allocated. Obviously, this timeout is taken into account only
during the handshake, by qc_idle_timer_do_rearm() whose job is to rearm the idle timer.
The idle timer expiration date could be initialized only once, then
never updated until the handshake completes. But this only works if the
handshake timeout is smaller than the idle timer task timeout. If the handshake
timeout is set greater than the idle timeout, the latter may expire before the
handshake timeout.
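As a standalone sketch of that rearming rule (all names are illustrative and
both dates are assumed to be absolute ticks):

    /* While the handshake is not complete, the idle timer task must be
     * armed on whichever of the idle expiration date and the handshake
     * expiration date (qc->hs_expire above) comes first; afterwards,
     * only the idle date matters. */
    static unsigned int idle_task_expiry(unsigned int idle_expire,
                                         unsigned int hs_expire,
                                         int handshake_completed)
    {
        if (handshake_completed)
            return idle_expire;
        return hs_expire < idle_expire ? hs_expire : idle_expire;
    }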
This patch may have an impact on the L1/C1 interop tests (with heavy packet loss
or corruption). This is probably why some implementations with handshake timeout
support set a big timeout during this test. This is at least the case for ngtcp2,
which sets a 180s handshake timeout! haproxy will certainly have to proceed the
same way if it wants to have a chance to pass this test as it did before this
handshake timeout was added.
Add a new timeout for the handshake, on the frontend side only. Such a handshake
timeout will typically be used for TLS handshakes during client connections to
TLS/TCP or QUIC frontends.
Since the reload is now synchronous over the master CLI, try to reload
with it. This was a problem before with the signals because it wasn't
possible to wait for the end of the reload before sending the requests.
This activates this test again; we will see if it is more stable or if we
will deactivate it again.
The shctx lock was changed from a SPINLOCK to a RWLOCK in commit ed35b94
"MEDIUM: cache: Switch shctx spinlock to rwlock and restrict its scope"
but a SPIN_INIT was left behind.
This patch does not need to be backported.
A partial send is an activity, not a full blocking. Thus a read activity must
be reported for non-independent streams. It is especially important for very
congested streams where full sends are uncommon.
This patch must be backported to 2.8.
For applets and connections, when a send attempt is performed, we must be
sure not to report a send activity if there was no output data at all before
the attempt.
It is not important for the <fsb> date itself but for the <lra> date for
non-independent streams.
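A standalone model of that rule (all names illustrative):

    /* An attempt with no output data beforehand must not be reported as
     * a send activity: it would otherwise refresh the <lra> date of
     * non-independent streams for nothing. */
    struct activity_dates {
        unsigned int fsb;  /* first-send-blocked date */
        unsigned int lra;  /* last-read-activity date */
    };

    static void report_send_attempt(struct activity_dates *d,
                                    unsigned long out_before,
                                    unsigned long sent, unsigned int now)
    {
        if (!out_before)
            return;        /* no output data at all: report nothing */
        if (sent)
            d->lra = now;  /* real activity on a non-independent stream */
    }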
This patch must be backported to 2.8.
Some channel functions are used to check if the channel's buffer is full, not
empty, or if there are input data. However, the functions used are not
HTX-aware. So the result is not accurate and may prevent some actions from
being performed (however, it is not certain there are real issues). Because
HTX-aware versions now exist, use them instead.
This patch may be backported as far as 2.2. It relies on
* "MINOR: channel: Add functions to get info on buffers and deal with HTX streams"
* "MINOR: htx: Use a macro for overhead induced by HTX"
Since the HTX was introduced, the streamer detection is broken for HTX
streams because the HTX overhead was not counted in the test to set
CF_STREAMER and CF_STREAMER_FAST flags.
The consequence was that the consumer side was no longer able to send more
than tune.ssl.maxrecord at a time in SSL.
To fix the issue, we now count the HTX overhead of HTX streams to be able to
set CF_STREAMER/CF_STREAMER_FAST flags on a channel.
This patch relies on the following commits:
* "MINOR: channel: Add functions to get info on buffers and deal with HTX streams"
* "MINOR: htx: Use a macro for overhead induced by HTX"
The series must be backported as far as 2.2.
This patch adds HTX-aware versions of the functions c_data(), ci_data() and
c_empty(). The channel_data() function returns the amount of data in the
channel, channel_input_data() returns the amount of input data and
channel_empty() returns true if the channel's buffer is empty. These
functions handle HTX buffers.
In addition, the channel_data_limit() function, also HTX-aware, can be used to
get the maximum absolute amount of data that can be copied into a buffer,
independently of the data already present in the buffer.
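A standalone model of the idea behind these wrappers (the real functions
inspect the channel and its HTX message; everything below is simplified and
the names are illustrative):

    /* An HTX-aware accessor must report the space used inside the HTX
     * message, not the raw buffer length, when the stream carries HTX. */
    static unsigned long channel_data_model(int is_htx,
                                            unsigned long raw_data,
                                            unsigned long htx_used)
    {
        return is_htx ? htx_used : raw_data;
    }

    static int channel_empty_model(int is_htx, unsigned long raw_data,
                                   unsigned long htx_used)
    {
        return channel_data_model(is_htx, raw_data, htx_used) == 0;
    }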
The overhead induced by the HTX format was set to the HTX structure itself
and two HTX blocks. It was set this way to optimize zero-copy during
transfers. This value may (and will) be used at different places. Thus we
now use a macro, called HTX_BUF_OVERHEAD.
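Assuming the overhead is indeed the HTX descriptor plus two block headers, the
macro amounts to something like:

    /* sketch: struct htx is the message descriptor, struct htx_blk a
     * per-block header; the reserved overhead is one descriptor plus
     * two block headers, as described above. */
    #define HTX_BUF_OVERHEAD (sizeof(struct htx) + 2 * sizeof(struct htx_blk))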
The first-send-blocked date was originally designed to save the date of the
first send of a series where some data remain blocked. It was relaxed
recently (3083fd90e "BUG/MEDIUM: stconn: Report a send activity everytime
data were sent") to save the date of the first full blocked send. However,
it is not accurrate.
When all data are sent, the fsb value must be reset to TICK_ETERNITY. When
nothing is sent and if it is not already set, it must be set. But when data
are partially sent, the value must be updated, not reset. Otherwise the
write timeout may be ignored because the fsb date is never set.
So, the changes brought by the patch above are reverted and
sc_ep_report_blocked_send() was changed to know whether some data were sent or
not. This way we are able to update the fsb value.
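The resulting rule can be modeled like this (names illustrative; 0 stands for
TICK_ETERNITY):

    /* - data partially sent: restart fsb from now (update, not reset)
     * - nothing sent: set fsb only if not already set
     * - everything sent: fsb is reset elsewhere, on success */
    static void report_blocked_send(unsigned int *fsb, int did_send,
                                    unsigned int now)
    {
        if (did_send || !*fsb)
            *fsb = now;
    }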
This patch must be backported to 2.8.
Some functions are built on the fact that the cache lock must be already
taken by the caller. This patch adds this information in the functions'
descriptions.
This global variable was used to avoid using locks on shared_contexts in
the unlikely case of nbthread==1. Since the locks do not do anything
when USE_THREAD is not defined, it will be more beneficial to simply
remove this variable and the systematic test on its value in the shared
context locking functions.
A reference counter on the cache_entry was added in a previous commit.
Its value is atomically increased and decreased via the retain_entry and
release_entry functions.
This is needed because of the latest cache and shared_context
modifications that introduced two separate locks instead of the
preexisting single shctx_lock one.
With the new logic, we have two main blocks competing for the two locks:
- the one in the http_action_req_cache_use that performs a lookup in the
cache tree (locked by the cache lock) and then tries to remove the
corresponding blocks from the shared_context's 'avail' list until the
response is sent to the client by the cache applet,
- the shctx_row_reserve_hot that traverses the 'avail' list and gives
blocks back to the caller, while removing previous row heads from the
cache tree
Those two blocks require the two locks but one of them would take the
cache lock first, and the other one the shctx_lock first, which would
end in a deadlock without the current patch.
The way this conflict is resolved in this patch is by ensuring that at
least one of those uses works without taking the two locks at the same
time.
The solution found was to keep taking the two locks in the cache_use
case. We first lock the cache to lookup for an entry and we then take
the shctx lock as well to detach the corresponding blocks from the
'avail' list. The subtlety is that between the cache lookup and the
actual locking of the shctx, another thread might have called the
reserve_hot function in which we only take the shctx lock.
In this function we traverse the 'avail' list to remove blocks that are
then given to the caller. If one of those blocks corresponds to a
previous row head, we call the 'free_block' callback that used to
delete the cache entry from the tree.
We now avoid directly deleting the cache entries in reserve_hot and
rather set the cache entries' 'complete' param to 0 so that no other
thread tries to work with this entry. This way, when we release the
shctx lock in reserve_hot, the first thread that had performed the cache
lookup and had found an entry that we just gave to another thread will
see that the 'complete' field is 0 and it won't try to work with this
response.
The actual removal of entries from the cache tree will now be performed
in the new 'reserve_finish' callback called at the end of the
shctx_row_reserve_hot function. It will iterate on all the row heads that
were inserted in a dedicated list in the 'free_block' callback and
perform the actual delete.
This patch adds a reserve_finish callback that can be defined by the
subsystems that require a shared_context. It is called at the end of
shctx_row_reserve_hot after the shared_context lock is released.
Descend the shctx_lock calls into the shctx_row_reserve_hot so that the
cases when we don't need to lock anything (enough space in the current
row or not enough space in the 'avail' list) do not take the lock at
all.
In sh_ssl_sess_new_cb the lock had to be descended into
sh_ssl_sess_store in order not to cover the shctx_row_reserve_hot call
anymore.
Add a reference counter on the cache_entry. Its value will be atomically
increased and decreased via the retain_entry and release_entry
functions.
The release_entry function has two distinct versions,
release_entry_locked and release_entry_unlocked that should be called
when the cache lock is already taken in write mode or not
(respectively). In the unlocked case the cache lock will only be taken
in write mode on the last reference of the entry (before calling
delete_entry). This limits the number of times we need to
take the cache lock during a release operation.
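A standalone model of the unlocked variant (atomics from <stdatomic.h>;
locking and deletion are stubbed out, and all names are illustrative):

    #include <stdatomic.h>

    struct cache_entry_model {
        atomic_int refcount;
    };

    /* Only the thread dropping the last reference takes the cache lock
     * in write mode, right before deleting the entry from the tree. */
    static void release_entry_unlocked_model(struct cache_entry_model *e)
    {
        if (atomic_fetch_sub(&e->refcount, 1) == 1) {
            /* cache_wrlock(); delete_entry(e); cache_wrunlock(); */
        }
    }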
Since a lock on the cache tree was added in the latest cache changes, we
do not need to use the shared_context's lock to lock more than pure
shared_context related data anymore. This already existing lock will now
only cover the 'avail' list from the shared_context. It can then be
changed to a rwlock instead of a spinlock because we might want to only
run through the avail list sometimes.
Apart from changing the type of the shctx lock, the main modification
introduced by this patch is to limit the amount of code covered by the
shctx lock. This lock does not need to cover any code strictly related
to the cache tree anymore.
After the latest changes in the cache/shared_context mechanism, the
cache and shared_context logic were decorrelated and in some unlikely
cases we might end up using the "show cache" command while some regular
cache processing is occurring (a response being stored in the cache for
instance). In such a case, because we used the same 'trash' buffer in
those two contexts, we could end up with the contents of a response in
the output of the "show cache" command.
This patch fixes this problem by allocating a dedicated trash for the
CLI command.
The "hot" list stored in a shared_context was used to keep a reference
to shared blocks that were currently being used and were thus removed
from the available list (so that they don't get reused for another cache
response). This 'hot' list does not ever need to be shared across
threads since every one of them only works on their current row.
The main need behind this 'hot' list was to detach the corresponding
blocks from the 'avail' list and to have a known list root when calling
list_for_each_entry_from in shctx_row_data_append (for instance).
Since we actually never need to iterate over all members of the 'hot'
list, we can remove it and replace the inc_hot/dec_hot logic by a
detach/reattach one.
When looking for a valid entry in the cache tree in
http_action_req_cache_use, we do not need to delete an expired entry at
once: even if an expired entry exists, the request will be forwarded to
the server and the expired entry will be overwritten when the updated
response is seen. We can then use a simpler rdlock during the
cache_use operation.
Any lookup in the cache tree done through the entry_exist or
secondary_entry_exist functions could end up deleting the corresponding
entry if it is expired, which prevents using a rdlock on code paths
that would just perform a lookup on the tree (in
http_action_req_cache_use for instance).
Adding a 'delete_expired' boolean as a parameter allows for "pure"
lookups and thus makes it possible to perform operations on the tree that
simply require a rdlock instead of a "heavier" wrlock.
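A standalone model of the lookup semantics (all names illustrative):

    #include <stddef.h>

    struct entry_model {
        unsigned int expire;  /* absolute expiration date */
    };

    /* A pure lookup (delete_expired == 0) never modifies the tree and
     * can run under a rdlock; a cleaning lookup (delete_expired == 1)
     * may delete the expired entry and thus needs the wrlock. */
    static struct entry_model *lookup_model(struct entry_model *e,
                                            unsigned int now,
                                            int delete_expired)
    {
        if (!e)
            return NULL;
        if (e->expire <= now) {
            if (delete_expired) {
                /* wrlock required: delete the entry from the tree */
            }
            return NULL;  /* an expired entry is never returned */
        }
        return e;
    }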
The "show cache" CLI command iterates over all the entries of the cache
tree and it used this opportunity to remove expired entries from the
cache. This behavior was completely undocumented and does not seem that
necessary. By removing it we can take the cache lock in read mode only
which limits the impact on the other threads.
Every use of the cache tree was covered by the shctx lock even when no
operations were performed on the shared_context lists (avail and hot).
This patch adds a dedicated RW lock for the cache so that blocks of code
that work on the cache tree only can use this lock instead of the
superseding shctx one. This is useful for operations during which the
concerned blocks are already in the hot list.
When the two locks need to be taken at the same time, in
http_action_req_cache_use and in shctx_row_reserve_hot, the shctx one
must be taken first.
A new parameter needed to be added to the shared_context's free_block
callback prototype so that cache_free_block can take the cache lock and
release it afterwards.
The shctx_row_reserve_hot relied on two loop levels in order to first
look for the first block of a preused row and then iterate on all the
blocks of this row to reserve them for the new row. This was neither the
simplest nor the easiest-to-read approach, so this logic was replaced by
a single iteration over the avail list members.
The two use cases of calling this function with or without a preexisting
"first" member were a bit cumbersome as well and were replaced by a more
straightforward approach.
Instead of iterating over all the elements of a given row when moving it
between the hot and available lists, we can make use of the last_reserved
pointer that already points to the last block of the list to perform the
move in O(1).
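A standalone sketch of such an O(1) unlink on a circular doubly-linked list,
given the first block and the last_reserved one (names illustrative):

    struct blk_model {
        struct blk_model *prev, *next;
    };

    /* With <first> and <last> known, the whole row is detached from the
     * surrounding list by rewiring two pointers, without walking the
     * row's elements. */
    static void detach_row(struct blk_model *first, struct blk_model *last)
    {
        first->prev->next = last->next;
        last->next->prev = first->prev;
    }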
Ensure that the last_append pointer is always set to NULL on first block
of rows reserved by the subsystems using the shctx (cache for instance).
This pointer will be used directly in shctx_row_data_append instead of
the 'from' param which will simplify its uses.
A backend connection is inserted in server idle list via
srv_add_to_idle_list(). This function has several conditions which may
cause the connection to be rejected instead.
One of these conditions is based on the current estimated count of needed
connections for the server. If the count of stored idle connections has
already reached this estimate, the new connection is rejected. This is
in opposition with the purpose of reverse HTTP. On active reverse,
haproxy can instantiate several connections to properly serve the future
traffic. However, the opposite passive haproxy will have only a low
estimate of needed connections and will reject most of them.
To fix this, simply check the CO_FL_REVERSED connection flag in
srv_add_to_idle_list(). If it is set, the connection is inserted without
checking the estimate count. Note that all other conditions are not
impacted, so it is still possible to reject a connection, for example if
the process FD limit is reached.
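A standalone model of the adjusted condition (the flag value and all names are
illustrative):

    #define CO_FL_REVERSED_MODEL 0x40000000u  /* illustrative value */

    /* A reversed connection bypasses the estimate-based rejection; the
     * other conditions (FD limit, ...) still apply elsewhere. */
    static int estimate_allows_idle(unsigned int conn_flags,
                                    unsigned int curr_idle_conns,
                                    unsigned int est_need_conns)
    {
        if (conn_flags & CO_FL_REVERSED_MODEL)
            return 1;
        return curr_idle_conns < est_need_conns;
    }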
This commit relies on a recent patch which changes the CO_FL_REVERSED
flag for a connection after passive reverse.
On passive reverse, the H2 mux is responsible for inserting the connection in
the server idle list. This is done via srv_add_to_idle_list(). However,
this function may fail for various reasons, such as the FD usage limit
being reached.
Properly handle this error case. The H2 mux flags the connection on error,
which will cause its release. Prior to this patch, the connection was
only released on server timeout.
This bug was found while inspecting the server curr_used_conns counter. Indeed,
on connection reverse, this counter is first incremented. It is decremented
just after, in srv_add_to_idle_list(), if the insertion is validated. However,
if the insertion is rejected, the connection was not released, which caused
curr_used_conns to remain positive. This has the major downside of breaking
the reuse of idle connections on rhttp, causing spurious 503
errors.
No need to backport.
Change the flags used for reversed connections:
* CO_FL_REVERSED is now set after reversal for passive connect. For
active connect, it is delayed until the accept is completed after reversal.
* CO_FL_ACT_REVERSING replaces the old CO_FL_REVERSED. It is set only for
active connect on reversal and removed once the accept is done.
This allows identifying a connection as reversed during its whole
lifetime. This should be useful to extend reverse connect.
The commit 5ff7d2276 ("BUG/MEDIUM: stream: Properly handle abortonclose when set
on backend only") introduced a regression. Not all multiplexer implement the
.ctl() callback function. Thus we must be sure this callback function is defined
first to call it.
This patch should fix a crash reported by Tristan in the issue #2095. It must be
backported as far as 2.2, with the commit above.
Since 2.7 and the mcli_reload_bind_conf (56f73b21a5), upon a reload
failure because of a bind error, mcli_reload_bind_conf goes through a
sock_unbind(). This is not supposed to do anything when a listener has
RX_F_INHERITED set in the master, but unfortunately this is done too early
and provokes an exit of the master.
We already suspected in the past that setting the 'master' variable this
late could have a negative impact.
The fix sets the master variable earlier, before the bind.
This must be backported at least to 2.7. It could be backported
earlier, but it is better to wait for feedback on the fix.
Seeing counters in "show profiling" is not always very helpful without
an indication of how long the analysis lasted nor if it's still active
or not. Let's add a pair of start/stop timers for tasks and memory so
that we can now indicate how long the measurements lasted and when they
ended (or 0 if still running).
Note that for tasks profiling set to "auto", the measurement is considered
enabled since it can automatically switch on and off on a per-thread
basis.
We now take care to properly handle the abortonclose option if it is
set on the backend and to be sure we ignore it when it is set on the frontend
(inherited from the defaults section).
Since the 2.2 and the commit dedd30610 ("MEDIUM: h1: Don't wake the H1 tasklet
if we got the whole request."), we avoid subscribing for reads if the H1
message is fully received. However, this broke the abortonclose option. To fix
the issue, a CO_RFL flag was added to instruct the mux it should still wait for
read events to properly handle read0. Only the H1 mux was concerned.
But since then, most of the time, the option is only handled if it is set on the
frontend proxy because the request is fully received before selecting the
backend. If the backend is selected before the end of the request, there is no
issue. But otherwise, because the backend is not known yet, we are unable to
properly handle the option and we fail to subscribe for reads.
Of course the option cannot be set on a frontend proxy. So concretely it means
the option is properly handled if it is enabled in the defaults section (if
common to frontend and backend) or on a listen proxy, but it is ignored if it
is set on the backend only.
Thanks to previous patches, we can now instruct the mux that it should subscribe
for reads if not already done. We use this mechanism in process_stream() when
the connection is set up, i.e. when the backend SC is set to the SC_ST_REQ state.
This patch relies on the following patches:
* MINOR: connection: Add a CTL flag to notify mux it should wait for reads again
* MEDIUM: mux-h1: Handle MUX_SUBS_RECV flag in h1_ctl() and subscribe for reads
This patch should fix the issue #2344. All the series must be backported as far
as 2.2.
The H1 mux now handles the MUX_SUBS_RECV flag in h1_ctl(). If it is not already
subscribed for reads, it does so. This patch will be mandatory to properly
handle the abortonclose option.
The abortonclose option is a backend option; it should not be handled on the
frontend side. Of course a frontend can also be a backend, but the option should
not be handled too early because the frontend is not necessarily the selected
backend (think about a listen proxy routing requests to another backend).
It is especially an issue when the abortonclose option is enabled in the
defaults section and disabled by the selected backend. Because in this case,
the option may still be enabled while it should not be.
Thus, we now wait until the backend connection is set up to handle the option.
To do so, we check the backend SC state. The option is ignored if it is in the
SC_ST_INI state. For all other states, it means the backend was already
selected.
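A standalone model of that guard (the state names mirror the description; the
enum and function are illustrative):

    enum sc_state_model { SC_ST_INI_M, SC_ST_REQ_M /* ... */ };

    /* While the backend stream connector is still in its initial state,
     * no backend has been selected yet, so abortonclose cannot be
     * evaluated reliably and must be skipped. */
    static int may_handle_abortonclose(enum sc_state_model backend_sc_state)
    {
        return backend_sc_state != SC_ST_INI_M;
    }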
This patch could be backported as far as 2.2.
An annoying issue was met when testing the reverse-http mechanism, by
which failed connection attempts would apparently not be attempted again
when there was no connect timeout. It turned out to be more generalized
than the rhttp system, and actually affects all outgoing connections
relying on NPN or ALPN to choose the mux, on which no mux is installed
and for which the subscriber (ssl_sock) must be notified instead.
The problem appeared during 2.2-dev1 development. First, commit
062df2c23 ("MEDIUM: backend: move the connection finalization step to
back_handle_st_con()") broke the error reporting by testing CO_FL_ERROR
only under CO_FL_CONNECTED. While it still worked OK for cases where a
mux was present, it did not for this specific situation because no
single error path would be considered when no mux was present. Changing
the CO_FL_CONNECTED test to also include CO_FL_ERROR did work, until
a few commits later with 477902bd2 ("MEDIUM: connections: Get rid of
the xprt_done callback.") which removed the xprt_done callback that was
used to indicate success or failure of the transport layer setup, since,
as the commit explains, we can report this via the mux. What this last
commit says is true, except when there is no mux.
For this, however, the sock_conn_iocb() function (formerly conn_fd_handler)
is called for such errors, evaluates a number of conditions, none of which
is matched in this error condition case, since sock_conn_check() instantly
reports an error causing a jump to the leave label. There, the mux is
notified if installed, and the function returns. In other error condition
cases, readiness and activity are checked for both sides, the tasklets
woken up and the corresponding subscriber flags removed. This means that
a sane (and safe) approach would consist in just notifying the subscriber
in case of error, if such a subscriber still exists: if still there, it
means the event hasn't been caught earlier, then it's the right moment
to report it. And since this is done after conn_notify_mux(), it still
leaves all control to the mux once it's installed.
This commit should be progressively backported as far as 2.2 since it's
where the problem was introduced. It's important to clearly check the
error path in each function to make sure the fix still does what it's
supposed to.
This bug arrived with this commit:
MINOR: quic: Add a max window parameter to congestion control algorithms
The documentation was modified, but with missing/wrong modifications in the code part.
The 'g' suffix must be accepted to parse values in gigabytes. And exactly 4g is
also accepted.
No need to backport.
Make all the congestion control algorithms support the maximum congestion
control window set by configuration. There is nothing special to explain. For
each algo, each time the window is incremented it is also bounded.
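The bounding applied on every increase can be modeled as follows (names
illustrative):

    /* Grow the congestion window by <inc> but never beyond the maximum
     * window configured for the algorithm. */
    static unsigned long long grow_cwnd(unsigned long long cwnd,
                                        unsigned long long inc,
                                        unsigned long long max_cwnd)
    {
        cwnd += inc;
        return cwnd > max_cwnd ? max_cwnd : cwnd;
    }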