Most invalid or unknown fields in the stats-file parser are ignored
silently, but that is not the case for a frontend/backend mismatch on a
GUID, which is somewhat inconsistent.
Since this is "documented" as ignored in the
reg-tests/stats/sample-stats-file file, let's also ignore this kind of
line. This will allow running the associated reg-test with -dW.
To improve debugging, extend "show quic" output to report whether pacing
is activated on a connection. Two values are displayed for pacing:
* a new counter paced_sent_ctr is defined in the QCC structure. It is
incremented each time an emission is interrupted due to pacing.
* the pacing engine now saves the number of datagrams sent in the last
paced emission. This is helpful to ensure the burst parameter is valid.
Define a new QUIC congestion algorithm token 'cubic-pacing' for the
quic-cc-algo bind keyword. This is identical to the default cubic
implementation, except that pacing is used for STREAM frame emission.
This algorithm supports an extra argument to specify a burst size. It is
stored into a new bind_conf member named quic_pacing_burst, which can be
reused to initialize the QUIC path.
Pacing support is still considered experimental. As such, 'cubic-pacing'
can only be used with expose-experimental-directives set.
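For illustration, here is what a configuration enabling it could look
like; the exact optional-argument syntax (window size, then burst) is an
assumption based on the parameter list described below:

global
    expose-experimental-directives

frontend quic_fe
    bind quic4@:443 ssl crt /path/to/cert.pem alpn h3 quic-cc-algo cubic-pacing(120k,40)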
Modify quic-cc-algo for better extensibility of optional parameter
parsing. This will be useful to support a new parameter for the maximum
allowed pacing burst size.
Take this opportunity to refine the quic-cc-algo documentation. Optional
parameters are now presented as a list, which will soon be extended.
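As a rough illustration of the intended structure (this is not the
actual HAProxy parser, and the helper names are hypothetical), splitting
the parenthesized argument list into positional tokens makes adding a
new optional parameter a one-line change:

    /* illustrative sketch: parse "algo(arg0,arg1,...)" */
    char *args = strchr(token, '(');
    if (args) {
        char *end = strchr(++args, ')');
        if (!end)
            return parse_error("missing ')'");  /* hypothetical helper */
        *end = '\0';
        int idx = 0;
        for (char *arg = strtok(args, ","); arg;
             arg = strtok(NULL, ","), ++idx) {
            switch (idx) {
            case 0: /* optional max window size */ break;
            case 1: /* optional pacing burst (new) */ break;
            default: return parse_error("too many arguments");
            }
        }
    }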
A QUIC congestion algorithm can be specified on the bind line via the
quic-cc-algo keyword. As such, the bind_conf structure has a
quic_cc_algo member.
Previously, if quic-cc-algo was set, the bind_conf member was
initialized to one of the globally defined CC algo structures. This
patch changes the bind_conf quic_cc_algo initialization to point to a
dynamically allocated copy of the CC algo structure.
With this change, it will be possible to tweak each CC algo of a bind
line individually. This will be used to activate pacing on top of the
congestion algorithm.
As the bind_conf member is now dynamically allocated, it is freed via
free_proxy() to prevent any leak.
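A minimal sketch of the idea, assuming the global cubic definition is
named quic_cc_algo_cubic and that the copy is released when free_proxy()
walks the bind_confs:

    /* bind keyword parsing: duplicate the global algo so it can be
     * tweaked per bind line (e.g. to plug pacing on top of it later) */
    struct quic_cc_algo *algo = malloc(sizeof(*algo));
    if (!algo)
        return ERR_ALERT | ERR_FATAL;
    *algo = quic_cc_algo_cubic;  /* copy of the global definition */
    conf->quic_cc_algo = algo;

    /* free_proxy(): the copy is now owned by the bind_conf */
    free(bind_conf->quic_cc_algo);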
Support pacing emission for STREAM frames at the QUIC MUX layer. This is
implemented by adding a quic_pacer engine into the QCC structure.
The main changes are in qcc_io_send(). It now differentiates the cases
where some frames have been rejected by the transport layer. As before,
this can occur due to congestion or a full FD buffer, which requires
subscribing on the transport layer. The new case is when emission has
been interrupted due to pacing timing. In this case, the QUIC MUX I/O
tasklet is rescheduled to run with the flag TASK_F_USR1.
On tasklet execution, if TASK_F_USR1 is set, all standard processing for
emission and reception is skipped. Instead, a new function
qcc_purge_sending() is called. Its purpose is to retry emission with the
saved STREAM frame list. Either all remaining frames can now be sent, a
subscribe is performed on transport error, or the tasklet is rescheduled
again for pacing purging.
In the meantime, if the tasklet is rescheduled due to other conditions,
TASK_F_USR1 is reset. This will trigger a full regeneration of STREAM
frames. In this case, pacing expiration must be checked before calling
qcc_send_frames() to ensure emission is now allowed.
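A hedged sketch of the dispatch described above; apart from TASK_F_USR1,
qcc_purge_sending() and qcc_io_send(), the names are assumptions:

    struct task *qcc_io_cb(struct task *t, void *ctx, unsigned int state)
    {
        struct qcc *qcc = ctx;

        if (state & TASK_F_USR1) {
            /* wakeup requested purely for pacing: skip the regular
             * processing and retry the saved STREAM frame list */
            qcc_purge_sending(qcc);
            return t;
        }

        /* regular path (reception handling omitted from this sketch):
         * the STREAM frame list is rebuilt from scratch */
        qcc_io_send(qcc);
        return t;
    }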
QUIC MUX will be responsible for driving emission with pacing. This will
be implemented by setting TASK_F_USR1 before the I/O tasklet wakeup. To
prepare for this, encapsulate each I/O tasklet wakeup into a new function
qcc_wakeup().
This commit is purely refactoring prior to pacing implementation into
QUIC MUX.
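A sketch of what the wrapper may look like (member names assumed);
clearing TASK_F_USR1 on a regular wakeup is what the pacing commit above
relies on to regenerate the STREAM frames:

    static void qcc_wakeup(struct qcc *qcc)
    {
        HA_ATOMIC_AND(&qcc->wait_event.tasklet->state, ~TASK_F_USR1);
        tasklet_wakeup(qcc->wait_event.tasklet);
    }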
For STREAM emission, the QUIC MUX previously used a local list defined
in qcc_io_send(). This was suitable as either all frames were sent, or
emission had to be interrupted due to transport congestion or a fatal
error. In the latter case, the list was emptied anyway and a new frame
list was built on a future qcc_io_send() invocation.
For pacing, the QUIC MUX may have to keep the frame list if pacing is
applied across emissions. This is necessary to avoid unnecessarily
rebuilding the STREAM frame list between each paced emission. To support
this, the STREAM list is now stored as a member of the QCC structure.
Ensure the frame list is always released, even on QCC release, using the
newly defined utility function qcc_tx_frms_free().
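A rough sketch of the release helper; the list head name and the frame
release call are assumptions made for illustration:

    /* release any STREAM frames still queued on the QCC */
    static void qcc_tx_frms_free(struct qcc *qcc)
    {
        struct quic_frame *frm, *back;

        list_for_each_entry_safe(frm, back, &qcc->tx.frms, list) {
            LIST_DELETE(&frm->list);
            release_frm(frm);  /* hypothetical frame release call */
        }
    }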
qc_send_mux() has been extended previously to support pacing emission.
This ensures that no more than one datagram will be emitted during each
invocation. However, to achieve better performance, it may be necessary
to emit a batch of several datagrams in one turn.
A so-called burst value can be specified by the user in the
configuration. However, some congestion control algos may define their
own dynamic value. As such, a new CC callback pacing_burst is defined.
quic_cc_default_pacing_burst() can be used for algos without pacing
interaction, such as cubic. It returns a static value based on the
user-selected configuration.
Pacing will be implemented for STREAM frame emission. As such, the
qc_send_mux() API has been extended with an argument for a quic_pacer
engine.
If non-NULL, the engine will be used to pace emission. In short, no more
than one datagram will be emitted for each qc_send_mux() invocation. The
pacer is then notified about the emission and a timer for a future
emission is calculated. qc_send_mux() will return a PACING error value to
inform the QUIC MUX layer that it is responsible for retrying emission
after some delay.
Extend the quic_pacer engine to support pacing emission. Several
functions are defined (a usage sketch follows the list):
* quic_pacing_sent_done() to notify the engine about an emission of one
or several datagrams
* quic_pacing_expired() to check whether emission should be delayed or
can be conducted immediately
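An illustrative usage sketch around an emission attempt; the exact
signatures are assumptions:

    if (!quic_pacing_expired(pacer)) {
        /* too early: reschedule the I/O tasklet with TASK_F_USR1 so
         * that qcc_purge_sending() retries later */
    }
    else {
        int sent = 0;
        /* ... emit up to the allowed burst of datagrams, counting
         * them into <sent> ... */
        quic_pacing_sent_done(pacer, sent);  /* schedule next emission */
    }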
This commit is part of an adjustment of the QUIC transport send API to
support pacing. Here, the qc_send_mux() return type has been changed to
use a new enum quic_tx_err.
This is useful to distinguish the different failure causes of an
emission. For now, only two values have been defined: NONE and FATAL.
When pacing is implemented, a new value will be added to specify that
emission was interrupted by pacing. This won't be a fatal error as it
allows emission to be retried, just not immediately.
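For illustration, the new return type could look like this; the member
names beyond the NONE/FATAL values mentioned above are assumptions:

    enum quic_tx_err {
        QUIC_TX_ERR_NONE,   /* emission completed, nothing more to do */
        QUIC_TX_ERR_FATAL,  /* unrecoverable transport error */
        /* a PACING value will be added later: retry, but not now */
    };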
Extend the QUIC transport emission function to support a maximum
datagram argument. The purpose is to ensure that qc_send() won't emit
more than the specified value, unless it is 0, which is considered
unlimited.
In qc_prep_pkts(), a counter of built datagrams has been added to
support this. The packet building loop is interrupted once it reaches
the specified maximum value. Also, its return value has been changed to
the number of prepared datagrams. This is reused by qc_send() to
interrupt its work if the specified max datagram argument value is
reached over one or several iterations of prepared/sent datagrams.
This change is necessary to support pacing emission. Note that ideally,
the total length in bytes of emitted datagrams should be taken into
account instead of the raw number of datagrams. However, for a first
implementation, it was deemed easier to implement it with the latter.
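A simplified sketch of the capping logic, with variable and condition
names assumed, and 0 meaning unlimited:

    int dgram_cnt = 0;

    while (more_packets_to_build(qc)) {  /* condition name assumed */
        /* ... build one datagram into the TX buffer ... */
        ++dgram_cnt;
        if (max_dgrams && dgram_cnt >= max_dgrams)
            break;  /* stop once the requested count is reached */
    }
    return dgram_cnt;  /* qc_send() sums this over its iterations */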
To prepare for pacing support, the qc_prep_pkts() exit paths have been
rewritten to be easily modified. This is pure refactoring which should
not cause any functional change:
* a dedicated error path has been added
* ensure qc_txb_store() is always called to finalize the datagram on the
normal exit path if first_pkt is not NULL. This is needed to support
breaking out of the packet building loop more easily.
Add the QCC QC_CF_WAIT_FOR_HS and QCS QC_SF_TXBUB_OOB flags to their
respective show_flags to be able to decipher them via the dev flags
utility.
These values were added in the current dev version, thus there is no
need to backport this patch.
Rename 'tune.quic.frontend.max-window-size' with the prefix 'default-'.
This highlights the fact that it is not a hard limit, as it can be
overridden by specifying an optional window size via quic-cc-algo on a
bind line.
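After the rename, the global default and a per-bind override can be
expressed as follows (the window values are arbitrary examples):

global
    tune.quic.frontend.default-max-window-size 2m

frontend quic_fe
    bind quic4@:443 ssl crt /path/to/cert.pem alpn h3 quic-cc-algo cubic(8m)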
No need to backport as this keyword was added on the current dev
version.
The server response time is erroneously reported as -1 when it is
intercepted by HAProxy.
As stated in the documentation, the server response time is reported as
-1 when the last response header was never seen. This happens when a
server timeout is triggered before the server managed to process the
request. It also happens if the response is invalid. This may be
reported by the mux during response parsing, but also by the HTTP
analyzers. However, in this last case, the response time must only be
reported as -1 on a 502.
This patch must be backported to all stable versions. It should fix the
issue #2384.
When the request is too large to fit in a buffer, a 414 or a 431 error
message is returned depending on the error state of the request parser.
A 414 is returned if the URI is too long, otherwise a 431 is returned.
This patch should fix the issue #1309.
414-Uri-Too-Long and 431-Request-Header-Fields-Too-Large are now part of
the supported status codes that can be defined as error files. The hash
table defined in http_get_status_idx() was updated accordingly.
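For example, custom pages can now be configured for these codes:

errorfile 414 /etc/haproxy/errorfiles/414.http
errorfile 431 /etc/haproxy/errorfiles/431.http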
"show sess" command now supports a list of options that can be set after all
other possible arguments (<id>, all...). For now, "show-uri" is the only
supported option. With this options, the captured URI, if non-null, is added
to the dump of a stream, complete or now. The URI may be anonymized if
necessary.
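For example, on the CLI:

show sess show-uri
show sess all show-uri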
This patch should fix the issue #663.
Historically, an agent-check program is only able to set a weight
proportional to the server's initial weight. However, it could be handy
to also set an absolute value. That is the purpose of this patch.
Instead of changing the current way to set a server's weight, a new
agent-check command is introduced. The string "weight:", followed by a
positive integer or a positive integer percentage, can now be used. If
the value ends with the '%' sign, then the new weight will be
proportional to the initial weight of the server. Otherwise, the value
is considered an absolute weight and must be between 0 and 256.
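For example, an agent may reply "weight:50%" to set the weight to half
of the initial weight, or "weight:64" to set an absolute weight of 64.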
This patch should fix the issue #360.
It is now possible to use a log-format string to define the "Set-Cookie"
header value of a response generated by a redirect rule. There is no
special check on the resulting format, and such a check is not possible
during configuration parsing. It is probably not a big deal because the
already existing "set-cookie" and "clear-cookie" options don't perform
any check.
Here is an example:
http-request redirect location https://someurl.com/ set-cookie haproxy="%[var(txn.var)]"
This patch should fix the issue #1784.
On a prefix-based redirect, there is an option to drop the query-string
of the location. Here it is the opposite: an option is added to preserve
the query-string of the original URI for a location-based redirect.
By setting "keep-query" option, for a location-based redirect only, the
query-string of the original URI is appended to the location. If there is no
query-string, nothing is added (no empty '?'). If there is already a
non-empty query-string on the localtion, the original one is appended with
'&' separator.
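Here is an example (option placement alongside the other redirect
options is assumed):

http-request redirect location /new/path code 302 keep-query

With this rule, a request for "/old/path?foo=bar" is redirected to
"/new/path?foo=bar".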
This patch should fix issue #2728.
Before this patch, HAPROXY_BRANCH was unset just after configuration
parsing. Let's keep it, as it could be used in conditional blocks and
some configuration directives, and it's handy to check its runtime value
via "show env".
In master-worker mode, this variable is set to the same value for both
processes.
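For example, it can be used in a conditional block:

.if streq("$HAPROXY_BRANCH","3.1")
    # directives specific to this branch
.endif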
Before this patch, we needed to put the master CLI in debug mode to be
able to issue the 'show env' command for the master process. The output
of this command is handy even in the master process context, as it
allows to check its environment variables, which could be used/modified
in the 'global' section.
So, let's set the level ACCESS_MASTER in the 'show env' command
structure. This allows to see and to access this command in the master
CLI without putting it in debug mode.
As the master now parses the expose-deprecated-directives option, let's
emit the warning about the deprecated 'program' section only if this
option wasn't set in the 'global' section. This allows people who prefer
not to remove the 'program' section immediately to continue to start the
process in zero-warning mode.
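For example, a configuration like this one can now start with zero
warnings:

global
    expose-deprecated-directives

program my-sidecar
    command /usr/local/bin/sidecar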
Adjust the warning message accordingly, as well as the
mcli_start_progs.vtc test. As expose-deprecated-directives is a 'global'
section keyword, this section must always precede any 'program' section,
if users still keep a 'program' section.
This doesn't need to be backported, as related to the latest changes in
the master-worker architecture.
The 'program' section is considered deprecated now, see commit
581c8a27d9 ("MEDIUM: mworker: depreciate the 'program' section"). So,
since that commit, the 'program' section parser emits a warning every
time the section is present. This makes it impossible to launch the
process in zero-warning mode.
After the master-worker refactoring, only the master process parses the
'program' section. So, at first, in order to be able to start in
zero-warning mode, the master process needs to parse the option which
allows deprecated keywords. Thus, let's set in this commit the
KWF_DISCOVERY flag on the cfg_parse_global_non_std_directives parser,
which parses the 'expose-deprecated-directives' and
'expose-experimental-directives' options.
The ring size used to take only numbers and silently ignore letters (due
to atol()), resulting in tiny buffers when trying to collect traces and
using e.g. "size 10g". Let's make use of parse_size_err() to properly
parse units.
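For example, in a ring section used to collect traces:

ring traces
    size 10g

With this fix, "10g" is now parsed as 10 gigabytes instead of just 10.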
parse_size_err() is currently a function working only on an uint. It's
not convenient for certain elements such as rings on large machines.
This commit addresses this by having one function for uints and one
for ullongs, and making parse_size_err() a macro that automatically
calls one or the other. It also has the benefit of automatically
supporting compatible types (long, size_t etc).
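One possible shape for the dispatch; the helper names below are
assumptions, only the parse_size_err() name comes from the commit:

    /* pick the uint or ullong parser based on the pointed-to type */
    #define parse_size_err(str, ret)                        \
        (sizeof(*(ret)) > sizeof(uint) ?                    \
            parse_size_err_ull((str), (ullong *)(ret)) :    \
            parse_size_err_uint((str), (uint *)(ret)))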
From time to time we face a configuration with very small timeouts which
look accidental, because there could be an expectation that they're
expressed in seconds and not milliseconds.
This commit adds a check for non-zero unitless values smaller than 100
and emits a warning suggesting to append an explicit unit if that was
the intent.
Only the common timeouts, the server check intervals and the resolvers
hold and timeout values are covered for now. All the code needs to be
manually reviewed to verify whether it supports emitting warnings.
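For example, "timeout client 30" now triggers a warning suggesting
"30s" if seconds were intended, while "timeout client 30s" (or a
deliberate unitless value of 100 or more) remains silent.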
This may break some configs using "zero-warning", but greps in existing
configs indicate that these are extremely rare and only done
intentionally during tests. At least, even if a user leaves such a value
after a test, it will be more obvious when reading 10ms that something's
probably not correct.
Now that almost all warnings have been removed, let's enable
zero-warning via -dW. All tests were adjusted but two:
- mcli/mcli_start_progs.vtc:
the programs section currently cannot be silenced
- stats/stats-file.vtc:
the warning comes from the stats file itself on comment lines.
All other ones are now OK.
No less than 30 tests were missing timeouts, preventing them from being
started with zero-warning. Since they were not supposed to trigger, they
have been set to 30s so as never to trigger, and now they do not produce
any warning anymore.
This reg-test uses req.len in an HTTP backend. It does work but emits a
warning suggesting that it is ignored, so most likely its days are
numbered. Let's just use req.hdrs,length instead.
The following rules are triggering warnings about content-type being
ignored:
http-request return content-type "text/plain" if { path /def-4 }
http-request return content-type "text/plain" file /dev/null hdr "x-custom-hdr" "%[url]" if { path /empty-file }
Annoyingly, the content-type is mandatory when the file is not empty.
That might be something to revisit in the future to relax at least one
of the rules, so that the config doesn't strictly require knowing the
file contents upfront.
Two tests were using "timeout {client,server} 1" to forcefully trigger
them, but a forthcoming patch will emit a warning for such small unitless
values, so let's be explicit about the unit.
The regtest "h1or2_to_h1c" contains both an allow and a deny at the end,
likely to help catch rare bugs. But this triggers a warning that we can
silence by placing a condition on the penultimate rule.
Till now this value was parsed as a raw integer using atol() and would
silently ignore any trailing suffix, causing unexpected behaviors when
set, e.g. to "4k". Let's make use of parse_size_err() on it so that
units are supported. This requires turning it into an uint as well,
which was verified to be OK.
Till now this value was parsed as a raw integer using atol() and would
silently ignore any trailing suffix, preventing the process from
starting when set e.g. to "64k". Let's make use of parse_size_err() on
it so that units are supported. This requires turning it into an uint as
well, and explicitly limiting its range to INT_MAX - 2*sizeof(void*),
which was previously partially handled as part of the sign check.