A new XXH3 variant of hash functions shows a noticeable improvement in
performance (especially on small data), and also brings 128-bit support,
better inlining and streaming capabilities.
Performance comparison is available here:
https://github.com/Cyan4973/xxHash/wiki/Performance-comparison
Allow the user to specify a custom Connection header for http-check
send. This is useful for example to implement a websocket upgrade check.
If no connection header has been set, a 'Connection: close' header is
automatically appended to allow the server to close the connection
immediately after the request/response.
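For example, a websocket upgrade check could be written as below (the URI,
Host value and expected status are illustrative):
    option httpchk
    http-check send meth GET uri /ws ver HTTP/1.1 hdr Host example.com hdr Connection upgrade hdr Upgrade websocket
    http-check expect status 101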
Update the documentation related to http-check send.
This fixes GitHub issue #1009.
This is a regression in 7838a79ba ("MEDIUM: mux-h2/trace: add lots of traces
all over the code"). The issue was found using -Wmisleading-indentation.
This patch fixes GitHub issue #1015.
The impact of this bug is that it could in theory cause occasional delays
on some long responses for connections having otherwise no traffic.
This patch should be backported to 2.1+, the commit was first tagged in
v2.1-dev2.
This bug happens when a service has multiple records on the same host
and the server provides the A/AAAA resolution in the response as AR
(Additional Records).
In such a condition, the first occurrence of the host will be taken from
the Additional section, while the second (and next ones) will be processed
by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the
resolution may diverge, as described in GitHub issue #971.
Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname.
E.g., with the following type of response:
;; ANSWER SECTION:
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 85 A3.tld.
;; ADDITIONAL SECTION:
A2.tld. 3600 IN A 192.168.0.2
A3.tld. 3600 IN A 192.168.0.3
A1.tld. 3600 IN A 192.168.0.1
A3.tld. 3600 IN A 192.168.0.3
the first A3 host is resolved using the Additional Section and the
second one through a dedicated A request.
When linking the SRV records to their respective Additional records, a
condition was missing (check whether said SRV record is already attached to
an Additional record), causing the processing to stop at the first SRV
record whose target field matched the Additional record name. Hence only
the first occurrence of a target was managed by an Additional record.
This patch adds a condition in this loop to check whether the record being
parsed is already linked to an Additional Record. If so, we can carry on
parsing to find a possible next record with the same target field value.
backport status: 2.2 and above
SSL_CTX_get0_privatekey is an openssl/boringssl-specific function present
since openssl-1.0.2; let us define a readable guard for it that does not
depend on HA_OPENSSL_VERSION.
Since commit 8a069eb9a ("MINOR: debug: add a trivial PRNG for scheduler
stress-tests"), 32-bit gcc 4.7 emits this warning when parsing the
initial seed for the debugger's RNG (2463534242):
src/debug.c:46:1: warning: this decimal constant is unsigned only in ISO C90 [enabled by default]
Let's mark it explicitly unsigned.
On the frontend side, when a conn-stream is detached from a H1 connection,
the H1 stream is destroyed and if we already have some data to parse (a
pipelined request), we process this data immediately by calling
h1_process(). Then we adjust the H1 connection timeout. But h1_process() may
fail and release the H1 connection. For instance, a parsing error may be
reported. Thus, when that happens, we must no longer use the H1 connection
and must exit.
This patch must be backported as far as 2.2. This bug can impact the 2.3
and the 2.2, in theory, if the h1 stream creation fails. But, concretely, it
only fails on the 2.4 because the requests are now parsed at this step.
When a channel is set in TUNNEL mode, we now always set the CF_NEVER_WAIT
flag, to be sure to never wait before sending data. It is important because
in TUNNEL mode, we have no idea if more data are expected or not. Setting
this flag prevents the MSG_MORE flag from being set on the connection.
It is only a problem with HTX, since 2.2. On previous versions, the
MSG_MORE flag is only set at the mux's initiative. In fact, the problem
arises because there is an ambiguity in tunnel mode about the HTX_FL_EOI
flag. In this mode, from the mux's point of view, as long as the SHUTR is
not received, more data are expected. But from the channel's point of view,
we want to send data asap.
In the short term, this fix is good enough and is valid anyway. But for the
long term a more reliable solution must be found. At least, the to_forward
field must regain its original meaning.
This patch must be backported as far as 2.2.
When a protocol upgrade request is received, once parsed, it is waiting for
the response in the DONE state. But we must not set the flag CS_FL_EOI
because we don't know if a protocol upgrade will be performed or not.
Now, it is set on the response path, if both sides reached the DONE
state. If a protocol upgrade is finally performed, both sides are switched
to the TUNNEL state. Thus the CS_FL_EOI flag is not set.
If backported, this patch must be adapted because for now it relies on last
2.4-dev changes. It may be backported as far as 2.0.
As stated in RFC 7231, section 4.3.6, an HTTP tunnel via a CONNECT method
is successfully established if the server replies with any 2xx status
code. However, only 200 responses were considered valid. With this patch,
any 2xx response is now accepted to establish the tunnel.
This patch may be backported on demand to all stable versions and adapted
for the legacy HTTP. It has worked this way for a very long time and nobody
has complained.
Due to the addition of the OpenTracing filter it is necessary to define
the ARGC_OT enum value. This value is used in the functions fmt_directive()
and smp_resolve_args().
The OpenTracing filter uses several internal HAProxy functions to work
with variables and therefore requires two static local HAProxy functions,
var_accounting_diff() and var_clear(), to be declared global.
In fact, the var_clear() function was not originally defined as static,
but it lacked a declaration.
This new option allows tuning the maximum number of simultaneous
entries with the same primary key in the cache (secondary entries).
When we try to store a response in the cache and there are already
max-secondary-entries living entries in the cache, the storage will
fail (but the response will still be sent to the client).
It defaults to 10 and has no upper limit.
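For illustration, a cache section using this option might look like this
(names and sizes are arbitrary):
    cache my_cache
        total-max-size 100
        max-secondary-entries 20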
The secondary entry counter cannot be updated without periodically going
over all the items of a duplicates list. In order to avoid doing it too
often and impacting the cache's performance, a timestamp is added to
the cache_entry. It will store the timestamp (with second precision) of
the last iteration over the list (actually the last call of the
clear_expired_duplicates function). This way, this function will not be
called more than once per second for a given duplicates list.
Add an arbitrary maximum number of secondary entries per primary hash
(10 for now) to the cache. This prevents the cache from being filled
with duplicates of the same resource.
This works thanks to an entry counter that is kept in one of the
duplicates of the list (the last one).
When an entry is added to the list, the ebtree's implementation ensures
that it will be added to the end of the existing list so the only thing
to do to keep the counter updated is to get the previous counter from
the second to last entry.
Likewise, when an entry is explicitly deleted, we update the counter
from the list's last item.
SSL_CTX_add_server_custom_ext is an openssl-specific function present
since openssl-1.0.2; let us define a readable guard for it that does not
depend on HA_OPENSSL_VERSION.
The cache entries are now added into the tree even when they are not
complete yet. If we realized while trying to add a response's payload
that the shctx was full, the entry was disabled through the
disable_cache_entry function, which cleared the key field of the entry's
node, but without actually removing it from the tree. So the shctx row
could be stolen from the entry and the row's content be rewritten while
a lookup in the tree would still find a reference to the old entry. This
caused a random crash in case of cache saturation and row reuse.
This patch adds the missing removal of the node from the tree next to
the reset of the key in disable_cache_entry.
This bug was introduced by commit 3243447 ("MINOR: cache: Add entry
to the tree as soon as possible")
It does not need to be backported.
In issue #1004, it was reported that it is not possible to correctly
remove a certificate after updating it when it came from a crt-list.
Indeed the "commit ssl cert" command on the CLI does not update the list
of ckch_inst in the crtlist_entry. Because of this, the "del ssl
crt-list" command removes neither the instances nor the SNIs
because they were never linked to the crtlist_entry.
This patch fixes the issue by inserting the ckch_inst in the
crtlist_entry once generated.
Must be backported as far as 2.2.
When a frontend H1 connection times out waiting for the next request, a 408
error message is returned to the client. This is performed in the H1C task
process function, h1_timeout_task(), under the idle connection takeover
lock. If the 408 error message cannot be sent immediately, we wait for a
next retry. In this case, the lock must be released.
This bug was introduced by the commit c4bfa59f1d ("MAJOR: mux-h1: Create the
client stream as later as possible") and is specific to the 2.4-DEV. No
backport needed.
Depending on the context, the current eweight or the next one must be used
to reposition a server in the tree. When the server state is updated, for
instance its weight, the next eweight must be used because it is not yet
committed. However, when the server is used, on normal conditions, the
current eweight must be used.
In fact, it is only a bug on the 1.8. On newer versions, the changes on a
server are performed synchronously. But it is safer to rely on the right
eweight value to avoid any future bugs.
On the 1.8, it is important to do so, because the server state is updated
and committed inside the rendez-vous point. Thus, the next server state may
be out of sync with the current state for a short time, while waiting for
all threads to join the rendez-vous point. It is especially a problem if
the next eweight is set to 0: in that case, it must not be used to
reposition the server in the tree, as that would lead to a division by
zero.
This patch must be backported as far as 1.8.
This changes the subscribe/unsubscribe functions to rely on the control
layer's check_events/ignore_events. At the moment only the socket version
of these functions is present so the code should basically be the same.
Right now the connection subscribe/unsubscribe code needs to manipulate
FDs, which is not compatible with QUIC. In practice what we need there
is to be able to either subscribe or wake up depending on readiness at
the moment of subscription.
This commit introduces two new functions at the control layer, which are
provided by the socket code, to check for FD readiness or subscribe to it
at the control layer. For now it's not used.
Now we don't touch the fd anymore there, instead we rely on the ->drain()
provided by the control layer. As such the function was renamed to
conn_ctrl_drain().
This is what we need to drain pending incoming data from a connection.
The code was taken from conn_sock_drain() without the connection-specific
stuff. It still takes a connection for now for API simplicity.
conn_fd_handler() is 100% specific to socket code. It's about time
it moved to sock.c which manipulates socket FDs. With it comes
conn_fd_check() which tests for the socket's readiness. The ugly
connection status check at the end of the iocb was moved to an inlined
function in connection.h so that if we need it for other socket layers
it's not too hard to reuse.
The code was really only moved and not changed at all.
The send() loop present in this function and the error handling is already
present in raw_sock_from_buf(). Let's rely on it instead and stop touching
the FD from this place. The send flag was changed to use a more agnostic
CO_SFL_*. The name was changed to "conn_ctrl_send()" to remind that it's
meant to be used to send at the lowest level.
Add cur_server_timeout and cur_tunnel_timeout.
These sample fetches return the current timeout value for a stream. This
is useful to retrieve the value of a timeout which was changed via a
set-timeout rule.
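For example, they could be logged to observe the effect of a set-timeout
rule (the log-format line is illustrative):
    log-format "srv_to=%[cur_server_timeout] tun_to=%[cur_tunnel_timeout]"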
Prepare the possibility to register sample fetches on the stream.
This commit is necessary to implement sample fetches to retrieve the
current timeout values.
Add a new http-request action 'set-timeout [server/tunnel]'. This action
can be used to update the server or tunnel timeout of a stream. It takes
two parameters, the timeout name to update and the new timeout value.
This rule is only valid for a proxy with backend capabilities. The
timeout value cannot be null. A sample expression can also be used
instead of a plain value.
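For illustration, possible uses could look like this (names and values are
arbitrary):
    backend be_app
        http-request set-timeout server 30s
        http-request set-timeout tunnel 1h if { path_beg /ws }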
Allow the modification of the tunnel timeout on the stream side.
Use a new field in the stream for the tunnel timeout. It is initialized
by the tunnel timeout from backend unless it has already been set by a
set-timeout tunnel rule.
Allow the modification of the timeout server value on the stream side.
Do not apply the default backend server timeout in back_establish if it
is already defined. This is the case if a set-timeout server rule has
been executed.
parse_size_err() function is now more strict on the size format. The first
character must be a digit. Otherwise an error is returned. Thus "size k" is
now rejected.
This patch must be backported to all stable versions.
First, an error is now reported if the first character is not a digit. Thus,
"timeout client s" triggers an error now. Then 'u' is also rejected
now. 'us' is valid and should be used to set the timer in microseconds.
However 'u' alone is not a valid unit. It was just ignored before
(defaulting to milliseconds). Now, it is an error. Finally, a warning is
reported if the end of the text is not reached after the timer parsing.
This warning will probably be switched to an error in a future version.
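To illustrate the new behavior (hypothetical configuration lines):
    timeout client 5s     # valid: 5 seconds
    timeout client 500us  # valid: 500 microseconds
    timeout client 500u   # rejected: 'u' alone is not a valid unit
    timeout client s      # rejected: must start with a digit
    timeout client 5sec   # warning: trailing characters after the timer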
This patch must be backported to all stable versions.
For HTTP expect rules, if the buffer is not empty, it is guaranteed that all
response headers are received, along with the start-line. Thus, except for
payload matching, there is no reason to wait for more data from the moment
the htx message is not empty.
This patch may be backported as far as 2.2.
The check timeout is used to limit a health-check execution. By default the
inter timeout is used. But when defined, the check timeout is used. In this
case, the inter timeout (or connect timeout) is used for the connection
establishment only, and the check timeout for the health-check
execution. Thus, it must be set after a successful connect. It means it is
rearmed at the end of each connect rule.
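For instance, with a configuration like the following (values are
arbitrary), the connection establishment is bounded by the connect timeout
while the health-check execution itself is bounded by the check timeout:
    backend be_app
        option httpchk GET /isalive
        timeout connect 2s
        timeout check 5s
        server srv1 192.0.2.1:80 check inter 2s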
This patch with the previous one (BUG/MINOR: http-check: Use right condition
to consider HTX message as full) should solve the issue #991. It must be
backported as far as 2.2. On the 2.3 and 2.2, there are 2 places where the
connection establishment is handled. The check timeout must be set in both.
When an HTTP expect rule is evaluated, we must know whether more data is
expected or not, to decide to wait if the matching fails. If the whole
response is received or if the HTX message is full, we must not wait. In
this context, htx_free_data_space() must be used instead of
htx_free_space(). The first one deducts the block size. Otherwise, at the
edge, when only the block size remains free (8 bytes), we may think there
is room for more data while the mux is unable to add another block.
This bug explains the loop described in GH issue #991. It should be
backported as far as 2.2.
This last call to conn_cond_update_polling() is now totally misleading as
the function only stops polling in case of unrecoverable connection error.
Let's open-code the test to make it more prominent and explain what we're
trying to do there. It's even almost certain this code is never executed
anymore, as the only remaining case should be a mux's wake function setting
CO_FL_ERROR without disabling the polling, but they need to be audited first
to make sure this is the case.
This was a leftover of the pre-mux v1.8-dev3 era. It makes no sense anymore
to try to disable polling on a connection we don't own, it's the mux's job
and it's properly done upon shutdowns and closes.
As explained in the previous commit, the situation is absurd as we try to
cleanly drain pending data before impolitely shutting down, and it could
be counterproductive on real muxes. Let's use cs_drain_and_close() instead.
When the shutr() requests CS_SHR_DRAIN and there's no particular shutr
implemented on the underlying transport layer, we must drain pending data.
This is what happens when cs_drain_and_close() is called. It is important
for TCP checks to drain large responses and close cleanly.
Not only has it become totally useless with muxes, it's also dangerous
to play with the mux's FD while shutting a stream down for
writes. It's already done *if necessary* by the cs_shutw() code at the
mux layer. Fortunately it doesn't seem to have any impact, most likely
the polling updates used to immediately revert this operation.
conn_stop_polling() in fact only calls fd_stop_both() after checking
that the ctrl layer is ready. It's the case in conn_fd_check() so
let's get rid of this next-to-last user of this function.
The duplicated entries (in case of vary) were not taken into account by
the "show cache" command. They are now dumped too.
A new "vary" column is added to the output. It contains the complete
seocndary key (in hex format).
QUIC will rely on UDP at the receiver level, and will need these functions
to suspend/resume the receivers. In the future, protocol chaining may
simplify this.
Currently conn_ctrl_init() does an fd_insert() and conn_ctrl_close() does an
fd_delete(). These are the only two short-term obstacles against using a
non-fd handle to set up a connection. Let's move these into the protocol
layer, along with the other connection-level stuff so that the generic
connection code uses them instead. This will allow to define new ones for
other protocols (e.g. QUIC).
Since we only support regular sockets at the moment, the code was placed
into sock.c and shared with proto_tcp, proto_uxst and proto_sockpair.
For the sake of an improved readability, let's group the protocol
field members according to where they're supposed to be defined:
- connection layer (note: for now even UDP needs one)
- binding layer
- address family
- socket layer
Nothing else was changed.
The various protocols were made static since there was no point in
exporting them in the past. Nowadays with QUIC relying on UDP we'll
significantly benefit from UDP being exported and more generally from
being able to declare some functions as being the same as other
protocols'.
In an ideal world it should not be these protocols which should be
exported, but the intermediary levels:
- socket layer (sock.c only right now), already exported as functions
but nothing structured at the moment;
- family layer (sock_inet, sock_unix, sockpair etc): already structured
and exported
- binding layer (the part that relies on the receiver): currently fused
within the protocol
- connection layer (the part that manipulates connections): currently
fused within the protocol
- protocol (connection's control): shouldn't need to be exposed
ultimately once the elements above are in an easily sharable way.
This field used to be needed before commit 2b5e0d8b6 ("MEDIUM: proto_udp:
replace last AF_CUST_UDP* with AF_INET*") as it was used as a protocol
entry selector. Since this commit it's always equal to the socket family's
value so it's entirely redundant. Let's remove it now to simplify the
protocol definition a little bit.
At the end of stream_new(), once the input buffer is transferred to the
request channel, it must not be used anymore. The previous patch (16df178b6
"BUG/MEDIUM: stream: Xfer the input buffer to a fully created stream") was
pushed too quickly.
No backport needed.
The input buffer passed as argument to create a new stream must not be
transferred when the request channel is initialized because the channel
flags are not set at this stage. In addition, the API is a bit confusing
regarding the buffer's owner when an error occurs. The caller remains the
owner, but reading the code it is not obvious.
So, first of all, to avoid any ambiguities, comments are added on the
calling chain to make it clear. The buffer owner is the caller if any error
occurred. And the ownership is transferred to the stream on success.
Then, to make things simple, the ownership is transferred at the end of
stream_new(), in case of success. And the input buffer is updated to point
to BUF_NULL. Thus, in all cases, if the caller tries to release it by
calling b_free() on it, it is not a problem. Of course, it remains the
caller's responsibility to release it on error.
The patch fixes a bug introduced by the commit 26256f86e ("MINOR: stream:
Pass an optional input buffer when a stream is created"). No backport is
needed.
Since HAProxy 2.3, OpenSSL 1.1.1 is a requirement for using a
multi-certificate bundle in the configuration. This patch emits a fatal
error when HAProxy tries to load a bundle with an older version of
OpenSSL.
This problem was encountered by a user in issue #990.
This must be backported to 2.3.
With the removal of the family-specific port setting, all protocols had
exactly the same implementation of ->add(). A generic one was created
with the name "default_add_listener" so that all the other ones can now be
removed. The API was slightly adjusted so that the protocol and the
listener are passed instead of the listener and the port.
Note that all protocols continue to provide this ->add() method instead
of routinely calling default_add_listener() from create_listeners(). This
makes sure that any non-standard protocol will still be able to intercept
the listener addition if needed.
This could be backported to 2.3 along with the few previous patches on
listeners as a pure code cleanup.
In create_listeners() we iterate over a port range and call the
protocol's ->add() function to add a new listener on the specified
port. Only tcp4/tcp6/udp4/udp6 support a port, the other ones ignore
it. Now that we can rely on the address family to properly set the
port, better do it this way directly from create_listeners() and
remove the family-specific case from the protocol layer.
At various places we need to set a port on an IPv4 or IPv6 address, and
it requires casts that are easy to get wrong. Let's add a new set_port()
helper to the address family to assist in this. It will be directly
accessible from the protocol and will make the operation seamless.
Right now this is only implemented for sock_inet as other families do
not need a port.
When a H1 message is parsed, if the parser state is switched to TUNNEL mode
just after the header parsing, the BODYLESS flag is set on the HTX
start-line. By transitivity, the corresponding flag is set on the message in
HTTP analysers. Thus it is possible to rely on it to not wait for the
request body.
CNT_LEN and TE_CHNK flags must be set on the message only when the
corresponding flag is set on the HTX start-line. Before, when the transfer
length was known (XFER_LEN set), HTTP_MSGF_TE_CHNK was the default. But
it is not appropriate. Now, it is only set if the message is chunked. Thus,
it is now possible to have a known transfer length without CNT_LEN or
TE_CHNK.
In addition, the BODYLESS flag may be set independently of the XFER_LEN one.
This flag is now unused. It was used in the REQ_WAIT_HTTP analyser, when a
stream was waiting for a request, to set the keep-alive timeout or to avoid
sending HTTP errors to the client.
It is now impossible to start the HTTP request processing in the stream
analysers with a partial or empty request message. The mux-h2 was already
waiting for the request headers before creating the stream. Now the mux-h1
does the same. All errors (aborts, timeouts or invalid requests) while
waiting for the request headers are now handled by the multiplexers. So
there is no reason to still handle them in the REQ_WAIT_HTTP
(http_wait_for_request) analyser.
To ensure there is no ambiguity, a BUG_ON() was added to exit if a partial
request is received in this analyser.
H1C_F_CS_* flags are renamed into H1C_F_ST_*. They reflect the connection
state. So "ST" is well suited. "CS" is confusing because it is also the
abbreviation for conn-stream.
In addition, H1C flags are reordered.
This is the reason for all previous patches. The conn-stream and the
associated stream are created as late as possible. It only concerns the
frontend connections. But it means the request headers, and possibly the
first data block, are received and parsed before the conn-stream
creation. To do so, an embryonic H1 stream, with no conn-stream, is
created. The result of this "early parsing" is stored in its rx buffer, used
to fill the request channel when the stream is created. During this step,
some HTTP errors may be returned by the mux. It must also handle
http-request/keep-alive timeouts. A significant change concerns the H1 to
H2 upgrade: it happens very early now, and no H1 stream is created (and
thus of course no conn-stream).
The most important part of this patch is located in the h1_process()
function, because it must trigger the parsing when there is no H1
stream. The h1_recv() function has also been simplified.
For now, this part is unused. But this patch adds functions to handle errors
on idle and embryonic H1 connections and send corresponding HTTP error
messages to the client (400, 408 or 500). Thanks to previous patches, these
functions take care to update the right stats counters, but also the
counters tracked by the session.
A field to store the HTTP error code has been added to the H1C structure. It
is used for error retransmits, if any, and to report it in HTTP logs. It is
also used to return the mux exit status code when the MUX_EXIT_STATUS ctl
parameter is requested.
When a log message is emitted from the session level, by a multiplexer,
there is no stream. Thus, for an HTTP session, there is no status code and
the termination flags are not correctly set.
Thanks to the previous patch, the HTTP status code is deduced from the mux
exit status, using the MUX_EXIT_STATUS ctl param. This is only done for HTTP
frontends. If it is defined (!= 0), it is used to deduce the termination
flags.
The ctl param MUX_EXIT_STATUS can be requested to get the exit status of a
multiplexer. For instance, it may be an HTTP status code or an H2 error. For
now, 0 is always returned. When the mux h1 becomes able to return HTTP
errors itself, this ctl param will be used to get the HTTP status code for
the logs.
The mux_exit_status enum has been created to map internal mux exit statuses
to generic ones. Thus there are 5 possible statuses for now: success,
invalid error, timeout error, internal error and unknown.
The cumulative numbers of http requests, http errors, bytes received and
sent and their respective rates for tracked counters are now updated using
specific stream-independent functions. These functions are used by the
stream but the aim is to allow the session to do so too. For now, there is
no reason to perform these updates from the session, except from the mux-h2
maybe. But, the mux-h1, on the frontend side, will be able to return some
errors to the client, before the stream creation. In this case, it will be
mandatory to update counters tracked at the session level.
An idle expiration date is added on the H1 connection with the function to
set it depending on the connection state. First, there is no idle timeout on
backend connections. For idle frontend connections, the http-request or
keep-alive timeout is used depending on which timeout is defined and whether
it is the first request or not. For embryonic connections, the http-request
timeout is always used, if defined. For attached or shut down connections,
no idle timeout is applied.
For now the idle expiration date is never set and the h1_set_idle_expiration
function remains unused.
When the conn-stream is detached from a H1 connection, there is no reason to
subscribe for reads or process pending input data if the connection is not
idle, because it means a shutdown is pending.
Conditions to set a timeout on the H1C task have been simplified or at least
changed to rely on H1 connection flags. Now, following rules are used :
* the shutdown timeout is applied on dead (not alive) or shut down
connections.
* The client/server timeout is applied if there are still some pending
outgoing data.
* The client timeout is applied on alive frontend connections with no
conn-stream. It means on idle or embryonic frontend connections.
* For all other connections (backend or attached connections), no timeout
is applied. For frontend or backend attached connections, the timeout is
handled by the application layer. For idle backend connections, there is
no timeout.
We now only rely on one flag to notify a shutdown. The shutdown is performed
at the connection level when there are no more pending outgoing data. So, it
means it is performed immediately if the output buffer is empty. Otherwise
it is deferred after the outgoing data are sent.
This simplifies the mux a bit because there is now only one flag to check.
Don't try to read more data if a parsing or a formatting error was reported
on the H1 stream. There is no reason to continue to process the messages for
the current connection in this case. If a parsing error occurs, it means the
input is invalid. If a formatting error occurs, it is an internal error and
it is probably safer to give up.
Mainly to make it easier to read. First of all, when a H1 connection is
still there, we check if the connection was stolen by another thread or
not. If yes we release the task and leave. Then we check if the task is
expired or not. Only expired tasks are considered. Finally, if a conn-stream
is still attached to the connection (H1C_F_CS_ATTACHED flag set), we
return. Otherwise, the task and the H1 connection are released.
Be prepared to have a H1 connection in one of the following states :
* A H1 connection waiting for a new message with no H1 stream.
H1C_F_CS_IDLE flag is set.
* A H1 connection processing a new message with a H1 stream but no
conn-stream attached. H1C_F_CS_EMBRYONIC flag is set
* A H1 connection with a H1 stream and a conn-stream attached.
H1C_F_CS_ATTACHED flag is set.
* A H1 connection with no H1 stream, waiting to be released. No flag is set.
These flags are mutually exclusive. When none is set, it means the
connection will be released ASAP; only the remaining outgoing data must be
sent first. For now, the second state (H1C_F_CS_EMBRYONIC) is transient.
For now this buffer is not used. But it will be used to parse the headers,
and possibly the first block of data, when no stream is attached to the H1
connection. The aim is to use it to create the stream, thanks to recent
changes on the streams creation api.
Dedicated functions are now used to create frontend and backend H1
streams. h1c_frt_stream_new() is now used to create frontend H1 streams and
h1c_bck_stream_new() to create backend ones. Both rely on h1s_new() function
to allocate the stream itself. It is a bit easier to add specific processing
depending on whether we are on the frontend or the backend side.
Instead of using H1S flags to report an error on the request or the
response, regardless of whether it is a parsing or a formatting error, we
now use one flag to report parsing errors and another one to report
formatting ones. This simplifies the message parsing. It is also easier to
figure out what error happened when one of these flags is set. The side may
be deduced by checking the H1C_F_IS_BACK flag.
Instead of using 2 flags on the H1 stream (H1S_F_BUF_FLUSH and
H1S_F_SPLICED_DATA), we now only use one flag on the H1 connection
(H1C_F_WANT_SPLICE) to notify we want to use splicing or we are using
splicing. This flag blocks the calls to rcv_buf() connection callback.
It is a bit easier to set the H1 connection capability to receive data in
its input buffer instead of relying on the H1 stream.
H1C_F_WAIT_OPPOSITE must be set on the H1 connection to avoid reading more
data, because we must stay in sync with the opposite side. This flag
replaces the H1C_F_IN_BUSY flag. Its name is a bit more explicit. It is
automatically set on the backend side when the mux is created. It is safe to
do so because at this stage, the request has not yet been sent to the
server. This way, in h1_recv_allowed(), a test on this flag is enough to
block the reads instead of testing the H1 stream state on the backend side.
It is now possible to set the request channel's buffer when a stream is
created. It may be useful if input data have already been received, instead
of waiting for the first call to the mux's rcv_buf() callback. This change
is mandatory to support H1 connections with no stream attached.
For now, the multiplexers don't pass any buffer. BUF_NULL is thus used to
call stream_create_from_cs().
This info is only provided by the mux-h1. But, thanks to previous patches,
we can get it from the session directly. There is no need to retrieve it
from the mux anymore.
Since the idle duration provided by the session is always up-to-date, there
is no longer any reason to rely on the multiplexer's cs_info to set it on
the stream.
When a log message is emitted from the session, using the sess_log()
function, there is no stream available. In this case, instead of deducing
the idle duration from the accept date, we use the one provided by the
session. 0 is used if it is undefined (i.e. set to -1).
This info is reset for the next transaction, if the connection is kept
alive. From the stream's point of view, it should be the same as a new
connection, except there is no handshake. Thus the handshake duration is set
to 0.
The idle duration between two streams is added to the session structure. It
is not necessarily pertinent on all protocols. In fact, it is only defined
for H1 connections. It is the duration between two H1 transactions. But the
.get_cs_info() callback function on the multiplexers only exists because
this duration is missing at the session level. So it is a simplification
opportunity for a really low cost.
To reduce the cost, a hole in the session structure is filled by moving
.srv_list field at the end of the structure.
IDLE frontend connections have no stream attached. The stream is only
created when new data are received, when the parsing of the next request
starts. Thus the keep-alive timeout, handled into the HTTP analysers, is not
considered while nothing is received. But this is precisely when this
timeout must be considered. Concretely, the http-keep-alive timeout is
ignored while no data are received. Only the client timeout is used. It
will only be considered on incomplete requests, if the http-request timeout
is not set.
To fix the bug, the http-keep-alive timeout must be handled at the mux
level, for IDLE frontend connection only.
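For reference, the timeouts involved look like this (values are arbitrary):
    defaults
        timeout client 30s
        timeout http-request 10s
        timeout http-keep-alive 2s
With the fix, an idle frontend connection is closed after the 2s
http-keep-alive timeout instead of lingering until the 30s client timeout.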
This patch should fix the issue #984. It must be backported as far as
2.2. On prior versions, the stream is created earlier, so it is not a
problem, except if this behavior changes of course (it was an optimization
of the 2.2, but I don't remember the commit).
A copy-paste bug between {tcp,udp}{4,6}_add_listener() resulted in
using a struct sockaddr_in to set the TCP/UDP port while it ought to
be a struct sockaddr_in6. Fortunately, the port has the same offset
(2) in both so it was harmless. A cleaner way to proceed would be
to have a set_port function exported by the address family layer.
This needs to be backported to 2.3.
It seems to me that lua_close() must be called on all states at deinit
time, not just the first two ones. This is likely a remnant of commit
59f11be43 ("MEDIUM: lua-thread: Add the lua-load-per-thread directive").
There should likely be some memory leak reports when using Lua without
this fix, though none were observed for now.
No backport is needed as this was merged into 2.4-dev.
The Lua-dedicated TCP, HTTP and SSL sockets and proxies must be initialized
once. Right now, they are initialized from the Lua init state, but since
commit 59f11be43 ("MEDIUM: lua-thread: Add the lua-load-per-thread
directive") this function is called once per Lua context. This
caused some fields to be cleared and overwritten, and pre-allocated
objects to be lost. This is why the address sanitizer detected memory
leaks from the socket_ssl server initialization.
Let's move all the state-independent part of the function to the
hlua_init() function to avoid this.
No backport is needed, this is only 2.4-dev.
In case of successful unsafe method on a stored resource, the cached entry
must be invalidated (see RFC7234#4.4).
A "non-error response" is one with a 2xx (Successful) or 3xx (Redirection)
status code.
This implies that the primary hash must now be calculated on requests
that have an unsafe method (POST or PUT for instance) so that we can
disable the corresponding entries when we process the response.
The Cache-Control max-age and s-maxage directives should be followed by
a positive numerical value (see RFC 7234#5.2.1.1). According to the
specs, a sender "should not" generate a quoted-string value but we will
still accept this format.
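For example, both of the following forms are accepted, even though the
second is the "should not" quoted-string form:
    Cache-Control: max-age=3600
    Cache-Control: max-age="3600"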
When a response has an Age header (filled in by another cache on the
message's path) that is greater than its defined maximum age (extracted
either from cache-control directives or an expires header), it is
already stale and should not be cached.
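For example, a response carrying the following headers is already stale
and will not be cached:
    Age: 7200
    Cache-Control: max-age=3600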
The goal is to allow execution of one main lua state per thread.
This patch contains the main job. The lua init is done using these
steps:
- "lua-load-per-thread" loads the lua code in the first thread
- it creates the structs
- it stores loaded files
- the 1st step load is completed (execution of hlua_post_init)
and now, we know the number of threads
- we initialize lua states for all remaining threads
- for each one, we load the lua file
- for each one, we execute post-init
Once all is loaded, we control consistency of functions references.
The rules are:
- a function reference cannot be in the shared lua state and in
a per-thread lua state at the same time.
- if a function reference is declared in a per-thread lua state, it
must be declared in all per-thread lua states
The goal is to allow execution of one main lua state per thread.
The array introduces storage of one reference per thread, because each
lua state can have a different reference id for the same function. A
function returns the preferred state id according to the configuration and
the current thread id.
The goal is to allow execution of one main lua state per thread.
"state_from" is a pointer to the parent lua state. "state_id"
is the index of the parent state id in the reference lua states
array. "state_id" is better because the lock is a "== 0" test
which is quick than pointer comparison. In other way, the state_id
index could index other things the the Lua state concerned. I
think to the function references.
The goal is to allow execution of one main lua state per thread.
This function will initialize the struct with values other than 0.
With this helper function, the initialization is centralized and
it prevents mistakes. This patch also keeps a reference to each
declared function in a list. It will be useful in next patches to
control consistency of declared references.
The goal is to allow execution of one main lua state per thread.
The array of states is initialized to the max number of threads + 1.
We define index 0 as the common state shared by all threads, which
must be locked. The other indexes are each dedicated to one
thread. The old gL now becomes hlua_states[0].
The goal is to allow execution of one main lua state per thread.
This patch opens the way to addition of a per-thread dedicated lua state.
By passing the hlua we can figure out the original state that's been used
and decide whether to lock or not.
The goal is to allow execution of one main lua state per thread.
Stop using locks in the init part; we will use them only in parts where
the parent lua state is known, so we can take decisions about locking
according to the parent lua state.
The goal is to allow execution of one main lua state per thread.
This commit introduces this variable in the core. Lua state initialized
by thread will have access to this variable, which reports the executing
thread. 0 indicates the shared thread. Programs which must be executed
only once can check for core.thread <= 1.
The goal is to allow execution of one main lua state per thread.
This function will be called for each initialized lua state, so
one per thread. The split transforms the lua state variable from
global to local.
The goal is to allow execution of one main lua state per thread.
This function will be called once per thread, using different Lua
states. This patch prepares the work.
The goal is to allow execution of one main lua state per thread.
The function hlua_ctx_init() now gets the original lua state from
its caller. This allows the initialisation of lua_thread (coroutines)
from any master lua state.
The parent lua state is stored in the hlua struct.
This patch is a temporary transition, it will be modified later.
The goal is to allow execution of one main lua state per thread.
This is preparatory work in order to init more than one stack
for the lua-thread objective.
The goal is to allow execution of one main lua state per thread.
Because this struct will be filled after the configuration parser, we
cannot copy the content. The current state of the HAProxy code doesn't
justify this change; it is an update preparing the next steps.
The goal is to no longer use "struct hlua" with global main lua_state.
The usage of the "struct hlua" is no longer required. This patch replaces
this struct by another one.
Now, the usage of runtime Lua phase is separated from the start lua phase.
The goal is to no longer use "struct hlua" with global main lua_state.
This patch returns a NULL value when some code tries to get the hlua struct
associated with a task through hlua_gethlua(). This function is useful
only during runtime because the struct hlua contains only runtime states.
Some Lua functions allowed to yield are called from the init environment.
I'm not sure this is a good practice. Maybe it would be wiser to
disallow calling this kind of function.
The goal is to no longer use "struct hlua" with the global main lua_state.
If somewhere in the code hlua_ctx_renew() is called with a global Lua
context, we have a serious bug. A crash is better than working with
this bug, so this patch removes a useless check.
Besides, this check was only exercised by the hlua_post_init() function,
which used a call to the runtime hlua_ctx_resume() function. This call
no longer exists.
The goal is to no longer use "struct hlua" with global main lua_state.
The hlua_post_init() is executed during start phase, it does not require
yielding nor any advanced runtime error processing. Let's simplify this
by re-implementing the code using lower-level functions which directly
take a state and not an hlua anymore.
The goal is to no longer use "struct hlua" with global main lua_state
and directly take the state instead.
This patch removes the implicit dependency on this struct in
the hlua_prepend_path() function.
Let's switch memory accounting to atomics so that the allocator function
may safely be used from concurrent Lua states.
Given that this function is extremely hot on the call path, we try to
optimize it for the most common case, which is:
- no limit
- there's enough memory
The accounting is what is particularly expensive with threads since all
CPUs compete for a cache line, so when the limit is not used, we don't
want to use accounting. However we need to preserve it during the boot
phase until we may parse a "tune.lua.maxmem" value. For this, we turn
the unlimited "0" value to ~0 at the end of the boot phase to mark the
definite end of accounting. The function then detects this value and
directly jumps to realloc() in this case.
When the limit is enforced however, we use a CAS to check and reserve
our share of memory, and we roll back on failure. The CAS is used both
for increments and decrements so that a single operation is enough to
update the counters.
The function really has the semantics of a realloc() except that it
also passes the old size to help with accounting. No need to special
case the free or malloc, realloc does everything we need.
If the session is not established, the applet handler could leave
with the applet detached from the ring. At the next call, the attach
counter will be decreased again, causing unpredictable behavior.
This patch should be backported to branches >= 2.2.
With commit a1f12746b ("MINOR: traces: add a new level "error" below
the "user" level") a new trace level was inserted, resulting in
shifting all existing ones by one. But the levels reported in the
__trace() function were not updated accordingly, resulting in
TRACE_LEVEL_DEVELOPER no longer being properly reported. This
patch fixes it by extending the number of levels to 6.
No backport is needed.
When many concurrent requests targeting the same resource were seen, the
cache could sometimes be filled by too many partial responses resulting
in the impossibility to cache a single one of them. This happened
because the actual tree insertion happened only after all the payload of
every response was seen. So until then, every response was added to the
cache because none of the streams knew that a similar request/response
was already being treated.
This patch consists in adding the cache_entry as soon as possible in the
tree (right after the first packet) so that the other responses do not
get cached as well (if they have the same primary key).
A "complete" flag is also added to the cache_entry so that we know if
all the payload is already stored in the entry or if it is still being
processed.
Turn the "Accept-Encoding" value to lower case before processing it.
Calculate the CRC on every token instead of on a sorted concatenation of
them all (in order to avoid copying them), then XOR all the CRCs into a
single hash (while ignoring duplicates).
Lua allows registering multiple sample-fetches, converters, actions, cli
keywords and applets/services with the same name. This is absolutely
useless since only the first registration will be used. This patch emits a
warning if the case is encountered.
This patch could be backported until 1.8, with the 3 associated patches:
- MINOR: actions: Export actions lookup functions
- MINOR: actions: add a function returning a service pointer from its name
- MINOR: cli: add a function to look up a CLI service description
This function will be useful to check if the keyword is already registered.
Also add a define for the max number of args.
This will be needed by a next patch to fix a bug and will have to be
backported.
This function simply calls action_lookup() on the private service_keywords,
to look up a service name. This will be used to detect double registration
of a same service from Lua.
This will be needed by a next patch to fix a bug and will have to be
backported.
These functions will be useful to check if a keyword is already registered.
This will be needed by a next patch to fix a bug, and will need to be
backported.
The luaL_openlibs() and hlua_prepend_path() operations are processed
outside the safe context, so in case of failure HAProxy aborts or stops
without an error message.
This patch could be backported until 1.8
"lua-load" doesn't check if the expected parameter is present. It tries to
open() directly the argument at second position. So if the filename is
omitted, it tries to load an empty filename.
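For reference, the expected usage is (the path is illustrative):
    global
        lua-load /etc/haproxy/script.lua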
This patch could be backported until 1.8
The stats on haproxy.org reported ~12k GOAWAY for ~34k connections, with
only 2 protocol errors. It turns out that the GOAWAY frame counter added
in commit a8879238c ("MINOR: mux-h2: report detected error on stats")
matches a bit too many situations. First it counts those which are not
sent as well as failed retries, second it counts as errors the cases of
attempts to cleanly close, while it's titled "GOAWAY sent on detected
error". Let's address this by moving the counter up one line and excluding
the clean codes.
This can be backported to 2.3.
A number of traces could be added, and a few TRACE_PROTO were replaced
with TRACE_ERROR. The goal is to be able to enable error tracing only
to detect anomalies.
It looks like they're mostly correct as they don't seem to strike on
valid H2 traffic but are very verbose on h2spec.
Sometimes it would be nice to be able to only trace abnormal events such
as protocol errors. Let's add a new "error" level below the "user" level
for this. This will allow to add TRACE_ERROR() at various error points
and only see them.
Since commit a8879238c ("MINOR: mux-h2: report detected error on stats")
we now have some error stats on stream/connection level protocol errors,
but some were improperly marked as stream while they're connection, and
2 or 3 relevant ones were missing and have now been added.
This could be backported to 2.3.
This patch adds a new logging variable '%HPO' for logging the HTTP path only
(without the query string) from a relative or absolute URI.
For example:
log-format "hpo=%HPO hp=%HP hu=%HU hq=%HQ"
GET /r/1 HTTP/1.1
=>
hpo=/r/1 hp=/r/1 hu=/r/1 hq=
GET /r/2?q=2 HTTP/1.1
=>
hpo=/r/2 hp=/r/2 hu=/r/2?q=2 hq=?q=2
GET http://host/r/3 HTTP/1.1
=>
hpo=/r/3 hp=http://host/r/3 hu=http://host/r/3 hq=
GET http://host/r/4?q=4 HTTP/1.1
=>
hpo=/r/4 hp=http://host/r/4 hu=http://host/r/4?q=4 hq=?q=4
Since 2.3 the default local log format always adds the hostname field.
This behavior change was due to the log/sink rework, because according
to rfc3164 the hostname field is mandatory.
This patch re-introduces a legacy "local" format which is analogous
to rfc3164 but with the hostname stripped. This is the new
default if logs are generated by haproxy.
To stay compliant with previous configurations, the option
"log-send-hostname" acts as if the default format were switched
to rfc3164.
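For example, to restore the previous behavior with the hostname field
(address and facility are illustrative):
    global
        log 127.0.0.1:514 local0
        log-send-hostname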
This patch addresses GitHub issue #963.
This patch should be backported to branches >= 2.3.
In issue #958 Ashley Penney reported intermittent crashes on AWS's ARM
nodes which would not happen on x86 nodes. After investigation it turned
out that the Neoverse N1 CPU cores used in the Graviton2 CPU are much
more aggressive than the usual Cortex A53/A72/A55 or any x86 regarding
memory ordering.
The issue that was triggered there is that if a tasklet_wakeup() call
is made on a tasklet scheduled to run on a foreign thread and that
tasklet is just being dequeued to be processed, there can be a race at
two places:
- if MT_LIST_TRY_ADDQ() happens between MT_LIST_BEHEAD() and
LIST_SPLICE_END_DETACHED() if the tasklet is alone in the list,
because the emptiness test matches;
- if MT_LIST_TRY_ADDQ() happens during LIST_DEL_INIT() in
run_tasks_from_lists(), then depending on how LIST_DEL_INIT() ends
up being implemented, it may even corrupt the adjacent nodes while
they're being reused for the in-tree storage.
This issue was introduced in 2.2 when support for waking up remote
tasklets was added. Initially the attachment of a tasklet to a list
was enough to know its status and this used to be stable information.
Now it's not sufficient to rely on this anymore, thus we need to use
a different information.
This patch solves this by adding a new task flag, TASK_IN_LIST, which
is atomically set before attaching a tasklet to a list, and is only
removed after the tasklet is detached from a list. It is checked
by tasklet_wakeup_on() so that it may only be done while the tasklet
is out of any list, and is cleared during the state switch when calling
the tasklet. Note that the flag is not set for pure tasks as it's not
needed.
However this introduces a new special case: the function
tasklet_remove_from_tasklet_list() needs to keep both states in sync
and cannot check both the state and the attachment to a list at the
same time. This function is already limited to being used by the thread
owning the tasklet, so in this case the test remains reliable. However,
just like its predecessors, this function is wrong by design and it
should probably be replaced with a stricter one, a lazy one, or be
totally removed (it's only used in checks to avoid calling a possibly
scheduled event, and when freeing a tasklet). Regardless, for now the
function exists so the flag is removed only if the deletion could be
done, which covers all cases we're interested in regarding the insertion.
This removal is safe against a concurrent tasklet_wakeup_on() since
MT_LIST_DEL() guarantees the atomic test, and will ultimately clear
the flag only if the task could be deleted, so the flag will always
reflect the last state.
This should carefully be backported as far as 2.2 after some
observation period. This patch depends on the previous patch
"MINOR: task: remove __tasklet_remove_from_tasklet_list()".
This function is only used at a single place directly within the
scheduler in run_tasks_from_lists() and it really ought not be called
by anything else, regardless of what its comment says. Let's delete
it, move the two lines directly into the call place, and take this
opportunity to factor the atomic decrement on tasks_run_queue. A comment
was added on the remaining function tasklet_remove_from_tasklet_list() to
mention the risks in using it.
In process_runnable_tasks(), we walk the run queue and pick tasks to
insert them into the local list. And for each of these operations we
perform a few increments, some of which are atomic, and they're even
performed under the runqueue's lock. This is useless inside the loop,
better do them at the end, since we don't use these values inside the
loop and they're not used anywhere else either during this time. The
only one is task_list_size which is accessed in parallel by other
threads performing remote tasklet wakeups, but it's already
approximate and is used to decide to get out of the loop when the
limit is reached. So now we compute it first as an initial budget
instead.
This function is only called at a single place and adds more confusion
than it removes. It also makes one think it could be used outside of
the scheduler while it must absolutely not. Let's just move its two
lines to the call place, making the code more readable there. In
addition this clearly shows that the preliminary LIST_INIT() is
useless since the entry is immediately overwritten.
Commit a5a447984 ("MINOR: debug: add "debug dev sched" to stress the
scheduler.") doesn't scale with threads because ha_random64() takes care
of being totally thread-safe for use with UUIDs. We don't need this for
the stress-testing functions, let's just implement a xorshift PRNG
instead. On 8 threads the performance jumped from 230k ctx/s with 96%
spent in ha_random64() to 14M ctx/s.
This command supports starting a bunch of tasks or tasklets, either on the
current thread (mask=0), all (default), or any set, either single-threaded
or multi-threaded, and possibly auto-scheduled.
These tasks/tasklets will randomly pick another one to wake it up. The
tasks only do it 50% of the time while tasklets always wake two tasks up,
in order to achieve roughly 50% load (since the target might already be
woken up).
res.body may be called from a health-check. It is probably never used, but
it is possible. In such a case, there is no channel. Thus we must not use it
unconditionally to set the SMP_F_MAY_CHANGE flag on the smp.
Now the condition tests the channel first. In addition, the flag is not set
if the payload is fully received.
This patch must be backported as far as 2.2.
L7OKC may now be used as an error status for an HTTP/TCP expect rule. Thus
it is for instance possible to write:
option httpchk GET /isalive
http-check expect status 200,404
http-check expect status 200 error-status L7OKC
It is more or less the same as the disable-on-404 option, except that a
server which is DOWN and comes up again while still replying 404 will be
set to NOLB state, whereas it would stay in DOWN state with the
disable-on-404 option.
Regarding the health counter, a check finished with the CONDPASS result is
now the same as with the PASSED result: the health counter is always
incremented. Before, it was only incremented if the health counter was
not 0.
There is no change for the disable-on-404 option because it is only
evaluated for running or stopping servers, so with a health check counter
greater than 0. But it will make it possible to handle the (STOPPED ->
STOPPING) transition for servers.
The parsing of the check options based on tcp-check rules (redis, spop,
smtp, http...) is moved away from check.c. Now, these functions are placed
in tcpcheck.c. These functions are only related to the tcpcheck ruleset
configured on a proxy and not to the health-check attached to a server.
This option is now deprecated. It is recent, but it is now marked as
deprecated as far as 2.2. Thus, there is now a warning in the 2.4 if this
option is still used. It will be removed in 2.5.
Because the 2.3 is quite new, this patch may be backported to 2.3.
This option is now ignored because I/O check buffers are now allocated using the
buffer pool. Thus, it is marked as deprecated in the documentation and ignored
during the configuration parsing. The field is also removed from the global
structure.
Because this option is ignored since a recent fix, backported as far as 2.2,
this patch should be backported too. Especially because it updates the
documentation.
The special handling of in-progress connect rules at the beginning of the
tcpcheck_main() function can be removed. Instead, at the beginning of the
tcpcheck_eval_connect() function, we test if there is already an existing
connection. In this case, it means we are waiting for a connection
establishment. In addition, before evaluating a new connect rule, we take
care to release any previous connection.
Historically, the input and output buffers of a check are allocated by hand
during the startup, with a specific size (not necessarily the same as
other buffers). But since the recent refactoring of the checks to rely
exclusively on the tcp-checks and to use the underlying mux layer, this part
is totally buggy. Indeed, because these buffers are now passed to a mux,
they may be swapped if a zero-copy is possible. In fact, for now it is
only possible in h2_rcv_buf(). Thus the bug concretely only exists if a h2
health-check is performed. But, it is a latent bug for other muxes.
Another problem is the size of these buffers: because it may differ from the
other buffers' size, it might be a source of bugs.
Finally, for configurations with hundreds of thousands of servers, having 2
buffers per check always allocated may be an issue.
To fix the bug, we now allocate these buffers when required using the buffer
pool. Thus not-running checks don't waste memory and muxes may swap them if
possible. The only drawback is that the check buffers now always have the
same size as the buffers used by the streams. This indirectly deprecates the
"tune.chksize" global option.
In addition, the http-check regtest has been updated to perform some h2
health-checks.
Many thanks to @VigneshSP94 for their help on this bug.
This patch should solve the issue #936. It relies on the commit "MINOR:
tcpcheck: Don't handle anymore in-progress send rules in tcpcheck_main".
Both must be backported as far as 2.2.
The special handling of in-progress send rules at the beginning of the
tcpcheck_main() function can be removed. Instead, at the beginning of the
tcpcheck_eval_send() function, we test if there is some data in the output
buffer. In this case, it means we are evaluating an unfinished send rule and
we can jump to the sending part, skipping the formatting part.
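The resulting logic, roughly sketched (b_data() and check->bo are the real
buffer API and field; the surrounding structure is simplified):

    /* at the beginning of tcpcheck_eval_send(): */
    if (b_data(&check->bo))
            /* unfinished send rule: the data are already formatted,
             * jump straight to the sending part.
             */
            goto do_send;
    /* otherwise format the payload of the new send rule first */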
This patch is mandatory for a major fix on the checks and must be backported
as far as 2.2.
When a new kind of check is found during the parsing of a proxy section (via
an option directive), we must reset tcpcheck flags for this proxy. It is
mandatory to not inherit some flags from a previously declared check (for
instance in the default section).
This patch must be backported as far as 2.2.
Building with gcc-9.3.0 without threads may result in this warning:
In file included from include/haproxy/api-t.h:36,
from include/haproxy/api.h:33,
from src/fd.c:90:
src/fd.c: In function 'updt_fd_polling':
include/haproxy/fd.h:507:11: warning: array subscript 63 is above array bounds of 'int[1]' [-Warray-bounds]
507 | DISGUISE(write(poller_wr_pipe[tid], &c, 1));
include/haproxy/compiler.h:92:41: note: in definition of macro 'DISGUISE'
92 | #define DISGUISE(v) ({ typeof(v) __v = (v); ALREADY_CHECKED(__v); __v; })
| ^
src/fd.c:113:5: note: while referencing 'poller_wr_pipe'
113 | int poller_wr_pipe[MAX_THREADS]; // Pipe to wake the threads
| ^~~~~~~~~~~~~~
gcc is wrong but this time it cannot be blamed because it doesn't know
that the FD's thread_mask always has at least one bit set. Let's add
the test for all_threads_mask there, which also allows removing that test
and dropping the else block.
Another bug in the peers message parser was uncovered by last commit
1dfd4f106 ("BUG/MEDIUM: peers: fix decoding of multi-byte length in
stick-table messages"): the function return on incomplete message does
not check if the channel has a pending close before deciding to return
0. It did not hurt previously because the loop calling co_getblk() once
per character would have depleted the buffer and hit the end, causing
<0 to be returned and matching the condition. But now that we process
at once what is available this cannot be relied on anymore and it's
now clearly visible that the final check is missing.
What happens when this strikes is that if a peer connection breaks in
the middle of a message, the function will return 0 (missing data) but
the caller doesn't check for the closed buffer, subscribes to reads,
and the applet handler is immediately called again since some data are
still available. This is detected by the loop prevention and the process
dies complaining that an appctx is spinning.
This patch simply adds the check for closed channel. It must be
backported to the same versions as the fix above.
Since commit 3d08236cb3 HAProxy can be trivially
crashed remotely by sending an `accept-encoding` HTTP request header that
contains 16 commas.
This is because the `values` array in `accept_encoding_normalizer` accepts only
16 entries and it is not verified whether the end is reached during looping.
Fix this issue by checking the length. This patch also simplifies the ist
processing in the loop, because it manually calculated offsets and lengths,
when the ist API exposes perfectly safe functions to advance and truncate ists.
I wonder whether the accept_encoding_normalizer function is able to re-use some
existing function for parsing headers that may contain lists of values. I'll
leave this evaluation up to someone else, only patching the obvious crash.
This commit is 2.4-dev specific and was merged just a few hours ago. No
backport needed.
The cache section's process-vary option takes a 0 or 1 value to disable
or enable the vary processing.
When disabled, a response containing a Vary header will never be cached.
When enabled, we will calculate a preliminary hash for a subset of request
headers on all the incoming requests (which might come with a cpu cost) which
will be used to build a secondary key for a given request (see RFC 7234#4.1).
The default value is 0 (disabled).
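For example, a cache section enabling it might look like this (sizes are
illustrative):

    cache my_cache
        total-max-size 64
        max-object-size 10000
        process-vary 1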
Calculate a preliminary secondary key for every request we see so that
we can have a real secondary key if the response is cacheable and
contains a manageable Vary header.
The cache's ebtree is now allowed to have multiple entries with the same
primary key. Two of those entries will be distinguished thanks to
secondary keys stored in the cache_entry (based on hashes of a subset of
their headers).
When looking for an entry in the cache (cache_use), we still use the
primary key (built the same way as before), but in case of match, we
also need to check if the entry has a vary signature. If it has one, we
need to perform an extra check based on the newly built secondary key.
We will only be able to forge a response out of the cache if both the
primary and secondary keys match with one of our entries. Otherwise the
request will be forwarded to the server.
The Vary functionality is based on a secondary key that needs to be
calculated for every request to which a server answers with a Vary
header. The Vary header, which can only be found in server responses,
determines which headers of the request need to be taken into account in
the secondary key. Since we do not want to have to store all the headers
of the request until we have the response, we will pre-calculate as many
sub-hashes as there are headers that we want to manage in a Vary
context. We will only focus on a subset of headers which are likely to
be mentioned in a Vary response (accept-encoding and referer for now).
Every managed header will have its own normalization function which is
in charge of transforming the header value into a core representation,
more robust to insignificant changes that could exist between multiple
clients. For instance, two accept-encoding values mentioning the same
encodings but in different orders should give the same hash.
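A hedged illustration of such an order-insensitive normalization (not the
actual implementation): hash each comma-separated token independently and
combine the hashes with a commutative operation, so "gzip, br" and "br,gzip"
yield the same result.

    #include <ctype.h>
    #include <stdint.h>

    static uint32_t norm_accept_encoding(const char *val)
    {
            uint32_t hash = 0;

            while (*val) {
                    uint32_t h = 5381;

                    /* hash one token, ignoring spaces and case */
                    while (*val && *val != ',') {
                            if (!isspace((unsigned char)*val))
                                    h = h * 33 + tolower((unsigned char)*val);
                            val++;
                    }
                    hash += h; /* '+' is commutative: order does not matter */
                    if (*val == ',')
                            val++;
            }
            return hash;
    }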
This patch adds a function that parses a Vary header value and checks if
all the values belong to our supported subset. It also adds the
normalization functions for our two headers, as well as utility
functions that can prebuild a secondary key for a given request and
transform it into an actual secondary key after the vary signature is
determined from the response.
When at least one data filter is registered on a channel, the offsets of all
filters must be kept up to date. For data filters but also for others. It is
safer to do it in that way. Indirectly, this patch fixes 2 hidden bugs
revealed by the commit 22fca1f2c ("BUG/MEDIUM: filters: Forward all filtered
data at the end of http filtering").
The first one, the worst of both, happens at the end of http filtering when
at least one data filter is registered on the channel. We call the
http_end() callback function on the filters, when defined, to finish the
http filtering. But it is performed for all filters. Before the commit
22fca1f2c, the only risk was to call the http_end() callback function
unexpectedly on a filter. Now, we may have an overflow on the offset
variable, used at the end to forward all filtered data. Of course, from the
moment we forward an arbitrary huge amount of data, all kinds of bad things
may happen. So offset computation is performed for all filters and
http_end() callback function is called only for data filters.
The other one happens when a data filter alters the data of a channel: it
must update the offsets of all previous filters. But the offsets of non-data
filters must be up to date, otherwise, here too we may have an integer
overflow.
Another way to fix these bugs is to always ignore non-data filters from the
offsets computation. But this patch is safer and probably easier to
maintain.
This patch must be backported in all versions where the above commit is. So
as far as 2.0.
Restore init of log-format list in parse_http_del_header which was
accidentally deleted by commit ebdd4c55da
(implementation of different header matching methods for
http-request/response del-header).
This is related to GitHub issue #909
"RAND_keep_random_devices_open" is OpenSSL specific, does not present
in other OpenSSL variants like LibreSSL or BoringSSL. BoringSSL recently
"updated" its internal openssl version to 1.1.1, we temporarily set it
back to 1.1.0, as we are going to remove that hack, let us add proper
guarding.
Level-7 retries are only possible with a restricted number of HTTP
return codes. While it is usually not safe to retry on 401 and 403, I
came up with an authentication backend which was not synchronizing
authentication of users. While not perfect, being allowed to also retry
on those return codes is really helpful and acts as a hotfix until we
can fix the backend.
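For example (a hedged configuration sketch), allowing retries on those codes
in a backend:

    backend be_auth
        retry-on conn-failure 401 403
        server app1 192.0.2.10:8080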
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
This patch adds the -m flag, which allows specifying the header name
matching method when deleting headers from an http request/response.
Currently beg, end, sub, str and reg are supported.
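For example, deleting every request header whose name starts with
"x-private-" (illustrative):

    http-request del-header x-private- -m beg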
This is related to GitHub issue #909
Function __http_find_header is used to search headers by name using a
specified matching method. Matching by substring returned unexpected results
due to a wrong length of the substring supplied to the strnistr function.
Also fixed the boolean condition by inverting it, as we're interested in
headers that contain the substring.
This patch should be backported as far as 2.2
Baptiste reported a new crash affecting 2.3 which can be triggered
when using H2 on the backend, with http-reuse always and with tens
of clients doing close only. There are a few combined cases which cause
this to happen, but each time the issue is the same, an already freed
session is dereferenced in session_unown_conn().
Two cases were identified to cause this:
- a connection referencing a session as its owner, which is detached
from the session's list and is destroyed after this session ends.
The test on conn->owner before calling session_unown_conn() is not
sufficient as the pointer is not null but is not valid anymore.
- a connection that never goes idle and that gets killed from the
mux, where session_free() is called first, then conn_free() calls
session_unown_conn() which scans the just freed session for older
connections. This one is only triggered with DEBUG_UAF.
The reason for this session to be present here is that it's needed during
the connection setup, to be passed to conn_install_mux_be() to mux->init()
as the owning session, but it's never deleted afterwards. Furthermore, even
conn_session_free() doesn't delete this pointer after freeing the session
that lies there. Both do definitely result in a use-after-free that's more
easily triggered under DEBUG_UAF.
This patch makes sure that the owner is always deleted after detaching
or killing the session. However it is currently not possible to clear
the owner right after a synchronous init because the proxy protocol
apparently needs it (a reg test checks this), and if we leave it past
the connection setup with the session not attached anywhere, it's hard
to catch the right moment to detach it. This means that the session may
remain in conn->owner as long as the connection has never been added to
nor removed from the session's idle list. Given that this patch needs to
remain simple enough to be backported, instead it adds a workaround in
session_unown_conn() to detect that the element is already not attached
anywhere.
This fix absolutely requires previous patch "CLEANUP: connection: do not
use conn->owner when the session is known" otherwise the situation will
be even worse, as some places used to rely on conn->owner instead of the
session.
The fix could theoretically be backported as far as 1.8. However, the code
in this area has significantly changed along versions and there are more
risks of breaking working stuff than fixing real issues there. The issue
was really woken up in two steps during 2.3-dev when slightly reworking
the idle conns with commit 08016ab82 ("MEDIUM: connection: Add private
connections synchronously in session server list") and when adding
support for storing used H2 connections in the session and adding the
necessary call to session_unown_conn() in the muxes. But the same test
managed to crash 2.2 when built in DEBUG_UAF and patched like this,
proving that we used to already leave dangling pointers behind us:
| diff --git a/include/haproxy/connection.h b/include/haproxy/connection.h
| index f8f235c1a..dd30b5f80 100644
| --- a/include/haproxy/connection.h
| +++ b/include/haproxy/connection.h
| @@ -458,6 +458,10 @@ static inline void conn_free(struct connection *conn)
| sess->idle_conns--;
| session_unown_conn(sess, conn);
| }
| + else {
| + struct session *sess = conn->owner;
| + BUG_ON(sess && sess->origin != &conn->obj_type);
| + }
|
| sockaddr_free(&conn->src);
| sockaddr_free(&conn->dst);
It's uncertain whether an existing code path there can lead to dereferencing
conn->owner when it's bad, though certain suspicious memory corruption bugs
make one think it's a likely candidate. The patch should not be hard to
adapt there.
Backports to 2.1 and older are left to the appreciation of the person
doing the backport.
A reproducer consists of this:
global
    nbthread 1

listen l
    bind :9000
    mode http
    http-reuse always
    server s 127.0.0.1:8999 proto h2

frontend f
    bind :8999 proto h2
    mode http
    http-request return status 200
Then this will make it crash within 2-3 seconds:
$ h1load -e -r 1 -c 10 http://0:9000/
If it does not, it might be that DEBUG_UAF was not used (the crash is harder
to trigger then) and it might be useful to restart.
At a few places we used to rely on conn->owner to retrieve the session
while the session is already known. This is not correct because at some
of these points the reason the connection's owner was still the session
(instead of NULL) is a mistake. At one place a comparison is even made
between the session and conn->owner assuming it's valid without checking
if it's NULL. Let's clean this up to use the session all the time.
Note that this will be needed for a forthcoming fix and will have to be
backported.
HAVE_SSL_CTX_SET_CIPHERSUITES is a newly defined macro set in openssl-compat.h,
which helps to identify ssl libs (currently OpenSSL-1.1.1 only) that support
TLS 1.3 ciphersuites manipulation on a TLS 1.3 context.
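A hedged sketch of what such a guard may look like in openssl-compat.h (the
real condition may differ slightly):

    #if (HA_OPENSSL_VERSION_NUMBER >= 0x10101000L) \
        && !defined(LIBRESSL_VERSION_NUMBER) \
        && !defined(OPENSSL_IS_BORINGSSL)
    #define HAVE_SSL_CTX_SET_CIPHERSUITES
    #endif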
When a file from a crt-list was not found, it was silently ignored,
letting HAProxy start without it.
This bug was introduced by 47da821 ("MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist").
This commit adds a "found" variable which is checked once we have tried every
bundle combination, so we can exit with an error if none was found.
Must be backported in 2.3.
When a non-existing file was specified in the configuration, haproxy
did not exit with an error, which is not normal.
This bug was introduced by dfa93be ("MEDIUM: ssl: emulate multi-cert
bundles loading in standard loading") which does nothing if the stat
failed.
This patch introduces a "found" variable which is checked at the end of
the function, so we exit with an error if no file was found.
Must be backported to 2.3.
In issue #970 it was reported that the bundle loading does not work
anymore with crt-list.
This bug was introduced by 47da821 ("MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist") which incorrectly uses "path"
instead of "crt_path" in the name resolution.
Must be backported to 2.3.
This cannot happen with a response coming from a server, but an errorfile may be
empty. In this case, the http-after-response ruleset must not be evaluated
because it is totally unexpected to manipulate headers on an empty HTX message.
This patch must be backported everywhere the http-after-response rules are
supported, i.e as far as 2.2.
In bug #959 it was reported that haproxy segfaults on startup when trying
to load a certificate which uses the X509v3 AKID extension but without the
keyid field.
This field is not mandatory and could be replaced by the serial or the
DirName.
For example:
X509v3 extensions:
    X509v3 Basic Constraints:
        CA:FALSE
    X509v3 Subject Key Identifier:
        42:7D:5F:6C:3E:0D:B7:2C:FD:6A:8A:32:C6:C6:B9:90:05:D1:B2:9B
    X509v3 Authority Key Identifier:
        DirName:/O=HAProxy Technologies/CN=HAProxy Test Intermediate CA
        serial:F2:AB:C1:41:9F:AB:45:8E:86:23:AD:C5:54:ED:DF:FA
This bug was introduced by 70df7b ("MINOR: ssl: add "issuers-chain-path" directive").
This patch must be backported as far as 2.2.
In the context of a progressive backend migration, we want to be able to
activate SSL on outgoing connections to the server at runtime without
reloading.
This patch adds a `set server ssl` command; in order to allow that:
- add `srv_use_ssl` to `show servers state` command for compatibility,
also update associated parsing
- when using default-server ssl setting, and `no-ssl` on server line,
init SSL ctx without activating it
- when triggering ssl API, de/activate SSL connections as requested
- clean ongoing connections as it is done for addr/port changes, without
checking prior server state
example config:
backend be_foo
    default-server ssl
    server srv0 127.0.0.1:6011 weight 1 no-ssl
show servers state:
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - -1
where srv0 can switch to ssl later during the runtime:
set server be_foo/srv0 ssl on
5 be_foo 1 srv0 127.0.0.1 2 0 1 1 15 1 0 4 0 0 0 0 - 6011 - 1
Also update existing tests and create a new one.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
A common init for ssl_ctx will later be usable in other functions in
order to support hot enabling of ssl during runtime.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
CONNECT requests are bodyless messages but with no EOM blocks. Thus, conditions
to stop waiting for the message payload are not suited to this kind of
message. Indeed, the message finishes on an EOH block. But the tunnel mode at
the stream level is only set in HTTP_XFER_BODY analyser. So, the stream is
blocked, waiting for a body that does not exist till a timeout expires.
To fix this bug, we just stop waiting for a body for CONNECT requests. Another
solution is to rely on HTX_SL_F_BODYLESS/HTTP_MSGF_BODYLESS flags. But this one
is less intrusive.
This patch must be backported as far as 2.0. For the 2.0, only the HTX part
must be fixed.
When http filtering ends, if there are some filtered data not forwarded yet, we
forward them in flt_http_end(). Most of the time, this doesn't happen, except
when a tunnel is established using a CONNECT. In this case, there is no EOM on
the request and there is no body. Thus the headers are never forwarded, blocking the
stream.
This patch must be backported as far as 2.0. Prior versions don't suffer from
this bug because there is no HTX support. On the 2.0, the change is only applicable
on HTX streams. A special test must be performed to make sure.
Return ERR_NONE instead of 0 on success for all config callbacks that should
return ERR_* codes. There is no change because ERR_NONE is a macro equal to
0. But this makes the return value more explicit.
post-check function callbacks must return ERR_* flags. Thus, init_h2() is fixed
to return ERR_NONE on success or (ERR_ALERT|ERR_FATAL) on error.
This patch may be backported as far as 2.2.
Functions registered to release memory per-thread have no return value. But the
registering function and the function pointer in the per_thread_free_fct structure
specify it should return an integer. This patch fixes it.
This patch may be backported as far as 2.0.
When tcp-check or http-check rules are used, if the corresponding check option
(option tcp-check and option httpchk) is declared after the ruleset, a warning
is emitted about an unused check ruleset while there is no problem in reality.
This patch must be backported as far as 2.2.
In sync mode, if an applet receives an ack while the processing delay has already
expired, there is no frame waiting for this ack. But there is no reason to
close the connection in this case. The ack may be ignored and the connection may
be reused to process another frame. The only reason to trigger an error and
close the connection is when the wrong ack is received while there is still a
frame waiting for its ack. In sync mode, this should never happen.
This patch may be backported in all versions supporting the SPOE.
When a SPOE applet is used to send a frame, a reference on this applet is saved
in the spoe context of the offloaded stream. But, if the applet is released
before receiving the corresponding ack, we must be sure to remove this
reference. This was performed for fragmented frames only. But it must also be
performed for the spoe contexts in the applet waiting_queue and in the thread
waiting_queue (used in async mode).
This bug leads to a memory corruption when an offloaded stream tries to update
the state of a released applet because it still has a reference on it. There are
many ways to trigger this bug. The easiest is probably during reloads. On the
old process, all applets are woken up to be released ASAP.
Many thanks to Maciej Zdeb for reporting the bug and for working on it for 2
months. Without his help, it would have been much more difficult to fix the
bug. It is always a huge pleasure to see how enthusiastic and helpful some
users are. Thanks again Maciej!
This patch must be backported to all versions where the spoe is supported (>=
1.7).
First of all, this patch is tagged as a bug. But in fact, it only fixes a bug in
the 2.2. On the 2.3 and above, it only adds the ability to display warnings when
an http-error directive is parsed from a proxy section and when an errorfile
directive is parsed from a http-errors section.
But on the 2.2, it makes sure to display the warning emitted on a content-length
mismatch when an errorfile is parsed. The following is only applicable to the
2.2.
commit "BUG/MINOR: http-htx: Just warn if payload of an errorfile doesn't match
the C-L" (which is only present in 2.2, 2.1 and 2.0 trees, i.e see commit
7bf3d81d3cf4b9f4587 in 2.2 tree), is changing the behavior of `http_str_to_htx`
function. It may now emit warnings. And it is the caller's responsibility to
display them.
But the warning is missing when an 'http-error' directive is parsed from
a proxy section. It is also missing when an 'errorfile' directive is
parsed from a http-errors section.
This bug only exists on the 2.2. On earlier versions, these directives
are not supported and on later ones, an error is triggered instead of a
warning.
Thanks to William Dauchy who spotted the bug.
This patch must be backported as far as 2.2.
Report an error when using an explicit proto for a connect rule with a
non-compatible mode with regard to the selected check type (tcp-check
vs http-check).
If the check mux has been explicitly defined but is incompatible with
the selected check type (tcp-check vs http-check), report a warning and
prevent haproxy startup.
Only reuse the mux from server if the check is using the same mode.
For example, this prevents a tcp-check on a h2 server to select the h2
multiplexer instead of passthrough.
This bug was introduced by the following commit :
BUG/MEDIUM: checks: Use the mux protocol specified on the server line
It must be backported up to 2.2.
Fixes github issue #945.
req.cook, req.cook_val, req.cook_cnt and their response counterparts may be
called without a cookie name. In this case, empty parentheses may be used, or no
parentheses at all. In both cases, the result must be the same. But only the
first one works. The second one always returns a failure. This patch fixes this bug.
Note that on old versions (< 2.2), both cases fail.
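For illustration, after the fix both of these forms behave the same
(hypothetical header name):

    http-request set-header X-First-Cookie %[req.cook]
    http-request set-header X-First-Cookie %[req.cook()]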
This patch must be backported in all stable versions.
HTTP sample fetches dealing with the cookies (req/res.cook,
req/res.cook_val and req/res.cook_cnt) must be prepared to be called
without a cookie name. For the first two, the first cookie value is
returned, regardless of its name. For the last one, all cookies are counted.
To do so, http_extract_cookie_value() may now be called with no cookie
name (cookie_name_l set to 0). In this case, the matching on the cookie
name is ignored and the first value found is returned.
Note this patch also fixes matching on cookie values in ACLs.
This should be backported in all stable versions.
There is a bug in peer_recv_msg() due to an incorrect cast when trying
to decode the varint length of a stick-table message, causing lengths
comprised between 128 and 255 to consume one extra byte, ending in
protocol errors. The root cause of this is that peer_recv_msg() tries
hard to reimplement all the parsing and control that is already done in
intdecode() just to measure the length before calling it. And it got it
wrong.
Let's just get rid of this unneeded code duplication and solely rely on
intdecode() instead. The bug was introduced in 2.0 as part of a cleanup
pass on this code with commit 95203f218 ("MINOR: peers: Move high level
receive code to reduce the size of I/O handler."), so this patch must
be backported to 2.0.
Thanks to Yves Lafon for reporting the problem.
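For context, a hedged reconstruction of the decoding scheme implemented by
intdecode() (not the exact code): the first byte encodes values up to 239
directly, and larger values continue with 7-bit groups whose bytes have the
high bit set. Note the unsigned char everywhere: with a plain (possibly
signed) char, bytes 128..255 compare as negative and length tests go wrong,
which is the kind of cast error fixed here.

    /* assumes <stdint.h> for uint64_t */
    static uint64_t varint_decode(const unsigned char **p,
                                  const unsigned char *end)
    {
            const unsigned char *msg = *p;
            uint64_t i;

            if (msg >= end)
                    goto fail;
            i = *msg++;
            if (i >= 240) {
                    unsigned int shift = 4;

                    do {
                            if (msg >= end)
                                    goto fail;
                            i += (uint64_t)*msg << shift;
                            shift += 7;
                    } while (*msg++ >= 128);
            }
            *p = msg;
            return i;
    fail:
            *p = NULL; /* signal an incomplete or failed decode */
            return 0;
    }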
The TX part of a cache for a dictionary is made of a reserved array of ebtree
nodes which point to dictionary entries. So when we flush the TX part of such
a cache, we must not only remove these nodes from their ebtree, we must also
reset their values. Furthermore, the LRU key and the last lookup
result must also be reset.
If we could not decode the ID of a dictionary entry from a peer update message,
we must inform the remote peer about such an error as this is done for
any other decoding error.
Define per-thread counters allocated with the greatest size of any
stat module counters. This variable is named trash_counters.
When using a proxy without allocated counters, return the trash counters
from EXTRA_COUNTERS_GET instead of a dangling pointer to prevent
segfault.
This is useful for all the proxies used internally and not
belonging to the global proxy list. As these objects do not appear on
the stat report, it does not matter that they use the dummy counters.
For this fix to be functional, the extra counters are explicitly
initialized to NULL in the proxy/server/listener init functions.
Most notably, the crash has already been detected with the following
vtc:
- reg-tests/lua/txn_get_priv.vtc
- reg-tests/peers/tls_basic_sync.vtc
- reg-tests/peers/tls_basic_sync_wo_stkt_backend.vtc
There are probably other parts that may be impacted (SPOE for example).
This bug was introduced in the current release and does not need to be
backported. The faulty commits are
"MINOR: ssl: count client hello for stats" and
"MINOR: ssl: add counters for ssl sessions".
Register a new function on POST DEINIT to free stats fields/lines for
each domain.
This patch does not fix a critical bug but may be backported to 2.3.
Do not cache responses that do not have an explicit expiration time
(s-maxage or max-age Cache-Control directives or Expires header) or a
validator (ETag or Last-Modified headers) anymore, as suggested in
RFC 7234#3.
The TX_FLAG_IGNORE flag is used instead of the TX_FLAG_CACHEABLE so as
not to change the behavior of the checkcache option.
This size is used by some pattern matching functions to determine if there
is sufficient room in the buffer to add a final \0 if necessary.
If the size is not set, the conditions use an uninitialized value.
Note: it seems this bug can't cause a crash.
Should be backported until 2.2 (at least)
The functions add a final 0 to the string if the final 0 is not set,
but don't check the CONST flag. This patch duplicates the strings
if the final zero is not set and the string is CONST.
Should be backported until 2.2 (at least)
In issue #940, it was reported that the crt-list does not work correctly
anymore. Indeed, when inserting a crt-list line which uses a certificate
previously seen in the crt-list, this line won't be inserted in the SNI
list and will be silently ignored.
This bug was introduced by commit 47da821 "MEDIUM: ssl: emulates the
multi-cert bundles in the crtlist".
This patch also includes a reg-test which tests this issue.
This bugfix must be backported in 2.3.
Commit a66adf41e ("MINOR: http-htx: Add understandable errors for the
errorfiles parsing") added a warning when loading malformed error files,
but this warning may trigger another build warning due to the %lu format
used. Let's simply cast it for output since it's just used for end user
output.
This must be backported to 2.0 like the commit above.
Since commit d0447a7c3 ("MINOR: ssl: add counters for ssl sessions"),
gcc 9+ complains about this:
CC src/ssl_sock.o
src/ssl_sock.c: In function 'ssl_sock_io_cb':
src/ssl_sock.c:5416:3: warning: 'counters_px' may be used uninitialized in this function [-Wmaybe-uninitialized]
5416 | ++counters_px->reused_sess;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
src/ssl_sock.c:5133:23: note: 'counters_px' was declared here
5133 | struct ssl_counters *counters, *counters_px;
| ^~~~~~~~~~~
Either a listener or a server is expected there, so the counters are
always initialized, but the compiler cannot know this. Let's preset
them and test before updating the counter, we're not in a hot path
here.
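The shape of the fix, roughly (names taken from the warning above):

    struct ssl_counters *counters = NULL, *counters_px = NULL;

    /* ... assigned from either the listener or the server ... */

    if (counters_px)
            ++counters_px->reused_sess; /* only update when really set */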
No backport is needed.
This function used to grab the idle lock when scanning the threads for
idle connections, but it doesn't need it since the lock only protects
the tree. Let's remove it.
No details are provided when an error occurs during the parsing of an errorfile,
thus it is a bit hard to diagnose where the problem is. Now, when it happens, an
understandable error message is reported.
This patch is not a bug fix in itself. But it will be required to change a
fatal error into a warning in the latest stable releases. Thus it must be backported
as far as 2.0.
The default dh_param value is 2048 and it's preset to zero unless explicitly
set, so we must not report a warning about the DH param not being loadable in
1024 bits when we're going to use 2048. Thanks to Dinko for reporting this.
This should be backported to 2.2.
Since commit 37bafdcbb ("MINOR: sock_inet: move the IPv4/v6 transparent mode code
to sock_inet"), build options for transparent proxying are registered twice.
This patch removes the older one.
It does not require heavy deletion from the expr anymore, so we can now
turn this into a singly linked list since most of the time we want to delete
all instances of a given pattern from the head. By doing so we save 32 bytes
of memory per pattern. The pat_unlink_from_head() function was adjusted
accordingly.
Instead of using LIST_DEL() on the pattern itself inside an expression,
we look it up from its head. The goal is to get rid of the double-linked
list while this usage remains exclusively for freeing on startup error!
Instead of scanning all elements from the expression and using the
slow delete path there, let's use the faster way which involves
pat_delete_gen() while the elements are detached from their reference.
When purging all of a reference, it's much more efficient to scan the
reference patterns from the reference head and delete all derivative
patterns than to scan the expressions. The only thing is that we need
to proceed both for the current and next generations, in case there is
a huge gap between the two. With this, purging 20M IP addresses in small
batches of 100 takes roughly 3 seconds.
This function will be usable to purge at most a specified number of old
entries from a reference. Entries are declared old if their generation
number is in the past compared to the one passed in argument. This will
ease removal of early entries when new ones have been appended.
We also call malloc_trim() when available, at the end of the series,
because this is one place where there is a lot of memory to save. Reloads
of 1M IP addresses used in an ACL made the process grow up to 1.7 GB RSS
after 10 reloads and roughly stabilize there without this call, versus
only 260 MB when the call is present. Sadly there is no direct equivalent
for jemalloc, which stabilizes around 800MB-1GB.
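The call itself is simple; a hedged sketch of how it is typically guarded
(the HA_HAVE_MALLOC_TRIM macro name is an assumption):

    /* requires <malloc.h> on glibc */
    #if defined(HA_HAVE_MALLOC_TRIM)
            malloc_trim(0); /* release freed heap pages back to the OS */
    #endif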
pat_ref_load() basically combines pat_ref_append() and pat_ref_commit().
It's very similar to pat_ref_add() except that it also allows setting the
generation ID and the line number. pat_ref_add() was modified to directly
rely on it to avoid code duplication. Note that a previous declaration
of pat_ref_load() was removed as it was just a leftover of an earlier
incarnation of something possibly similar, so no existing functionality
was changed here.
This function will be used after a successful pat_ref_append() to propagate
the pattern to all use places (including parsing and indexing). On failure,
it will entirely roll back all insertions and free the pattern itself. It
also preserves the generation number so that it is convenient for use in
association with pat_ref_append(). pat_ref_add() was modified to rely on
it instead of open-coding the insertion and roll-back.
Instead of matching any pattern found in the tree, only match those
matching the current generation of entries. This will make sure that
reloads are atomic, regardless of the time they take to complete, and
that newly added data are not matched until the whole reference is
committed. For consistency we proceed the same way on "show map" and
"show acl".
This will have no impact for now since generations are not used.
Right now it's not possible to perform a safe reload because we don't
know what patterns were recently added or were already present. This
patch adds a generation counter to the reference patterns so that it
is possible to know what generation of the reference they were loaded
with. A reference now has two generations, the current one, used for
all additions, and the next one, allocated to those wishing to update
the contents. The generation wraps at 2^32 so comparisons must be made
relative to the current position.
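Since the generation wraps, ordering must use modular arithmetic; a minimal
sketch of such a relative comparison (helper name is illustrative):

    /* true if generation <a> is older than <b>, assuming they are
     * within 2^31 of each other (wrapping-safe comparison).
     */
    static inline int gen_older(unsigned int a, unsigned int b)
    {
            return (int)(a - b) < 0;
    }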
The idea will be that upon full reload, the caller will first get a new
generation ID, will insert all new patterns using it, will then switch
the current ID to the new one, and will delete all entries older than
the current ID. This has the benefit of supporting chunked updates that
remain consistent and that won't block the whole process for ages like
pat_ref_reload() currently does.
Till now the only way to remove a known reference was via
pat_ref_delete_by_id() which scans the whole list to find a matching pointer.
Let's add pat_ref_delete_by_ptr() which takes a valid pointer. It can be
called by the function above after the pointer is found, and can also be
used to roll back a failed insertion much more efficiently.
These ones are not used anymore, so let's remove them to reduce the
complexity a bit. The ACL keyword's delete() function could be removed
as well, though most keyword declarations are positional and we have a
high risk of introducing a mistake here, so let's not touch the ACL part.
A few ACL keyword used to reference pat_delete_gen() as the deletion
function but this is not needed since it's the default one now. Let's
just remove this reference.
When we're removing an element under the expression lock, we don't need
anymore to run over all ->delete() functions via the expressions, since
we know that the single function does it fine now. Note that at this
point, pattern->delete() is not used at all throughout the code anymore.
pat_del_tree_gen() was already chained onto pat_del_list_gen() to deal
with remaining cases, so let's complete the merge and have a generic
pattern deletion function acting on the reference and taking care of
reliably removing all elements.
This is the next step in speeding up entry removal. Now we don't scan
the whole lists or trees for elements pointing to the target reference,
instead we start from the reference and delete all linked patterns.
This simplifies some delete functions since we no longer need to delete
multiple times from an expression: all nodes appear after
the reference element. We can now have one generic list and one generic
tree deletion function.
This required the replacement of pattern_delete() with an open-coded
version since we now need to lock all expressions first before proceeding.
This means there is a high risk of lock inversion here but given that the
expressions are always scanned in the same order from the same head, this
must not happen.
Now deleting first entries is instantaneous, and it's still slow to
delete the last ones when looking up their ID since it still requires
to look them up by a full scan, but it's already way faster than
previously. Typically removing the last 10 IPs from a 20M-entry ACL
with a full scan each took less than 2 seconds.
It would be technically possible to make use of indexed entries to
speed up most lookups for removal by value (e.g. IP addresses) but
that's for later.
There is a data model issue in the current pattern design that makes
pattern deletion extremely expensive: there's no direct way from a
reference to access all indexed occurrences. As such, the only way
to remove all indexed entries corresponding to a reference update
is to scan all expressions' lists and trees to find a link to the
reference. While this was possibly OK when map removal was not
common and most maps were small, this is not conceivable anymore
with GeoIP maps containing 10M+ entries and del-map operations that
are triggered from http-request rulesets.
This patch introduces two list heads from the pattern reference, one
for the objects linked by lists and one for those linked by tree node.
Ideally a single list would be enough but the linked elements are too
unrelated to be distinguished at the moment, so we'll need two
lists. However for the long term a single-linked list will suffice but
for now it's not possible due to the way elements are removed from
expressions. As such this patch adds 32 bytes of memory usage per
reference plus 16 per indexed entry, but both will be cut in half
later.
The links are not yet used for deletion, this patch only ensures the
list is always consistent.
Now we have a single prune() function to act on an expression, and one
delete function for the lists and one for the trees. The presence of a
pointer in the lists is enough to warrant a free, and we rely on the
PAT_SF_REGFREE flag to decide whether to free using free() or regfree().
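In a hedged sketch (field names are assumptions), the delete path thus boils
down to:

    /* free the indexed pattern according to how it was allocated */
    if (pat->sflags & PAT_SF_REGFREE)
            regex_free(pat->ptr.reg); /* compiled regex */
    else
            free(pat->ptr.ptr);       /* plain allocated data */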
Currently we have no way to know how to delete/prune a pattern in a
generic way. A pattern doesn't contain its own type so we don't know
what function to call. Tree nodes are roughly OK but not lists where
regex are possible. Let's add one new bit for sflags at index time to
indicate that regex_free() will be needed upon deletion. It's not used
for now.
It's pointless to delete a backref and relink it to the next entry since
the next entry is going to do the exact same and so on until all of them
are deleted. Let's simply delete backrefs on reload.
It's not possible to uniquely update a single expression without updating
the pattern reference, I don't know why we've put the revision in the
expression back then, given that it in fact provides an update for a
full pattern. Let's move the revision into the reference's head instead.
This is one case where we may release large amounts of data at once. Tests
show that without this, after 10 full reloads of an ACL containing 1M IP
addresses, the memory usage grew and stabilized around 1.7 GB of RSS. With
this change, it stays around 260 MB and is stable across reloads.
This patch implements a couple of converters to validate and extract data from a
MQTT (Message Queuing Telemetry Transport) message. The validation consists of a
few checks as well as "packet size" validation. The extraction can get any field
from the variable header and the payload.
This is limited to CONNECT and CONNACK packet types only. All other messages are
considered as invalid. It is not a problem for now because only the first packet
on each side can be parsed (CONNECT for the client and CONNACK for the server).
MQTT 3.1.1 and 5.0 are supported.
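For example (converter names as introduced by this series; the exact fetch
expressions are illustrative):

    tcp-request content reject unless { req.payload(0,0),mqtt_is_valid }
    tcp-request content set-var(sess.proto) req.payload(0,0),mqtt_field_value(connect,protocol_name)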
Reviewed and Fixed by Christopher Faulet <cfaulet@haproxy.com>
This patch implements a couple of converters to validate and extract tag value
from a FIX (Financial Information eXchange) message. The validation consists of
a few checks such as mandatory fields and checksum computation. The extraction
can get any tag value based on a tag string or tag id.
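For example (illustrative; tag 8 is the standard FIX BeginString field):

    tcp-request content reject unless { req.payload(0,0),fix_is_valid }
    tcp-request content set-var(txn.fix_ver) req.payload(0,0),fix_tag_value(8)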
This patch requires the istend() function. Thus it depends on "MINOR: ist: Add
istend() function to return a pointer to the end of the string".
Reviewed and Fixed by Christopher Faulet <cfaulet@haproxy.com>
Let us use SSL_CTX_set1_curves_list, defined by OpenSSL, and also defined in
openssl-compat when SSL_CTRL_SET_CURVES_LIST is present (BoringSSL), relying
on feature detection instead of version numbers.
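A minimal sketch of the resulting guard (the HAVE_* macro name is an
assumption):

    /* feature-based detection: present whenever the library exposes the
     * SSL_CTRL_SET_CURVES_LIST control, regardless of its version.
     */
    #if defined(SSL_CTRL_SET_CURVES_LIST)
    #define HAVE_SSL_CTX_SET1_CURVES_LIST
    #endif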
Following the patch b4daee ("MINOR: sock: add a check against cross
worker<->master socket activities"), this patch adds a dedicated applet
for the master CLI. It ensures that the CLI connection can't be
used with the master rights in the case of bugs.