The new function hap_get_next_build_opt() will iterate over the list of
build options. This will be used for debugging, so that the build options
can be retrieved from the CLI.
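As an illustration, a CLI dump loop could consume such an iterator as follows
(sketch only; the prototype is assumed here to take the previous entry and
return NULL at the end):

  /* illustrative sketch, assumed iterator shape */
  const char *opt = NULL;

  while ((opt = hap_get_next_build_opt(opt)) != NULL)
      chunk_appendf(&trash, "%s\n", opt);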
A new flag RX_F_PASS_PKTINFO is now available, whose purpose is to mark
that the destination address needs to be retrieved on some listeners.
The address can be retrieved from the first received datagram, and this
relies on IP_PKTINFO, IP_RECVDSTADDR and IPV6_RECVPKTINFO support.
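For reference, recovering the destination address of a datagram with
IP_PKTINFO relies on the standard recvmsg()/cmsg API, along these lines
(standalone illustration, not HAProxy code):

  #define _GNU_SOURCE
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <string.h>

  /* returns 0 and fills <dst> with the datagram's destination address */
  static int get_dst_addr(int fd, struct in_addr *dst)
  {
      char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))], data[16];
      struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
      struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                            .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };
      struct cmsghdr *cmsg;
      int one = 1;

      /* must be enabled before the first datagram is received */
      setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &one, sizeof(one));

      if (recvmsg(fd, &msg, 0) < 0)
          return -1;

      for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
          if (cmsg->cmsg_level == IPPROTO_IP && cmsg->cmsg_type == IP_PKTINFO) {
              struct in_pktinfo pi;
              memcpy(&pi, CMSG_DATA(cmsg), sizeof(pi));
              *dst = pi.ipi_addr;
              return 0;
          }
      }
      return -1;
  }

IPv6 listeners use IPV6_RECVPKTINFO and struct in6_pktinfo the same way,
while BSDs expose IP_RECVDSTADDR instead of IP_PKTINFO.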
snr_set_srv_down() (formerly known as snr_update_srv_status()) is
still too ambiguous because it's not clear whether we will be putting
the server under maintenance or not. This is mainly due to the fact that
the function behaves differently depending on whether has_no_ip is set.
By reviewing the function callers, it has now become clear that
snr_resolution_cb() always calls the function with a valid resolution,
so we only want to put the server under maintenance if we don't have a
valid IP address. On the other hand, snr_resolution_error_cb() always
calls the function on error, with either no resolution (for SRV requests)
or a failing resolution (all cases except RSLV_STATUS_VALID), so in
this case we decide whether to put the server under maintenance case by
case (ie: expired? timeout?).
As a result, let's simplify snr_set_srv_down() so that it is only called
when the caller really thinks that the server should be put under
maintenance, which means always for snr_resolution_error_cb(), and only
if the resolution didn't yield a usable IP for snr_resolution_cb().
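Schematically, the callers now look like this (simplified pseudo-shape with
hypothetical variable names, not the exact code):

  /* snr_resolution_cb(): resolution is always valid here */
  if (!has_usable_ip)
      snr_set_srv_down(srv);   /* no usable IP: put under maintenance */

  /* snr_resolution_error_cb(): always invoked on error */
  if (resolution_is_fatal)     /* e.g. expired, timeout, ... */
      snr_set_srv_down(srv);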
RSLV_UPD_CNAME and RSLV_UPD_NAME_ERROR flags have now become useless since
3cf7f987 ("MINOR: dns: proper domain name validation when receiving DNS
response") as they are never set, but we forgot to remove them.
RSLV_UPD_OBSOLETE_IP was introduced with commit a8c6db8d2 ("MINOR: dns:
Cache previous DNS answers.") but the commit didn't make any use of it,
and today the flag is still unused. Since we have no valid use for it,
better remove it to prevent confusion.
A leftover check remained from the recent patch series about server
addr:svc_port propagation: a check on (msg) being set was performed
in srv_update_addr_port(), but msg is always set, so the check is not
needed and confuses Coverity (see GH #2399).
In table_process_entry(), the stktable_data_ptr() result is dereferenced
without checking whether it's NULL first, which may happen when bad inputs
are provided to the function.
However, the data_type and ts arguments were already checked prior to calling
the function, so we know for sure that stktable_data_ptr() will never
return NULL in this case.
Still, some static code analyzers such as Coverity are confused
because they think that the result might possibly be NULL.
(See GH #2398)
To make it explicit that we always provide good inputs and expect valid
result, let's switch to the __stktable_data_ptr() unsafe function.
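The resulting pattern is roughly the following (illustrative; the early
argument checks are the ones already performed by table_process_entry()):

  /* data_type and ts were validated earlier, so the unsafe variant
   * cannot return NULL here and needs no further check.
   */
  ptr = __stktable_data_ptr(t, ts, data_type);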
This reverts commit 18f2ccd244.
The issues found with QUIC fast-forward have been resolved (see GitHub
issue #2372), so it is reenabled by default. If any issue arises, it can
be disabled using the global statement:
  tune.quic.zero-copy-fwd-send off
This can be backported to 2.9, but only after a sensible period of
observation.
If the QCS Tx buffer cannot be allocated in the nego_ff callback, disable
fast-forward for this connection and return immediately. If snd_buf is
later used and still no buffer can be allocated, the connection will be
closed on error.
This should fix the Coverity report from GitHub issue #2390.
This should be backported up to 2.9.
When encoding new HTTP/3 frames, the QCS Tx buffer must be allocated if
currently NULL. Previously, allocation failure was not properly checked,
leaving the connection in an unspecified state, or worse, risking a
crash.
Fix this by setting <h3c.err> to H3_INTERNAL_ERROR each time the
allocation fails. This will stop sending and close the connection. In
the future, it may be better to put the connection on pause waiting for
allocation to succeed but this is too complicated to implement for now
in a reliable way.
Along with the current change, the return value of all HTX parsing functions
(h3_resp_*_send) is now set to a negative value in case of error. A new
BUG_ON() in h3_snd_buf() ensures that if such a value is returned,
either a connection error is registered (via <h3c.err>) or the buffer is
temporarily full (flag QC_SF_BLK_MROOM).
This should fix github issue #2389.
This should be backported up to 2.6. Note that qcc_get_stream_txbuf()
does not exist in 2.9 and below. mux_get_buf() is its equivalent. An
explicit check b_is_null(&qcs.tx.buf) should be used there.
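For such a backport, the check could take roughly this shape (sketch only,
following the note above about mux_get_buf()):

  /* 2.9 and below: mux_get_buf() allocates qcs->tx.buf if needed */
  mux_get_buf(qcs);
  if (b_is_null(&qcs->tx.buf)) {
      h3c->err = H3_INTERNAL_ERROR;  /* allocation failed, stop encoding */
      return -1;
  }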
When parsing an HTX response, if too many headers are present, stop
sending and close the connection with error code H3_INTERNAL_ERROR.
Previously, no error was reported despite the interruption of header
parsing, which caused an infinite loop. However, this is considered
minor as it happens on the response path, on the backend side.
This should be backported up to 2.6.
It relies on previous commit
"MINOR: h3: check connection error during sending".
If an error occurs during HTX to H3 encoding, h3_snd_buf() should be
interrupted. This commit adds this possibility by checking the <h3c.err>
member value. If non-null, the sending loop is stopped and an error is
reported using qcc_set_error().
This commit does not change any behavior for now, as <h3c.err> is never
set during sending. However, this will change in future commits, most
notably to reject too many headers or handle buffer allocation failure.
As such, this commit should be backported along the following fixes.
Note that in 2.6 qcc_set_error() does not exist and must be replaced by
qcc_emit_cc_app().
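The check added by this commit can be pictured as follows (simplified
sketch; on 2.6, qcc_set_error() would be qcc_emit_cc_app() per the note
above, and the real loop also handles HTX forwarding and blocking flags):

  /* simplified shape of the new check in h3_snd_buf() */
  while (count && !htx_is_empty(htx)) {
      if (h3c->err) {
          qcc_set_error(qcs->qcc, h3c->err, 1); /* application error space */
          break;
      }
      /* ... encode the next HTX blocks into H3 frames ... */
  }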
This bug impacts only the QUIC OpenSSL compatibility module (USE_QUIC_OPENSSL_COMPAT).
The TLS capture of information from the client hello enabled by
tune.ssl.capture-buffer-size could not work with USE_QUIC_OPENSSL_COMPAT. This
is due to the fact that the callback set for this feature was replaced by
quic_tls_compat_msg_callback(). In fact, the latter must be registered by
ssl_sock_register_msg_callback(), as is done for the TLS client hello capture.
A call to this function appends the function passed as a parameter to a list of
callbacks to be called when the TLS stack parses a TLS message.
quic_tls_compat_msg_callback() had to be modified to return if it is called
for a non-QUIC TLS session.
Must be backported to 2.8.
This bug impacts only the QUIC OpenSSL compatibility module (USE_QUIC_OPENSSL_COMPAT).
To make this module work, the quic_tls_compat_keylog_callback() function must
be set as the keylog callback, or at least be called by another keylog callback.
This is what SSL_CTX_keylog() was supposed to do. In addition to exporting the
TLS secrets via sample fetches, the latter also calls
quic_tls_compat_keylog_callback() when compiled with USE_QUIC_OPENSSL_COMPAT
defined.
Before this patch, SSL_CTX_keylog() was replaced by quic_tls_compat_keylog_callback()
and the TLS secrets were no longer exported via sample fetches.
Must be backported to 2.8.
Add a check in nego_ff to ensure the connection is not on error. If it
is, fast-forward is disabled to prevent unnecessary sending. If
snd_buf is later called, the stconn will be notified of the error to
interrupt the stream.
This check is necessary to ensure snd_buf and nego_ff are consistent.
Note that previously, if fast-forward was conducted even on connection
error, no sending would occur as qcc_io_send() also checks these flags.
However, there is a risk that the stconn is never notified of the error
status, thus it is considered a bug.
Its impact is minimal for now as fast-forward is disabled by default on
QUIC. Fixing it should make it possible to reactivate fast-forward soon.
This should be backported up to 2.9.
Previously, if a snd_buf operation was conducted while the QCS was already
locally closed, the input buffer was silently dropped. This situation
could happen if a RESET_STREAM was emitted but its emission was not reported
to the stream layer. Silently resetting the buffer ensures the QUIC MUX
remains compliant with RFC 9000, which forbids emission after RESET_STREAM.
Since the previous commit, it is now ensured that RESET_STREAM sending will
always be reported to the stream layer. Thus, there is no need anymore to
silently reset the buffer. A BUG_ON() statement is added to ensure this
assumption will remain valid.
The new code is deemed cleaner as it does not hide a missing error
notification on the stconn-layer. Previously, if an error was missing,
sending would continue unnecessarily with a false success status
reported for the stream.
Note that the BUG_ON() statement was also added into the nego_ff callback.
This is necessary to ensure both sending paths remain consistent.
This patch is labelled as MEDIUM as issues were already encountered in the
snd_buf/nego_ff implementation and it's not easy to cover all occurrences
during tests. If the BUG_ON() is triggered without any apparent
stream-layer issue, this commit should be reverted.
On RESET_STREAM emission, the stream Tx channel is closed. This event
must be reported to the stream-conn layer to interrupt future send
operations.
Previously, se_fl_set_error() was manually invoked before/after
qcc_reset_stream(). Change this by moving the se_fl_set_error() invocation
into the latter. This ensures that the notification won't be forgotten, most
notably in the HTTP/3 layer.
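Schematically (assuming the QCS exposes its stream endpoint descriptor as
<sd>, as in recent versions):

  void qcc_reset_stream(struct qcs *qcs, int err)
  {
      /* ... schedule RESET_STREAM emission with code <err> ... */

      /* new: notify the stream layer here so callers cannot forget it */
      if (qcs->sd)
          se_fl_set_error(qcs->sd);
  }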
In most cases, behavior should be identical, as both functions were always
called together whenever necessary. However, there is one exception
which could cause a RESET_STREAM emission without error notification:
this happens on an H3 trailers parsing error. All other H3 errors happen
before the stream-layer creation, and thus the error is notified on
stream creation. This regression was caused by the following patch:
152beeec34
MINOR: mux-quic: report error on stream-endpoint earlier
Thus it should be backported up to 2.7.
Note that the case described above did not cause any crash or protocol
error. This is because, currently, the MUX QUIC snd_buf operation silently
resets the buffer on transmission if the QCS is already closed locally. This
will however be removed in a future commit, so the current patch is necessary
to prevent an invalid behavior.
All map_*_ converters now have an additional output type: key. Such
converters will return the matched entry's key (as found in the map file)
as a string instead of the value.
Consider this example map file:
|example.com value1
|haproxy value2
With the above map file:
str(test.example.com/url),map_dom_key(file.map) will return "example.com"
str(running haproxy),map_sub_key(file.map) will return "haproxy"
This should address GH #1446.
This patch adds support for an optional ptr argument (0xffff form) instead
of the key argument to match against existing stick table entries, ie: if
the key is empty or cannot be matched on the CLI due to incompatible
characters.
Lookup is performed using a linear search, so it will be slower than the
key search, which relies on an eb tree lookup.
Example:
set table mytable key mykey data.gpc0 1
show table mytable
> 0x7fbd00032bd8: key=mykey use=0 exp=86373242 shard=0 gpc0=1
clear table mytable ptr 0x7fbd00032bd8
This patch depends on:
- "MINOR: stktable: add table_process_entry helper function"
It should solve GH #2118
Only keep key-related logic in table_process_entry_per_key() function,
and then use table_process_entry() function that takes an entry pointer
as argument to process the entry.
Similarly to the previous commit, we get rid of an unused peer member.
peer->addr was only used to save a copy of the server's addr at parsing
time. But instead of relying on an intermediate variable, we can actually
use the server's address directly when initiating the peer session.
As with other streams created from server's settings (tcp/http, log, ring),
we should rely on srv->svc_port for the port part of the address. This
shouldn't change anything for peers since the address is fully resolved
at parsing time and runtime changes are not supported, but this should
help to make the code future-proof.
peer->proto and peer->xprt struct members are now pure legacy: they are
only set during parsing but never used afterwards.
This is due to commit 02efedac ("MINOR: peers: now remove the remote
connection setup code") which made some cleanup in the past, but the
now-unused proto and xprt members were probably left behind by mistake.
Since we have no valid use for them, we remove them.
Also, peer_xprt() helper function was removed since it was related to
peer->xprt struct member.
Historically, we used the internal peer proxy as stream target, because
then we only cared about initiating a basic tcp connection with the
endpoint, and relying on parent proxy settings was enough.
But later, we introduced the possibility to connect to an SSL peer by
taking the server's SSL parameters into account. This was done in commit
1055e687 ("MINOR: peers: Make outgoing connection to SSL/TLS peers work.")
However, the above commit introduced an ambiguity:
the peer_session_target() function was introduced, and it returns
either the peers proxy's object or the current server's object
depending on whether SSL is configured or not.
While this works fine to ensure proper SSL handling while staying
conservative with the historical behavior, it causes other server
transport-related settings to only work when SSL settings are provided,
which is quite debatable.
Indeed, while we're at it, why not always use the server's object as
the stream target, to ensure all transport-related options are properly
handled? Moreover, the peers documentation says this:
... "support for all "server" parameters found in 5.2 paragraph that
are related to transport settings" ...
To remove the ambiguity and fully comply with the documentation, we make
peer_session_target() always return the server's object.
This was the last missing bit from cd994407a ("BUG/MAJOR: server/addr:
fix a race during server addr:svc_port updates")
Indeed, despite the fix, svc_port updates from resolvers were still
directly performed on the server's struct.
Now they make proper use of the server_set_inetaddr() function so the port
change (+ optional addr change with AR) will be propagated atomically.
This patch depends on:
- "MINOR: server: ensure connection cleanup on server addr changes"
- "CLEANUP: server/event_hdl: remove purge_conn hint in INETADDR event"
- "MEDIUM: server: merge srv_update_addr() and srv_update_addr_port() logic"
- "MEDIUM: server: make server_set_inetaddr() updater serializable"
- "MINOR: server/event_hdl: expose updater info through INETADDR event"
- "MINOR: server: add dns hint in server_inetaddr_updater struct"
- "MEDIUM: server/dns: clear RMAINT when addr resolves again"
While it could be backported to 2.9 with cd994407a ("BUG/MAJOR:
server/addr: fix a race during server addr:svc_port updates") to ensure
addr and svc_port updates performed by the resolver's code comply with the
API taking care of pushing the update (and thus avoid any race), some
patch dependencies are quite sensitive so it's probably best to avoid
backporting for no good reason, or at least to wait for it to be considered
stable to prevent any breakages.
As seen before, server's addr and svc_port should not be updated directly
during runtime, because even if the update is performed under the lock,
some competing threads might be reading ->addr and ->svc_port without
the lock because they simply cannot afford it.
To prevent races with such competing threads, server's addr and port
should only be updated using server_set_inetaddr() function or similar.
This patch depends on:
- "MINOR: server: ensure connection cleanup on server addr changes"
- "CLEANUP: server/event_hdl: remove purge_conn hint in INETADDR event"
- "MEDIUM: server: merge srv_update_addr() and srv_update_addr_port() logic"
- "MEDIUM: server: make server_set_inetaddr() updater serializable"
- "MINOR: server/event_hdl: expose updater info through INETADDR event"
- "MINOR: server: add dns hint in server_inetaddr_updater struct"
- "MEDIUM: server/dns: clear RMAINT when addr resolves again"
While it could be backported to 2.9 with cd994407a ("BUG/MAJOR:
server/addr: fix a race during server addr:svc_port updates") to ensure
the addr and svc_port reset performed by the resolver's code complies with
the API taking care of pushing the update (and thus avoid any race), some
patch dependencies are quite sensitive so it's probably best to avoid
backporting for no good reason, or at least to wait for it to be considered
stable to prevent any breakages.
snr_update_srv_status() and srvrq_update_srv_status() will both set or
clear the server RMAINT state depending on the result of the current DNS
resolution.
This used to work pretty well in the past, but now that addr:svc_port
changes are applied atomically through a dedicated task, the change is
performed asynchronously, so this can cause some flapping issues if the
server is put out of maintenance while the server's address is still
unassigned.
To prevent errors, the resolver's code is now only allowed to put the
server under maintenance, not to remove it from maintenance:
the decision to remove a server from maintenance is made by the task
responsible for updating the server's addr: if the addr resolves again
thanks to a valid DNS resolution and the server was previously under
RMAINT, then it is cleared from the RMAINT state.
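In other words, leaving maintenance now happens on the addr-update side,
roughly like this (simplified sketch using the usual RMAINT admin flag
helpers):

  /* sketch: inside the task applying the new address/port */
  if (addr_is_valid && (srv->cur_admin & SRV_ADMF_RMAINT))
      srv_clr_admin_flag(srv, SRV_ADMF_RMAINT); /* leave resolution maintenance */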
srvrq_update_srv_status() was renamed srvrq_set_srv_down(), since it is
only called to put the server in maintenance as a result of a failing
SRV entry.
snr_update_srv_status() was renamed snr_set_srv_down() and slightly
modified so that it only takes care of putting the server under
maintenance when needed.
The CLI command "set server x/y addr" does not need to remove the RMAINT
flag anymore.
server_set_inetaddr() updater argument is a simple char * string
containing information about the caller responsible for the update.
In this patch, we try to make this argument serializable, that is, make
it so that we can easily export it without having to keep the original
pointer passed by the caller or having to work with strings of variable
lengths.
This was a prerequisite for exposing more updater information through
SERVER_INETADDR event (upcoming patch).
Static strings were simply mapped to a fixed ID that can be converted back
to a string when needed using server_inetaddr_updater_by_to_str(). One
special case was made for the SERVER_INETADDR_UPDATER_DNS_RESOLVER updater
since in this case the updater hint has to be generated from the
corresponding resolver id / nameserver id combination. This was achieved
by saving the nameserver id within the updater struct. Knowing that the
resolver id can be guessed from the server struct directly, it was not
exposed through the updater struct.
This patch depends on:
- "MINOR: resolvers: add unique numeric id to nameservers"
No functional change should be expected.
When we want to avoid keeping pointers to a nameserver struct, it's not
always convenient to refer to a nameserver using its text-based unique
identifier, since it is not limited in length and thus cannot be serialized
and deserialized safely.
To address this limitation, we add a new ->puid member in dns_nameserver
struct which is a parent-unique numeric value that can be used to refer
to the dns nameserver within its parent resolver context.
To achieve this, we reused the resolver->nb_nameserver member, which wasn't
used. Each time we add a new nameserver to a resolver, we set ns->puid to
the current number of nameservers within the resolver and we increment
this number right away.
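The assignment itself is straightforward (illustrative sketch of the logic
described above; field names follow the description and may differ slightly
from the actual structs):

  /* when attaching a new nameserver to its parent resolver */
  ns->puid = resolver->nb_nameserver;  /* 0, 1, 2, ... within this resolver */
  resolver->nb_nameserver++;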
A public helper function, find_nameserver_by_resolvers_and_id(), was added
to help retrieve a nameserver pointer from a (resolver, nameserver puid)
combination.
dns_dgram_init() function prototype was found in both resolvers and dns
header files, but it should belong to the dns header file, so the
duplicate entry was simply removed.
server_parse_addr_change_request() was completely replaced by the newer
srv_update_addr_port() function. Considering the function doesn't offer
any useful feature that srv_update_addr_port() couldn't provide, we simply
remove it.
Both functions perform similar tasks, except that the _port()
version does a bit more work.
In this patch, we add the server_set_inetaddr() function that works like
srv_update_addr_port() but takes parsed inputs instead of raw
strings as arguments.
Then, server_set_inetaddr() is used as the underlying helper function for
both srv_update_addr() and srv_update_addr_port() to make them easier
to maintain.
Also, helper functions were added:
- server_set_inetaddr_warn() -> same as server_set_inetaddr() but reports
a warning on updates.
- server_get_inetaddr() -> fills a struct server_inetaddr from srv
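As an illustration, a caller performing a runtime update could then follow
this pattern (hypothetical shape; the exact prototypes are the ones
introduced by this series):

  struct server_inetaddr inetaddr;

  server_get_inetaddr(srv, &inetaddr);      /* snapshot current addr:svc_port */
  /* ... modify inetaddr as needed ... */
  server_set_inetaddr_warn(srv, &inetaddr, updater); /* apply + warn on change */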
Since the feedback message generation part was slightly reworked, some
minor changes in the way addr:svc_port updates are reported in the logs
or CLI messages should be expected (no loss of information though).
Previously, in srv_update_addr_port(), we forced connection cleanup on
server changes.
This was done in 6318d33ce ("BUG/MEDIUM: connections: force connections
cleanup on server changes").
However, there is no reason we shouldn't have done the same in the
srv_update_addr() function, because the end goal is the same: perform
runtime changes on the server's address.
The purge_conn hint propagated through the INETADDR server event was
simply there to keep the original behavior (only purge the connection
for events originating from srv_update_addr_port()), but to ensure the
address change is handled the same way for both code paths, we simply
ignore this hint.
Server addr:svc_port updates during runtime might set or clear the
SRV_F_MAPPORTS flag. Unfortunately, the flag update is still directly
performed by the srv_update_addr_port() function while the addr:svc_port
update is being scheduled for atomic update. Given that existing readers
don't take the server's lock to read addr:svc_port, they also check the
SRV_F_MAPPORTS flag right after, without the lock.
So we could cause the readers to incorrectly interpret the svc_port from
the server struct because the mapport information is not published
atomically, resulting in inconsistencies between svc_port and the mapport
flag (the MAPPORTS flag causes svc_port to be used differently by the reader).
To fix this, we publish the mapport information within the INETADDR server
event and we let the task responsible for updating the server's addr and
port set or clear the flag depending on the mapport hint.
This patch depends on:
- MINOR: server/event_hdl: add server_inetaddr struct to facilitate event data usage
- MINOR: server/event_hdl: update _srv_event_hdl_prepare_inetaddr prototype
This should be backported in 2.9 with 683b2ae01 ("MINOR: server/event_hdl:
add SERVER_INETADDR event")
Slightly change the _srv_event_hdl_prepare_inetaddr() function prototype to
reduce the number of input arguments by learning some settings directly from
the server. We also take this opportunity to make the function static inline
since it's relatively simple and not meant to be used directly.
event_hdl_cb_data_server_inetaddr struct had some anonymous structs
defined in it, making it impossible to pass as a function argument and
harder to maintain since changes must be performed at multiple places
at once. So instead we define a storage struct named server_inetaddr
that helps to save addr:port server information in INET context.
4e5e2664 ("MINOR: proxy: add findserver_unique_id() and findserver_unique_name()")
added findserver_unique_id() and findserver_unique_name() functions that
were inspired from the historical findserver() function, so unfortunately
they don't perform well when used on large backend farms because they scan
the whole server list linearly.
I was about to provide a patch to optimize such functions when I stumbled
on Baptiste's work:
19a106d24 ("MINOR: server: server_find functions: id, name, best_match")
It turns out Baptiste had already implemented helper functions to supersede
the unoptimized findserver() function (at least at runtime, once servers
have been assigned their final IDs and inserted into the lookup trees): they
offer more matching options and rely on eb lookups, so they are much more
suitable for fast queries. I don't know how I missed that, but they are a
perfect base for the server rid matching functions.
So in this patch, we essentially revert 4e5e2664 to provide the optimized
equivalent functions named server_find_by_id_unique() and
server_find_by_name_unique(), then we force existing findserver_unique_*()
callers to switch to the new functions.
This patch depends on:
- "OPTIM: server: eb lookup for server_find_by_name()"
This could be backported up to 2.8.
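Conceptually, the rid-aware variants become thin wrappers around the
optimized helpers, e.g. (sketch):

  /* match by name first, then check the revision id (rid) */
  struct server *server_find_by_name_unique(struct proxy *bk, const char *name,
                                            uint32_t rid)
  {
      struct server *srv = server_find_by_name(bk, name);

      if (srv && srv->rid != rid)
          srv = NULL;   /* same name but different generation: no match */
      return srv;
  }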
server_find_by_name() function was added in 19a106d24 ("MINOR: server:
server_find functions: id, name, best_match").
At that time, only the used_server_id proxy tree was available, thus the
name lookup was performed as a linear search.
However, used_server_name proxy tree was added in 84d6046a ("MINOR: proxy:
Add a "server by name" tree to proxy."), so we may safely rely on it to
perform server name lookups now. This will hopefully make the function
quite faster, especially when performing lookups in huge backend farms.
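With the used_server_name tree, the lookup becomes an indirect-string ebtree
search, along these lines (sketch; assumes servers are inserted in that tree
by name):

  /* eb-tree based name lookup in the proxy's server-name tree */
  struct ebpt_node *node;

  node = ebis_lookup(&bk->conf.used_server_name, name);
  if (node)
      return container_of(node, struct server, conf.name);
  return NULL;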
Currently, we have a check in proxy_cfg_ensure_no_http() that generates a
warning if the monitor-uri is configured on a proxy that doesn't have
mode HTTP enabled.
However, when we take a look at the monitor-uri implementation, it's not
100% correct. Indeed, despite the warning message, the directive will
still be evaluated when an HTTP upgrade occurs from a TCP frontend.
Thus the warning is misleading.
To make the message comply with the actual behavior, the check was
moved alongside other checks that accept both native HTTP mode and HTTP
upgrades in cfgparse.c.
When a TCP frontend uses an HTTP backend, the stream is automatically
upgraded and it results in a similar behavior as if a switch-mode http
rule was evaluated since stream_set_http_mode() gets called in both
situations and minimal HTTP analyzers are set.
In the current implementation, some postparsing checks generate
errors or warnings when the frontend is in TCP mode with some HTTP options
set and no upgrade is expected (no switch-mode http rule). But as you can
guess, this unfortunately leads to issues when such HTTP-only options are
used in a frontend that has implicit switching rules (that is, when the
frontend uses an HTTP backend for example), because in this case the
PR_O_HTTP_UPG flag will not be set, so the postparsing checks will consider
that some options are not relevant and will raise some warnings.
Consider the following example:
backend back
mode http
server s1 git.haproxy.org:80
frontend front
mode tcp
bind localhost:8080
http-request set-var(txn.test) str(TRUE),debug(WORKING,stderr)
use_backend back
By starting an haproxy instance with the above example conf, we end up
having this warning:
[WARNING] (400280) : config : 'http-request' rules ignored for frontend 'front' as they require HTTP mode.
However, by making a request on the frontend, we notice that the request
rules are still executed, and that's because the stream is effectively
upgraded as a result of an implicit upgrade:
[debug] WORKING: type=str <TRUE>
So this confirms the previous description: since implicit and explicit
upgrades result in approximately the same behavior on the frontend side,
we should consider them both when doing postparsing checks.
This is what we try to address in the following commit: the PR_O_HTTP_UPG
flag is now more generic in the sense that it refers to either implicit
(through default_backend or use_backend rules) or explicit (switch-mode
rules) upgrades. Indeed, every time an HTTP or dynamic backend (where the
mode cannot be assumed during parsing) is encountered in a default_backend
directive or in use_backend rules, we explicitly set the upgrade flag
so that further checks that depend on the proxy being in an HTTP context
don't report false warnings.
Consider the following configuration:
backend back
mode http
frontend front
mode tcp
bind localhost:8080
stats enable
stats uri /stats
tcp-request content switch-mode http if FALSE
use_backend back
Firing a request to /stats on the haproxy process started with the above
configuration will cause a segfault in http_handle_stats().
The cause of the crash is that in this case, the upgrade doesn't simply
switch to HTTP mode, but also changes the stream backend (switching from
the frontend itself to the targeted HTTP backend).
However, there is an inconsistency in the stats logic between the check
for the stats URI and the actual handling of the stats page.
Indeed, http_stats_check_uri() checks uri parameters from the proxy
undergoing the http analyzers, whereas http_handle_stats() uses s->be
instead.
During stream analysis, from the frontend's perspective, s->be defaults to
the frontend. But if the frontend is in TCP mode and the stream is
upgraded to HTTP via backend switching rules, then s->be will be assigned
to the actual HTTP-capable backend in stream_set_backend().
What this means is that when the http analyzer first checks whether the
current URI matches the one from the "stats uri" directive, it will check
against the "stats uri" directive from the frontend; but later, since the
stats handler reads the uri from s->be, it will actually use the value from
the backend, so the previous safety checks are rendered useless, resulting
in unexpected behavior. (In our test case, since the backend didn't define
"stats uri", it is set to NULL, and http_handle_stats() dereferences it.)
To fix this, we should ensure that prechecks and actual stats processing
always rely on the same proxy source for stats config directives.
This is what is done in this patch, thanks to the previous commit, since
we can make sure that the stat applet will use ->http_px as its parent
proxy. So here we simply propagate the current proxy being analyzed
through all the stats processing functions.
This patch depends on:
- MINOR: stats: store the parent proxy in stats ctx (http)
It should be backported up to 2.4.
For 2.4: the fix is less trivial since stats ctx was directly stored
within the applet struct at that time, so this alternative patch must be
used instead (without "MINOR: stats: store the parent proxy in stats ctx
(http)" dependency):
diff --git a/include/haproxy/applet-t.h b/include/haproxy/applet-t.h
index 014e01ed9..1d9a63359 100644
--- a/include/haproxy/applet-t.h
+++ b/include/haproxy/applet-t.h
@@ -121,6 +121,7 @@ struct appctx {
* keep the grouped together and avoid adding new ones.
*/
struct {
+ struct proxy *http_px; /* parent proxy of the current applet (only relevant for HTTP applet) */
void *obj1; /* context pointer used in stats dump */
void *obj2; /* context pointer used in stats dump */
uint32_t domain; /* set the stats to used, for now only proxy stats are supported */
diff --git a/src/http_ana.c b/src/http_ana.c
index b557da89d..1025d7711 100644
--- a/src/http_ana.c
+++ b/src/http_ana.c
@@ -63,8 +63,8 @@ static enum rule_result http_req_restrict_header_names(struct stream *s, struct
static void http_manage_client_side_cookies(struct stream *s, struct channel *req);
static void http_manage_server_side_cookies(struct stream *s, struct channel *res);
-static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *backend);
-static int http_handle_stats(struct stream *s, struct channel *req);
+static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *px);
+static int http_handle_stats(struct stream *s, struct channel *req, struct proxy *px);
static int http_handle_expect_hdr(struct stream *s, struct htx *htx, struct http_msg *msg);
static int http_reply_100_continue(struct stream *s);
@@ -428,7 +428,7 @@ int http_process_req_common(struct stream *s, struct channel *req, int an_bit, s
}
/* parse the whole stats request and extract the relevant information */
- http_handle_stats(s, req);
+ http_handle_stats(s, req, px);
verdict = http_req_get_intercept_rule(px, &px->uri_auth->http_req_rules, s);
/* not all actions implemented: deny, allow, auth */
@@ -3959,16 +3959,16 @@ void http_check_response_for_cacheability(struct stream *s, struct channel *res)
/*
* In a GET, HEAD or POST request, check if the requested URI matches the stats uri
- * for the current backend.
+ * for the current proxy.
*
* It is assumed that the request is either a HEAD, GET, or POST and that the
* uri_auth field is valid.
*
* Returns 1 if stats should be provided, otherwise 0.
*/
-static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *backend)
+static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct proxy *px)
{
- struct uri_auth *uri_auth = backend->uri_auth;
+ struct uri_auth *uri_auth = px->uri_auth;
struct htx *htx;
struct htx_sl *sl;
struct ist uri;
@@ -4003,14 +4003,14 @@ static int http_stats_check_uri(struct stream *s, struct http_txn *txn, struct p
* s->target which is supposed to already point to the stats applet. The caller
* is expected to have already assigned an appctx to the stream.
*/
-static int http_handle_stats(struct stream *s, struct channel *req)
+static int http_handle_stats(struct stream *s, struct channel *req, struct proxy *px)
{
struct stats_admin_rule *stats_admin_rule;
struct stream_interface *si = &s->si[1];
struct session *sess = s->sess;
struct http_txn *txn = s->txn;
struct http_msg *msg = &txn->req;
- struct uri_auth *uri_auth = s->be->uri_auth;
+ struct uri_auth *uri_auth = px->uri_auth;
const char *h, *lookup, *end;
struct appctx *appctx;
struct htx *htx;
@@ -4020,6 +4020,7 @@ static int http_handle_stats(struct stream *s, struct channel *req)
memset(&appctx->ctx.stats, 0, sizeof(appctx->ctx.stats));
appctx->st1 = appctx->st2 = 0;
appctx->ctx.stats.st_code = STAT_STATUS_INIT;
+ appctx->ctx.stats.http_px = px;
appctx->ctx.stats.flags |= uri_auth->flags;
appctx->ctx.stats.flags |= STAT_FMT_HTML; /* assume HTML mode by default */
if ((msg->flags & HTTP_MSGF_VER_11) && (txn->meth != HTTP_METH_HEAD))
diff --git a/src/stats.c b/src/stats.c
index d1f3daa98..1f0b2bff7 100644
--- a/src/stats.c
+++ b/src/stats.c
@@ -2863,9 +2863,9 @@ static int stats_dump_be_stats(struct stream_interface *si, struct proxy *px)
return stats_dump_one_line(stats, stats_count, appctx);
}
-/* Dumps the HTML table header for proxy <px> to the trash for and uses the state from
- * stream interface <si> and per-uri parameters <uri>. The caller is responsible
- * for clearing the trash if needed.
+/* Dumps the HTML table header for proxy <px> to the trash and uses the state from
+ * stream interface <si>. The caller is responsible for clearing the trash if
+ * needed.
*/
static void stats_dump_html_px_hdr(struct stream_interface *si, struct proxy *px)
{
@@ -3015,17 +3015,19 @@ static void stats_dump_html_px_end(struct stream_interface *si, struct proxy *px
* input buffer. Returns 0 if it had to stop dumping data because of lack of
* buffer space, or non-zero if everything completed. This function is used
* both by the CLI and the HTTP entry points, and is able to dump the output
- * in HTML or CSV formats. If the later, <uri> must be NULL.
+ * in HTML or CSV formats.
*/
int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
- struct proxy *px, struct uri_auth *uri)
+ struct proxy *px)
{
struct appctx *appctx = __objt_appctx(si->end);
- struct stream *s = si_strm(si);
struct channel *rep = si_ic(si);
struct server *sv, *svs; /* server and server-state, server-state=server or server->track */
struct listener *l;
+ struct uri_auth *uri = NULL;
+ if (appctx->ctx.stats.http_px)
+ uri = appctx->ctx.stats.http_px->uri_auth;
chunk_reset(&trash);
switch (appctx->ctx.stats.px_st) {
@@ -3045,7 +3047,7 @@ int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
break;
/* match '.' which means 'self' proxy */
- if (strcmp(scope->px_id, ".") == 0 && px == s->be)
+ if (strcmp(scope->px_id, ".") == 0 && px == appctx->ctx.stats.http_px)
break;
scope = scope->next;
}
@@ -3227,10 +3229,16 @@ int stats_dump_proxy_to_buffer(struct stream_interface *si, struct htx *htx,
}
/* Dumps the HTTP stats head block to the trash for and uses the per-uri
- * parameters <uri>. The caller is responsible for clearing the trash if needed.
+ * parameters from the parent proxy. The caller is responsible for clearing
+ * the trash if needed.
*/
-static void stats_dump_html_head(struct appctx *appctx, struct uri_auth *uri)
+static void stats_dump_html_head(struct appctx *appctx)
{
+ struct uri_auth *uri;
+
+ BUG_ON(!appctx->ctx.stats.http_px);
+ uri = appctx->ctx.stats.http_px->uri_auth;
+
/* WARNING! This must fit in the first buffer !!! */
chunk_appendf(&trash,
"<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\n"
@@ -3345,17 +3353,21 @@ static void stats_dump_html_head(struct appctx *appctx, struct uri_auth *uri)
}
/* Dumps the HTML stats information block to the trash for and uses the state from
- * stream interface <si> and per-uri parameters <uri>. The caller is responsible
- * for clearing the trash if needed.
+ * stream interface <si> and per-uri parameters from the parent proxy. The caller
+ * is responsible for clearing the trash if needed.
*/
-static void stats_dump_html_info(struct stream_interface *si, struct uri_auth *uri)
+static void stats_dump_html_info(struct stream_interface *si)
{
struct appctx *appctx = __objt_appctx(si->end);
unsigned int up = (now.tv_sec - start_date.tv_sec);
char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
const char *scope_ptr = stats_scope_ptr(appctx, si);
+ struct uri_auth *uri;
unsigned long long bps = (unsigned long long)read_freq_ctr(&global.out_32bps) * 32;
+ BUG_ON(!appctx->ctx.stats.http_px);
+ uri = appctx->ctx.stats.http_px->uri_auth;
+
/* Turn the bytes per second to bits per second and take care of the
* usual ethernet overhead in order to help figure how far we are from
* interface saturation since it's the only case which usually matters.
@@ -3629,8 +3641,7 @@ static void stats_dump_json_end()
* a pointer to the current server/listener.
*/
static int stats_dump_proxies(struct stream_interface *si,
- struct htx *htx,
- struct uri_auth *uri)
+ struct htx *htx)
{
struct appctx *appctx = __objt_appctx(si->end);
struct channel *rep = si_ic(si);
@@ -3650,7 +3661,7 @@ static int stats_dump_proxies(struct stream_interface *si,
px = appctx->ctx.stats.obj1;
/* skip the disabled proxies, global frontend and non-networked ones */
if (!px->disabled && px->uuid > 0 && (px->cap & (PR_CAP_FE | PR_CAP_BE))) {
- if (stats_dump_proxy_to_buffer(si, htx, px, uri) == 0)
+ if (stats_dump_proxy_to_buffer(si, htx, px) == 0)
return 0;
}
@@ -3666,14 +3677,12 @@ static int stats_dump_proxies(struct stream_interface *si,
}
/* This function dumps statistics onto the stream interface's read buffer in
- * either CSV or HTML format. <uri> contains some HTML-specific parameters that
- * are ignored for CSV format (hence <uri> may be NULL there). It returns 0 if
- * it had to stop writing data and an I/O is needed, 1 if the dump is finished
- * and the stream must be closed, or -1 in case of any error. This function is
- * used by both the CLI and the HTTP handlers.
+ * either CSV or HTML format. It returns 0 if it had to stop writing data and
+ * an I/O is needed, 1 if the dump is finished and the stream must be closed,
+ * or -1 in case of any error. This function is used by both the CLI and the
+ * HTTP handlers.
*/
-static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *htx,
- struct uri_auth *uri)
+static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *htx)
{
struct appctx *appctx = __objt_appctx(si->end);
struct channel *rep = si_ic(si);
@@ -3688,7 +3697,7 @@ static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *ht
case STAT_ST_HEAD:
if (appctx->ctx.stats.flags & STAT_FMT_HTML)
- stats_dump_html_head(appctx, uri);
+ stats_dump_html_head(appctx);
else if (appctx->ctx.stats.flags & STAT_JSON_SCHM)
stats_dump_json_schema(&trash);
else if (appctx->ctx.stats.flags & STAT_FMT_JSON)
@@ -3708,7 +3717,7 @@ static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *ht
case STAT_ST_INFO:
if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
- stats_dump_html_info(si, uri);
+ stats_dump_html_info(si);
if (!stats_putchk(rep, htx, &trash))
goto full;
}
@@ -3733,7 +3742,7 @@ static int stats_dump_stat_to_buffer(struct stream_interface *si, struct htx *ht
case STATS_DOMAIN_PROXY:
default:
/* dump proxies */
- if (!stats_dump_proxies(si, htx, uri))
+ if (!stats_dump_proxies(si, htx))
return 0;
break;
}
@@ -4112,11 +4121,14 @@ static int stats_process_http_post(struct stream_interface *si)
static int stats_send_http_headers(struct stream_interface *si, struct htx *htx)
{
struct stream *s = si_strm(si);
- struct uri_auth *uri = s->be->uri_auth;
+ struct uri_auth *uri;
struct appctx *appctx = __objt_appctx(si->end);
struct htx_sl *sl;
unsigned int flags;
+ BUG_ON(!appctx->ctx.stats.http_px);
+ uri = appctx->ctx.stats.http_px->uri_auth;
+
flags = (HTX_SL_F_IS_RESP|HTX_SL_F_VER_11|HTX_SL_F_XFER_ENC|HTX_SL_F_XFER_LEN|HTX_SL_F_CHNK);
sl = htx_add_stline(htx, HTX_BLK_RES_SL, flags, ist("HTTP/1.1"), ist("200"), ist("OK"));
if (!sl)
@@ -4166,11 +4178,14 @@ static int stats_send_http_redirect(struct stream_interface *si, struct htx *htx
{
char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
struct stream *s = si_strm(si);
- struct uri_auth *uri = s->be->uri_auth;
+ struct uri_auth *uri;
struct appctx *appctx = __objt_appctx(si->end);
struct htx_sl *sl;
unsigned int flags;
+ BUG_ON(!appctx->ctx.stats.http_px);
+ uri = appctx->ctx.stats.http_px->uri_auth;
+
/* scope_txt = search pattern + search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
scope_txt[0] = 0;
if (appctx->ctx.stats.scope_len) {
@@ -4263,7 +4278,7 @@ static void http_stats_io_handler(struct appctx *appctx)
}
if (appctx->st0 == STAT_HTTP_DUMP) {
- if (stats_dump_stat_to_buffer(si, res_htx, s->be->uri_auth))
+ if (stats_dump_stat_to_buffer(si, res_htx))
appctx->st0 = STAT_HTTP_DONE;
}
@@ -4888,6 +4903,7 @@ static int cli_parse_show_stat(char **args, char *payload, struct appctx *appctx
appctx->ctx.stats.scope_str = 0;
appctx->ctx.stats.scope_len = 0;
+ appctx->ctx.stats.http_px = NULL; // not under http context
appctx->ctx.stats.flags = STAT_SHNODE | STAT_SHDESC;
if ((strm_li(si_strm(appctx->owner))->bind_conf->level & ACCESS_LVL_MASK) >= ACCESS_LVL_OPER)
@@ -4954,7 +4970,7 @@ static int cli_io_handler_dump_info(struct appctx *appctx)
*/
static int cli_io_handler_dump_stat(struct appctx *appctx)
{
- return stats_dump_stat_to_buffer(appctx->owner, NULL, NULL);
+ return stats_dump_stat_to_buffer(appctx->owner, NULL);
}
static int cli_io_handler_dump_json_schema(struct appctx *appctx)