Try to cover some common use-cases for the "log-profile" feature. The tests
mainly focus on log-profile section declaration and on testing the behavior
of the logformat / log-tag overriding capabilities.
For now, the use of log-profiles is somewhat limited because we lack
the ability to explicitly trigger the log building process at specific
steps during stream handling. Indeed, for now we rely on
"option logasap" and proxy log-format string content "hacks" to force
log emission at specific steps, so more tests should be added over
time, as new mechanisms allowing the emission of logs at the expected
processing steps are added, or as new keywords are added to the
log-profile section.
This test requires versions >= 3.0-dev1
Now that log-profile parsing logic has been implemented in "MINOR: log:
add log-profile parsing logic" and is actually effective since "MEDIUM:
log: handle log-profile in process_send_log()", let's document the feature
and add some examples.
A log-profile section is declared like this:

  log-profile myprof
    log-tag "custom-tag"
    on error format "%ci: error"
    on any format "(custom httplog) ${HAPROXY_HTTP_LOG_FMT}" sd "[exampleSDID@1234 step=\"accept\" id=\"%ID\"]"

(check out the documentation for the full list of options; some options
are only relevant in specific contexts)

And used this way (from usual "log" directive lines):

  global
    log stdout format rfc5424 profile myprof local0
--------------
For now, the use of log-profiles is somewhat limited because we lack
the ability to explicitly trigger the log building process at specific
steps during stream handling, but the feature should gain more traction
over time as it evolves and new mechanisms allowing the emission of
logs at the expected processing steps are added.
It should partially fix GH #401
In the previous commit we implemented the log-profile parsing logic. Now
let's actually make use of the log-profile information available from the
logger struct to decide whether we need to rebuild the logline in
process_send_log() according to the log-profile settings. Nothing is done
if the logger didn't specify a log-profile.
Now that we have log-profile parsing done, let's prepare for runtime
log-profile handling by adding the string buffer required to rebuild
log strings using sess_build_logline() on the fly without altering
the regular logline content.
Indeed, since a different log-profile may (or may not) be specified for
each logger, we must keep the original string and only rebuild a custom
one when required for the current logger (according to the selected
log-profile).
This patch implements prerequisite log-profile struct and parser logic.
It has no effect during runtime for now.
Logformat expressions provided in log-profile "steps" are postchecked
during postparsing for each proxy "log" directive that makes use of a
given profile (this ensures that the logformat expressions used in the
profile are compatible with the proxy using them).
The logger struct may benefit from having a "flags" member to set or
clear different logger states. For that, we reuse an existing 4-byte
hole in the logger struct to store a 2-byte flags integer, leaving the
struct with a 2-byte hole now.
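As a rough illustration (the member names and layout below are
assumptions, not the actual struct logger definition), the idea is to
let the new flags word live inside an existing alignment hole so the
struct does not grow:

  #include <stdint.h>

  /* hypothetical layout, for illustration only: on 64-bit, a 32-bit
   * member followed by a pointer leaves a 4-byte alignment hole; a
   * 16-bit flags word fills half of it, leaving a 2-byte hole.
   */
  struct logger_sketch {
          uint32_t level;   /* 4 bytes */
          uint16_t flags;   /* reuses 2 bytes of the former 4-byte hole */
          /* 2-byte hole remains before the next 8-byte aligned member */
          void *target;     /* 8 bytes, 8-byte aligned */
  };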
Extract the sd metadata assignment from __send_log() into an inline
helper function in order to be able to use it from other functions if
needed.
Prerequisite work for log-profiles: we need to know under which proxy
context the logger is being used. When the info is not available (i.e.
global section or log-forward section), <px> is set to NULL.
Since resolve_logger() always resolves the logger target (even when an
error occurs), we must take care of freeing parsing hints because
free_logger() won't try to do it if the RESOLVED flag is set on the
target.
This isn't considered a bug because resolve_logger(), being a
postparsing check, makes haproxy exit immediately upon a fatal error
in haproxy.c, but it's better to ensure that everything will be properly
freed if we decide to perform a clean exit upon postparsing check errors
in the future.
It is no longer relevant to say that <logger> is used for implicit
settings. In fact the function resolves <logger>, but currently
mainly focuses on the logger's target. However we could extend the
function to perform additional work on the logger itself in the future.
Let's adjust the comment to prevent any confusion.
'%OG' logformat alias may be used to report the log origin (when/where)
that triggered log generation using sess_build_logline().
Possible values are:
- "sess_error": log was generated during session error handling
- "sess_killed": log was generated during session abortion (killed
embryonic session)
- "txn_accept": log was generated right after frontend conn was accepted
- "txn_request": log was generated after client request was received
- "txn_connect": log was generated after backend connection establishment
- "txn_response": log was generated during server response handling
- "txn_close": log was generated at the final txn step, before closing
- "unspec": unknown or not specified
Documentation was updated.
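For illustration, here is a hedged sketch of how such an origin value
could be mapped to the strings listed above (the enum and helper names
are made up for this example and are not haproxy's actual identifiers):

  /* illustrative mapping only, not the real implementation of %OG */
  enum log_orig_sketch {
          LOG_ORIG_UNSPEC = 0,
          LOG_ORIG_SESS_ERROR,
          LOG_ORIG_SESS_KILL,
          LOG_ORIG_TXN_ACCEPT,
          LOG_ORIG_TXN_REQUEST,
          LOG_ORIG_TXN_CONNECT,
          LOG_ORIG_TXN_RESPONSE,
          LOG_ORIG_TXN_CLOSE,
  };

  static const char *log_orig_to_str_sketch(enum log_orig_sketch orig)
  {
          switch (orig) {
          case LOG_ORIG_SESS_ERROR:   return "sess_error";
          case LOG_ORIG_SESS_KILL:    return "sess_killed";
          case LOG_ORIG_TXN_ACCEPT:   return "txn_accept";
          case LOG_ORIG_TXN_REQUEST:  return "txn_request";
          case LOG_ORIG_TXN_CONNECT:  return "txn_connect";
          case LOG_ORIG_TXN_RESPONSE: return "txn_response";
          case LOG_ORIG_TXN_CLOSE:    return "txn_close";
          default:                    return "unspec";
          }
  }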
This is another prerequisite work in preparation for log-profiles: in this
patch we make process_send_log() aware of the log origin, primarily aiming
for sess and txn logging steps such as error, accept, connect, close, as
well as relevant sess and stream pointers.
Move the embryonic session logging logic down to sess_log() in preparation
for log-profiles because then log preferences will be set per logger and
not per proxy. Indeed, as each logger may come with its own log-profile
that possibly overrides proxy logformat preferences, the check will need
to be performed at a central place by lower sending functions.
To ensure the change doesn't break existing behavior, a dedicated
sess_log_embryonic() wrapper was added and is exclusively used by
session_kill_embryonic() to indicate that a special logging logic must
be performed under sess_log().
Also, thanks to this change, log-format-sd will now be taken into account
for legacy embryonic session logging.
Rename session_build_err_string() to session_embryonic_build_legacy_err()
and add a new <out> buffer argument to the prototype. <out> will be used
as the destination for the generated string instead of implicitly relying
on the trash buffer. Finally, expose the new function through the header
file so that it becomes usable from any source file.
The function is expected to be called with a session originating from
a connection and should not be used for applets.
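A hedged sketch of what the resulting declaration could look like (the
return type and exact parameter types are assumptions, not the actual
header content):

  struct session;
  struct buffer;

  /* builds the legacy embryonic-session error string into <out> instead
   * of implicitly using the shared trash buffer (sketch only)
   */
  void session_embryonic_build_legacy_err(struct session *sess,
                                          struct buffer *out);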
This commit looks messy, but all it does is reorganize send_log() helpers
by dependency order to remove the need for forward-declaring some of them.
Also, since they're all internal helpers, let's explicitly mark them as
static to prevent any misuse.
In commit f8642ee82 ("MEDIUM: resolvers: rename dns extra counters to
resolvers extra counters"), we renamed "dns" counters to "resolvers", but
we forgot to update the documentation accordingly.
This may be backported to all stable versions.
qc_prep_pkts() is built around a double loop. First, it iterates over
every QEL instance registered for sending. The inner loop is used to
repeatedly call qc_build_pkt() with a QEL instance. If the QEL instance
has no more data to send, the next QEL entry is selected. It can also be
interrupted earlier if there is not enough room left in the send buffer.
Clarify the inner loop by using qc_may_build_pkt() directly in it,
alongside the check on the buffer room left. This function is used to
test if the QEL instance has something to send.
This should simplify send evolution, in particular GSO implementation.
Each emitted QUIC datagram is prefixed by an out-of-band header. This
header specifies the datagram length and the pointer to the first QUIC
packet instance. This header length is defined via QUIC_DGRAM_HEADLEN.
Replace every occurrence of a manually calculated header length with the
globally defined QUIC_DGRAM_HEADLEN. This should ease code maintenance
and simplify GSO implementation.
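As a hedged sketch of the idea (not necessarily the exact definition
used in the code), the header length boils down to the size of a length
field plus the size of a packet pointer:

  #include <stdint.h>

  /* out-of-band datagram header: a datagram length field followed by a
   * pointer to the first packet of the datagram (illustrative sketch)
   */
  #define QUIC_DGRAM_HEADLEN_SKETCH  (sizeof(uint16_t) + sizeof(void *))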
qc_build_pkt() error handling was difficult due to the multiple possible
error codes. Improve this by defining a proper enum to describe the
various error codes. Also clean up the ending labels inside
qc_build_pkt().
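A minimal sketch of such an enum; apart from QC_BUILD_PKT_ERR_BUFROOM
(mentioned below), the value names are illustrative and may not match
the actual code:

  enum qc_build_pkt_err_sketch {
          QC_BUILD_PKT_ERR_NONE = 0, /* packet successfully built */
          QC_BUILD_PKT_ERR_ALLOC,    /* allocation failure */
          QC_BUILD_PKT_ERR_ENCODE,   /* frame/packet encoding failure */
          QC_BUILD_PKT_ERR_BUFROOM,  /* not enough room left in Tx buffer */
  };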
Previously, packet encoding was stopped as soon as the buffer room left
was less than the UDP MTU. This is suboptimal if the next packet would
be smaller than that.
To improve this, only check if there is at least enough room for the
mandatory packet header. qc_build_pkt() is thus responsible for
returning QC_BUILD_PKT_ERR_BUFROOM as soon as the buffer room left is
insufficient, to stop packet encoding. An extra check is added to ensure
the end pointer never exceeds the buffer end.
This should not have any significant impact on performance. However, it
makes the code's intent clearer.
The API for sending has been extended to support emission on more than 2
QEL instances. However, this has rendered the PADDING emission for
INITIAL packets less predictable. Indeed, if qc_send() is used with empty
QEL instances, a padding frame may be generated before handling the last
registered QEL, which could cause unnecessary padding to be emitted.
This commit simplifies PADDING by only activating it for the last
registered QEL. This ensures that no superfluous padding is generated: if
the minimal INITIAL datagram length is already reached, padding is reset
before handling the last QEL instance.
This bug is labelled as minor as haproxy already emits big enough INITIAL
packets coalesced with HANDSHAKE ones without needing padding. This
however renders the padding code difficult to test. Thus, it may be
useful to force emission on the INITIAL QEL only, without coalescing a
HANDSHAKE packet. Here is a sample patch to reproduce it:
--- a/src/quic_conn.c
+++ b/src/quic_conn.c
@@ -794,6 +794,14 @@ struct task *quic_conn_io_cb(struct task *t, void *context, unsigned int state)
}
}
+ if (qc->iel && qel_need_sending(qc->iel, qc)) {
+ struct list empty = LIST_HEAD_INIT(empty);
+ qel_register_send(&send_list, qc->iel, &qc->iel->pktns->tx.frms);
+ if (qc->hel)
+ qel_register_send(&send_list, qc->hel, &empty);
+ qc_send(qc, 0, &send_list);
+ }
+
/* Insert each QEL into sending list if needed. */
list_for_each_entry(qel, &qc->qel_list, list) {
if (qel_need_sending(qel, qc))
This should be backported up to 3.0.
Since 2.9, it is possible to drain the request payload from the H1
multiplexer in case of an early reply. When this happens, the upper
stream is detached but the H1 stream is not destroyed. Once the whole
request is drained, the detach stage is finished: the H1 stream is
destroyed and the H1 connection is ready to be reused, if possible,
otherwise it is released.
And here is the issue. If some data of the next request are received
along with the last bytes of the drained one, parsing of the next request
is immediately started. The previous H1 stream is destroyed and a new one
is created to handle the parsing. At this stage the H1 connection may be
released, for instance because of a parsing error. This case was not
properly handled. Instead of immediately exiting the mux, it was still
possible to access the released H1 connection to refresh its timeouts,
leading to a UAF issue.
Many thanks to Annika for her invaluable help on this issue.
The patch should fix the issue #2602. It must be backported as far as 2.9.
Add a documentation about the history of the master-worker and how it
was implemented in its first version and how it is currently working.
This is a global view of the architecture, and not an exhaustive
explanation of all mechanisms.
By default, there is always at least one resolvers section, the default
one, based on "/etc/resolv.conf" content. However, it is possible to have
no resolver at all if the file is empty or if any error occurred. Errors
are silently ignored at this stage.
In that case, there was a bug in the Prometheus exporter leading to a crash
because the resolver section list is empty. An invalid resolver entity was
used. To fix the issue we must only take care to not dump resolvers metrics
when there is no resolver.
Thanks to Aurelien for having spotted the offending commit.
This patch should fix the issue #2604. It must be backported to 3.0.
To stay consistent with the work started in 54627f991 ("DOC: config: add
context hint for proxy keywords") and 3d4e1e682 ("DOC: config: add context
hint for server keywords"), we add missing context hint for "guid" (both
proxy and server) keyword and "hash-key" server keyword that were added
during 3.0 development.
This may be backported in 3.0.
"guid" proxy keyword added in da754b45 ("MINOR: proxy: implement GUID
support") was lacking the section hint in the keyword description, let's
fix that.
It could be backported in 3.0 with da754b45.
As reported by Ashley Morris, "hash-key" keyword which was introduced in
commit faa8c3e0 ("MEDIUM: lb-chash: Deterministic node hashes based on
server address") doesn't belong to proxy keywords and should be found in
5.2 "Server and default-server options" instead.
It should be backported in 3.0 with faa8c3e0
Since 7a21c3a4ef ("MAJOR: log: implement proper postparsing for logformat
expressions"), logformat expressions stored in a defaults section are not
postchecked anymore. This is because the REGISTER_POST_PROXY_CHECK() hook
only evaluates regular proxies. Because of this, proxy options which are
automatically enabled on the proxy depending on the logformat expression
features in use are not set on the default proxy, which means such options
are not passed to the regular proxies that inherit from it (the proxies
that will actually be running the logformat expression at runtime).
Because of that, a logformat expression stored inside a defaults section
and executed by a regular proxy may not behave properly. Also, since
03ca16f38b ("OPTIM: log: resolve logformat options during postparsing"),
it's even worse because logformat node options postresolving is also
skipped, which may also alter the logformat expression encoding features.
To fix the issue, let's add a special case for default proxies in
parse_logformat_string() and lf_expr_postcheck() so that default proxies
are postchecked on the fly at parsing time in a "relaxed" way, as we
cannot know whether the features involved in the logformat expression
will be compatible with the proxy actually running it, since we may have
different types of proxies inheriting from the same defaults section.
This bug was discovered while trying to address GH #2597.
It should be backported to 3.0 with 7a21c3a4ef and 03ca16f38b.
logformat_node was referenced as "node" in the error message reported
to the user, but in fact it is referred to as "item" in the user
documentation. Use "item" in the error message to better comply with
the doc.
The error message was introduced with 7a21c3a4ef ("MAJOR: log: implement
proper postparsing for logformat expressions")
When parsing a logformat expression using parse_logformat_string(), the
caller passes the proxy under which the expression is found as argument.
This information allows the logformat expression API to check if the
expression is compatible with the proxy settings.
Since 7a21c3a ("MAJOR: log: implement proper postparsing for logformat
expressions"), the proxy compatibility checks are postponed until after
the proxy is fully parsed, to ensure proxy properties are fully resolved
for consistent checks.
The way it works is that each time parse_logformat_string() is called for
a given expression and proxy, it schedules the expression for postchecking
by appending the expression to the list of pending expression checks on
the proxy (lf_checks struct). Then, when the proxy is called with the
REGISTER_POST_PROXY_CHECK() hook, it iterates over unchecked expressions
and performs the check, then removes each expression from its list.
However, I overlooked a special case: if a logformat expression is used
on a proxy that is disabled or on a default proxy, the
REGISTER_POST_PROXY_CHECK() hook is never called. Because of that, lf
expressions may still point to the proxy after the proxy is freed.
For most logformat expressions, this isn't an issue because they are
stored within the proxy itself, but this isn't the case with
{tcp,http}checks logformat expressions: during the deinit() sequence, all
proxies are first cleaned up, and only then shared checks are freed.
Because of that, the config below will trigger a UAF since 7a21c3a:
uaf.conf:

  listen dummy
    bind localhost:2222

  backend testback
    disabled
    mode http
    option httpchk
    http-check send hdr test "test"
    http-check expect status 200
haproxy -f uaf.conf -c:
==152096== Invalid write of size 8
==152096== at 0x21C317: lf_expr_deinit (log.c:3491)
==152096== by 0x2334A3: free_tcpcheck_http_hdr (tcpcheck.c:84)
==152096== by 0x2334A3: free_tcpcheck_http_hdr (tcpcheck.c:79)
==152096== by 0x2334A3: free_tcpcheck_http_hdrs (tcpcheck.c:98)
==152096== by 0x23365A: free_tcpcheck.part.0 (tcpcheck.c:130)
==152096== by 0x2338B1: free_tcpcheck (tcpcheck.c:108)
==152096== by 0x2338B1: deinit_tcpchecks (tcpcheck.c:3780)
==152096== by 0x2CF9A4: deinit (haproxy.c:2949)
==152096== by 0x2D0065: deinit_and_exit (haproxy.c:3052)
==152096== by 0x169BC0: main (haproxy.c:3996)
==152096== Address 0x52a8df8 is 6,968 bytes inside a block of size 7,168 free'd
==152096== at 0x484B27F: free (vg_replace_malloc.c:872)
==152096== by 0x2CF8AD: deinit (haproxy.c:2906)
==152096== by 0x2D0065: deinit_and_exit (haproxy.c:3052)
==152096== by 0x169BC0: main (haproxy.c:3996)
To fix the issue, let's ensure in proxy_free_common() that no unchecked
expression may still point to the proxy after it is freed, by purging
the list (DEL_INIT is used to reset list items).
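A self-contained sketch of the purge pattern (haproxy's own list macros
and the lf_expr member names are intentionally not reproduced here; this
only illustrates "detach everything before freeing the owner"):

  /* minimal intrusive list, mimicking the DEL_INIT idea */
  struct node_sketch {
          struct node_sketch *next, *prev;
  };

  static void del_init_sketch(struct node_sketch *n)
  {
          n->prev->next = n->next;
          n->next->prev = n->prev;
          n->next = n->prev = n; /* reset the item, like LIST_DEL_INIT */
  }

  static void purge_list_sketch(struct node_sketch *head)
  {
          /* detach every pending item so none keeps pointing to the
           * structure that is about to be freed
           */
          while (head->next != head)
                  del_init_sketch(head->next);
  }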
Special thanks to GH user @mhameed who filed a comprehensive issue with
all the relevant information required to reproduce the bug (see GH #2597),
after having first reported the issue on the alpine project bug tracker.
As shown by previous patch series, having to free some common proxy
struct members twice (in free_proxy() and proxy_free_defaults()) is
error-prone: we often overlook one of the two free locations when
adding new features.
To prevent such bugs from being introduced in the future, and also avoid
code duplication, we now have a proxy_free_common() function to free all
proxy struct members that are common to all proxy types (either regular or
default ones).
This should greatly improve code maintenance related to proxy freeing
logic.
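A simplified, self-contained sketch of the idea (the struct below only
lists a few of the members mentioned in the surrounding patches, and its
layout is illustrative rather than haproxy's actual struct proxy):

  #include <stdlib.h>

  struct proxy_sketch {
          char *log_tag;
          char *dyncookie_key;
          char *server_id_hdr_name;
          char *header_unique_id;
  };

  /* frees the members shared by default and regular proxies in one
   * place, so that neither free_proxy() nor proxy_free_defaults() can
   * forget one of them
   */
  static void proxy_free_common_sketch(struct proxy_sketch *px)
  {
          free(px->log_tag);            px->log_tag = NULL;
          free(px->dyncookie_key);      px->dyncookie_key = NULL;
          free(px->server_id_hdr_name); px->server_id_hdr_name = NULL;
          free(px->header_unique_id);   px->header_unique_id = NULL;
  }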
proxy header_unique_id wasn't cleaned up in proxy_free_defaults(),
resulting in a small memory leak if "unique-id-header" was used on a
default proxy section.
It may be backported to all stable versions.
proxy conn_src.iface_name was only freed in proxy_free_defaults(), whereas
proxy conn_src.bind_hdr_name was only freed in free_proxy().
Because of that, using "source usesrc hdr_ip()" in a default proxy, or
"source interface" in a regular or default proxy would cause memory leaks
during deinit.
It may be backported to all stable versions.
proxy dyncookie_key wasn't cleaned up in free_proxy(), resulting in a
small memory leak if "dynamic-cookie-key" was used on a regular or default
proxy.
It may be backported to all stable versions.
proxy check_{command,path} members (used for "external-check" feature)
weren't cleaned up in free_proxy(), resulting in a small memory leak if
"external-check command" or "external-check path" were used on a regular
or default proxy.
It may be backported to all stable versions.
proxy email-alert settings weren't cleaned up in free_proxy(), resulting
in a small memory leak if "email-alert to" or "email-alert from" were used
on a regular or default proxy.
It may be backported to all stable versions.
proxy log_tag wasn't cleaned up in free_proxy(), resulting in a small
memory leak if "log-tag" was used on a regular or default proxy.
It may be backported to all stable versions.
proxy server_id_hdr_name member (used for "http-send-name-header" option)
wasn't cleaned up in free_proxy(), resulting in a small memory leak if
"http-send-name-header" was used on a regular or default proxy.
This may be backported to all stable versions.
The warning message indicating that the "http-send-name-header" option
is ignored for backends in "mode log" referred to the option by its
internal struct member name instead of its public name (as seen in the
documentation). Let's fix that.
It may be backported with c7783fb ("MINOR: log/backend: prevent
"http-send-name-header" use with LOG mode") in 2.9.
BoringSSL support has been known to be broken since 2021; it was removed
from the CI at that time and never fixed.
(30ee2965b6)
Even the QUIC code for BoringSSL was removed in 2022.
(e06f7459fa)
Instead of being set on the flags used for the zero-copy negotiation,
this flag is set on the connection flags used for the xprt->rcv_buf()
call. Fortunately, there is no real consequence. The only visible effect
is that the chunk size is written on 8 bytes for no reason.
This patch is related to issue #2598. It must be backported to 3.0.
When data are transferred via zero-copy data forwarding, if some data
were already received, we try to immediately transfer them during the
negotiation step. If data are chunked and the chunk size is unknown, 10
bytes are reserved to write the chunk size during the done step. However,
when input data are finally transferred, the offset is ignored. Data are
copied into the output buffer, but the first 10 bytes are then
overwritten by the chunk size. Thus the chunk is truncated, leading to a
malformed message.
This patch should fix the issue #2598. It must be backported to 3.0.
This fixes an issue I've had where if a connection was idle for ~23s
it would get in a bad state. I don't understand this code, so I'm
not sure exactly why it was failing.
I discovered this by bisecting to identify the commit that caused the
regression between 2.9 and 3.0. The commit is
d2c3f8dde7: "MINOR: stconn/connection:
Move shut modes at the SE descriptor level" - a part of v3.0-dev8.
It seems to be an innocent renaming, so I looked through it and this
stood out as suspect:
- if (mode != CO_SHW_NORMAL)
+ if (mode & SE_SHW_NORMAL)
It looks like the negation went missing here, so this patch reverses
that condition. It fixes my test.
I don't quite understand what this is doing or is for so I can't write
a regression test or decent commit message. Hopefully someone else
will be able to pick this up from where I've left it.
[CF: The offending commit inverted the condition used to perform clean
shutdowns, meaning no clean shutdown was performed when it should have
been. This patch must be backported to 3.0]
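Going by the description above, the correction presumably just restores
the lost negation, i.e. something like:

  -  if (mode & SE_SHW_NORMAL)
  +  if (!(mode & SE_SHW_NORMAL))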
quic_conn API for sending was recently refactored. The main objective
was to regroup the different functions present for both handshake and
application emission.
After this refactoring, an optimization was introduced to avoid calling
qc_send() if there was nothing new to emit. However, this prevented the
Tx buffer from being purged if a previous sending was interrupted, until
new frames were finally available.
To fix this, simply remove the optimization. qc_send() is thus now
always called in quic_conn IO handlers.
The impact of this bug should be minimal as it happens only on a
temporary sending error. However, in this case it could cause extra
latency or even a complete sending freeze in the worst scenario.
This must be backported up to 3.0.
qc_build_frms() is responsible for encoding multiple frames in a single
QUIC packet. It accounts for the room left in the packet buffer for each
newly encoded frame.
An incorrect computation was performed when encoding a STREAM frame in a
single packet: the frame length was accounted twice, which overly reduced
the packet buffer room. This caused the remaining built frames to be
shrunk, with the resulting packet unable to fill the whole MTU.
The impact of this bug should be minimal. It is only present when
multiple frames are encoded in a single packet after a STREAM frame.
However, in this case the datagrams built are smaller than expected,
which is suboptimal for bandwidth.
This should be backported up to 2.6.