cpu_map is 8.2kB/entry and there's one such entry per group, that's
~520kB total. In addition, the init code is still in haproxy.c enclosed
in ifdefs. Let's make this a dynamically allocated array in the cpuset
code and remove that init code.
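A minimal sketch of the idea (assuming the existing MAX_TGROUPS bound and
the cpu_map type, as found in the surrounding code):

    /* sketch: replace the static array with a boot-time allocation; the
     * array is still sized for the maximum but lives on the heap */
    struct cpu_map *cpu_map;

    static void cpuset_alloc_cpu_map(void)
    {
        cpu_map = calloc(MAX_TGROUPS, sizeof(*cpu_map));
        if (!cpu_map)
            exit(1); /* allocation failure at boot time, cannot continue */
    }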
Later we may even consider reallocating it once the number of threads
and groups is known, in order to shrink it a little bit, as the typical
setup with a single group will only need 8.2kB, thus saving half a MB
of RAM. This would require that the upper bound is placed in a variable
though.
It's easier to use this function now to natively support variable
fields in the file's path. This also removes read_file_from_trash()
that was only used here and was static.
This function takes a printf format for the file name as input, making
it particularly suitable for /proc or /sys entries which involve a lot of
numbers. It also automatically trims the trailing CR and/or LF chars.
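A hedged usage sketch, assuming the new helper is read_line_to_trash()
(the exact name and return convention may differ):

    /* the path is built printf-style; trailing CR/LF are trimmed */
    if (read_line_to_trash("/sys/devices/system/cpu/cpu%d/topology/core_id",
                           cpu) >= 0)
        core_id = atoi(trash.area);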
More and more utility functions rely on the trash while most of the init
code doesn't have access to it because it's initialized very late (in
PRE_CHECK for the initial one). It's a pool, and it purposely supports
being reallocated, so let's initialize it in STG_POOL so that early
STG_INIT code can at least use it.
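What the registration could look like (the callee name here is
hypothetical, INITCALL stages as used elsewhere in the tree):

    /* allocate the trash at the pool-init stage so that early STG_INIT
     * code can already use it; it may safely be reallocated later */
    INITCALL0(STG_POOL, alloc_early_trash); /* hypothetical function name */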
Do not force affinity on the process, instead let's just apply it to
cpu-map, it will automatically be used later in the init process. We
can do this because we know that cpu-map was not set when we're using
this detection code.
This is much saner, as we don't need to manipulate the process' affinity
at this point in time, and just update the info that the user omitted to
set by themselves, which guarantees a better long-term consistency with
the documented feature.
parse_cpu_set() no longer returns the undocumented -1, which was a
leftover from an earlier attempt; it was changed from ulong to int since
it only returns a success/failure and no longer a mask. Thus it must
not return -1 and its callers must only test for != 0, as is
documented.
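A sketch of the expected caller pattern (the argument list is assumed):

    /* non-zero means failure; never compare against -1 */
    if (parse_cpu_set(args, &cpuset, &err) != 0) {
        ha_alert("%s\n", err);
        return -1;
    }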
This field used to store the cpumap of the first thread in a group, and
was used till 2.4 to hold some default settings, after which it was no
longer used. Let's just drop it.
The per-process CPU affinity settings are only applied during forking,
which means that cpu-map directives are ignored when running in the
foreground (e.g. haproxy started with -db). This is historic, due to the
original semantics of a process array, but it isn't documented and causes
surprises when trying
to debug affinity settings.
Let's make sure the setting is applied to the workers themselves even
in foreground. This may be backported to 2.6 though it is really not
important. If backported, it also depends on previous commit:
BUG/MINOR: cpuset: remove the bogus "proc" from the cpu_map struct
We're currently having a problem with the porting of cpu_map from
processes to thread-groups as it happened in 2.7 with commit 5b09341c0
("MEDIUM: cpu-map: replace the process number with the thread group
number"), though it seems that it has deeper roots even in 2.0 and
that it was progressively made wrong over time.
The issue stems from the way the per-process and per-thread cpu-sets were
employed over time. Originally only processes were supported. Then
threads were added after an optional "/" and it was documented that
"cpu-map 1" is exactly equivalent to "cpu-map 1/all" (this was clarified
in 2.5 by commit 317804d28 ("DOC: update references to process numbers
in cpu-map and bind-process").
The reality is different: when processes were still supported, setting
"cpu-map 1" would apply the mask to the process itself (and only when
run in the background, which is not documented either and is also a
bug for another fix), and would be combined with any possible per-thread
mask when calculating the threads' affinity, possibly resulting in empty
sets. However, "cpu-map 1/all" would only set the mask for the threads
and not the process. As such the following:
    cpu-map 1 odd
    cpu-map 1/1-8 even
would leave no CPU while doing:
    cpu-map 1/all odd
    cpu-map 1/1-8 even
would allow all CPUs.
While such configs are very unlikely to ever be met (which is why this
bug is tagged minor), this is becoming much more visible while testing
automatic CPU binding during 2.9 development because, due to this bug,
it's much more common to end up with incorrect bindings.
This patch fixes it by simply removing the .proc entry from cpu_map and
always setting all threads' maps. The process is no longer arbitrarily
bound to group 1's mask, but in case threads are disabled, we'll
use thread 1's mask since it contains the configured CPUs.
This fix should be backported at least to 2.6, but no need to insist if
it resists as it's easier to break cpu-map than to fix an unlikely issue.
As documented, the NUMA auto-detection is not supposed to be used when
the CPU affinity was set either by taskset (already checked) or by a
cpu-map directive. However this check was missing, so that configs
having cpu-map entries would still first bind to a single node. In
practice it has no impact on correct configs since bindings will be
replaced. However, for those where the cpu-map directives are not
exhaustive, it will have the impact of binding those threads to one node,
which disagrees with the doc (and makes future evolutions significantly
more complicated).
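The resulting logic may be sketched as follows (function names are
assumed; cpu_map_configured() comes from the commit below):

    /* only guess a NUMA node when the user forced no affinity, neither
     * via taskset nor via any cpu-map directive */
    if (global.numa_cpu_mapping && !thread_cpu_mask_forced() &&
        !cpu_map_configured())
        numa_detect_topology();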
This could be backported to 2.4 where numa-cpu-mapping was added, though
if nobody encountered this by then maybe we should only focus on recent
versions that are more NUMA-friendly (e.g. 2.8 only). This patch depends
on this previous commit that brings the function we rely on:
MINOR: cpuset: add cpu_map_configured() to know if a cpu-map was found
Since we'll soon want to adjust the "thread-groups" degree of freedom
based on the presence of cpu-map, we first need to be able to detect
if cpu-map was used. This function scans all cpu-map sets to detect if
any is present, and returns true accordingly.
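A minimal sketch of the detection, assuming the cpu_map layout from the
previous patches (the actual implementation may differ):

    int cpu_map_configured(void)
    {
        int grp, thr;

        for (grp = 0; grp < MAX_TGROUPS; grp++) {
            for (thr = 0; thr < MAX_THREADS_PER_GROUP; thr++)
                if (ha_cpuset_count(&cpu_map[grp].thread[thr]))
                    return 1;
        }
        return 0;
    }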
This adds the "core.get_var()" method, allowing the reading
of "proc."-scoped variables outside of TXN or HTTP/TCPApplet.
Fixes: #2212
Signed-off-by: Daan van Gorkum <djvg@djvg.net>
An FCGI response may contain a "Location" header with no status code. In this
case a 302-Found HTTP response must be returned to the client. However,
while the status code is indeed 302, the reason is wrong. "Found" must be
set instead of "Moved Temporarily".
This patch must be backported as far as 2.2. With the commit e3e4e0006
("BUG/MINOR: http: Return the right reason for 302"), this should fix the
issue #2208.
When calling lua_newstate() (to init the main Lua stack) in the
hlua_init_state() function, the return value of lua_newstate() may be
NULL (for example in case of OOM). In this case L will be NULL, and a
crash then happens in lua_getextraspace(). So, we add a check for the
return value of lua_newstate().
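A sketch of the added check (the allocator arguments are assumptions
based on hlua's existing allocator):

    L = lua_newstate(hlua_alloc, &hlua_global_allocator);
    if (!L) {
        fprintf(stderr, "Lua init: out of memory, cannot continue.\n");
        exit(1);
    }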
This should be backported at least to 2.4, maybe further.
Building with gcc-6.5:
  src/quic_conn.c: In function 'send_retry':
  src/quic_conn.c:6554:2: error: dereferencing type-punned pointer will break
  strict-aliasing rules [-Werror=strict-aliasing]
    *((uint32_t *)((unsigned char *)&buf[i])) = htonl(qv->num);
This patch uses write_n32() to set the value.
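The fix may be sketched as follows, write_n32() being assumed to store a
32-bit value in network byte order through byte-wise accesses (thus
performing the htonl() itself and avoiding the type-punned cast):

    write_n32(&buf[i], qv->num);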
This could be backported until v2.6
This bug arrived with this commit:
MEDIUM: quic: Dynamic allocations of QUIC TLS encryption levels
It is possible that haproxy receives a late Initial packet after it has
released its Initial or Handshake encryption levels. In this case
it must not try to retransmit packets from such encryption levels to
speed up the handshake completion.
No need to backport.
Adds a new sample fetch method to get the curve name used in the
key agreement, to enable better observability. In OpenSSLv3, the function
`SSL_get_negotiated_group` returns the NID of the curve, and from the NID
we get the curve name by passing the NID to OBJ_nid2sn(). This was not
available in v1.1.1. SSL_get_curve_name(), which returns the curve name
directly, was merged into the OpenSSL master branch last week but will be
available only in its next release.
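The retrieval can be sketched like this for OpenSSLv3 (standard OpenSSL
calls):

    #include <openssl/ssl.h>
    #include <openssl/objects.h>

    /* returns the short name of the negotiated group, e.g. "X25519" */
    static const char *negotiated_curve_name(SSL *ssl)
    {
        int nid = SSL_get_negotiated_group(ssl);
        return OBJ_nid2sn(nid);
    }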
Because of a cut/paste error, the wrong reason was returned for the 302
code. The 301 reason was returned instead. Thus now, "Found" is returned
for 302, instead of "Moved Permanently".
This patch should fix the issue #2208. It must be backported to all stable
versions.
When "add" or "sub" conveters are used, an overflow detection is performed.
When 2 negative integers are added (or a positive integer is substracted to
a positive one), we take care to not exceed the low limit (LLONG_MIN) and
when 2 positive integers are added, we take care to not exceed the high
limit (LLONG_MAX).
However, because of a missing 'else' statement, if there is no overflow in
the first case, we fall back on the second check (the one for positive adds)
and LLONG_MAX is returned. It means that most of time, when 2 negative
integers are added (or a positive integer is substracted to a negative one),
LLONG_MAX is returned.
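An illustrative sketch of the corrected detection (variable names are
ours, not the exact source):

    #include <limits.h>

    static long long add_clamped(long long a, long long b)
    {
        if (b < 0) {
            if (a < LLONG_MIN - b)
                return LLONG_MIN;   /* negative overflow */
        }
        else if (a > LLONG_MAX - b) /* the previously missing 'else' */
            return LLONG_MAX;       /* positive overflow */
        return a + b;
    }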
This patch should solve the issue #2216. It must be backported to all stable
versions.
A typo in the "fc_src" description was fixed. This sample returns the
original source IP address and not the destination one.
This patch should be backported as far as 2.6.
I assumed that the hlua_yieldk() function used in the queue:pop_wait()
function would eventually return when the continuation function
returned.
But this is wrong: the continuation function is simply called back by the
resume after the hlua_yieldk(), which does not return in this case. The
caller is no longer the initial calling function, but Lua, so when the
continuation function eventually returns, it does not hand control back
to the C calling function (queue:pop_wait()), like we're used to, but
directly to Lua which will continue the normal execution of the (Lua)
function that triggered the C-function, effectively bypassing the end
of the C calling function.
Because of this, the queue waiting list cleanup never occurs!
This causes some undesirable effects:
- pop_wait() will slowly leak over time, because the allocated queue
waiting entry never gets deallocated when the function is finished
- queue:push() will become slower and slower because the wait list will
keep growing indefinitely as a result of the previous leak
- the task that performed at least 1 pop_wait() could suffer from
useless wakeups because it will stay indefinitely in the queue waiting
list, so every queue:push() will try to wake the task, even if the
task is not waiting for new queue items.
- last but not least, if the task that performed at least 1 pop_wait ends
or crashes, the next queue:push() will lead to invalid reads and
a process crash because it will try to wake up a ghost task that doesn't
exist anymore.
To fix this, the pop_wait function was reworked with the assumption that
hlua_yieldk() with a continuation function never returns. Indeed, it is
now the continuation function that will take care of the cleanup, instead
of the parent function.
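The control flow can be sketched as follows (the helper names are
hypothetical placeholders):

    #include <lua.h>

    void *add_wait_entry(void);       /* hypothetical registration */
    void release_wait_entry(void *e); /* hypothetical cleanup */

    /* continuation: called directly by Lua upon resume; this is where
     * the waiting-list cleanup must now happen */
    static int pop_wait_k(lua_State *L, int status, lua_KContext ctx)
    {
        release_wait_entry((void *)ctx);
        return 0;
    }

    static int pop_wait(lua_State *L)
    {
        void *entry = add_wait_entry();
        return lua_yieldk(L, 0, (lua_KContext)entry, pop_wait_k);
        /* unreachable: lua_yieldk() with a continuation never returns,
         * so no cleanup may be placed here */
    }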
This must be backported in 2.8 with 86fb22c5 ("MINOR: hlua_fcn: add Queue class")
lua_yieldk's ctx argument is of type lua_KContext, which is typedefed to
intptr_t when available so it can be used to store pointers.
But the wrapper function hlua_yieldk() passes it as a regular int, so it
breaks that promise.
Changing hlua_yieldk()'s prototype so that the ctx argument is of type
lua_KContext.
This bug had no functional impact because ctx argument is not being
actively used so far. This may be backported to all stable versions
anyway.
The internal tick clock was used to export the timestamp in the token
on retry packets. Doing this, in cluster mode the nodes don't
understand the timestamp from tokens generated by the others.
This patch reworks this using the real current date (wall-clock time).
Timestamps are also now expressed in seconds instead of milliseconds.
This patch should be backported until v2.6
RFC 9000, 17.2.5.1:
"The client MUST use the value from the Source Connection ID
field of the Retry packet in the Destination Connection ID
field of subsequent packets that it sends."
There was no check of this and we could accept a different
dcid on Initial packets containing a valid token.
The randomized value used as the new scid on retry packets is now
added to the AAD used to encode the token. This way the token
will appear as invalid if the dcid mismatches the scid of
the previous retry packet.
This should be backported until v2.6
According to RFC 5869 about HKDF, the extract function returns a
pseudorandom key usable to perform expand using labels to derive keys.
So the intermediate expand on a label is useless; the key should be
strong enough using only one expand.
This patch should be backported until v2.6
When computing the token key and IV, a stronger derived key was used
to compute the key, but the weak secret was still used to compute
the IV. This could be used to recover the secret.
This patch fixes this by using the same derived key as the one used
to compute the token key.
This should be backported until v2.6.
Configuration parsing allowed a port like 8000/websocket/. This is
nonsense and allowing this syntax may hide from the user something
not corresponding to their intent.
This patch should not be backported because it could break existing
configurations.
In hlua_queue_size(), queue size is loaded as a regular int, but the
queue might be shared by multiple threads that could perform some
atomic pushing or popping attempts in parallel, so we better use an
atomic load operation to guarantee consistent readings.
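A sketch of the fix (the field name is assumed):

    /* consistent reading even with concurrent pushers/poppers */
    lua_pushinteger(L, HA_ATOMIC_LOAD(&queue->size));
    return 1;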
This could be backported in 2.8.
Because of b58bd97 ("MINOR: hlua_fcn/mailers: handle timeout mail from
mailers section"), latest examples/lua/mailers.lua file wouldn't work
if loaded with haproxy < 2.8.2 (which is not yet released), unless it
is manually recompiled from the latest code sources.
But this can be an issue given that the direct URL may be referenced in
guides/tutorials as a download link for the script, independently
from the related haproxy source code revision.
ie: http://git.haproxy.org/?p=haproxy-2.8.git;a=blob_plain;f=examples/lua/mailers.lua;hb=HEAD
Thus, in order to make examples/lua/mailers.lua work with previous 2.8
versions, we ensure that the script stays compatible with pre-b58bd97
code by adding extra checks in the script.
This must be backported in 2.8 with b58bd97
When errors are encountered in the sink_new_from_logsrv() function,
incompletely allocated resources are freed to prevent memory leaks.
For instance: the logsrv implicit server is manually cleaned up on error
prior to returning from the function.
However, since 198e92a8e5 ("MINOR: server: add a global list of all known
servers") every server created using new_server() is registered to the
global list, but unfortunately the manual srv cleanup in
sink_new_from_logsrv() doesn't remove the srv from the global list, so the
freed server will still be referenced there, which can result in invalid
reads later.
Moreover, the server API has evolved since, and now the srv_drop() function is
available for that purpose, so let's use it, but make sure that srv is
freed before the proxy because on older versions srv_drop() expects the
srv to be linked to a valid proxy pointer.
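The cleanup order can be sketched as follows (variable names assumed):

    /* drop the server first: this also unlinks it from the global server
     * list; on older versions srv_drop() still dereferences the proxy,
     * so the proxy may only be freed afterwards */
    srv_drop(srv);
    free_proxy(px);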
This must be backported up to 2.4.
[For 2.4 version, free_server() must be used instead of srv_drop()]
As the examples/lua/mailers.lua script does its best to mimic the c-mailer
implementation, it should support the "timeout mail" directive as well.
This could be backported in 2.8.
srv->rid default value is set in _srv_parse_init() after the server is
successfully allocated using new_server().
This is wrong because new_server() can be used independently so rid value
assignment would be skipped in this case.
Fortunately, new_server() allocates server data using calloc(), so
srv->rid is already set to 0 in practice. But if calloc() were replaced by
malloc() or another memory-allocating function that doesn't zero-initialize
srv members, this could lead to rid being uninitialized in some cases.
This should be backported in 2.8 with 61e3894dfe ("MINOR: server: add
srv->rid (revision id) value")
Multiple error paths (memory, IO related) in cfg_post_parse_ring() were
not implemented correctly and could result in memory leaks or undefined
behavior.
Fixing them all at once.
This can be backported in 2.4
sft freeing attempt made in a575421 ("BUG/MINOR: sink: missing sft free in
sink_deinit()") is incomplete, because sink->sft is meant to be used as a
list and not a single sft entry.
Because of that, the previous fix only frees the first sft entry, which
fixes memory leaks for single-server forwarders (this is the case for
implicit rings), but could still result in memory leaks when multiple
servers are configured in a explicit ring sections.
What this patch does: instead of directly freeing sink->sft, it iterates
over every list member to free them.
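The iteration may be sketched like this (struct and field names from
sink.c are assumed):

    struct sink_forward_target *sft = sink->sft;

    while (sft) {
        struct sink_forward_target *next = sft->next;

        free(sft);
        sft = next;
    }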
It must be backported up to 2.4 with a575421.
When leaving cfg_parse_log_forward() on error paths, errmsg which is local
to the function could still point to valid data, and it's our
responsibility to free it.
Instead of freeing it everywhere it is involved, we free it prior to
leaving the function.
This should be backported as far as 2.4.
Multiple error paths were badly handled in cfg_parse_log_forward():
some errors were raised without interrupting the function execution,
resulting in undefined behavior.
Instead of fixing issues separately, let's fix the whole function at once.
This should be backported as far as 2.4.
"missing name for ip-forward section" is generated instead of "missing
name name for log-forward section" in cfg_parse_log_forward().
This may be backported up to 2.4.
In e709e1e ("MEDIUM: logs: buffer targets now rely on new sink_write")
we started using the sink API instead of using the ring_write function
directly.
But as indicated in the commit message, the maxlen parameter of the log
directive now only applies to the message part and not the complete
payload. I don't know what the original intent was (maybe minimizing code
changes) but it seems wrong, because the doc doesn't mention this special
case, and the result is that the ring->buffer output can differ from all
other log output types, making it very confusing.
One last issue with this is that log messages can end up being dropped at
runtime, only for the buffer target, and even if logsrv->maxlen is
correctly set (including the default: 1024), because depending on the
generated header size the payload can grow bigger than the accepted sink
size (sink maxlen is not mandatory), and we have no simple way to detect
this at configuration time.
First, we partially revert e709e1e:
TARGET_BUFFER still leverages the proper sink API, but thanks to
"MINOR: sink: pass explicit maxlen parameter to sink_write()" we now
explicitly pass the logsrv->maxlen to the sink_write function in order
to stop writing as soon as either sink->maxlen or logsrv->maxlen is
reached.
This restores pre-e709e1e behavior with the added benefit from using the
high-level API, which includes automatically announcing dropped message
events.
Then, we also need to take the ending '\n' into account: it is not
explicitly set when generating the logline for TARGET_BUFFER, but it will
be forcefully added by the sink_forward_io_handler function from the tcp
handler applet when log messages from the buffer are forwarded to tcp
endpoints.
In its current form, because the '\n' is added later in the chain, maxlen is
not being considered anymore, so the final log message could exceed maxlen
by 1 byte, which could make receiving servers unhappy in logging context.
To prevent this, we sacrifice 1 byte from the logsrv->maxlen to ensure
that the final message will never exceed log->maxlen, even if the '\n'
char is automatically appended later by the forwarding applet.
Thanks to this change, the TCP (over RING/BUFFER) target now behaves like
the FD and UDP targets.
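The resulting call may be sketched as follows (trailing arguments are
assumed):

    /* reserve 1 byte for the '\n' appended later by the forwarding
     * applet, so the final message never exceeds logsrv->maxlen */
    sink_write(sink, logsrv->maxlen - 1, &msg, 1, level, facility, metadata);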
This commit depends on:
- "MINOR: sink: pass explicit maxlen parameter to sink_write()"
It may be backported as far as 2.2
[For 2.2 and 2.4 the patch does not apply automatically, the sink_write()
call must be updated by hand]
sink_write() currently relies on sink->maxlen to know when to stop
writing a given payload.
But it could be useful to pass a smaller, explicit value to sink_write()
to stop before the ring maxlen, for instance if the ring is shared between
multiple feeders.
sink_write() now takes an optional maxlen parameter:
if maxlen is > 0, then sink_write will stop writing at maxlen if maxlen
is smaller than ring->maxlen, else only ring->maxlen will be considered.
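The rule can be sketched as:

    /* effective limit: the smaller of the explicit maxlen (if set) and
     * the ring's own maxlen */
    size_t limit = sink->maxlen;

    if (maxlen && maxlen < limit)
        limit = maxlen;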
[for haproxy <= 2.7, patch must be applied by hand: that is:
__sink_write() and sink_write() should be patched to take maxlen into
account and function calls to sink_write() should use 0 as second argument
to keep original behavior]
A regression was introduced with 5464885 ("MEDIUM: log/sink: re-work
and merge of build message API.").
For UDP targets, a final '\n' is systematically inserted, but with the
rework of the build message API, it is inserted after the maxlen
limitation has been enforced, so this can lead to the final message
becoming maxlen+1. For strict syslog servers that only accept up to
maxlen characters, this could be a problem.
To fix the regression, we take the final '\n' into account prior to
building the message, like it was done before the rework of the API.
This should be backported up to 2.4.
When the maxlen parameter exceeds the sink size, a warning is generated
and maxlen is enforced to the sink size. But the err_code is incorrectly
set to ERR_ALERT. Indeed, being a "warning", ERR_WARN should be used here.
This may be backported as far as 2.2
When a ring section defines its size using the "size" directive with a
smaller size than the default one or smaller than the previous one,
a warning is generated to inform the user that the new size will be
ignored.
However the err_code is returned as FATAL, so this causes haproxy to
incorrectly abort the init sequence.
Changing the err_code to ERR_WARN so that this warning doesn't prevent
the process from starting successfully.
This should be backported as far as 2.4
Adding the missing free for sft (sink_forward_target) in sink_deinit(),
which resulted in a minor leak for each declared ring target at deinit()
(both explicit and implicit rings are affected).
This may be backported up to 2.4.
_proxy_http_parse_7239_expr() helper used in proxy_http_parse_7239()
function may return ERR_ABORT in case of memory error. But the error check
used below is insufficient to catch ERR_ABORT so the function could keep
executing prior to returning ERR_ABORT, which may cause undefined
behavior. Fortunately no sensitive handling is performed in this case so
this bug has very limited impact, but let's fix it anyway.
We now use the ERR_CODE mask instead of ERR_FATAL to check if err_code is set
to any kind of error combination that should prevent the function from
further executing.
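A sketch of the corrected check (the argument list is assumed):

    err_code |= _proxy_http_parse_7239_expr(args, &cur_arg, curpx, err);
    if (err_code & ERR_CODE) /* catches ERR_ABORT, unlike ERR_FATAL */
        goto out;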
This may be backported in 2.8 with b2bb9257d2 ("MINOR: proxy/http_ext:
introduce proxy forwarded option")