DragonflyBSD already has an attribute __read_mostly which serves the
same purpose as the one in compiler.h.
No backport is needed as this was added in the current 2.4-dev.
By passing a version number to "add map/acl", it becomes possible to
atomically replace maps and ACLs. The principle is that a new version
number is first retrieved by calling "prepare map/acl", and this
version number is used with "add map" and "add acl". Newly added
entries then remain invisible to the matching mechanism but are visible
in "show map/acl" when the version number is specified, or may be
cleared with "clear map/acl". Finally, when the insertion is complete,
a "commit map/acl" command must be issued, and the version is
atomically updated so that there is no intermediate state with
incomplete entries.
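For illustration, here is a hedged sketch of the whole sequence on the
CLI (the socket path and the map identifier "#0" are placeholders, and
the exact output of "prepare map" is indicative):

  $ echo "prepare map #0" | socat stdio /var/run/haproxy.sock
  New version created: 2
  $ echo "add map @2 #0 10.0.0.1 server1" | socat stdio /var/run/haproxy.sock
  $ echo "commit map @2 #0" | socat stdio /var/run/haproxy.sock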
The command is used to atomically replace a map/acl with the pending
contents of the designated version. The new version must have been
allocated by "prepare map/acl" prior to this. At the moment it is not
possible to force the version when adding new entries, so this may only
be used to atomically clear an ACL/map.
This command allocates a new version for the map/acl, that will be usable
later to prepare the addition of new values to atomically replace existing
ones. Technically speaking the operation consists in atomically incrementing
the next version. There's no "undo" operation here, if a version is not
committed, it will automatically be trashed when committing a newer version.
This will ease maintenance of versioned maps by allowing old or failed
updates to be cleared instead of the current version. Nothing was done
to allow clearing everything, though if there was a need for this,
implementing "@all" or something equivalent wouldn't require more than
3 lines of code.
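For example, an old or failed pending version could be dropped like
this (a sketch; the version, the map identifier and the socket path are
placeholders):

  $ echo "clear map @2 #0" | socat stdio /var/run/haproxy.sock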
Instead of only being able to purge values older than a specific value,
let's support arbitrary ranges and make pat_ref_purge_older() just a
special case of this one.
The maps and ACLs internally all have two versions, the "current" one,
which is the one being matched against, and the "next" one, the one
being filled during an atomic replacement. Till now the "show" commands
would only show the current one, but it can be convenient to be able to
show other ones as well, so let's add the ability to do this with "show
map" and "show acl". The method used here consists in passing the
version number as "@<ver>" before the map/acl name or ID. It would have
been better after it, but that could create confusion with keys already
using such a format.
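For example (a sketch; the pending version number and the map ID are
placeholders):

  $ echo "show map @2 #0" | socat stdio /var/run/haproxy.sock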
The "show map" command wasn't updated when pattern generations were
added for atomic reloads, let's report them in the "show map" command
that lists all known maps. It will be useful for users.
This function was only used once in cli_parse_add_map(), and half of the
work it used to do was already known from the caller or testable outside
of the lock. Given that we'll need to modify it soon to pass a generation
number, let's remerge it in the caller instead, using pat_ref_load() which
is the one we'll need.
The function uses two distinct code paths for the single key/value pair
case and for multiple pairs inserted as payload, each with a copy-paste
of the error handling. Let's modify the loop to factor them out.
Commit b8bd1ee89 ("MEDIUM: cli: add a new experimental "set var" command")
added "get var" and "set var" but "set var" was misplaced in the doc,
breaking the alphabetic ordering.
The map_redirect test already tests for "show map", "del map" and
"clear map" but doesn't have any "add map" command. Let's add some
trivial ones involving one regular entry and two other ones added as
payload, checking they are properly returned.
The error output for dynamic server creation with an invalid lb
algorithm has changed since the previous commit:
  MINOR: server: fix doc/trace on lb algo for dynamic server creation
The vtest regex should have been updated as well to match it.
The text mentioned that only backends with a consistent hash method
were supported for dynamic servers. In fact, it is only required that
the lb algorithm is dynamic.
There is some serious confusion in the lua interface code related to
sockets and services coming from the hlua_appctx structs being called
"appctx" everywhere, and where the real appctx is reached using
appctx->appctx. This part is a bit of a pain to debug so let's rename
all occurrences of this local variable to "luactx".
During commit 7e4a557f6 ("MINOR: time: change the global timeval and
the global tick at once") the approach made sure that the new now_ms
was always higher than or equal to global_now_ms, but it did so by
modifying the old value used for the CAS. This can cause the first
update to global_now_ms to fail if it's already out of sync, going back
into the loop, and the subsequent call would then succeed due to commit
4d01f3dcd ("MINOR: time: avoid overwriting the same values of
global_now").
And if it goes out of sync, it will fail to update forever, as observed
by Ashley Penney in github issue #1194, causing incorrect freq counters
calculations everywhere. One possible trigger for this issue is one thread
spinning for a few milliseconds while the other ones continue to work.
The issue really is that old_now_ms ought not to be modified in the loop
as it's used for the CAS. But we don't need to structurally guarantee that
global_now_ms grows monotonically as it's computed from the new global_now
which is already verified for this via the __tv_islt() test. Thus, dropping
any corrections on global_now_ms in the loop is the correct way to proceed
as long as this one is always updated to follow global_now.
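A minimal sketch of the resulting update pattern, using C11 atomics
rather than HAProxy's internal macros (the function name and the way
new_now_ms is computed are illustrative):

  #include <stdatomic.h>

  static _Atomic unsigned int global_now_ms;

  /* The expected value is refreshed only by a failed CAS itself and is
   * never "corrected" inside the loop, so an out-of-sync global_now_ms
   * converges instead of failing forever. new_now_ms is assumed to be
   * derived from the already-validated global_now. */
  static void sync_global_now_ms(unsigned int new_now_ms)
  {
      unsigned int old_now_ms = atomic_load(&global_now_ms);

      while (!atomic_compare_exchange_weak(&global_now_ms, &old_now_ms,
                                           new_now_ms))
          ; /* old_now_ms was reloaded with the current value; retry */
  }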
No backport is needed, this is only for 2.4-dev.
This patch adds miscellaneous informative flags raised during the
initial full resync process performed during the reload, for debugging
purposes.
0x00000010: Timeout waiting for a full resync from a local node
0x00000020: Timeout waiting for a full resync from a remote node
0x00000040: Session aborted learning from a local node
0x00000080: Session aborted learning from a remote node
0x00000100: A local node taught us and was fully up to date
0x00000200: A remote node taught us and was fully up to date
0x00000400: A local node taught us but was only partially up to date
0x00000800: A remote node taught us but was only partially up to date
0x00001000: A local node was assigned for a full resync
0x00002000: A remote node was assigned for a full resync
0x00004000: A resync was explicitly requested
This patch could be backported on any supported branch
Flags used as context to know the current status of each table pushing
a full resync to a peer were correctly reset when receiving a new
resync request or confirmation message, but in case of a local peer
sync during reload the resync request is implicit, and those flags were
not correctly reset in this case.
This could result in a partial initial resync of some tables after a
reload if the connection with the old process was broken and retried.
This patch resets those flags at the end of the handshake for all new
connections, to be sure to push an entire full resync if needed.
This patch should be backported on all supported branches ( >= 1.6 )
Only entries between the opposite of the last 'local update' rotating
counter were considered for a push. This processing worked in most
cases because updates are continually pushed, trying to reach this
point, but there remain some cases where update ids are farther away in
the past, appearing to be in the future, and the push of updates is
stuck until the head reaches the tail again, which could take a very
long time.
This patch reworks the lookup to consider that all positions on the
rotating counter are in the past until we reach exactly the 'local
update' value. Doing this, the push of updates won't be stuck anymore.
This patch should be backported on all supported branches ( >= 1.6 )
The commitupdate value of the table is used to check if the update is
still pending for a push for all peers. To be sure not to miss a push,
we reset it just after a handshake succeeds.
This patch should be backported on all supported branches ( >= 1.6 )
If two peers are disconnected and during this period they continue to
process a large amount of local updates, after a reconnection they may
take a long time before resuming the push of their updates, because the
last pushed update would internally appear to be in the future.
This patch fixes this by resetting the cursor on acked updates to the
farthest point considered in the past if it appears to be in the
future, but it means we may lose some updates. A clean fix would be to
update the protocol to be able to signal a remote peer that it was not
updated for too long a period and needs a full resync, but this is not
yet supported by the protocol.
This patch should be backported on all supported branches ( >= 1.6 )
The reconnection cursor was updated upon receipt of any ack message,
even while we were pushing a complete resync to a peer. This cursor is
reset at the end of the resync, but if the connection is broken during
the resync, we could restart at an unwanted point.
With this patch, the peer stops considering ack messages while pushing
a resync, since the resync process has its own acknowledgement and is
always restarted from the beginning in case of a broken connection.
This patch should be backported on all supported branches ( >= 1.6 )
When receiving a resync request, the origins used to start the full
resync and to reset after the full resync were mistakenly computed
based on the last update on the table, instead of being based on the
last update acked by the node requesting the resync.
This could result in disordered or missing updates being pushed to the
requester.
This patch sets those origins correctly.
This patch should be backported on all supported branches ( >= 1.6 )
If a reload is performed and there are no incoming connections from the
old process to push a full resync, the new process can remain stuck
waiting indefinitely for this connection, and it never falls back to
requesting a full resync from a remote peer because the resync timer
was initialized to TICK_ETERNITY.
This patch forces a reset of the resync timer to its default value
(5 seconds) if we detect that the value is TICK_ETERNITY.
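A hedged sketch of the check, with minimal stand-ins for HAProxy's tick
helpers (the function and field names are illustrative, not necessarily
those of the actual patch):

  /* stand-ins for HAProxy's tick API, for illustration only */
  #define TICK_ETERNITY 0
  #define MS_TO_TICKS(ms) (ms)
  static inline int tick_isset(int t) { return t != TICK_ETERNITY; }
  static inline int tick_add(int now, int delay) { return now + delay; }

  /* if the resync timer was left unset (TICK_ETERNITY) by the reload
   * path, re-arm it with the default 5s delay so the fallback to a
   * remote peer can trigger */
  static void arm_resync_timeout(int *resync_timeout, int now_ms)
  {
      if (!tick_isset(*resync_timeout))
          *resync_timeout = tick_add(now_ms, MS_TO_TICKS(5000));
  }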
This patch should be backported on all supported branches ( >= 1.6 )
By default haproxy loads all files designated by a relative path from the
location the process is started in. In some circumstances it might be
desirable to force all relative paths to start from a different location
just as if the process was started from such locations. This is what this
directive is made for. Technically it will perform a temporary chdir() to
the designated location while processing each configuration file, and will
return to the original directory after processing each file. It takes an
argument indicating the policy to use when loading files whose path does
not start with a slash ('/').
A few options are offered, "current" (the default), "config" (files
relative to config file's dir), "parent" (files relative to config file's
parent dir), and "origin" with an absolute path.
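A minimal configuration sketch (assuming the directive is the global
"default-path" keyword that this text documents):

  global
      # resolve relative file paths against the config file's directory
      default-path config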
This should address issue #1198.
In readcfgfile() when malloc() fails to allocate a buffer for the
config line, it currently says "parsing[<file>]: out of memory" while
the error is unrelated to the config file and may make one think it has
to do with the file's size. The second test (fopen() returning error)
needs to release the previously allocated line. Both directly return -1
which is not even documented as a valid error code for the function.
Let's simply make sure that the few variables freed at the end are
properly preset, and jump there upon error, after having displayed a
meaningful error message. Now at least we can get this:
$ ./haproxy -f /dev/kmem
[NOTICE] 116/191904 (23233) : haproxy version is 2.4-dev17-c3808c-13
[NOTICE] 116/191904 (23233) : path to executable is ./haproxy
[ALERT] 116/191904 (23233) : Could not open configuration file /dev/kmem : Permission denied
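A minimal sketch of the error-handling pattern described above
(hypothetical variable names; the real readcfgfile() does much more and
uses ha_alert() rather than fprintf()):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define LINESIZE 16384

  static int readcfgfile_sketch(const char *file)
  {
      char *thisline = NULL; /* preset so the cleanup code is always safe */
      FILE *f = NULL;
      int err_code = -1;

      if ((thisline = malloc(LINESIZE)) == NULL) {
          fprintf(stderr, "Out of memory trying to allocate a buffer. Aborting.\n");
          goto out;
      }

      if ((f = fopen(file, "r")) == NULL) {
          fprintf(stderr, "Could not open configuration file %s : %s\n",
                  file, strerror(errno));
          goto out;
      }

      /* ... parse the file line by line here ... */
      err_code = 0;

   out:
      free(thisline);
      if (f)
          fclose(f);
      return err_code;
  }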
The alternative arguments are always in curly brackets, let's fix it
for set-timeout.
The example in set-timeout is missing one of the required arguments.
This commit makes the PR https://github.com/cbonte/haproxy-dconv/pull/34
obsolete.
Thanks to the commit "BUG/MINOR: applet: Notify the other side if data were
consumed by an applet", it is no longer necessary to notify the producer when an
applet skips output data. Now, it is the default applet handler's
responsibility to take care of that.
When a DATA frame is sent, we must take care to properly detect the EOM
flag on the HTX message to set the ES flag on the frame when necessary,
to finish the stream. But it was only done when data were copied from
the HTX message to the mux buffer, and not when the frame was sent via
zero-copy. This patch fixes this bug.
It is a 2.4-specific bug. No backport is needed.
When an HTTP lua service is started, headers are consumed before
calling the script. When it was initialized, the headers were stored in
a lua array, thus they can be removed from the HTX message because the
lua service will no longer access them. But it is a problem with
bodyless messages, because the EOM flag is lost. Indeed, once the
headers are consumed, the message is empty and the buffer is reset,
including the flags.
Now, the headers are not immediately consumed. We will skip them when
applet:receive() or applet:getline() is called. This way, the EOM flag
is preserved. At the end, when the script is finished, all output data
are consumed, thus this remains safe.
It is a 2.4-specific bug. No backport is needed.
If an applet consumed output data (the amount of output data has
changed between before and after the call to the applet), the producer
is notified. It means CF_WRITE_PARTIAL and CF_WROTE_DATA are set on the
output channel and the opposite stream interface is notified that some
room was made in its input buffer. This way, it is no longer the
applet's responsibility to take care of it. However, it doesn't matter
if the applet does the same.
Said like that, it looks like an improvement, not a bug. But it really
fixes a bug in the lua, for HTTP applets. Indeed, applet:receive() and
applet:getline() are buggy for HTTP applets. Data are consumed but the
producer is not notified. It means that if the payload is not fully
received at once, the applet may be blocked because the producer
remains blocked (it is time dependent).
This patch must be backported as far as 2.0 (only for the HTX part).
A read error on the server side is also reported as a write error on
the client side. It means that sometimes a server-side error is handled
on the client side. Among others, it is the case when the client side
is waiting for the response while the request processing is already
finished. In this case, the error is not handled as a server error,
which is not accurate.
So now, when the request processing is finished but not the response
processing, and if a read error was encountered on the server side, the
error is not immediately processed on the client side, to give the
response analysers a chance to properly catch the error.
Since the input buffer is transferred to the stream when it is created,
there is no longer any control on the request size to be sure the
buffer's reserve is still respected. It was automatically performed in
h2_rcv_buf()
because the caller took care to provide the correct available space in the
buffer. The control is still there but it is no longer applied on the
request headers. Now, we should take care of the reserve when the headers
are decoded, before the stream creation.
The test is performed for the request and the response.
It is a 2.4-specific bug. No backport is needed.
It is the only function using the hdrs_bytes start-line field. Thus the
function has been refactored to no longer rely on it. To do so, we first
copy HTX blocks to the destination message, without removing them from the
source message. If the copy is interrupted on headers or trailers, we roll
back. Otherwise, data are drained from the source buffer.
Most of the time, the copy will succeed. So the rollback is only
performed in the worst but very rare case.
When all data of an HTX message are drained, we rely on htx_reset() to
reinit the message state. However, the flags must be preserved. It is, among
other things, important to preserve processing or parsing errors.
This patch must be backported as far as 2.0.
The compilation fails due to commit fc6ac53dca ("BUG/MAJOR: fix build
on musl with cpu_set_t support").
The new global variable cpu_map conflicted with a local variable of the
same name in the code path for the Apple platform when setting the
process affinity.
This does not need to be backported.
Move the cpu_map structure out of the global struct to a global
variable defined in the cpuset.c compilation unit. This allows
reorganizing the includes without having to define _GNU_SOURCE
everywhere for the support of cpu_set_t.
This fixes the compilation with musl libc, most notably used for the
Alpine-based docker image.
This fixes GitHub issue #1235.
No need to backport as this feature is new in the current
2.4-dev.
The return value check was wrongly based on error codes when the
function actually returns an error number.
This bug was introduced by commit f3eedfe195, a feature which is not
present before branch 2.4.
It does not need to be backported.
The HTX functions used to add new HTX blocks in a message have been moved to
the header file to inline them in calling functions. These functions are
small enough.
A normalized URI is the internal term used to specify that a URI is
stored using the absolute format (scheme + authority + path). For now,
it is only used for H2 clients. It is the default and recommended
format for H2 requests. However, it is unusual for H1 servers to
receive such URIs. So in this case, we only send the path of the
absolute URI. It is performed for H1 servers, but not for FCGI
applications. This patch fixes the difference.
Note that it is not a real bug, because FCGI applications should
support absolute URIs.
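For example (illustrative request lines; the host and path are
placeholders):

  GET https://example.com/index.html HTTP/1.1   <- normalized absolute form
  GET /index.html HTTP/1.1                      <- what is sent to H1 servers
                                                   and FCGI applications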
Note also that a normalized URI is only detected for H2 clients when a
request is received. There is no such test on the H1 side. It means an
absolute URI received from an H1 client will be sent without
modification to an H1 server or an FCGI application.
To make this possible, a dedicated function has been added to get the
H1 URI. This function is called by the H1 and FCGI multiplexers when a
request is sent to a server.
This patch should fix the issue #1232. It must be backported as far as 2.2.