Allocation errors are now handled in the bind_conf_alloc() functions. Thus
callers, when not already done, are also updated to catch a NULL return value.
This patch may be backported (at least partially) to all stable
versions. However, it only fixes errors during configuration parsing. Thus it
is not mandatory.
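A minimal sketch of the resulting caller-side pattern, assuming the usual
config-parser error style (curproxy, err_code and the exact ha_alert()
wording here are illustrative):

    struct bind_conf *bind_conf;

    /* bind_conf_alloc() may now return NULL on allocation failure */
    bind_conf = bind_conf_alloc(curproxy, file, linenum, args[1],
                                xprt_get(XPRT_RAW));
    if (!bind_conf) {
        ha_alert("parsing [%s:%d] : out of memory.\n", file, linenum);
        err_code |= ERR_ALERT | ERR_FATAL;
        goto out;
    }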
Allocated variables are now released when an error occurred during
use_backend, use-server, force/ignore-persist, stick-table, stick and stats
directives parsing. For some of these directives, allocation errors have
been added.
This patch may be backported to all stable versions but it only fixes leaks
or allocation errors during configuration parsing. Thus, it is not
mandatory. It should fix issue #1119.
When an error occurred in hlua_register_cli(), the allocated lua function
and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_service(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_action(), the allocated lua function
and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in action_register_lua(), the allocated hlua rule and
arguments must be released to avoid memory leaks.
This patch may be backported in all stable versions.
When an error occurred in hlua_register_fetches(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions. It should fix#1112.
When an error occurred in hlua_register_converters(), the allocated lua
function and keyword must be released to avoid memory leaks.
This patch depends on "MINOR: hlua: Add function to release a lua
function". It may be backported in all stable versions.
When an error occurred in hlua_register_task(), the allocated lua context
and task must be released to avoid memory leaks.
This patch may be backported in all stable versions.
Add the trace support for the checks. Only tcp-check based health-checks are
supported, including the agent-check.
In traces, the first argument is always a check object. So it is easy to get
all info related to the check: the tcp-check ruleset, the conn-stream and
the connection, the server state...
Olivier spotted that I messed up during a rebase of commit 92c059c2a
("MINOR: atomic: implement native BTS/BTR for x86"), losing the x86
version of the BTS/BTR and leaving the generic version for it instead
of having this block in the #else. Since this variant is not used for
now it was easy to overlook it. Let's re-implement it here.
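For reference, a hedged sketch of what the native x86-64 BTS variant can
look like, simplified to a single operand size (the real macro covers more
cases and exact constraints may differ):

    /* atomically set bit <bit> in <*addr> and return its previous
     * value, retrieved from the carry flag set by BTS. */
    static inline unsigned char atomic_bts(unsigned long *addr,
                                           unsigned long bit)
    {
        unsigned char ret;

        __asm__ volatile("lock btsq %2, %0"
                         : "+m" (*addr), "=@ccc" (ret)
                         : "Ir" (bit)
                         : "memory");
        return ret;
    }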
Since 1.8, for simplicity, the time offset used to compensate for time
drift and jumps had been stored per thread. But with a global time,
the complexity has significantly increased.
What this patch does in order to address this is to get back to the
origins of the pre-thread time drift correction, and keep a single
offset between the system's date and the current global date.
The thread first verifies from the before_poll date whether the time jumped
backwards or forwards, then either fixes it by computing the new most
likely date, or applies the current offset to this latest system date.
In the first case, if the date is out of range, the old one is reused
with the max_wait offset or not depending on the interrupted flag.
Then it compares its date to the global date and updates both so that
both remain monotonic and that the local date always reflects the
latest known global date.
In order to support atomic updates to the offset, it's saved as a
ullong which contains both the tv_sec and tv_usec parts in its high
and low words. Note that a part of the patch comes from the inlining
of the equivalent of tv_add applied to the offset to make sure that
signed ints are permitted (otherwise it depends on how timeval is
defined).
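A minimal sketch of the packing idea (the helper names are illustrative,
not the actual code):

    #include <sys/time.h>

    typedef unsigned long long ullong;

    /* pack a timeval into one 64-bit word: tv_sec in the high half,
     * tv_usec in the low half, so the whole offset can be swapped in
     * a single atomic operation. */
    static inline ullong tv_pack(const struct timeval *tv)
    {
        return ((ullong)(unsigned int)tv->tv_sec << 32) +
               (unsigned int)tv->tv_usec;
    }

    static inline void tv_unpack(ullong v, struct timeval *tv)
    {
        tv->tv_sec  = (int)(v >> 32);        /* may be negative */
        tv->tv_usec = (int)(unsigned int)v;
    }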
This is significantly more reliable than the previous model as the
global time should move in a much smoother way, and not according
to what thread last updated it, and the thread-local time should
always be very close to the global one.
Note that (at least for debugging) a cheap way to measure processing
lag would consist in measuring the difference between global_now_ms
and now_ms, as long as other threads keep it up-to-date.
Instead of using two CAS loops, better compute the two units
simultaneously and update them at once. There is no guarantee that
the update will be synchronous, but we don't care, what matters is
that both are monotonically updated and that global_now_ms always
follows the last known value of global_now.
In the global_now loop, we used to set tmp_adj from adjusted, then
update it from tmp_now, then set adjusted back to tmp_adj, and
finally set now from adjusted. This is a long and unneeded set of
moves resulting from years of code changes. Let's just set now
directly in the loop, stop using adjusted and remove tmp_adj.
The time initialization was made a bit complex because we rely on a
dummy negative argument to reset all fields, leaving no distinction
between process-level initialization and thread-level initialization.
This patch changes this by introducing two functions, one for the
process and the second one for the threads. This removes the ambiguous
test and makes sure that the relevant fields are always initialized
exactly once. This also offers a better solution to the bug fixed in
commit b48e7c001 ("BUG/MEDIUM: time: make sure to always initialize
the global tick") as there is no more special values for global_now_ms.
It's simple enough to be backported if any other time-related issues
are encountered in stable versions in the future.
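The split can be pictured as follows; this is only a sketch and the
function and variable names are assumptions:

    /* process-wide: executed exactly once, before any thread exists */
    void tv_init_process_date(void)
    {
        gettimeofday(&date, NULL);
        global_now    = ((ullong)(uint)date.tv_sec << 32) +
                        (uint)date.tv_usec;
        global_now_ms = date.tv_sec * 1000 + date.tv_usec / 1000;
    }

    /* per-thread: adopt the latest known global date */
    void tv_init_thread_date(void)
    {
        ullong newdate = HA_ATOMIC_LOAD(&global_now);

        now.tv_sec  = (uint)(newdate >> 32);
        now.tv_usec = (uint)newdate;
        now_ms      = now.tv_sec * 1000 + now.tv_usec / 1000;
    }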
It was only used by freq_ctr and is not used anymore. In addition, the
local curr_sec_ms was removed, along with the equivalent extern
declarations which no longer matched any existing variable.
update_freq_ctr_period() was still not very clean and didn't wait for
the rotation lock to be dropped before trying again, thus maintaining
the contention at a high level. In addition, the rotation update was
made in three steps, which are not very efficient in terms of bus
cycles.
Here the wait loop was reworked so that the fast path remains short
and that the contended path waits for the lock to be dropped before
attempting another write, but it only waits a relax cycle before
attempting a read. The rotation block was simplified to remove a
test that was already validated by the first loop, and so that the
retrieval of the current period, its reset and its increment are all
performed in a single atomic op and the store to the previous period
is performed immediately after.
All this results in significantly smaller code for the inline function
(~1kB total) and a shorter critical path.
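A hedged sketch of the reworked update path, using bit 0 of curr_tick as
the rotation lock (details may differ from the actual inline function):

    static inline uint update_freq_ctr_period(struct freq_ctr *ctr,
                                              uint period, uint inc)
    {
        uint curr_tick;

        while (1) {
            curr_tick = HA_ATOMIC_LOAD(&ctr->curr_tick);

            /* fast path: still within the current period */
            if (likely(global_now_ms - (curr_tick & ~1) < period))
                return HA_ATOMIC_ADD_FETCH(&ctr->curr_ctr, inc);

            /* rotation needed: grab the lock bit, or wait for it to
             * be dropped before attempting another write; only one
             * relax cycle before attempting another read. */
            if (!(curr_tick & 1) &&
                HA_ATOMIC_CAS(&ctr->curr_tick, &curr_tick, curr_tick | 1))
                break;
            __ha_cpu_relax();
        }

        /* retrieve, reset and increment the current counter in a
         * single atomic op, then store the previous period right
         * after, and finally release the lock bit. */
        ctr->prev_ctr = HA_ATOMIC_XCHG(&ctr->curr_ctr, inc);
        HA_ATOMIC_STORE(&ctr->curr_tick, global_now_ms & ~1);
        return inc;
    }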
When counters are rotated, there is contention between the threads which
can slow down the operation of the thread performing the rotation. Let's
apply a cpu_relax there to let the first thread finish faster.
It remains cumbersome to preserve two versions of the freq counters and
two different internal clocks just for this. In addition, the savings
from using two different mechanisms are not that important as the only
saving is a divide that is replaced by a multiply, but now thanks to
the freq_ctr_total() unification the code could also be simplified to
optimize it in case of constants.
This patch turns all non-period freq_ctr functions to static inlines
which call the period-based ones with a period of 1 second. A direct
benefit is that a single internal clock is now needed for any counter
and that they now all rely on ticks.
These 1-second counters are essentially used to report request rates
and to enforce a connection rate limitation in listeners. It was
verified that these continue to work like before.
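The non-period variants then reduce to thin wrappers; a sketch, with the
1-second period expressed in ticks:

    static inline uint update_freq_ctr(struct freq_ctr *ctr, uint inc)
    {
        return update_freq_ctr_period(ctr, MS_TO_TICKS(1000), inc);
    }

    static inline uint read_freq_ctr(struct freq_ctr *ctr)
    {
        return read_freq_ctr_period(ctr, MS_TO_TICKS(1000));
    }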
Both structures are identical except the name of the field starting
the period and its description. Let's call them all freq_ctr and the
period's start "curr_tick" which is generic.
This is only a temporary change and fields are expected to remain
the same with no code change (verified).
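The merged structure thus boils down to the following (field roles as
described above):

    /* all frequency counters now share this layout */
    struct freq_ctr {
        uint curr_tick; /* start date of the current period (ticks) */
        uint curr_ctr;  /* events accumulated in the current period */
        uint prev_ctr;  /* events seen during the previous period */
    };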
There was still no function to compute a wait time for periods, let's
implement it on top of freq_ctr_total() as we'll soon need it for the
per-second one. The divide here is applied on the frequency so that it
will be replaced with a reciprocal multiply when constant.
This one is the easiest to implement, it just requires a call and a
divide of the result. Anti-flapping correction for low-rates was
preserved.
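A sketch of what this reimplementation can look like, assuming div64_32()
as the 64/32 divide helper (pend<0 requests the anti-flapping correction
described in the freq_ctr_total() entry below):

    static inline uint read_freq_ctr_period(struct freq_ctr *ctr,
                                            uint period)
    {
        /* pend<0: enable the anti-flapping correction */
        ullong total = freq_ctr_total(ctr, period, -1);

        return div64_32(total, period);
    }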
Now calls using a constant period will be able to use a reciprocal
multiply for the period instead of a divide.
Most of the functions designed to read a counter over a period go through
the same complex loop and only differ in the way they use the returned
values, so it was worth implementing all this into freq_ctr_total() which
returns the total number of events over a period so that the caller can
finish its operation using a divide or a remaining time calculation. As
a special case, read_freq_ctr_period() doesn't take pending events but
requires enabling an anti-flapping correction at very low frequencies.
Thus the function implements it when pend<0.
Thanks to this function it will be possible to reimplement the other ones
as inline and merge the per-second ones with the arbitrary period ones
without always adding the cost of a 64 bit divide.
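As an example of a caller finishing the operation itself, a
remaining-events helper may be sketched like this (illustrative, not the
exact code):

    /* how many events may still happen over the period without
     * exceeding <freq>, accounting for <pend> pending events. */
    static inline uint freq_ctr_remain_period(struct freq_ctr *ctr,
                                              uint period, uint freq,
                                              uint pend)
    {
        ullong total = freq_ctr_total(ctr, period, pend);
        uint   avg   = div64_32(total, period);

        return (avg < freq) ? freq - avg : 0;
    }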
This variable almost never changes and is read a lot in time-critical
sections. threads_want_rdv_mask is read very often as well in
thread_harmless_end() and is almost never changed (only when someone
uses thread_isolate()). Let's move both to read_mostly.
This one only contains the list of per-thread kqueue FDs, and is used
a lot during updates. Let's mark it read_mostly to avoid false sharing
of FDs placed at the extremities.
This one only contains the list of per-thread epoll FDs, and is used
a lot during updates. Let's mark it read_mostly to avoid false sharing
of FDs placed at the extremities.
Some pointer to arrays such as fdtab, fdinfo, polled_mask etc are never
written to at run time but are used a lot. fdtab accesses appear a lot in
perf top because ha_used_fds is in the same cache line and is modified
all the time. This patch moves all these read-mostly variables to the
read_mostly section when defined. This way their cache lines will be
able to remain in shared state in all CPU caches.
Some variables are mostly read (mostly pointers) but they tend to be
merged with other ones in the same cache line, slowing their access down
in multi-thread setups. This patch declares an empty, aligned variable
in a section called "read_mostly". This will force a cache-line alignment
on this section so that any variable declared in it will be certain to
avoid false sharing with other ones. The section will be eliminated at
link time if not used.
A __read_mostly attribute was added to compiler.h to ease use of this
section.
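A minimal sketch of the attribute and a typical use (simplified; the real
macros also deal with the OSX and obsolete-linker cases discussed below):

    /* compiler.h: place the variable in the dedicated section */
    #define __read_mostly  HA_SECTION("read_mostly")

    /* usage: read on every FD access, almost never written */
    struct fdtab *fdtab __read_mostly = NULL;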
HA_SECTION() is used as an attribute to force a section name. This is
required because OSX prepends "__DATA, " in front of the declaration.
HA_SECTION_START() and HA_SECTION_STOP() are used as post-attribute on
variable declaration to designate the section start/end (needed only on
OSX, empty on others).
For platforms with an obsolete linker, all macros are left empty. It would
possibly still work on some of them but this will not be needed anyway.
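An approximate shape of these macros (illustrative; exact spellings may
differ):

    #if !defined(USE_OBSOLETE_LINKER)

    # if defined(__APPLE__)
    #  define HA_SECTION(s)       __attribute__((__section__("__DATA, " s)))
    #  define HA_SECTION_START(s) __asm("section$start$__DATA$" s)
    #  define HA_SECTION_STOP(s)  __asm("section$end$__DATA$" s)
    # else
    #  define HA_SECTION(s)       __attribute__((__section__(s)))
    #  define HA_SECTION_START(s)
    #  define HA_SECTION_STOP(s)
    # endif

    #else /* obsolete linker: do not force any section */
    # define HA_SECTION(s)
    # define HA_SECTION_START(s)
    # define HA_SECTION_STOP(s)
    #endif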
Due to length restrictions on OSX the initcall sections are called "i_"
there while they're called "init_" on other OSes. However the start and
end of sections are still called "__start_init_" and "__stop_init_",
which forces to have distinct code between the OSes. Let's switch everyone
to "i_" and rename the symbols accordingly.
The trace() function is convenient to avoid the tracing work when traces
are not enabled, but some callers have started to place complex
expressions in their trace calls, which results in all of them being
evaluated before being passed as arguments to the trace() function. This
needlessly wastes precious CPU cycles.
Let's change the function into a macro, so that the arguments are only
evaluated when the source has traces enabled. However, having a generic
macro called "trace()" can easily cause conflicts with innocent code, so
we rename it "_trace".
Just doing this has resulted in a 2.5% increase of the HTTP/1 request rate.
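The pattern is classic lazy evaluation through a variadic macro; a minimal
sketch, in which the exact enablement test is an assumption:

    /* arguments are only evaluated once the source is known to have
     * traces enabled */
    #define _trace(level, mask, src, args...)                          \
        do {                                                           \
            if (unlikely((src)->state != TRACE_STATE_STOPPED))         \
                trace(level, mask, src, ##args);                       \
        } while (0)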
Interestingly, all arrays used to declare patterns were read-write while
only hard-coded. Let's mark them const so that they move from data to
rodata and don't risk experiencing false sharing.
Now travis should only run on cron, on non-amd64, with a configuration that
only has the standard features enabled. This should reduce the number of
valuable build minutes consumed while providing as much value as possible.
In mux_pt_io_cb(), if a connection error or a shutdown is detected, the mux
is destroyed. Thus we must be careful to not use it in a trace message once
destroyed.
No backport needed. This patch should fix issue #1220.
As for the other muxes, traces are now supported in the pt mux. All parts of
the multiplexer are covered by these traces. Events are split by
categories (connection, stream, rx and tx).
In traces, the first argument is always a connection. So it is easy to get
the mux context (conn->ctx). The second argument is always a conn-stream and
may be NULL. The third one is a buffer and it may also be NULL. Depending on
the context it is the request or the response. In all cases it is owned by a
channel. Finally, the fourth argument is an integer value. Its meaning
depends on the calling context.
Released version 2.4-dev16 with the following main changes :
- CLEANUP: dev/flags: remove useless test in the stdin number parser
- MINOR: No longer rely on deprecated sample fetches for predefined ACLs
- MINOR: acl: Add HTTP_2.0 predefined macro
- BUG/MINOR: hlua: Detect end of request when reading data for an HTTP applet
- BUG/MINOR: tools: fix parsing "us" unit for timers
- MINOR: server/bind: add support of new prefixes for addresses.
- MINOR: log: register config file and line number on log servers.
- MEDIUM: log: support tcp or stream addresses on log lines.
- BUG/MEDIUM: log: fix config parse error logging on stdout/stderr or any raw fd
- CLEANUP: fd: remove FD_POLL_DATA and FD_POLL_STICKY
- MEDIUM: fd: prepare FD_POLL_* to move to bits 8-15
- MEDIUM: fd: merge fdtab[].ev and state for FD_EV_* and FD_POLL_* into state
- MINOR: fd: move .linger_risk into fdtab[].state
- MINOR: fd: move .cloned into fdtab[].state
- MINOR: fd: move .initialized into fdtab[].state
- MINOR: fd: move .et_possible into fdtab[].state
- MINOR: fd: move .exported into fdtab[].state
- MINOR: fd: implement an exclusive syscall bit to remove the ugly "log" lock
- MINOR: cli/show-fd: slightly reorganize the FD status flags
- MINOR: atomic/arm64: detect and use builtins for the double-word CAS
- CLEANUP: atomic: add an explicit _FETCH variant for add/sub/and/or
- CLEANUP: atomic: make all standard add/or/and/sub operations return void
- CLEANUP: atomic: add a fetch-and-xxx variant for common operations
- CLEANUP: atomic: add HA_ATOMIC_INC/DEC for unit increments
- CLEANUP: atomic/tree-wide: replace single increments/decrements with inc/dec
- CLEANUP: atomic: use the __atomic variant of BTS/BTR on modern compilers
- MINOR: atomic: implement native BTS/BTR for x86
- MINOR: ist: Add `istappend(struct ist, char)`
- MINOR: ist: Add `istshift(struct ist*)`
- MINOR: ist: Add `istsplit(struct ist*, char)`
- BUG/MAJOR: fd: switch temp values to uint in fd_stop_both()
- MINOR: opentracing: register config file and line number on log servers
- MEDIUM: resolvers: add support of tcp address on nameserver line.
- MINOR: ist: Rename istappend() to __istappend()
- CLEANUP: htx: Make http_get_stline take a `const struct`
- CLEANUP: ist: Remove unused `count` argument from `ist2str*`
- CLEANUP: Remove useless malloc() casts
This argument is not used inside the function (and the functions
themselves are unused as well) and is not documented. Its purpose is not clear.
Just remove it.