Commit Graph

14446 Commits

Willy Tarreau
70cb3026a8 MINOR: time: remove useless variable copies in tv_update_date()
In the global_now loop, we used to set tmp_adj from adjusted, then
update it from tmp_now, then set adjusted back to tmp_adj, and
finally set now from adjusted. This is a long and unneeded set of
moves resulting from years of code changes. Let's just set now
directly in the loop, stop using adjusted and remove tmp_adj.
2021-04-11 23:47:01 +02:00
Willy Tarreau
c4c80fb4ea MINOR: time: move the time initialization out of tv_update_date()
The time initialization was made a bit complex because we rely on a
dummy negative argument to reset all fields, leaving no distinction
between process-level initialization and thread-level initialization.
This patch changes this by introducing two functions, one for the
process and the second one for the threads. This removes the ambiguous
test and makes sure that the relevant fields are always initialized
exactly once. This also offers a better solution to the bug fixed in
commit b48e7c001 ("BUG/MEDIUM: time: make sure to always initialize
the global tick") as there is no more special values for global_now_ms.

It's simple enough to be backported if any other time-related issues
are encountered in stable versions in the future.
2021-04-11 23:45:48 +02:00
Willy Tarreau
61c72c366e CLEANUP: time: remove the now unused ms_left_scaled
It was only used by freq_ctr and is not used anymore. In addition, the
local curr_sec_ms was removed, as well as the equivalent extern
declarations, which no longer matched any existing definition.
2021-04-11 14:01:53 +02:00
Willy Tarreau
d46ed5c26b MINOR: freq_ctr: simplify and improve the update function
update_freq_ctr_period() was still not very clean and didn't wait for
the rotation lock to be dropped before trying again, thus maintaining
the contention at a high level. In addition, the rotation update was
made in three steps, which is not very efficient in terms of bus
cycles.

Here the wait loop was reworked so that the fast path remains short
and that the contended path waits for the lock to be dropped before
attempting another write, but it only waits a relax cycle before
attempting a read. The rotation block was simplified to remove a
test that was already validated by the first loop, and so that the
retrieval of the current period, its reset and its increment are all
performed in a single atomic op and the store to the previous period
is performed immediately after.

All this results in significantly smaller code for the inline function
(~1kB total) and a shorter critical path.
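
The waiting pattern described above looks roughly like this (a sketch
with assumed helper names, not the exact code):

  /* contended path: only attempt the CAS (a write) once the rotation
   * lock (bit 0 of curr_tick) has been seen released; while it is held,
   * just relax once and re-read, which is much cheaper than a failed CAS.
   */
  while (1) {
      uint curr = HA_ATOMIC_LOAD(&ctr->curr_tick);

      if (curr & 1) {           /* rotation in progress */
          __ha_cpu_relax();
          continue;
      }
      if (HA_ATOMIC_CAS(&ctr->curr_tick, &curr, curr | 1))
          break;                /* rotation lock acquired */
  }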
2021-04-11 14:01:53 +02:00
Willy Tarreau
6339c19cac MINOR: freq_ctr: add cpu_relax in the rotation loop of update_freq_ctr_period()
When counters are rotated, there is contention between the threads which
can slow down the operation of the thread performing the rotation. Let's
apply a cpu_relax there to let the first thread finish faster.
2021-04-11 11:12:57 +02:00
Willy Tarreau
fc6323ad82 MEDIUM: freq_ctr: replace the per-second counters with the generic ones
It remains cumbersome to preserve two versions of the freq counters and
two different internal clocks just for this. In addition, the savings
from using two different mechanisms are not that important as the only
saving is a divide that is replaced by a multiply, but now thanks to
the freq_ctr_total() unification the code could also be simplified to
optimize it in case of constants.

This patch turns all non-period freq_ctr functions to static inlines
which call the period-based ones with a period of 1 second. A direct
benefit is that a single internal clock is now needed for any counter
and that they now all rely on ticks.

These 1-second counters are essentially used to report request rates
and to enforce a connection rate limitation in listeners. It was
verified that these continue to work like before.
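
Concretely, the non-period functions become thin wrappers (a minimal
sketch assuming the period-based API and the tick helpers):

  static inline uint update_freq_ctr(struct freq_ctr *ctr, uint inc)
  {
      return update_freq_ctr_period(ctr, MS_TO_TICKS(1000), inc);
  }

  static inline uint read_freq_ctr(struct freq_ctr *ctr)
  {
      return read_freq_ctr_period(ctr, MS_TO_TICKS(1000));
  }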
2021-04-11 11:12:55 +02:00
Willy Tarreau
fa1258f02c MINOR: freq_ctr: unify freq_ctr and freq_ctr_period into freq_ctr
Both structures are identical except for the name of the field starting
the period and its description. Let's call them both freq_ctr and name
the period's start "curr_tick", which is generic.

This is only a temporary change and fields are expected to remain
the same with no code change (verified).
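
For reference, the unified structure then looks roughly like this (a
sketch; the field comments are mine):

  struct freq_ctr {
      unsigned int curr_tick; /* start of the current period (wrapping ticks) */
      unsigned int curr_ctr;  /* events counted in the current period */
      unsigned int prev_ctr;  /* events counted in the previous period */
  };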
2021-04-11 11:11:27 +02:00
Willy Tarreau
d209c87142 MINOR: freq_ctr: add the missing next_event_delay_period()
There was still no function to compute a wait time for periods, let's
implement it on top of freq_ctr_total() as we'll soon need it for the
per-second one. The divide here is applied on the frequency so that it
will be replaced with a reciprocal multiply when constant.
2021-04-11 11:11:03 +02:00
Willy Tarreau
607be24a85 MEDIUM: freq_ctr: reimplement freq_ctr_remain_period() from freq_ctr_total()
Now the function becomes an inline one and only contains a divide and
a max. The divide will automatically go away with constant periods.
2021-04-11 11:11:03 +02:00
Willy Tarreau
a7a31b2602 MEDIUM: freq_ctr: make read_freq_ctr_period() use freq_ctr_total()
This one is the easiest to implement: it just requires a call and a
divide of the result. Anti-flapping correction for low-rates was
preserved.

Now calls using a constant period will be able to use a reciprocal
multiply for the period instead of a divide.
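
As an illustration, the inline version is essentially this (a sketch
built on the freq_ctr_total() helper introduced in this series):

  static inline uint read_freq_ctr_period(struct freq_ctr *ctr, uint period)
  {
      ullong total = freq_ctr_total(ctr, period, -1); /* pend<0 enables the
                                                       * anti-flapping correction */
      return total / period; /* a reciprocal multiply when the period is
                              * a compile-time constant */
  }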
2021-04-11 11:11:03 +02:00
Willy Tarreau
f3a9f8dc5a MINOR: freq_ctr: add a generic function to report the total value
Most of the functions designed to read a counter over a period go through
the same complex loop and only differ in the way they use the returned
values, so it was worth implementing all this into freq_ctr_total() which
returns the total number of events over a period so that the caller can
finish its operation using a divide or a remaining time calculation. As
a special case, read_freq_ctr_period() doesn't take pending events but
requires enabling an anti-flapping correction at very low frequencies.
Thus the function implements it when pend<0.

Thanks to this function it will be possible to reimplement the other ones
as inline and merge the per-second ones with the arbitrary period ones
without always adding the cost of a 64 bit divide.
2021-04-11 11:10:57 +02:00
Willy Tarreau
6eb3d37bf4 MINOR: trace: make trace sources read_mostly
The trace sources are checked at plenty of places in the code and their
contents only change when the trace status changes, so let's mark them read_mostly.
2021-04-10 19:29:26 +02:00
Willy Tarreau
295a89c029 MINOR: pattern: make the pat_lru_seed read_mostly
This seed is created once at boot and is used in every LRU hash when
caching results. Let's mark it read_mostly.
2021-04-10 19:27:41 +02:00
Willy Tarreau
ad6722ea3a MINOR: protocol: move __protocol_by_family to read_mostly
This one is used for each outgoing connection and never changes after
boot, so move it to read_mostly.
2021-04-10 19:27:41 +02:00
Willy Tarreau
14015b8880 MINOR: server: move idle_conn_task to read_mostly
This pointer is used when adding connections to the idle list and is
never changed, so let's move it to the read_mostly section.
2021-04-10 19:27:41 +02:00
Willy Tarreau
56c3b8b4e8 MINOR: threads: mark all_threads_mask as read_mostly
This variable almost never changes and is read a lot in time-critical
sections. threads_want_rdv_mask is read very often as well in
thread_harmless_end() and is almost never changed (only when someone
uses thread_isolate()). Let's move both to read_mostly.
2021-04-10 19:27:41 +02:00
Willy Tarreau
ff88270ef9 MINOR: pool: move pool declarations to read_mostly
All pool heads are accessed via a pointer and should not be shared with
heavily written variables. Move them to the read_mostly section.
2021-04-10 19:27:41 +02:00
Willy Tarreau
8209c9aa18 MINOR: kqueue: move kqueue_fd to read_mostly
This one only contains the list of per-thread kqueue FDs, and is used
a lot during updates. Let's mark it read_mostly to avoid false sharing
of FDs placed at the extremities.
2021-04-10 19:27:41 +02:00
Willy Tarreau
26d212c744 MINOR: epoll: move epoll_fd to read_mostly
This one only contains the list of per-thread epoll FDs, and is used
a lot during updates. Let's mark it read_mostly to avoid false sharing
of FDs placed at the extremities.
2021-04-10 19:27:41 +02:00
Willy Tarreau
a1090a5b61 MINOR: fd: move a few read-mostly variables to their own section
Some pointers to arrays such as fdtab, fdinfo, polled_mask etc. are never
written to at run time but are used a lot. fdtab accesses appear a lot in
perf top because ha_used_fds is in the same cache line and is modified
all the time. This patch moves all these read-mostly variables to the
read_mostly section when defined. This way their cache lines will be
able to remain in shared state in all CPU caches.
2021-04-10 19:27:41 +02:00
Willy Tarreau
f459640ef6 MINOR: global: declare a read_mostly section
Some variables are mostly read (mostly pointers) but they tend to be
merged with other ones in the same cache line, slowing their access down
in multi-thread setups. This patch declares an empty, aligned variable
in a section called "read_mostly". This will force a cache-line alignment
on this section so that any variable declared in it will be certain to
avoid false sharing with other ones. The section will be eliminated at
link time if not used.

A __read_mostly attribute was added to compiler.h to ease use of this
section.
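
A simplified sketch of the idea (the real macro builds on the HA_SECTION()
helpers shown in the entries below, which also handle the OSX quirks):

  /* compiler.h (simplified): tag a variable to place it in the section */
  #define __read_mostly  __attribute__((section("read_mostly")))

  /* one empty, cache-line-aligned anchor forces the alignment of the
   * whole section (eliminated at link time if the section is unused)
   */
  static char read_mostly_align[0] __read_mostly __attribute__((aligned(64), used));

  /* example usage on a heavily-read, rarely-written global */
  struct fdtab *fdtab __read_mostly;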
2021-04-10 19:27:41 +02:00
Willy Tarreau
ba386f6a8d CLEANUP: initcall: rely on HA_SECTION_* instead of defining its own
Now initcalls are defined using the regular section definitions from
compiler.h in order to ease maintenance.
2021-04-10 19:27:41 +02:00
Willy Tarreau
5bec4c42ed MINOR: compiler: add macros to declare section names
HA_SECTION() is used as an attribute to force a section name. This is
required because OSX prepends "__DATA, " in front of the declaration.
HA_SECTION_START() and HA_SECTION_STOP() are used as post-attribute on
variable declaration to designate the section start/end (needed only on
OSX, empty on others).

For platforms with an obsolete linker, all macros are left empty. It would
possibly still work on some of them but this will not be needed anyway.
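
The resulting macros look roughly like this (a sketch; the OSX symbol
syntax in particular is as assumed here):

  #if defined(__APPLE__)
  #define HA_SECTION(s)        __attribute__((__section__("__DATA, " s)))
  #define HA_SECTION_START(s)  __asm("section$start$__DATA$" s)
  #define HA_SECTION_STOP(s)   __asm("section$end$__DATA$" s)
  #else
  /* ELF linkers already provide __start_<s>/__stop_<s> symbols */
  #define HA_SECTION(s)        __attribute__((__section__(s)))
  #define HA_SECTION_START(s)
  #define HA_SECTION_STOP(s)
  #endif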
2021-04-10 19:27:41 +02:00
Willy Tarreau
731f0c6502 CLEANUP: initcall: rename HA_SECTION to HA_INIT_SECTION
The HA_SECTION name is too generic and will be reused globally. Let's
rename this one.
2021-04-10 19:27:41 +02:00
Willy Tarreau
afa9bc0ec5 MINOR: initcall: uniformize the section names between MacOS and other unixes
Due to length restrictions on OSX the initcall sections are called "i_"
there while they're called "init_" on other OSes. However the start and
end of sections are still called "__start_init_" and "__stop_init_",
which forces to have distinct code between the OSes. Let's switch everyone
to "i_" and rename the symbols accordingly.
2021-04-10 19:27:41 +02:00
Willy Tarreau
ad14c2681b MINOR: trace: replace the trace() inline function with an equivalent macro
The trace() function is convenient to avoid calling trace() when traces
are not enabled, but some callers have started to place complex
expressions in their trace calls, which results in all of them being
evaluated before being passed as arguments to the trace() function. This
needlessly wastes precious CPU cycles.

Let's change the function into a macro, so that the arguments are now only
evaluated when the source has traces enabled. However, having a generic
macro called "trace()" can easily cause conflicts with innocent code, so
we rename it "_trace".
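
A minimal sketch of the macro-based approach (the exact enablement test
and argument list are assumptions, not the actual code):

  /* arguments are no longer evaluated unless the source is enabled */
  #define _trace(level, mask, src, args...)                             \
      do {                                                              \
          if (unlikely((src)->state != TRACE_STATE_STOPPED))            \
              trace(level, mask, src, ##args);                          \
      } while (0)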

Just doing this has resulted in a 2.5% increase of the HTTP/1 request rate.
2021-04-10 19:27:41 +02:00
Willy Tarreau
9057a0026e CLEANUP: pattern: make all pattern tables read-only
Interestingly, all arrays used to declare patterns were read-write even
though they are only hard-coded. Let's mark them const so that they move
from data to rodata and don't risk experiencing false sharing.
2021-04-10 17:49:41 +02:00
Tim Duesterhus
9dee2150f6 CI: travis: Drastically clean up .travis.yml
Now travis should only run on cron, on non-amd64, with a configuration that
only has the standard features enabled. This should reduce the number of
valuable build minutes consumed while providing as much value as possible.
2021-04-10 13:38:00 +02:00
Christopher Faulet
e2c65ba344 BUG/MINOR: mux-pt: Fix a possible UAF because of traces in mux_pt_io_cb
In mux_pt_io_cb(), if a connection error or a shutdown is detected, the mux
is destroyed. Thus we must be careful to not use it in a trace message once
destroyed.
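
The rule boils down to the following shape (names here are only
illustrative, not the actual patch):

  if (conn_error || shutdown_detected) {
      /* trace while the mux context is still valid... */
      TRACE_DEVEL("error/shutdown, destroying mux", PT_EV_CONN_END, conn);
      mux_pt_destroy(ctx);
      /* ...and never pass ctx (or anything reached through it) to a
       * trace once it has been destroyed.
       */
      return NULL;
  }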

No backport needed. This patch should fix the issue #1220.
2021-04-10 09:02:36 +02:00
Christopher Faulet
c0ae097b95 MINOR: mux-pt/trace: Register a new trace source with its events
As for the other muxes, traces are now supported in the pt mux. All parts of
the multiplexer are covered by these traces. Events are split by
category (connection, stream, rx and tx).

In traces, the first argument is always a connection. So it is easy to get
the mux context (conn->ctx). The second argument is always a conn-stream and
may be NULL. The third one is a buffer and it may also be NULL. Depending on
the context it is the request or the response. In all cases it is owned by a
channel. Finally, the fourth argument is an integer value. Its meaning
depends on the calling context.
2021-04-09 17:46:58 +02:00
Willy Tarreau
86512dd152 [RELEASE] Released version 2.4-dev16
Released version 2.4-dev16 with the following main changes :
    - CLEANUP: dev/flags: remove useless test in the stdin number parser
    - MINOR: No longer rely on deprecated sample fetches for predefined ACLs
    - MINOR: acl: Add HTTP_2.0 predefined macro
    - BUG/MINOR: hlua: Detect end of request when reading data for an HTTP applet
    - BUG/MINOR: tools: fix parsing "us" unit for timers
    - MINOR: server/bind: add support of new prefixes for addresses.
    - MINOR: log: register config file and line number on log servers.
    - MEDIUM: log: support tcp or stream addresses on log lines.
    - BUG/MEDIUM: log: fix config parse error logging on stdout/stderr or any raw fd
    - CLEANUP: fd: remove FD_POLL_DATA and FD_POLL_STICKY
    - MEDIUM: fd: prepare FD_POLL_* to move to bits 8-15
    - MEDIUM: fd: merge fdtab[].ev and state for FD_EV_* and FD_POLL_* into state
    - MINOR: fd: move .linger_risk into fdtab[].state
    - MINOR: fd: move .cloned into fdtab[].state
    - MINOR: fd: move .initialized into fdtab[].state
    - MINOR: fd: move .et_possible into fdtab[].state
    - MINOR: fd: move .exported into fdtab[].state
    - MINOR: fd: implement an exclusive syscall bit to remove the ugly "log" lock
    - MINOR: cli/show-fd: slightly reorganize the FD status flags
    - MINOR: atomic/arm64: detect and use builtins for the double-word CAS
    - CLEANUP: atomic: add an explicit _FETCH variant for add/sub/and/or
    - CLEANUP: atomic: make all standard add/or/and/sub operations return void
    - CLEANUP: atomic: add a fetch-and-xxx variant for common operations
    - CLEANUP: atomic: add HA_ATOMIC_INC/DEC for unit increments
    - CLEANUP: atomic/tree-wide: replace single increments/decrements with inc/dec
    - CLEANUP: atomic: use the __atomic variant of BTS/BTR on modern compilers
    - MINOR: atomic: implement native BTS/BTR for x86
    - MINOR: ist: Add `istappend(struct ist, char)`
    - MINOR: ist: Add `istshift(struct ist*)`
    - MINOR: ist: Add `istsplit(struct ist*, char)`
    - BUG/MAJOR: fd: switch temp values to uint in fd_stop_both()
    - MINOR: opentracing: register config file and line number on log servers
    - MEDIUM: resolvers: add support of tcp address on nameserver line.
    - MINOR: ist: Rename istappend() to __istappend()
    - CLEANUP: htx: Make http_get_stline take a `const struct`
    - CLEANUP: ist: Remove unused `count` argument from `ist2str*`
    - CLEANUP: Remove useless malloc() casts
2021-04-09 17:10:39 +02:00
Tim Duesterhus
403fd722ac CLEANUP: Remove useless malloc() casts
This is not C++.
2021-04-08 20:11:58 +02:00
Tim Duesterhus
fea59fcf79 CLEANUP: ist: Remove unused count argument from ist2str*
This argument is not being used inside the function (and the functions
themselves are unused as well) and not documented. Its purpose is not clear.
Just remove it.
2021-04-08 19:40:59 +02:00
Tim Duesterhus
b8ee894b66 CLEANUP: htx: Make http_get_stline take a const struct
Nothing is being modified there, so this can be `const`.
2021-04-08 19:40:59 +02:00
Tim Duesterhus
fbc2b79743 MINOR: ist: Rename istappend() to __istappend()
Indicate that this function is not inherently safe by adding two underscores as
a prefix.
2021-04-08 19:35:52 +02:00
Emeric Brun
c8f3e45c6a MEDIUM: resolvers: add support of tcp address on nameserver line.
This patch re-works configuration parsing: it removes the "server"
lines from "resolvers" sections introduced in commit 56fc5d9eb:
MEDIUM: resolvers: add supports of TCP nameservers in resolvers.

It also extends the nameserver lines to support stream server
addresses such as:

resolvers
  nameserver localhost tcp@127.0.0.1:53

In doing so, a part of the nameserver init code was factorized into the
function 'parse_resolvers' and removed from 'post_parse_resolvers'.
2021-04-08 14:20:40 +02:00
Miroslav Zagorac
98272253d8 MINOR: opentracing: register config file and line number on log servers
In commit 9533a7038 new parameters have been added to the declaration
of function parse_logsrv().

This patch should be backported to all branches where the OpenTracing
filter is located.
2021-04-08 11:10:27 +02:00
Willy Tarreau
1197459e0a BUG/MAJOR: fd: switch temp values to uint in fd_stop_both()
With latest commit f50906519 ("MEDIUM: fd: merge fdtab[].ev and state
for FD_EV_* and FD_POLL_* into state") one occurrence of a pair of
chars was missed in fd_stop_both(), resulting in the operation failing
if the upper flags were set. Interestingly it managed to fail
2 tests in all setups in the CI while all used to work fine on my
local machines. The reason is probably that the chars had enough
room above them for the CAS to fail then refill "old" overwriting the
upper parts of the stack, and that thanks to this the subsequent tests
worked. With ASAN being used on lots of tests, it very likely caught
it but used to only report failed tests with no more info.
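
For illustration, the shape of the fixed function (flag names here are
indicative; the real code carries a few more steps after the loop):

  static inline void fd_stop_both(int fd)
  {
      uint old, new;  /* previously 'char', which silently dropped the
                       * FD_POLL_* bits now living in the upper byte */

      old = fdtab[fd].state;
      do {
          if (!(old & (FD_EV_ACTIVE_R | FD_EV_ACTIVE_W)))
              return;
          new = old & ~(FD_EV_ACTIVE_R | FD_EV_ACTIVE_W);
      } while (!HA_ATOMIC_CAS(&fdtab[fd].state, &old, new));
  }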

No backport is needed, as this was never released nor backported.
2021-04-07 20:46:26 +02:00
Tim Duesterhus
8daf8dceb9 MINOR: ist: Add istsplit(struct ist*, char)
istsplit is a combination of iststop + istadv.
2021-04-07 19:50:43 +02:00
Tim Duesterhus
90aa8c7f02 MINOR: ist: Add istshift(struct ist*)
istshift() returns the first character and advances the ist by 1.
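
In other words, roughly (a sketch of the semantics; the real function
also guards against empty strings):

  static inline char istshift(struct ist *ist)
  {
      char c = *istptr(*ist);    /* first character */

      *ist = istadv(*ist, 1);    /* advance by one */
      return c;
  }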
2021-04-07 19:50:43 +02:00
Tim Duesterhus
551eeaec91 MINOR: ist: Add istappend(struct ist, char)
This function appends the given char to the given `ist` and returns
the resulting `ist`.
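
Roughly (a sketch assuming the usual { ptr, len } layout and that the
destination buffer has room for one more character):

  static inline struct ist istappend(struct ist dst, const char c)
  {
      dst.ptr[dst.len++] = c;  /* no bounds check, hence the __ prefix
                                * added by the later rename above */
      return dst;
  }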
2021-04-07 19:50:43 +02:00
Willy Tarreau
92c059c2ac MINOR: atomic: implement native BTS/BTR for x86
The current BTS/BTR operations on x86 are ugly because they rely on a
CAS, so they may be unfair and take time to converge. Fortunately,
where they are currently used (mostly FDs) the contention is expected
to be rare (mostly listeners). But this also limits their use to such
few low-load cases.

On x86 there is a set of BTS/BTR instructions which help for this,
but before the FD's state migrated to 32 bits there was little use of
them since they do not exist in 8 bits.

Now at least it makes sense to use them, at the very least in order
to significantly reduce the code size (one BTS instead of a CMPXCHG
loop). The implementation relies on modern gcc's ability to return
condition flags and limit code inflation and register spilling. The
fallback to the old implementation is retained for all other situations
(inappropriate target size or non-capable compiler). The code shrank
by 1.6 kB on the fast path.

As expected, for now there is no measurable performance difference on up
to 4 threads.
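
A simplified sketch of the 32-bit case (operand handling is an
assumption; the real macro covers several sizes and keeps the CAS-based
fallback):

  #define HA_ATOMIC_BTS32(val, bit) ({                                  \
      unsigned char __ret;                                              \
      __asm__ volatile("lock btsl %2, %0"                               \
                       : "+m" (*(val)), "=@ccc" (__ret)                 \
                       : "Ir" ((unsigned int)(bit)));                   \
      __ret; /* previous bit value, returned via the carry flag */      \
  })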
2021-04-07 18:47:22 +02:00
Willy Tarreau
fa68d2641b CLEANUP: atomic: use the __atomic variant of BTS/BTR on modern compilers
Probably as a result of an old copy-paste, HA_ATOMIC_BTS/BTR were
still implemented using the __sync_* builtins instead of the more
modern __atomic_* ones which allow specifying the memory model. Let's
update this to use the newer ones and also implement the relaxed variants
(which are not used for now).
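
For reference, the generic __atomic-based form is roughly (a sketch):

  /* set/clear bit <bit>, return its previous value */
  #define HA_ATOMIC_BTS(val, bit) ({                                       \
      typeof(*(val)) __b = (typeof(*(val)))1 << (bit);                     \
      (__atomic_fetch_or((val), __b, __ATOMIC_SEQ_CST) & __b) ? 1 : 0;     \
  })
  #define HA_ATOMIC_BTR(val, bit) ({                                       \
      typeof(*(val)) __b = (typeof(*(val)))1 << (bit);                     \
      (__atomic_fetch_and((val), ~__b, __ATOMIC_SEQ_CST) & __b) ? 1 : 0;   \
  })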
2021-04-07 18:18:37 +02:00
Willy Tarreau
4781b1521a CLEANUP: atomic/tree-wide: replace single increments/decrements with inc/dec
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
2021-04-07 18:18:37 +02:00
Willy Tarreau
22d675cb77 CLEANUP: atomic: add HA_ATOMIC_INC/DEC for unit increments
Most ADD/SUB callers use them for a single unit (e.g. refcounts) and
it's a pain to always pass ",1". Let's add them to simplify the API.
However, they currently don't return any value. If needed in the future,
it would be better to report zero/non-zero than a real value, for the sake
of efficiency at the instruction level.
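
In their simplest form these are just (a sketch of the obvious mapping):

  #define HA_ATOMIC_INC(val)  HA_ATOMIC_ADD((val), 1)
  #define HA_ATOMIC_DEC(val)  HA_ATOMIC_SUB((val), 1)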
2021-04-07 18:18:37 +02:00
Willy Tarreau
185157201c CLEANUP: atomic: add a fetch-and-xxx variant for common operations
The fetch_and_xxx variant is often missing for add/sub/and/or. In fact
it was only provided for ADD under the name XADD which corresponds to
the x86 instruction name. But for destructive operations like AND and
OR its absence is felt even more, as it's otherwise not possible to know
the value before modifying it.

This patch explicitly adds HA_ATOMIC_FETCH_{OR,AND,ADD,SUB} which
cover these standard operations, and renames XADD to FETCH_ADD (there
were only 6 call places).

In the future, backport of fixes involving such operations could simply
remap FETCH_ADD(x) to XADD(x), FETCH_SUB(x) to XADD(-x), and for the
OR/AND if needed, these could possibly be done using BTS/BTR.

It's worth noting that xchg could have been renamed to fetch_and_store()
but xchg already has well-understood semantics and there was no need to
go further.
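
With the gcc __atomic builtins the new macros map naturally to the
following (a sketch; __sync-based fallbacks are kept for older
compilers):

  /* return the value observed BEFORE the operation */
  #define HA_ATOMIC_FETCH_ADD(val, i)  __atomic_fetch_add((val), (i), __ATOMIC_SEQ_CST)
  #define HA_ATOMIC_FETCH_SUB(val, i)  __atomic_fetch_sub((val), (i), __ATOMIC_SEQ_CST)
  #define HA_ATOMIC_FETCH_AND(val, i)  __atomic_fetch_and((val), (i), __ATOMIC_SEQ_CST)
  #define HA_ATOMIC_FETCH_OR(val, i)   __atomic_fetch_or((val), (i), __ATOMIC_SEQ_CST)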
2021-04-07 18:18:37 +02:00
Willy Tarreau
a477150fd7 CLEANUP: atomic: make all standard add/or/and/sub operations return void
In order to make sure these ones will not be used anymore in an expression,
let's make them always void. New callers will now be forced to use the
explicit _FETCH variant if required.
2021-04-07 18:18:37 +02:00
Willy Tarreau
1db427399c CLEANUP: atomic: add an explicit _FETCH variant for add/sub/and/or
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
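
With the __atomic builtins these return the post-operation value,
roughly (a sketch):

  /* return the value observed AFTER the operation */
  #define HA_ATOMIC_ADD_FETCH(val, i)  __atomic_add_fetch((val), (i), __ATOMIC_SEQ_CST)
  #define HA_ATOMIC_SUB_FETCH(val, i)  __atomic_sub_fetch((val), (i), __ATOMIC_SEQ_CST)
  #define HA_ATOMIC_AND_FETCH(val, i)  __atomic_and_fetch((val), (i), __ATOMIC_SEQ_CST)
  #define HA_ATOMIC_OR_FETCH(val, i)   __atomic_or_fetch((val), (i), __ATOMIC_SEQ_CST)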
2021-04-07 18:18:37 +02:00
Willy Tarreau
6756d95a8e MINOR: atomic/arm64: detect and use builtins for the double-word CAS
Gcc 10.2 implements outline atomics on aarch64. These replace all inline
atomic ops with a function call that checks if the machine supports LSE
atomics. This comes with a small cost but allows modern machines to scale
much better than with the old LL/SC ones even when built for full 8.0
compatibility.

This patch enables the use of the __atomic_compare_exchange() builtin
for the double-word CAS when detected as available instead of using the
hand-written LL/SC version. The extra cost is negligible because we do
very few DWCAS operations (essentially FD migrations and shared pools),
but under high contention it can still be beneficial.
As expected no performance difference was measured in either direction
on 4-core machines with this change.

This could be backported to 2.3 if FD migrations were shown to represent
a significant source of contention, but for now it does
not appear to be needed.
2021-04-07 18:18:37 +02:00
Willy Tarreau
184b21259b MINOR: cli/show-fd: slightly reorganize the FD status flags
Slightly reorder the status flags to better match their order in the
"state" field, and also decode the "shut" state which is particularly
useful and already part of this field.
2021-04-07 18:18:37 +02:00