Commit Graph

5106 Commits

Willy Tarreau
12840be005 BUILD: compression: switch SLZ from out-of-tree to in-tree
Now that SLZ is merged, let's update the makefile and compression
files to use it. As a result, SLZ_INC and SLZ_LIB are neither defined
nor used anymore.

USE_SLZ is enabled by default ("USE_SLZ=default") and can be disabled
by passing "USE_SLZ=" or by enabling USE_ZLIB=1.

The doc was updated to reflect the changes.
2021-04-22 16:08:25 +02:00
Willy Tarreau
ab2b7828e2 IMPORT: slz: import slz into the tree
SLZ is rarely packaged by distros and there have been complaints about
the CPU and memory usage of ZLIB, leading to some suggestions to better
address the issue by simply integrating SLZ into the tree (just 3 files).
See discussions below:

   https://www.mail-archive.com/haproxy@formilux.org/msg38037.html
   https://www.mail-archive.com/haproxy@formilux.org/msg40079.html
   https://www.mail-archive.com/haproxy@formilux.org/msg40365.html

This patch does just this, after minor adjustments to these files:
  - tables.h was renamed to slz-tables.h
  - tables.h had the precomputed tables removed since not used here
  - slz.c includes <import/slz*> instead of "slz*.h"

The slz commit imported here was b06c172 ("slz: avoid a build warning
with -Wimplicit-fallthrough"). No other change was performed to either
SLZ or haproxy at this point, so that this operation may be replicated
if needed for a future version.
2021-04-22 15:50:41 +02:00
Maximilian Mader
ff3bb8b609 MINOR: uri_normalizer: Add a strip-dot normalizer
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory which renders those segments redundant.

See GitHub Issue #714.
2021-04-21 12:15:14 +02:00
Willy Tarreau
2b71810cb3 CLEANUP: lists/tree-wide: rename some list operations to avoid some confusion
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").

Let's use more explicit (but slightly longer) names now:

   LIST_ADD        ->       LIST_INSERT
   LIST_ADDQ       ->       LIST_APPEND
   LIST_ADDED      ->       LIST_INLIST
   LIST_DEL        ->       LIST_DELETE

The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage to use it instead of the
lazier LIST_DELETE which is often less safe.

The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.

The list doc was updated.
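
As an illustration (not part of the original commit message), a minimal
usage sketch of the new names, assuming the usual struct list embedding
and the LIST_HEAD_INIT()/LIST_INIT() helpers from <haproxy/list.h>:

    #include <haproxy/list.h>

    struct item {
        struct list list;                  /* list element embedded in the object */
        int value;
    };

    struct list queue = LIST_HEAD_INIT(queue);   /* empty list head */

    void enqueue(struct item *it)
    {
        LIST_APPEND(&queue, &it->list);    /* was LIST_ADDQ(): append at the tail */
    }

    void push_front(struct item *it)
    {
        LIST_INSERT(&queue, &it->list);    /* was LIST_ADD(): insert at the head */
    }

    void remove_item(struct item *it)
    {
        /* items are expected to have been LIST_INIT()ed before first use */
        if (LIST_INLIST(&it->list))        /* was LIST_ADDED(): still attached? */
            LIST_DEL_INIT(&it->list);      /* detach and reinitialize */
    }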
2021-04-21 09:20:17 +02:00
Willy Tarreau
942b89f7dc BUILD: pools: fix build with DEBUG_FAIL_ALLOC
Amaury noticed that I managed to break the build of DEBUG_FAIL_ALLOC
for the second time with 207c09509 ("MINOR: pools: move the fault
injector to __pool_alloc()"). The joy of endlessly reworking patch
sets... No backport is needed, that was in the just merged cleanup
series.
2021-04-19 18:36:48 +02:00
Willy Tarreau
096b6cf581 CLEANUP: pools: declare dummy pool functions to remove some ifdefs
By having a pair of dummy pool_get_from_cache() and pool_put_to_cache()
we can remove some ugly ifdefs, so let's do this. We've already done it
for the shared cache.
2021-04-19 15:24:33 +02:00
Willy Tarreau
b2a853d5f0 CLEANUP: pools: uninline pool_put_to_cache()
This function has become too big (251 bytes) and is now hurting
performance a lot, with up to 4% request rate being lost over the last
pool changes. Let's move it to pool.c as a regular function. Other
attempts were made to cut it in half but it's still inefficient. Doing
this results in saving ~90kB of object code, and even 112kB since the
pool changes, with code that is even slightly faster!

Conversely, pool_get_from_cache(), which remains half of this size, is
still faster inlined, likely in part due to the immediate use of the
returned pointer afterwards.
2021-04-19 15:24:33 +02:00
Willy Tarreau
43d4ed548f CLEANUP: pools: merge pool_{get_from,put_to}_local_caches with generic ones
Since pool_get_from_cache() and pool_put_to_cache() are now only wrappers
to the local cache versions which do all the work, let's merge them together
so that there is no more local-cache specific function.
2021-04-19 15:24:33 +02:00
Willy Tarreau
d56db11447 CLEANUP: pools: make the local cache allocator fall back to the shared cache
Now when pool_get_from_local_cache() fails, it automatically falls back
to pool_get_from_shared_cache(), which used to always be done in
pool_get_from_cache(). Thus now the API is simpler as we always allocate
and free from/to the local caches.
2021-04-19 15:24:33 +02:00
Willy Tarreau
fa19d20ac4 MEDIUM: pools: make pool_put_to_cache() always call pool_put_to_local_cache()
Till now it used to call it only if there were not too many objects in
the local cache, otherwise it would send the latest one directly to the
shared cache. Now it always sends to the local cache and it's up to the
local cache to free its oldest objects. From a cache freshness perspective
it's better this way since we always evict cold objects instead of hot
ones. From an API perspective it's better because it will help make the
shared cache invisible to the public API.
2021-04-19 15:24:33 +02:00
Willy Tarreau
147e1fa385 MINOR: pools: create unified pool_{get_from,put_to}_cache()
These two functions are now responsible for allocating directly from
the cache and releasing to the cache.

Now the pool_alloc() function simply does this:

    if cache enabled
       return pool_alloc_from_cache() if not NULL
    return pool_alloc_nocache() otherwise

and the pool_free() function does this:

    if cache enabled
       pool_put_to_cache()
    else
       pool_free_nocache()

For now this only introduces these two functions without changing anything
else, but the goal is to soon allow to make them implementation-specific.
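
As an illustration only, a simplified C sketch of that dispatch (not the
exact tree code; the guard and function names follow the commits in this
series):

    void *pool_alloc(struct pool_head *pool)
    {
        void *ptr = NULL;

    #ifdef CONFIG_HAP_POOLS
        ptr = pool_get_from_cache(pool);     /* local cache, then shared cache */
    #endif
        if (!ptr)
            ptr = pool_alloc_nocache(pool);  /* real allocation, no cache update */
        return ptr;
    }

    void pool_free(struct pool_head *pool, void *ptr)
    {
    #ifdef CONFIG_HAP_POOLS
        pool_put_to_cache(pool, ptr);        /* the cache evicts its own objects */
    #else
        pool_free_nocache(pool, ptr);        /* return straight to the OS */
    #endif
    }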
2021-04-19 15:24:33 +02:00
Willy Tarreau
b8498e961a MEDIUM: pools: make CONFIG_HAP_POOLS control both local and shared pools
Continuing the unification of local and shared pools, now the usage of
pools is governed by CONFIG_HAP_POOLS without which allocations and
releases are performed directly from the OS using pool_alloc_nocache()
and pool_free_nocache().
2021-04-19 15:24:33 +02:00
Willy Tarreau
45e4e28161 MINOR: pools: factor the release code into pool_put_to_os()
There are two levels of freeing to the OS:
  - code that wants to keep the pool's usage counters updated uses
    pool_free_area() and handles the counters itself. That's what
    pool_put_to_shared_cache() does in the no-global-pools case.
  - code that does not want to update the counters because they were
    already updated only calls pool_free_area().

Let's extract these calls to establish the symmetry with pool_get_from_os()
and pool_alloc_nocache(), resulting in pool_put_to_os() (which only updates
the allocated counter) and pool_free_nocache() (which also updates the used
counter). This will later allow to simplify the generic code.
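
A hedged sketch of the resulting split (simplified; the exact size
accounting and atomic helpers used in the tree may differ):

    void pool_put_to_os(struct pool_head *pool, void *ptr)
    {
        pool_free_area(ptr, pool->size + POOL_EXTRA); /* hand the area back to the OS */
        HA_ATOMIC_DEC(&pool->allocated);              /* only the allocated counter */
    }

    void pool_free_nocache(struct pool_head *pool, void *ptr)
    {
        HA_ATOMIC_DEC(&pool->used);                   /* also drop the use count */
        pool_put_to_os(pool, ptr);
    }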
2021-04-19 15:24:33 +02:00
Willy Tarreau
acf0c54491 MINOR: pools: move pool_free_area() out of the lock in the locked version
Calling pool_free_area() inside a lock in pool_put_to_shared_cache() is
a very bad idea. Fortunately this only happens on the lowest-end platforms,
which almost never use threads, or use them in very small counts.

This change consists in zeroing the pointer once already released to the
cache in the first test so that the second stage knows if it needs to
pass it to the OS or not. This has slightly reduced the length of the
2021-04-19 15:24:33 +02:00
Willy Tarreau
2b5579f6da MINOR: pools: always use atomic ops to maintain counters
A part of the code cannot be factored out because it still uses non-atomic
inc/dec for pool->used and pool->allocated as these are located under the
pool's lock. While it can make sense in terms of bus cycles, it does not
make sense in terms of code normalization. Further, some operations were
still performed under a lock that could be totally removed via the use of
atomic ops.

There is still one occurrence in pool_put_to_shared_cache() in the locked
code where pool_free_area() is called under the lock, which must absolutely
be fixed.
2021-04-19 15:24:33 +02:00
Willy Tarreau
13843641e5 MINOR: pools: split the OS-based allocator in two
Now there's one part dealing with the allocation itself and keeping
counters up to date, and another one on top of it to return such an
allocated pointer to the user and update the use count and stats.

This is in anticipation for being able to group cache-related parts.
The release code is still done at once.
2021-04-19 15:24:33 +02:00
Willy Tarreau
207c095098 MINOR: pools: move the fault injector to __pool_alloc()
Till now it was limited to objects allocated from the OS which means
it had little use as soon as pools were enabled. Let's move it higher
up in the layers so that any code can benefit from fault injection. In
addition this allows passing a new flag POOL_F_NO_FAIL to disable it
if some callers prefer a no-failure approach.
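
A hedged sketch of where the injector now sits (simplified, assuming
__pool_alloc() takes a flags argument as described above):

    void *__pool_alloc(struct pool_head *pool, unsigned int flags)
    {
    #ifdef DEBUG_FAIL_ALLOC
        /* any caller may now hit fault injection, unless it opted out */
        if (unlikely(!(flags & POOL_F_NO_FAIL) && mem_should_fail(pool)))
            return NULL;
    #endif
        return pool_alloc_nocache(pool);   /* placeholder for the regular path */
    }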
2021-04-19 15:24:33 +02:00
Willy Tarreau
84ebfabf7f MINOR: tools: add statistical_prng_range() to get a random number over a range
This is simply a multiply and shift from statistical_prng() but it's made
easily accessible.
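
For illustration, a self-contained sketch of the multiply-and-shift trick
(prng_range_sketch() and its rnd argument are stand-ins, not the tree's code):

    #include <stdint.h>

    /* scale a uniform 32-bit random value down to [0, range) with a single
     * multiply and shift instead of a modulo */
    static inline unsigned int prng_range_sketch(uint32_t rnd, unsigned int range)
    {
        return ((uint64_t)rnd * range) >> 32;
    }

    /* e.g. prng_range_sketch(statistical_prng(), 10) yields a value in 0..9 */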
2021-04-19 15:24:33 +02:00
Willy Tarreau
635cced32f CLEANUP: pools: rename __pool_free() to pool_put_to_shared_cache()
Now the multi-level cache becomes more visible:

    pool_get_from_local_cache()
    pool_put_to_local_cache()
    pool_get_from_shared_cache()
    pool_put_to_shared_cache()
2021-04-19 15:24:33 +02:00
Willy Tarreau
8c77ee5ae5 CLEANUP: pools: rename pool_*_{from,to}_cache() to *_local_cache()
The functions were rightfully called from/to_cache when the thread-local
cache was considered as the only cache, but this is getting terribly
confusing. Let's call them from/to local_cache to make it clear that
it is not related with the shared cache.

As a side note, since pool_evict_from_cache() did not work on a
particular pool but on all of them at once, it was renamed to
pool_evict_from_local_caches() (plural form).
2021-04-19 15:24:33 +02:00
Willy Tarreau
2f03dcde91 CLEANUP: pools: rename __pool_get_first() to pool_get_from_shared_cache()
This is exactly what it is, the entry is retrieved from the shared
cache when it is defined. The implementation that is enabled with
CONFIG_HAP_NO_GLOBAL_POOLS continues to return NULL.
2021-04-19 15:24:33 +02:00
Willy Tarreau
2543211830 CLEANUP: pools: move the lock to the only __pool_get_first() that needs it
Now that __pool_alloc() only surrounds __pool_get_first() with the lock,
let's move it to the only variant that requires it and remove the ugly
ifdefs from the function. This is safe because nobody else calls this
function.
2021-04-19 15:24:33 +02:00
Willy Tarreau
8ee9df57db MINOR: pools: call pool_alloc_nocache() out of the pool's lock
In __pool_alloc(), historically we used to factor out the
pool's lock between __pool_get_first() and __pool_refill_alloc(),
resulting in real malloc() or mmap() calls being performed under
the pool lock (for platforms using the locked shared pools).

As this is not needed anymore, let's move the call out of the
lock, it may improve allocation patterns on some platforms. This
also makes __pool_alloc() cleaner as we see a first attempt to
allocate from the local cache, then a second from the shared
cache, then a real allocation.
2021-04-19 15:24:33 +02:00
Willy Tarreau
8fe726f118 CLEANUP: pools: re-merge pool_refill_alloc() and __pool_refill_alloc()
They were strictly equivalent, let's remerge them and rename them to
pool_alloc_nocache() as it's the call which performs a real allocation
which does not check nor update the cache. The only difference in the
past was the former taking the lock and not the latter, but now the lock
is not needed anymore at this stage since the pool's list is not touched.

In addition, given that the "avail" argument is no longer used by the
function nor by its callers, let's drop it.
2021-04-19 15:24:33 +02:00
Willy Tarreau
64383b8181 MINOR: pools: make the basic pool_refill_alloc()/pool_free() update needed_avg
This is a first step towards unifying all the fallback code. Right now
these two functions are the only ones which do not update the needed_avg
rate counter since there's currently no shared pool kept when using them.
But their code is similar to what could be used everywhere except for
this one, so let's make them capable of maintaining usage statistics.

As a side effect the needed field in "show pools" will now be populated.
2021-04-19 15:24:33 +02:00
Willy Tarreau
53a7fe49aa MINOR: pools: enable the fault injector in all allocation modes
The mem_should_fail() call enabled by DEBUG_FAIL_ALLOC used to be placed
only in the no-cache version of the allocator. Now we can generalize it
to all modes and remove the exclusive test on CONFIG_HAP_NO_GLOBAL_POOLS.
2021-04-19 15:24:33 +02:00
Willy Tarreau
2d6f628d34 MINOR: pools: rename CONFIG_HAP_LOCAL_POOLS to CONFIG_HAP_POOLS
We're going to make the local pool always present unless pools are
completely disabled. This means that pools are always enabled by
default, regardless of the use of threads. Let's drop this notion
of "local" pools and make it just "pool". The equivalent debug
option becomes DEBUG_NO_POOLS instead of DEBUG_NO_LOCAL_POOLS.

For now this changes nothing except the option and dropping the
dependency on USE_THREAD.
2021-04-19 15:24:33 +02:00
Willy Tarreau
d5140e7c6f MINOR: pool: remove the size field from pool_cache_head
Everywhere we have access to the pool so we don't need to cache a copy
of the pool's size into the pool_cache_head. Let's remove it.
2021-04-19 15:24:33 +02:00
Willy Tarreau
9f3129e583 MEDIUM: pools: move the cache into the pool header
Initially per-thread pool caches were stored into a fixed-size array.
But this was a bit ugly because the last allocated pools were not able
to benefit from the cache at all. As a workaround to preserve
performance, a size of 64 cacheable pools was set by default (there
are 51 pools at the moment, excluding any addon and debugging code),
so all in-tree pools were covered, at the expense of higher memory
usage.

In addition an index had to be calculated for each pool, and was used
to access the pool cache head in that array. The pool index was not
even stored in the pools, so it had to be recomputed to access
the cache even when the pool was already known.

This patch changes this by moving the pool cache head into the pool
head itself. This way it is certain that each pool will have its own
cache. This removes the need for index calculation.

The pool cache head is 32 bytes long so it was aligned to 64B to avoid
false sharing between threads. The extra cost is not huge (~2kB more
per pool than before), and we'll make better use of that space soon.
The pool cache head contains the size, which should probably be removed
since it's already in the pool's head.
2021-04-19 15:24:33 +02:00
Willy Tarreau
fff96b441f CLEANUP: pools: remove unused arguments to pool_evict_from_cache()
In commit fb117e6a8 ("MEDIUM: memory: don't let pool_put_to_cache() free
the objects itself"), pool_evict_from_cache() was introduced without
needing any argument, yet the only call place passes it the pool, the
pointer and the index number!

Let's remove these as they even let the reader think that the function
does something specific to the current pool while it's not the case.
2021-04-19 15:24:33 +02:00
Tim Duesterhus
5be6ab269e MEDIUM: http_act: Rename uri-normalizers
This patch renames all existing uri-normalizers into a more consistent naming
scheme:

1. The part of the URI that is being touched.
2. The modification being performed as an explicit verb.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
a407193376 MINOR: uri_normalizer: Add a percent-upper normalizer
This normalizer uppercases the hexadecimal characters used in percent-encoding.

See GitHub Issue #714.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
d7b89be30a MINOR: uri_normalizer: Add a sort-query normalizer
This normalizer sorts the `&` delimited query parameters by parameter name.

See GitHub Issue #714.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
560e1a6352 MINOR: uri_normalizer: Add support for suppressing leading ../ for dotdot normalizer
This adds an option to suppress `../` at the start of the resulting path.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
9982fc2bbd MINOR: uri_normalizer: Add a dotdot normalizer to http-request normalize-uri
This normalizer merges `../` path segments with the preceding segment, removing
both the preceding segment and the `../`.

Empty segments do not receive special treatment. The `merge-slashes` normalizer
should be executed first.

See GitHub Issue #714.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
d371e99d1c MINOR: uri_normalizer: Add a merge-slashes normalizer to http-request normalize-uri
This normalizer merges adjacent slashes into a single slash, thus removing
empty path segments.

See GitHub Issue #714.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
d2bedcc4ab MINOR: uri_normalizer: Add http-request normalize-uri
This patch adds the `http-request normalize-uri` action that was requested in
GitHub issue #714.

Normalizers will be added in the next patches.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
0ee1ad5675 MINOR: uri_normalizer: Add enum uri_normalizer_err
This enum will serve as the return type for each normalizer.
2021-04-19 09:05:57 +02:00
Tim Duesterhus
dbd25c34de MINOR: uri_normalizer: Add uri_normalizer module
This is in preparation for future patches.
2021-04-19 09:05:57 +02:00
Christopher Faulet
76b44195c9 MINOR: threads: Only consider running threads to end a thread harmless period
When a thread ends its harmless period, we must only consider running
threads when testing the threads_want_rdv_mask mask. To do so, we reintroduce
the all_threads_mask mask in the bitwise operation (it was removed to fix a
deadlock).

Note that for now it is useless because there is no way to stop threads or
to have threads reserved for another task. But it is safer this way to avoid
bugs in the future.
2021-04-17 11:14:58 +02:00
Christopher Faulet
f63a185500 BUG/MEDIUM: threads: Ignore current thread to end its harmless period
A previous patch was pushed to fix a deadlock when an isolated thread ends
its harmless period (a9a9e9aac ["BUG/MEDIUM: thread: Fix a deadlock if an
isolated thread is marked as harmless"]). But, unfortunately, the fix is
incomplete. The same must be done in the outer loop, in
thread_harmless_end() function. The current thread must be ignored when
threads_want_rdv_mask mask is tested.

This patch must also be backported as far as 2.0.
2021-04-17 11:14:58 +02:00
Alex
41007a6835 MINOR: sample: converter: Add mjson library.
This library is required for the subsequent patch which adds
the JSON query possibility.

It is necessary to change the include statement in "src/mjson.c"
because the imported includes in haproxy are in "include/import"

orig: #include "mjson.h"
new:  #include <import/mjson.h>
2021-04-15 17:05:38 +02:00
Tim Duesterhus
763342646f MINOR: ist: Add istclear(struct ist*)
istclear allows one to easily reset an ist to zero-size, while preserving the
previous size, indicating the length of the underlying buffer.
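
A plausible reading of that behaviour, as a hedged sketch (names and the
return type are illustrative, assuming the usual struct ist from
<import/ist.h>):

    static inline size_t istclear_sketch(struct ist *i)
    {
        size_t prev = i->len;  /* previous size: how much of the buffer was filled */
        i->len = 0;            /* the ist is now empty, the storage itself is kept */
        return prev;
    }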
2021-04-14 19:49:33 +02:00
Moemen MHEDHBI
92f7d43c5d MINOR: sample: add ub64dec and ub64enc converters
ub64dec and ub64enc are the base64url equivalent of b64dec and base64
converters. base64url encoding is the "URL and Filename Safe Alphabet"
variant of base64 encoding. It is also used in the JWT (JSON Web Token)
standard.
The RFC1421 mention in the base64.c file is deprecated, so it was replaced
with RFC4648, to which the existing converters, base64/b64dec, still apply.

Example:
  HAProxy:
    http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
  Client:
    TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
    $ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
    {"user":"foo","key":"chae6AhXai6e"}
2021-04-13 17:28:13 +02:00
Christopher Faulet
0c6d1dcf7d BUG/MINOR: listener: Handle allocation error when allocating a new bind_conf
Allocation errors are now handled in the bind_conf_alloc() functions. Thus
callers, when not already done, are also updated to catch a NULL return value.

This patch may be backported (at least partially) to all stable
versions. However, it only fixes errors during configuration parsing. Thus it
is not mandatory.
2021-04-12 21:33:43 +02:00
Christopher Faulet
147b8c919c MINOR: checks/trace: Register a new trace source with its events
Add the trace support for the checks. Only tcp-check based health-checks are
supported, including the agent-check.

In traces, the first argument is always a check object. So it is easy to get
all info related to the check: the tcp-check ruleset, the conn-stream and
the connection, the server state...
2021-04-12 12:09:36 +02:00
Christopher Faulet
6d80b63e3c MINOR: trace: Add the checks as a possible trace source
To be able to add the trace support for the checks, a new kind of source
must be added for this purpose.
2021-04-12 12:09:36 +02:00
Willy Tarreau
7b1425a91b MINOR: atomic: reimplement the relaxed version of x86 BTS/BTR
Olivier spotted that I messed up during a rebase of commit 92c059c2a
("MINOR: atomic: implement native BTS/BTR for x86"), losing the x86
version of the BTS/BTR and leaving the generic version for it instead
of having this block in the #else. Since this variant is not used for
now it was easy to overlook it. Let's re-implement it here.
2021-04-12 10:01:44 +02:00
Willy Tarreau
c4c80fb4ea MINOR: time: move the time initialization out of tv_update_date()
The time initialization was made a bit complex because we rely on a
dummy negative argument to reset all fields, leaving no distinction
between process-level initialization and thread-level initialization.
This patch changes this by introducing two functions, one for the
process and the second one for the threads. This removes ambigous
test and makes sure that the relevant fields are always initialized
exactly once. This also offers a better solution to the bug fixed in
commit b48e7c001 ("BUG/MEDIUM: time: make sure to always initialize
the global tick") as there is no more special values for global_now_ms.

It's simple enough to be backported if any other time-related issues
are encountered in stable versions in the future.
2021-04-11 23:45:48 +02:00
Willy Tarreau
61c72c366e CLEANUP: time: remove the now unused ms_left_scaled
It was only used by freq_ctr and is not used anymore. In addition the
local curr_sec_ms was removed, as well as the equivalent extern
definitions which did not exist anymore either.
2021-04-11 14:01:53 +02:00
Willy Tarreau
d46ed5c26b MINOR: freq_ctr: simplify and improve the update function
update_freq_ctr_period() was still not very clean and didn't wait for
the rotation lock to be dropped before trying again, thus maintaining
the contention at a high level. In addition, the rotation update was
made in three steps, which are not very efficient in terms of bus
cycles.

Here the wait loop was reworked so that the fast path remains short
and that the contended path waits for the lock to be dropped before
attempting another write, but it only waits a relax cycle before
attempting a read. The rotation block was simplified to remove a
test that was already validated by the first loop, and so that the
retrieval of the current period, its reset and its increment are all
performed in a single atomic op and the store to the previous period
is performed immediately after.

All this results in significantly smaller code for the inline function
(~1kB total) and a shorter critical path.
2021-04-11 14:01:53 +02:00
Willy Tarreau
6339c19cac MINOR: freq_ctr: add cpu_relax in the rotation loop of update_freq_ctr_period()
When counters are rotated, there is contention between the threads which
can slow down the operation of the thread performing the rotation. Let's
apply a cpu_relax there to let the first thread finish faster.
2021-04-11 11:12:57 +02:00
Willy Tarreau
fc6323ad82 MEDIUM: freq_ctr: replace the per-second counters with the generic ones
It remains cumbersome to preserve two versions of the freq counters and
two different internal clocks just for this. In addition, the savings
from using two different mechanisms are not that important as the only
saving is a divide that is replaced by a multiply, but now thanks to
the freq_ctr_total() unification the code could also be simplified to
optimize it in case of constants.

This patch turns all non-period freq_ctr functions to static inlines
which call the period-based ones with a period of 1 second. A direct
benefit is that a single internal clock is now needed for any counter
and that they now all rely on ticks.

These 1-second counters are essentially used to report request rates
and to enforce a connection rate limitation in listeners. It was
verified that these continue to work like before.
2021-04-11 11:12:55 +02:00
Willy Tarreau
fa1258f02c MINOR: freq_ctr: unify freq_ctr and freq_ctr_period into freq_ctr
Both structures are identical except the name of the field starting
the period and its description. Let's call them all freq_ctr and the
period's start "curr_tick" which is generic.

This is only a temporary change and fields are expected to remain
the same with no code change (verified).
2021-04-11 11:11:27 +02:00
Willy Tarreau
d209c87142 MINOR: freq_ctr: add the missing next_event_delay_period()
There was still no function to compute a wait time for periods, let's
implement it on top of freq_ctr_total() as we'll soon need it for the
per-second one. The divide here is applied on the frequency so that it
will be replaced with a reciprocal multiply when constant.
2021-04-11 11:11:03 +02:00
Willy Tarreau
607be24a85 MEDIUM: freq_ctr: reimplement freq_ctr_remain_period() from freq_ctr_total()
Now the function becomes an inline one and only contains a divide and
a max. The divide will automatically go away with constant periods.
2021-04-11 11:11:03 +02:00
Willy Tarreau
a7a31b2602 MEDIUM: freq_ctr: make read_freq_ctr_period() use freq_ctr_total()
This one is the easiest to implement, it just requires a call and a
divide of the result. Anti-flapping correction for low-rates was
preserved.

Now calls using a constant period will be able to use a reciprocal
multiply for the period instead of a divide.
2021-04-11 11:11:03 +02:00
Willy Tarreau
f3a9f8dc5a MINOR: freq_ctr: add a generic function to report the total value
Most of the functions designed to read a counter over a period go through
the same complex loop and only differ in the way they use the returned
values, so it was worth implementing all this into freq_ctr_total() which
returns the total number of events over a period so that the caller can
finish its operation using a divide or a remaining time calculation. As
a special case, read_freq_ctr_period() doesn't take pending events but
requires to enable an anti-flapping correction at very low frequencies.
Thus the function implements it when pend<0.

Thanks to this function it will be possible to reimplement the other ones
as inline and merge the per-second ones with the arbitrary period ones
without always adding the cost of a 64 bit divide.
2021-04-11 11:10:57 +02:00
Willy Tarreau
ff88270ef9 MINOR: pool: move pool declarations to read_mostly
All pool heads are accessed via a pointer and should not be shared with
highly written variables. Move them to the read_mostly section.
2021-04-10 19:27:41 +02:00
Willy Tarreau
f459640ef6 MINOR: global: declare a read_mostly section
Some variables are mostly read (mostly pointers) but they tend to be
merged with other ones in the same cache line, slowing their access down
in multi-thread setups. This patch declares an empty, aligned variable
in a section called "read_mostly". This will force a cache-line alignment
on this section so that any variable declared in it will be certain to
avoid false sharing with other ones. The section will be eliminated at
link time if not used.

A __read_mostly attribute was added to compiler.h to ease use of this
section.
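
For illustration, a declaration using it (the variable name is made up):

    /* mostly-read global: placed in its own cache-line-aligned section so it
     * does not share a line with frequently written data */
    static struct pool_head *pool_head_example __read_mostly;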
2021-04-10 19:27:41 +02:00
Willy Tarreau
ba386f6a8d CLEANUP: initcall: rely on HA_SECTION_* instead of defining its own
Now initcalls are defined using the regular section definitions from
compiler.h in order to ease maintenance.
2021-04-10 19:27:41 +02:00
Willy Tarreau
5bec4c42ed MINOR: compiler: add macros to declare section names
HA_SECTION() is used as an attribute to force a section name. This is
required because OSX prepends "__DATA, " in front of the declaration.
HA_SECTION_START() and HA_SECTION_STOP() are used as post-attribute on
variable declaration to designate the section start/end (needed only on
OSX, empty on others).

For platforms with an obsolete linker, all macros are left empty. It would
possibly still work on some of them but this will not be needed anyway.
2021-04-10 19:27:41 +02:00
Willy Tarreau
731f0c6502 CLEANUP: initcall: rename HA_SECTION to HA_INIT_SECTION
The HA_SECTION name is too generic and will be reused globally. Let's
rename this one.
2021-04-10 19:27:41 +02:00
Willy Tarreau
afa9bc0ec5 MINOR: initcall: uniformize the section names between MacOS and other unixes
Due to length restrictions on OSX the initcall sections are called "i_"
there while they're called "init_" on other OSes. However the start and
end of sections are still called "__start_init_" and "__stop_init_",
which forces to have distinct code between the OSes. Let's switch everyone
to "i_" and rename the symbols accordingly.
2021-04-10 19:27:41 +02:00
Willy Tarreau
ad14c2681b MINOR: trace: replace the trace() inline function with an equivalent macro
The trace() function is convenient to avoid calling trace() when traces
are not enabled, but some callers have started placing complex
expressions in their trace calls, which results in all of them being
evaluated before being passed as arguments to the trace() function. This
needlessly wastes precious CPU cycles.

Let's change the function for a macro, so that the arguments are now only
evaluated when the source has traces enabled. However having a generic
macro being called "trace()" can easily cause conflicts with innocent
code so we rename it "_trace".

Just doing this has resulted in a 2.5% increase of the HTTP/1 request rate.
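
A simplified sketch of the technique (the enablement check shown,
trace_source_enabled(), is hypothetical, not the exact one from the trace
code): because the test happens inside the macro body, the argument
expressions are only evaluated once it passes.

    #define _trace(src, ...) do {                                            \
        if (unlikely(trace_source_enabled(src)))  /* hypothetical check */   \
            trace(__VA_ARGS__);             /* args evaluated only here */   \
    } while (0)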
2021-04-10 19:27:41 +02:00
Willy Tarreau
9057a0026e CLEANUP: pattern: make all pattern tables read-only
Interestingly, all arrays used to declare patterns were read-write even
though they are only hard-coded. Let's mark them const so that they move
from data to rodata and don't risk experiencing false sharing.
2021-04-10 17:49:41 +02:00
Tim Duesterhus
403fd722ac CLEANUP: Remove useless malloc() casts
This is not C++.
2021-04-08 20:11:58 +02:00
Tim Duesterhus
fea59fcf79 CLEANUP: ist: Remove unused count argument from ist2str*
This argument is not being used inside the function (and the functions
themselves are unused as well) and not documented. Its purpose is not clear.
Just remove it.
2021-04-08 19:40:59 +02:00
Tim Duesterhus
b8ee894b66 CLEANUP: htx: Make http_get_stline take a const struct
Nothing is being modified there, so this can be `const`.
2021-04-08 19:40:59 +02:00
Tim Duesterhus
fbc2b79743 MINOR: ist: Rename istappend() to __istappend()
Indicate that this function is not inherently safe by adding two underscores as
a prefix.
2021-04-08 19:35:52 +02:00
Willy Tarreau
1197459e0a BUG/MAJOR: fd: switch temp values to uint in fd_stop_both()
With latest commit f50906519 ("MEDIUM: fd: merge fdtab[].ev and state
for FD_EV_* and FD_POLL_* into state") one occurrence of a pair of
chars was missed in fd_stop_both(), resulting in the operation
failing if the upper flags were set. Interestingly it managed to fail
2 tests in all setups in the CI while all used to work fine on my
local machines. Probably the reason is that the chars had enough
room above them for the CAS to fail then refill "old" overwriting the
upper parts of the stack, and that thanks to this the subsequent tests
worked. With ASAN being used on lots of tests, it very likely caught
it but used to only report failed tests with no more info.

No backport is needed, as this was never released nor backported.
2021-04-07 20:46:26 +02:00
Tim Duesterhus
8daf8dceb9 MINOR: ist: Add istsplit(struct ist*, char)
istsplit is a combination of iststop + istadv.
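
A hedged usage sketch (assuming the behaviour described above: stop at the
delimiter, then advance past it):

    struct ist line = ist("key=value");
    struct ist key  = istsplit(&line, '=');
    /* key now covers "key", line was advanced to "value" */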
2021-04-07 19:50:43 +02:00
Tim Duesterhus
90aa8c7f02 MINOR: ist: Add istshift(struct ist*)
istshift() returns the first character and advances the ist by 1.
2021-04-07 19:50:43 +02:00
Tim Duesterhus
551eeaec91 MINOR: ist: Add istappend(struct ist, char)
This function appends the given char to the given `ist` and returns
the resulting `ist`.
2021-04-07 19:50:43 +02:00
Willy Tarreau
92c059c2ac MINOR: atomic: implement native BTS/BTR for x86
The current BTS/BTR operations on x86 are ugly because they rely on a
CAS, so they may be unfair and take time to converge. Fortunately,
where they are currently used (mostly FDs) the contention is expected
to be rare (mostly listeners). But this also limits their use to such
few low-load cases.

On x86 there is a set of BTS/BTR instructions which help for this,
but before the FD's state migrated to 32 bits there was little use of
them since they do not exist in 8 bits.

Now at least it makes sense to use them, at the very least in order
to significantly reduce the code size (one BTS instead of a CMPXCHG
loop). The implementation relies on modern gcc's ability to return
condition flags and limit code inflation and register spilling. The
fallback to the old implementation is retained for all other situations
(inappropriate target size or non-capable compiler). The code shrank
by 1.6 kB on the fast path.

As expected, for now on up to 4 threads there is no measurable difference
in performance.
2021-04-07 18:47:22 +02:00
Willy Tarreau
fa68d2641b CLEANUP: atomic: use the __atomic variant of BTS/BTR on modern compilers
Probably as the result of an old copy-paste, HA_ATOMIC_BTS/BTR were
still implemented using the __sync_* builtins instead of the more
modern __atomic_* which allow specifying the memory model. Let's update
this to use the newer ones and also implement the relaxed variants
(which are not used for now).
2021-04-07 18:18:37 +02:00
Willy Tarreau
4781b1521a CLEANUP: atomic/tree-wide: replace single increments/decrements with inc/dec
This patch replaces roughly all occurrences of an HA_ATOMIC_ADD(&foo, 1)
or HA_ATOMIC_SUB(&foo, 1) with the equivalent HA_ATOMIC_INC(&foo) and
HA_ATOMIC_DEC(&foo) respectively. These are 507 changes over 45 files.
2021-04-07 18:18:37 +02:00
Willy Tarreau
22d675cb77 CLEANUP: atomic: add HA_ATOMIC_INC/DEC for unit increments
Most ADD/SUB callers use them for a single unit (e.g. refcounts) and
it's a pain to always pass ",1". Let's add them to simplify the API.
However we currently don't add any return value. If needed in the future,
it would be better to report zero/non-zero than a real value for the sake of efficiency
at the instruction level.
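
For illustration (refcnt is a hypothetical counter):

    /* bumping and dropping a reference count */
    void take_ref(unsigned int *refcnt)
    {
        HA_ATOMIC_INC(refcnt);   /* was: HA_ATOMIC_ADD(refcnt, 1) */
    }

    void drop_ref(unsigned int *refcnt)
    {
        HA_ATOMIC_DEC(refcnt);   /* was: HA_ATOMIC_SUB(refcnt, 1) */
    }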
2021-04-07 18:18:37 +02:00
Willy Tarreau
185157201c CLEANUP: atomic: add a fetch-and-xxx variant for common operations
The fetch_and_xxx variant is often missing for add/sub/and/or. In fact
it was only provided for ADD under the name XADD which corresponds to
the x86 instruction name. But for destructive operations like AND and
OR it's missing even more as it's not possible to know the value before
modifying it.

This patch explicitly adds HA_ATOMIC_FETCH_{OR,AND,ADD,SUB} which
cover these standard operations, and renames XADD to FETCH_ADD (there
were only 6 call places).

In the future, backport of fixes involving such operations could simply
remap FETCH_ADD(x) to XADD(x), FETCH_SUB(x) to XADD(-x), and for the
OR/AND if needed, these could possibly be done using BTS/BTR.

It's worth noting that xchg could have been renamed to fetch_and_store()
but xchg already has well understood semantics and it wasn't needed to
go further.
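
An illustrative comparison with the _FETCH variants added in the same
series (x is a hypothetical counter starting at 5):

    unsigned int x = 5, before, after;

    before = HA_ATOMIC_FETCH_ADD(&x, 1);  /* returns 5 (the value before), x becomes 6 */
    after  = HA_ATOMIC_ADD_FETCH(&x, 1);  /* x becomes 7, returns 7 (the value after)  */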
2021-04-07 18:18:37 +02:00
Willy Tarreau
a477150fd7 CLEANUP: atomic: make all standard add/or/and/sub operations return void
In order to make sure these ones will not be used anymore in an expression,
let's make them always void. New callers will now be forced to use the
explicit _FETCH variant if required.
2021-04-07 18:18:37 +02:00
Willy Tarreau
1db427399c CLEANUP: atomic: add an explicit _FETCH variant for add/sub/and/or
Currently our atomic ops return a value but it's never known whether
the fetch is done before or after the operation, which causes some
confusion each time the value is desired. Let's create an explicit
variant of these operations suffixed with _FETCH to explicitly mention
that the fetch occurs after the operation, and make use of it at the
few call places.
2021-04-07 18:18:37 +02:00
Willy Tarreau
6756d95a8e MINOR: atomic/arm64: detect and use builtins for the double-word CAS
Gcc 10.2 implements outline atomics on aarch64. These replace all inline
atomic ops with a function call that checks if the machine supports LSE
atomics. This comes with a small cost but allows modern machines to scale
much better than with the old LL/SC ones even when built for full 8.0
compatibility.

This patch enables the use of the __atomic_compare_exchange() builtin
for the double-word CAS when detected as available instead of using the
hand-written LL/SC version. The extra cost is negligible because we do
very few DWCAS operations (essentially FD migrations and shared pools)
so the cost is low but under high contention it can still be beneficial.
As expected no performance difference was measured in either direction
on 4-core machines with this change.

This could be backported to 2.3 if it was shown that FD migrations were
representing a significant source of contention, but for now it does
not appear to be needed.
2021-04-07 18:18:37 +02:00
Willy Tarreau
1673c4a883 MINOR: fd: implement an exclusive syscall bit to remove the ugly "log" lock
There is a function called fd_write_frag_line() that's essentially used
by loggers and that is used to write an atomic message line over a file
descriptor using writev(). However a lock is required around the writev()
call to prevent messages from multiple threads from being interleaved.
Till now a SPIN_TRYLOCK was used on a dedicated lock that was common to
all FDs. This is not pretty because if there are multiple output pipes
to collect logs, there will be quite some contention. Now that there
are empty flags left in the FD state and that we can finally use atomic
ops on them, let's add a flag to indicate the FD is locked for exclusive
access by a syscall. At least the locking will now be on an FD basis and
not the whole process, so we can remove the log_lock.
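
A hedged sketch of the idea (the bit name is illustrative, not necessarily
the one used in the tree):

    #include <sys/uio.h>

    /* write an atomic log line to fd, skipping it if another thread
     * currently owns this FD's exclusive-syscall bit */
    static void write_line_excl(int fd, const struct iovec *iov, int iovcnt)
    {
        if (HA_ATOMIC_BTS(&fdtab[fd].state, FD_EXCL_SYSCALL_BIT))
            return;                  /* another thread owns the bit */
        writev(fd, iov, iovcnt);     /* exclusive: lines cannot interleave */
        HA_ATOMIC_BTR(&fdtab[fd].state, FD_EXCL_SYSCALL_BIT);
    }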
2021-04-07 18:18:37 +02:00
Willy Tarreau
9063a660cc MINOR: fd: move .exported into fdtab[].state
No need to keep this flag apart any more, let's merge it into the global
state.
2021-04-07 18:10:36 +02:00
Willy Tarreau
5362bc9044 MINOR: fd: move .et_possible into fdtab[].state
No need to keep this flag apart any more, let's merge it into the global
state.
2021-04-07 18:09:43 +02:00
Willy Tarreau
0cc612818d MINOR: fd: move .initialized into fdtab[].state
No need to keep this flag apart any more, let's merge it into the global
state. The bit was not cleared in fd_insert() because the only user is
the function used to create and atomically send a log message to a pipe
FD, which never registers the fd. Here we clear it nevertheless for the
sake of clarity.

Note that with an extra cleaning pass we could have a bit number
here and simply use a BTS to test and set it.
2021-04-07 18:09:08 +02:00
Willy Tarreau
030dae13a0 MINOR: fd: move .cloned into fdtab[].state
No need to keep this flag apart any more, let's merge it into the global
state.
2021-04-07 18:08:29 +02:00
Willy Tarreau
b41a6e9101 MINOR: fd: move .linger_risk into fdtab[].state
No need to keep this flag apart any more, let's merge it into the global
state. The CLI's output state was extended to 6 digits and the linger/cloned
flags moved inside the parenthesis.
2021-04-07 18:07:49 +02:00
Willy Tarreau
f509065191 MEDIUM: fd: merge fdtab[].ev and state for FD_EV_* and FD_POLL_* into state
For a long time we've had fdtab[].ev and fdtab[].state which contain two
arbitrary sets of information, one is mostly the configuration plus some
shutdown reports and the other one is the latest polling status report
which also contains some sticky error and shutdown reports.

These ones used to be stored into distinct chars, complicating certain
operations and not even allowing concurrent accesses to be clearly seen (e.g.
fd_delete_orphan() would set the state to zero while fd_insert() would
only set the event to zero).

This patch creates a single uint with the two sets in it, still delimited
at the byte level for better readability. The original FD_EV_* values
remained at the lowest bit levels as they are also known by their bit
value. The next step will consist in merging the remaining bits into it.

The whole bits are now cleared both in fd_insert() and _fd_delete_orphan()
because after a complete check, it is certain that in both cases these
functions are the only ones touching these areas. Indeed, for
_fd_delete_orphan(), the thread_mask has already been zeroed before a
poller can call fd_update_event() which would touch the state, so it
is certain that _fd_delete_orphan() is alone. Regarding fd_insert(),
only one thread will get an FD at any moment, and as this FD has
already been released by _fd_delete_orphan(), by definition it is certain
that previous users have definitely stopped touching it.

Strictly speaking there's no need for clearing the state again in
fd_insert() but it's cheap and will remove some doubts during some
troubleshooting sessions.
2021-04-07 18:04:39 +02:00
Willy Tarreau
8d27c203ed MEDIUM: fd: prepare FD_POLL_* to move to bits 8-15
In preparation of merging FD_POLL* and FD_EV*, this only changes the
value of FD_POLL_* to use bits 8-15 (the second byte). The size of the
field has been temporarily extended to 32 bits already, as well as
the temporary variables that carry the new composite value inside
fd_update_events(). The resulting fdtab entry becomes temporarily
unaligned. All places making access to .ev or FD_POLL_* were carefully
inspected to make sure they were safe regarding this change. Only one
temporary update was needed for the "show fd" code. The code was only
slightly inflated at this step.
2021-04-07 15:08:40 +02:00
Willy Tarreau
fc0cdfb9b7 CLEANUP: fd: remove FD_POLL_DATA and FD_POLL_STICKY
The former was not used and the second was used only as a positive mask
of the flags to keep instead of having the flags that are updated. Both
were removed in favor of a new FD_POLL_UPDT_MASK that only mentions the
updated flags. This will ease merging of state and ev later.
2021-04-07 15:08:40 +02:00
Emeric Brun
9533a70381 MINOR: log: register config file and line number on log servers.
This patch registers the parsed file and the line where a log server
is declared to make that information available during the configuration
post-check.

This new information was added to the error messages emitted when
resolving ring names during the post-configuration check.
2021-04-07 09:18:34 +02:00
Amaury Denoyelle
5a6926dcf0 MINOR: diag: create cfgdiag module
This module is intended to serve as a placeholder for various
diagnostics executed after the configuration file has been fully loaded.
2021-04-01 18:03:37 +02:00
Amaury Denoyelle
7b01a8dbdd MINOR: global: define diagnostic mode of execution
Define MODE_DIAG which is used to run haproxy in diagnostic mode. This
mode is used to output extra warnings about possible configuration
blunders or sub-optimal usage. It can be activated with argument '-dD'.

A new output function, ha_diag_warning, is implemented, reserved for
diagnostic output. It serves to standardize the format of diagnostic
messages.

A macro HA_DIAG_WARN_COND is also available to automatically check if
diagnostic mode is on before executing the diagnostic check.
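
A hedged usage sketch (the condition and message are made up for the
example):

    HA_DIAG_WARN_COND(curproxy->maxconn == 0,
                      "proxy '%s': no maxconn set, a default will be used\n",
                      curproxy->id);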
2021-04-01 18:03:37 +02:00
Christopher Faulet
021a8e4d7b MEDIUM: http-rules: Add wait-for-body action on request and response side
Historically, an option was added to wait for the request payload (option
http-buffer-request). This option has 2 drawbacks. First, it is an ON/OFF
option for the whole proxy. It cannot be enabled on demand depending on the
message. Then, as its name suggests, it only works on the request side. The
only option to wait for the response payload was to write a dedicated
filter. While it is an acceptable solution for complex applications, it is a
bit overkill to simply match strings in the body.

To make everyone happy, this patch adds a dedicated HTTP action to wait for
the message payload, for the request or the response depending on whether it
is used in an http-request or an http-response ruleset. The time to wait is
configurable, as well as, optionally, the minimum payload size to receive
before the wait stops.

Both the http action and the old http analyzer rely on the same internal
function.
2021-04-01 16:27:40 +02:00
Christopher Faulet
581db2b829 MINOR: payload/config: Warn if a L6 sample fetch is used from an HTTP proxy
L6 sample fetches are now ignored when called from an HTTP proxy. Thus, a
warning is emitted during the startup if such usage is detected. It is true
for most ACLs and for log-format strings. Unfortunately, it is a bit painful
to do so for sample expressions.

This patch relies on the commit "MINOR: action: Use a generic function to
check validity of an action rule list".
2021-04-01 15:34:22 +02:00
Christopher Faulet
42c6cf9501 MINOR: action: Use a generic function to check validity of an action rule list
The check_action_rules() function is now used to check the validity of an
action rule list. It is used from check_config_validity() function to check
L5/6/7 rulesets.
2021-04-01 15:34:22 +02:00
Christopher Faulet
3b6446f4d9 MINOR: config/proxy: Don't warn for HTTP rules in TCP if 'switch-mode http' set
Warnings about ignored HTTP directives in a TCP proxy are inhibited if at
least one switch-mode tcp action is configured to perform an HTTP upgrade.
2021-04-01 13:22:42 +02:00
Christopher Faulet
ae863c62e3 MEDIUM: Add tcp-request switch-mode action to perform HTTP upgrade
It is now possible to perform HTTP upgrades on a TCP stream from the
frontend side. To do so, a tcp-request content rule must be defined with the
switch-mode action, specifying the mode (for now, only http is supported)
and optionally the proto (h1 or h2).

This way it could be possible to set HTTP directives on a TCP frontend which
will only be evaluated if an upgrade is performed. This new way to perform
HTTP upgrades should progressively replace the old way, which consists of
routing the request to an HTTP backend. It should also be a good start toward
removing all HTTP processing from tcp-request content rules.

This action is terminal, it stops the ruleset evaluation. It is only
available on proxy with the frontend capability.

The configuration manual has been updated accordingly.
2021-04-01 13:17:19 +02:00
Christopher Faulet
6c1fd987f6 MINOR: stream: Handle stream HTTP upgrade in a dedicated function
The code responsible for performing an HTTP upgrade from a TCP stream is
moved into a dedicated function, stream_set_http_mode().

The stream_set_backend() function is slightly updated, especially to
correctly set the request analysers.
2021-04-01 11:06:48 +02:00