Commit Graph

14527 Commits

Willy Tarreau
481795de13 MINOR: time: avoid unneeded updates to now_offset
The time adjustment is very rare, even at high poll rates. Tests show
that only 0.2% of tv_update_date() calls require a change of offset. Such
concurrent writes to a shared variable have a significant impact on future
loads, so let's only update the variable if it changed.
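
As a rough illustration of the idea (a generic C sketch using GCC atomic
builtins, not the actual tv_update_date() code), the shared variable is only
written when the freshly computed value differs from the published one, so
the common case stays read-only and does not invalidate other cores' cache
lines:

    #include <stdint.h>

    /* hypothetical shared offset, assumed to sit on its own cache line */
    static uint64_t now_offset;

    /* only publish a new offset when it actually changed: concurrent
     * reads are cheap, concurrent writes to the same line are not
     */
    static inline void update_offset(uint64_t new_off)
    {
            if (__atomic_load_n(&now_offset, __ATOMIC_RELAXED) != new_off)
                    __atomic_store_n(&now_offset, new_off, __ATOMIC_RELAXED);
    }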
2021-04-23 18:03:06 +02:00
Amaury Denoyelle
a6f9c5d2a7 BUG/MINOR: cpuset: fix compilation on platform without cpu affinity
The compilation is currently broken on platforms without USE_CPU_AFFINITY
set. The error was reported by the cygwin build of the CI.

This does not need to be backported.

In file included from include/haproxy/global-t.h:27,
                 from include/haproxy/global.h:26,
                 from include/haproxy/fd.h:33,
                 from src/ev_poll.c:22:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
   32 | # error "No cpuset support implemented on this platform"
      |   ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
   37 |  CPUSET_REPR cpuset;
      |  ^~~~~~~~~~~
make: *** [Makefile:944: src/ev_poll.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from include/haproxy/global-t.h:27,
                 from include/haproxy/global.h:26,
                 from include/haproxy/fd.h:33,
                 from include/haproxy/connection.h:30,
                 from include/haproxy/ssl_sock.h:27,
                 from src/ssl_sample.c:30:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
   32 | # error "No cpuset support implemented on this platform"
      |   ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
   37 |  CPUSET_REPR cpuset;
      |  ^~~~~~~~~~~
make: *** [Makefile:944: src/ssl_sample.o] Error 1
2021-04-23 17:04:24 +02:00
Amaury Denoyelle
c5ed1f9d87 BUG/MINOR: haproxy: fix compilation on macOS
Fix the warning treated as an error on the CI for the macOS compilation:
"src/haproxy.c:2939:23: error: unused variable 'set'
 [-Werror,-Wunused-variable]"

This does not need to be backported.
2021-04-23 16:41:22 +02:00
Amaury Denoyelle
0f50cb9c73 MINOR: global: add option to disable numa detection
Make NUMA detection optional with a global configuration statement
'no numa-cpu-mapping'. This can be used if the affinity applied by the
algorithm is not optimal. Also complete the documentation with this new
keyword.
2021-04-23 16:06:49 +02:00
Amaury Denoyelle
b56a7c89a8 MEDIUM: cfgparse: detect numa and set affinity if needed
On process startup, the CPU topology of the machine is inspected. If a
multi-socket CPU machine is detected, the process affinity is automatically
defined on the first node with active CPUs. This is done to prevent an
impact on the overall performance of the process in case the topology of
the machine is unknown to the user.

This step is not executed in the following conditions:
- a non-null nbthread statement is present
- a restrictive 'cpu-map' statement is present
- the process affinity is already restricted, for example via a taskset
  call

For the record, benchmarks were executed on a machine with two Intel(R)
Xeon(R) CPU E5-2680 v3 @ 2.50GHz CPUs. In both clear and SSL scenarios,
performance was sub-optimal without the automatic rebinding on a single
node.
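
For illustration only (a hedged sketch, not the actual cfgparse code): on
Linux, the CPUs of the first node can be read from sysfs (e.g.
/sys/devices/system/node/node0/cpulist) and the binding applied with
sched_setaffinity(); the real implementation additionally skips this step
under the conditions listed above.

    #define _GNU_SOURCE
    #include <sched.h>

    /* bind the current process to CPUs 0..ncpus-1; a real implementation
     * would parse the node's cpulist file instead of assuming contiguous
     * ids (illustrative only)
     */
    static int bind_first_node(int ncpus)
    {
            cpu_set_t set;
            int cpu;

            CPU_ZERO(&set);
            for (cpu = 0; cpu < ncpus; cpu++)
                    CPU_SET(cpu, &set);
            return sched_setaffinity(0, sizeof(set), &set);
    }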
2021-04-23 16:06:49 +02:00
Amaury Denoyelle
a80823543c MINOR: cfgparse: support the comma separator on parse_cpu_set
Allow multiple CPU ids/ranges separated by commas to be specified in
parse_cpu_set. This is optional and must be activated by a parameter.

The comma support is disabled for the parsing of the 'cpu-map' config
statement. However, it will be useful for parsing sysfs files when
inspecting the CPU topology for automatic NUMA process binding.
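
As an example of the format this enables (a standalone sketch, not haproxy's
parse_cpu_set()), parsing a sysfs-style cpulist such as "0-3,8,10-11" into a
cpu_set_t could look like this:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdlib.h>

    /* parse a comma-separated list of cpu ids/ranges ("0-3,8,10-11")
     * into <set>; returns 0 on success, -1 on parse error
     */
    static int parse_cpulist(const char *str, cpu_set_t *set)
    {
            const char *p = str;
            char *end;

            CPU_ZERO(set);
            while (*p) {
                    long lo = strtol(p, &end, 10);
                    long hi = lo;

                    if (end == p || lo < 0)
                            return -1;
                    if (*end == '-') {
                            p = end + 1;
                            hi = strtol(p, &end, 10);
                            if (end == p || hi < lo)
                                    return -1;
                    }
                    for (long cpu = lo; cpu <= hi; cpu++)
                            CPU_SET(cpu, set);
                    if (*end == ',')
                            end++;
                    else if (*end)
                            return -1;
                    p = end;
            }
            return 0;
    }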
2021-04-23 16:06:49 +02:00
Amaury Denoyelle
4c9efdecf5 MINOR: thread: implement the detection of forced cpu affinity
Create a function thread_cpu_mask_forced. Its purpose is to report whether
a restrictive CPU mask is active for the current process, for example due
to a taskset invocation. It is currently only implemented on Linux.
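
A minimal sketch of how such a detection can work on Linux (illustrative
only, not necessarily the exact check used here): compare the number of CPUs
allowed by the current affinity mask with the number of online CPUs.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    /* return 1 if the current process already runs under a restricted
     * CPU affinity (e.g. started through taskset), otherwise 0
     */
    static int cpu_mask_forced(void)
    {
            cpu_set_t set;

            if (sched_getaffinity(0, sizeof(set), &set) != 0)
                    return 0;
            return CPU_COUNT(&set) < sysconf(_SC_NPROCESSORS_ONLN);
    }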
2021-04-23 16:06:49 +02:00
Amaury Denoyelle
982fb53390 MEDIUM: config: use platform independent type hap_cpuset for cpu-map
Use the platform independent type hap_cpuset for the cpu-map statement
parsing. This allows addressing CPU indexes greater than LONGBITS.

Update the documentation to reflect the removal of this limit except for
platforms without cpu_set_t type or equivalent.
2021-04-23 16:06:49 +02:00
Amaury Denoyelle
c90932bc8e MINOR: cfgparse: use hap_cpuset for parse_cpu_set
Replace the unsigned long parameter by a hap_cpuset. This allows
addressing CPUs with an index greater than LONGBITS.

This function is used to parse the 'cpu-map' statement. However at the
moment, the result is cast back to a long to store it in the global
structure. The next step is to replace the ulong cpu_map in the global
structure with hap_cpuset.
2021-04-23 16:06:49 +02:00
Amaury Denoyelle
f75c640f7b MINOR: cpuset: define a platform-independent cpuset type
This module can be used to manipulate CPU sets in a platform-agnostic
way. Use the type cpu_set_t/cpuset_t if available on the platform, or
fall back to unsigned long, which de facto limits the maximum CPU index
to LONGBITS.
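
The abstraction can be pictured as follows (a hedged sketch mirroring the
cpuset-t.h excerpt quoted in the build error above, not the complete header):

    #define _GNU_SOURCE                    /* for cpu_set_t on glibc */

    #if defined(__linux__)
    #  include <sched.h>
    #  define CPUSET_REPR cpu_set_t
    #elif defined(__FreeBSD__)
    #  include <sys/param.h>
    #  include <sys/cpuset.h>
    #  define CPUSET_REPR cpuset_t
    #else
    #  define CPUSET_REPR unsigned long    /* fallback: max index is LONGBITS */
    #endif

    struct hap_cpuset {
            CPUSET_REPR cpuset;
    };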
2021-04-23 16:06:49 +02:00
Christopher Faulet
de9d605aa5 BUG/MEDIUM: mux-h2: Properly handle shutdowns when received with data
The H2_CF_RCVD_SHUT flag is used to report that a read0 was encountered. It
is used by the H2 mux to properly handle shutdowns. However, this flag is
only set when no data is received. If the shutdown is detected at the socket
level while some data is received, it is not handled. And because the event
was reported on the connection, any further read attempt is blocked. In this
case, we are unable to close the connection and release the mux immediately.
We must wait for the mux timeout to expire.

This patch should fix issue #1231. It must be backported as far as 2.0.
2021-04-23 15:42:39 +02:00
Willy Tarreau
5e65f4276b CLEANUP: compression: remove calls to SLZ init functions
As we now embed the library we don't need to support the older 1.0 API
any more, so we can remove the explicit calls to slz_make_crc_table()
and slz_prepare_dist_table().
2021-04-22 16:11:19 +02:00
Willy Tarreau
aca389a483 CI: github: do not build libslz any more
As hinted by Tim, it's not needed any more since it's now integrated,
so let's get rid of this step.
2021-04-22 16:10:32 +02:00
Willy Tarreau
12840be005 BUILD: compression: switch SLZ from out-of-tree to in-tree
Now that SLZ is merged, let's update the makefile and compression
files to use it. As a result, SLZ_INC and SLZ_LIB are neither defined
nor used anymore.

USE_SLZ is enabled by default ("USE_SLZ=default") and can be disabled
by passing "USE_SLZ=" or by enabling USE_ZLIB=1.

The doc was updated to reflect the changes.
2021-04-22 16:08:25 +02:00
Willy Tarreau
ab2b7828e2 IMPORT: slz: import slz into the tree
SLZ is rarely packaged by distros and there have been complaints about
the CPU and memory usage of ZLIB, leading to some suggestions to better
address the issue by simply integrating SLZ into the tree (just 3 files).
See discussions below:

   https://www.mail-archive.com/haproxy@formilux.org/msg38037.html
   https://www.mail-archive.com/haproxy@formilux.org/msg40079.html
   https://www.mail-archive.com/haproxy@formilux.org/msg40365.html

This patch does just this, after minor adjustments to these files:
  - tables.h was renamed to slz-tables.h
  - tables.h had the precomputed tables removed since not used here
  - slz.c includes <import/slz*> instead of "slz*.h"

The slz commit imported here was b06c172 ("slz: avoid a build warning
with -Wimplicit-fallthrough"). No other change was made to either SLZ or
haproxy at this point, so that this operation may be replicated if needed
for a future version.
2021-04-22 15:50:41 +02:00
Willy Tarreau
af6ae6395f BUILD: makefile: fix the "make clean" target on strict bourne shells
As reported by @axinojolais in issue #1217, some older bourne shells do
not expand braces, so some files were not cleaned since the recent
splitting of the contrib/ subdir. Let's fix that by explicitly listing
the patterns to be cleared (which are far fewer now that contrib was
removed), and by grouping them with their respective dirs.

At some point, some recursive makefiles would probably help there.
2021-04-21 17:22:33 +02:00
William Lallemand
aba7f8b313 BUG/MINOR: mworker: don't use oldpids[] anymore for reload
Since commit 3f12887 ("MINOR: mworker: don't use children variable
anymore"), the oldpids array is not used anymore to generate the new -sf
parameters. So we don't need to set nb_oldpids to 0 during the first
start of the master process.

This patch fixes a bug when 2 master processes try to synchronize their
peers: there is a small chance that it won't work because nb_oldpids
equals 0.

Should be backported as far as 2.0.
2021-04-21 16:55:34 +02:00
William Lallemand
ea6bf83d62 BUG/MINOR: mworker/init: don't reset nb_oldpids in non-mworker cases
This bug affects the peers synchronisation code which relies on the
nb_oldpids variable to synchronize the peer from the old PID.

In the case the process is not started in master-worker mode and tries
to synchronize using the peers, there is a small chance that it won't
work because nb_oldpids equals 0.

Fix the bug by setting the variable to 0 only in the master-worker case,
when not reloaded.

It could also be a problem when trying to synchronize the peers between
2 master processes, which should be fixed in another patch.

Bug exists since commit 8a361b5 ("BUG/MEDIUM: mworker: don't reuse PIDs
passed to the master").

Should be backported as far as 1.8.
2021-04-21 16:42:18 +02:00
Amaury Denoyelle
a2944ecf5d MINOR: config: add a diag for invalid cpu-map statement
If a cpu-map statement refers to multiple processes and threads, it is
silently ignored. Add a diag message to report it to the user.
2021-04-21 15:18:57 +02:00
Amaury Denoyelle
af02c57406 BUG/MEDIUM: config: fix cpu-map notation with both process and threads
The application of a cpu-map statement with both process and threads
is broken (P-Q/1 or 1/P-Q notation).

For example, before the fix, when using P-Q/1, proc_t1 would be updated.
Then it would be AND'ed with thread which is still 0 and thus does
nothing.

Another problem is when using 1/1[-Q]: thread[0] is defined. But if
there are multiple processes, every process will use this defined
affinity even though it should be applied only to the 1st process.

The solution in this fix is a little bit too complex for my taste and
there is maybe a simpler one, but I did not wish to break the
storage of global.cpu_map, as it is quite painful to test all the
use-cases. Besides, this code will probably be cleaned up when
multiprocess support is removed in a future version.

Let's try to explain my logic.

* haproxy runs either in multiprocess or in multithread mode. If in
  multiprocess mode, we should consider proc_t1 (P-Q/1 notation). If in
  multithread mode, we should consider thread (1/P-Q notation). However,
  during parsing, the final number of processes or threads is unknown,
  thus we have to consider both possibilities.

* there is a special case for the first thread / first process, which is
  present in both execution modes. And as a matter of fact, the cpu-map 1
  and 1/1 notations represent the same thing. Thus, thread[0] and
  proc_t1[0] represent the same thing. To solve this problem, only
  thread[0] is used for this special case.

This fix must be backported up to 2.0.
2021-04-21 15:18:57 +02:00
Willy Tarreau
580727f3af CLEANUP: contrib: remove the last references to the now dead contrib/ directory
Now with the last SPOA modules gone, contrib/ doesn't exist anymore
and does not need to be referenced in the Makefile or .gitignore.
2021-04-21 15:13:58 +02:00
Willy Tarreau
046fbe6d50 CONTRIB: move mod_defender out of the tree
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to
its own repository here with complete history:

  https://github.com/haproxytech/spoa-mod_defender
2021-04-21 15:13:58 +02:00
Maximilian Mader
ff3bb8b609 MINOR: uri_normalizer: Add a strip-dot normalizer
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory, which renders those segments redundant.

See GitHub Issue #714.
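
A standalone sketch of the transformation (not the actual uri_normalizer
code), where "/./" segments are dropped and a trailing "/." is reduced to
"/":

    #include <stdio.h>

    /* strip "/./" segments from <path> in place (illustrative only) */
    static void strip_dot_segments(char *path)
    {
            char *out = path;

            while (*path) {
                    if (path[0] == '/' && path[1] == '.' && path[2] == '/') {
                            path += 2;        /* "/./x" -> "/x" */
                            continue;
                    }
                    if (path[0] == '/' && path[1] == '.' && path[2] == '\0') {
                            *out++ = '/';     /* trailing "/." -> "/" */
                            break;
                    }
                    *out++ = *path++;
            }
            *out = '\0';
    }

    int main(void)
    {
            char uri[] = "/a/./b/./c";
            strip_dot_segments(uri);
            printf("%s\n", uri);              /* prints "/a/b/c" */
            return 0;
    }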
2021-04-21 12:15:14 +02:00
Maximilian Mader
c9c79570d4 CLEANUP: uri_normalizer: Remove trailing whitespace
This patch removes a single trailing space.
2021-04-21 12:15:14 +02:00
Maximilian Mader
11f6f85c4b BUG/MINOR: uri_normalizer: Use delim parameter when building the sorted query in uri_normalizer_query_sort
Currently the delimiter is hardcoded as ampersand (&) but the function takes the delimiter as a parameter.
This patch replaces the hardcoded ampersand with the given delimiter.
2021-04-21 12:15:14 +02:00
Christopher Faulet
cb1847c772 BUG/MEDIUM: mux-h2: Fix dfl calculation when merging CONTINUATION frames
When headers are split over several frames, the payloads of HEADERS and
CONTINUATION frames are merged to form a single HEADERS frame before
decoding the payload. To do so, info about the current frame (dff, dfl..)
is updated with info from the next one. Here there is a bug when the frame
length (dfl) is updated. We must add the next frame length (hdr.dfl) and not
only the amount of data found in the buffer (clen). Because HEADERS frames
are decoded in one pass, the dfl value is either the whole frame length or
0, nothing in between.

This patch must be backported as far as 2.0.
2021-04-21 12:13:12 +02:00
Christopher Faulet
07f88d7582 BUG/MAJOR: mux-h2: Properly detect too large frames when decoding headers
In the function decoding the payload of HEADERS frames, an internal error
is returned if the frame length is too large: it cannot exceed the buffer
size. The same is true when headers are split over several frames. The
payloads of HEADERS and CONTINUATION frames are merged and the overall size
must not exceed the buffer size.

However, there is a bug when the current frame is big enough to leave room
for only part of the next frame's header. Because, in this case, we wait
for more data, to have the whole frame header, we don't properly detect
that the headers are too large to be stored in one buffer. In fact the
test to trigger this error is not accurate. When the buffer is full, the
error is reported if the frame length exceeds the amount of data in the
buffer. But in reality, an error must be reported when we are unable to
decode the current frame while the buffer is full, because in this case we
know there is no way to change this state.

When the bug happens, the H2 connection is woken up in a loop, consuming
all the CPU. But the traffic is not blocked for all that.

This patch must be backported as far as 2.0.
2021-04-21 12:13:12 +02:00
Amaury Denoyelle
d6b4b6da3f BUG/MINOR: server: fix potential null gcc error in delete server
gcc still reports a potential null pointer dereference in the delete
server function even with a BUG_ON before it. Remove the misleading NULL
check in the for loop, as this case should never happen.

This does not need to be backported.
2021-04-21 12:02:30 +02:00
Willy Tarreau
b77cd7f562 CONTRIB: move modsecurity out of the tree
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to
its own repository here with complete history:

    https://github.com/haproxy/spoa-modsecurity
2021-04-21 11:30:43 +02:00
Willy Tarreau
da72723189 CONTRIB: move spoa_server out of the tree
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to
its own repository here with complete history:

    https://github.com/haproxy/spoa-server
2021-04-21 11:30:43 +02:00
Amaury Denoyelle
e558043e13 MINOR: server: implement delete server cli command
Implement a new CLI command 'del server'. It can be used to remove a
dynamically added server. Only servers in maintenance mode can be
removed, and only if they have no pending/active/idle connections.

Add a new reg-test for this feature. The reg-test scenario first adds a
dynamic server. It is then deleted, and a client is used to ensure that
the server is no longer joinable.

The management doc is updated with the new command 'del server'.
2021-04-21 11:00:31 +02:00
Amaury Denoyelle
d38e7fa233 MINOR: server: add log on dynamic server creation
Add a notice log to report the creation of a new server. The log is
printed at the end of the function.
2021-04-21 11:00:31 +02:00
Amaury Denoyelle
cece918625 BUG/MEDIUM: server: ensure thread-safety of server runtime creation
cli_parse_add_server can be executed in parallel by several CLI
instances and so must be thread-safe. The critical points of the
function are:
- server duplicate detection
- insertion of the server in the proxy list

The mode of operation has been reversed. The server is first
instantiated and parsed. The duplicate check has been moved to the end,
just before the insertion in the proxy list, under thread isolation.
Thus, thread safety is guaranteed and server allocation is kept
outside of locks/thread isolation.
2021-04-21 11:00:30 +02:00
Amaury Denoyelle
d688e01032 BUG/MINOR: logs: free logsrv.conf.file on exit
Config information has been added into the logsrv struct. The filename
is duplicated and should be freed on exit.

Introduced in the current release.
This does not need to be backported.
2021-04-21 11:00:29 +02:00
Amaury Denoyelle
fb247946a1 BUG/MINOR: server: free srv.lb_nodes in free_server
lb_nodes is allocated for servers using lb_chash (balance random or
hash-type consistent).

It can be backported up to 1.8.
2021-04-21 11:00:03 +02:00
Willy Tarreau
8695199aa8 CONTRIB: move spoa_example out of the tree
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to
its own repository here with complete history:

     https://github.com/haproxy/spoa-example
2021-04-21 09:39:06 +02:00
Willy Tarreau
2b71810cb3 CLEANUP: lists/tree-wide: rename some list operations to avoid some confusion
The current "ADD" vs "ADDQ" is confusing because when thinking in terms
of appending at the end of a list, "ADD" naturally comes to mind, but
here it does the opposite, it inserts. Several times already it's been
incorrectly used where ADDQ was expected, the latest of which was a
fortunate accident explained in 6fa922562 ("CLEANUP: stream: explain
why we queue the stream at the head of the server list").

Let's use more explicit (but slightly longer) names now:

   LIST_ADD        ->       LIST_INSERT
   LIST_ADDQ       ->       LIST_APPEND
   LIST_ADDED      ->       LIST_INLIST
   LIST_DEL        ->       LIST_DELETE

The same is true for MT_LISTs, including their "TRY" variant.
LIST_DEL_INIT keeps its short name to encourage its use instead of the
lazier LIST_DELETE, which is often less safe.

The change is large (~674 non-comment entries) but is mechanical enough
to remain safe. No permutation was performed, so any out-of-tree code
can easily map older names to new ones.

The list doc was updated.
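
To illustrate the distinction the new names make explicit, here is a generic
circular doubly-linked list sketch (not haproxy's actual list.h macros):

    struct list_node {
            struct list_node *next, *prev;
    };

    /* insert <el> right after <head> (old LIST_ADD, now LIST_INSERT) */
    #define MY_LIST_INSERT(head, el) do {           \
            (el)->next = (head)->next;              \
            (el)->prev = (head);                    \
            (head)->next->prev = (el);              \
            (head)->next = (el);                    \
    } while (0)

    /* append <el> just before <head>, i.e. at the tail of the circular
     * list (old LIST_ADDQ, now LIST_APPEND)
     */
    #define MY_LIST_APPEND(head, el) do {           \
            (el)->prev = (head)->prev;              \
            (el)->next = (head);                    \
            (head)->prev->next = (el);              \
            (head)->prev = (el);                    \
    } while (0)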
2021-04-21 09:20:17 +02:00
Tim Duesterhus
3b9cdf1cb7 CLEANUP: sample: Use explicit return for successful json_querys
Move the `return 1` into each of the cases, instead of relying on the single
`return 1` at the bottom of the function.
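
In other words, the shape becomes (a schematic example with hypothetical
token names, not the exact mjson ones):

    switch (token_type) {
    case TOK_STRING:
            /* ... fill the sample with the decoded string ... */
            return 1;
    case TOK_NUMBER:
            /* ... fill the sample with the decoded number ... */
            return 1;
    default:
            return 0;
    }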
2021-04-20 20:33:38 +02:00
Tim Duesterhus
8f3bc8ffca CLEANUP: sample: Explicitly handle all possible enum values from mjson
This makes it easier to find bugs, because -Wswitch can help us.
2021-04-20 20:33:34 +02:00
Tim Duesterhus
4809c8c955 CLEANUP: sample: Improve local variables in sample_conv_json_query
This improves the use of local variables in sample_conv_json_query:

- Use the enum type for the return value of `mjson_find`.
- Do not use single letter variables.
- Reduce the scope of variables that are only needed in a single branch.
- Add missing newlines after variable declaration.
2021-04-20 20:33:31 +02:00
Willy Tarreau
7d9acced27 CONTRIB: modsecurity: make the code build with the embedded includes
From now on the code only needs its embedded dependencies and no longer
depends on any external haproxy files. It can now be built as a
standalone project.
2021-04-20 19:34:02 +02:00
Willy Tarreau
0177ad6526 CONTRIB: modsecurity: import the minimal number of includes
Just like mod_defender, modsecurity depends on a few haproxy includes
and it shouldn't since it's expected to be agnostic to the version.

This imports the strict minimum of includes required to build it. These
have been manually stripped of their exported function prototypes and
their unneeded dependencies.
2021-04-20 19:34:02 +02:00
Willy Tarreau
131799fc97 CONTRIB: mod_defender: make the code build with the embedded includes
From now on the code only needs its embedded dependencies and no longer
depends on any external haproxy files. It can now be built as a
standalone project.
2021-04-20 19:34:02 +02:00
Willy Tarreau
fe5d501c4d CONTRIB: mod_defender: import the minimal number of includes
mod_defender currently depends on haproxy includes while it should be
totally autonomous: it should build without them and not depend on any
specific haproxy version.

This imports the strict minimum of includes required to build it. These
have been manually stripped of their exported function prototypes and
their unneeded dependencies.

In reality, defender.c mostly needs sample.h because it stores its
data this way, spoe.h for the protocol definitions, and a few intops
and tools to decode varints. The rest mostly comes as intermediate
dependencies.
2021-04-20 19:34:02 +02:00
Willy Tarreau
dcb121fd9c BUG/MINOR: server: make srv_alloc_lb() allocate lb_nodes for consistent hash
The test in srv_alloc_lb() to allocate the lb_nodes[] array used in the
consistent hash was incorrect: it wouldn't do it for consistent hash and
could do it for regular random.

No backport is needed as this was added for dynamic servers in 2.4-dev by
commit f99f77a50 ("MEDIUM: server: implement 'add server' cli command").
2021-04-20 11:39:54 +02:00
Willy Tarreau
942b89f7dc BUILD: pools: fix build with DEBUG_FAIL_ALLOC
Amaury noticed that I managed to break the build of DEBUG_FAIL_ALLOC
for the second time with 207c09509 ("MINOR: pools: move the fault
injector to __pool_alloc()"). The joy of endlessly reworking patch
sets... No backport is needed, that was in the just merged cleanup
series.
2021-04-19 18:36:48 +02:00
Willy Tarreau
096b6cf581 CLEANUP: pools: declare dummy pool functions to remove some ifdefs
By having a pair of dummy pool_get_from_cache() and pool_put_to_cache()
we can remove some ugly ifdefs, so let's do this. We've already done it
for the shared cache.
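
The pattern looks roughly like this (the guard macro and prototypes are
illustrative, not the exact haproxy ones): when the cache is compiled out,
inline no-op versions keep the call sites free of #ifdef.

    struct pool_head;                          /* opaque here */

    #ifdef CONFIG_POOL_CACHE                   /* hypothetical guard */
    void *pool_get_from_cache(struct pool_head *pool);
    void  pool_put_to_cache(struct pool_head *pool, void *ptr);
    #else
    static inline void *pool_get_from_cache(struct pool_head *pool)
    {
            return NULL;                       /* nothing cached: caller allocates */
    }
    static inline void pool_put_to_cache(struct pool_head *pool, void *ptr)
    {
            /* no cache: the caller releases the object directly */
    }
    #endif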
2021-04-19 15:24:33 +02:00
Willy Tarreau
b2a853d5f0 CLEANUP: pools: uninline pool_put_to_cache()
This function has become too big (251 bytes) and is now hurting
performance a lot, with up to 4% request rate being lost over the last
pool changes. Let's move it to pool.c as a regular function. Other
attempts were made to cut it in half but it's still inefficient. Doing
this results in saving ~90kB of object code, and even 112kB since the
pool changes, with code that is even slightly faster!

Conversely, pool_get_from_cache(), which remains half of this size, is
still faster inlined, likely in part due to the immediate use of the
returned pointer afterwards.
2021-04-19 15:24:33 +02:00
Willy Tarreau
43d4ed548f CLEANUP: pools: merge pool_{get_from,put_to}_local_caches with generic ones
Since pool_get_from_cache() and pool_put_to_cache() are now only wrappers
to the local cache versions, which do all the work, let's merge them
together so that there is no local-cache specific function any more.
2021-04-19 15:24:33 +02:00
Willy Tarreau
d56db11447 CLEANUP: pools: make the local cache allocator fall back to the shared cache
Now when pool_get_from_local_cache() fails, it automatically falls back
to pool_get_from_shared_cache(), which used to always be done in
pool_get_from_cache(). Thus now the API is simpler as we always allocate
and free from/to the local caches.
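
The resulting allocation path can be sketched like this (hypothetical helper
names, not haproxy's exact API): the per-thread cache is tried first and
transparently falls back to the shared cache, which itself falls back to the
allocator.

    #include <stdlib.h>

    struct pool_head { size_t size; };

    /* hypothetical stand-ins for the local and shared cache lookups */
    void *local_cache_pop(struct pool_head *pool);
    void *shared_cache_pop(struct pool_head *pool);

    static void *pool_alloc_sketch(struct pool_head *pool)
    {
            void *ptr;

            ptr = local_cache_pop(pool);          /* lockless, per thread */
            if (!ptr)
                    ptr = shared_cache_pop(pool); /* shared, lightly contended */
            if (!ptr)
                    ptr = malloc(pool->size);     /* last resort */
            return ptr;
    }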
2021-04-19 15:24:33 +02:00