Commit Graph

6519 Commits

Willy Tarreau
7ed0597ce8 BUILD: sample: use __fallthrough in smp_is_rw() and smp_dup()
This avoids three build warnings when preprocessing happens before compiling
with gcc >= 7.
2022-11-14 11:14:02 +01:00
Willy Tarreau
1f344c0f30 BUILD: compiler: define a __fallthrough statement for switch/case
When the code is preprocessed first and compiled later, such as when
built under distcc, a lot of fallthrough warnings are emitted because
the preprocessor has already stripped the comments.

As an alternative, a "fallthrough" attribute was added with the same
compilers as those which started to emit those warnings. However it's
not portable to older compilers. Let's just define a __fallthrough
statement that corresponds to this attribute on supported compilers
and only switches to the classical empty do {} while (0) on other ones.

This way the code will support being cleaned up using __fallthrough.
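
A minimal sketch of the idea (the exact guards in HAProxy's compiler.h may
differ; the function below is purely illustrative). It relies on a
__has_attribute() fallback such as the one added in the commit below:

  #ifdef __has_attribute
  #  if __has_attribute(fallthrough)
  #    define __fallthrough __attribute__((fallthrough))
  #  endif
  #endif
  #ifndef __fallthrough
  #  define __fallthrough do { } while (0)
  #endif

  /* illustrative use in a switch/case */
  static int steps_from(int s)
  {
          int done = 0;

          switch (s) {
          case 0:
                  done++;
                  __fallthrough;  /* replaces the old "fall through" comment */
          case 1:
                  done++;
                  break;
          }
          return done;
  }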
2022-11-14 11:14:02 +01:00
Willy Tarreau
2b080f713f BUILD: compiler: add a default definition for __has_attribute()
It happens that gcc has had this macro since 5.x, though it is only mentioned
once in the doc, in association with __builtin_has_attribute(). Clang has had
it at least since 3.0. In addition it validates #ifdef when present,
so it's easy to detect. Here we provide a fallback to another
macro, __has_attribute_<name>, so that it's possible to define that macro
to the value 1 for older compilers when the attribute is supported.
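
A sketch of the kind of fallback described here (the exact form used in
compiler.h may differ):

  #ifndef __has_attribute
  /* map the test onto a per-name macro: an undefined name evaluates to 0
   * in #if, and an older-but-capable setup may define it to 1 by hand,
   * e.g. "#define __has_attribute_fallthrough 1" */
  #  define __has_attribute(x) __has_attribute_ ## x
  #endif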
2022-11-14 11:14:02 +01:00
Willy Tarreau
08e09f0b3c BUILD: compiler: add a macro to detect if another one is set and equals 1
In order to simplify compiler-specific checks, we'll need to check if some
attributes exist. In order to ease declarations, we'll only focus on those
that exist and will set them to 1. Let's first add a macro aimed at doing
this. Passed a macro name in argument, it will return 1 if the macro is
defined and equals 1, otherwise it will return 0. This works by
concatenating the macro's value with a prefix to form the name of another
macro which expands to a single comma, so that the arguments of a helper
macro get shifted by one position when the tested macro is defined. It is
then only a matter of passing both a 1 and a 0 and picking the argument
that lands in the right position. It was verified to work since at least
gcc-3.4 so it should be portable enough.
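
A self-contained sketch of the trick (macro names are illustrative, not
necessarily the ones actually added):

  /* _defined_to_1(FOO) expands to 1 only if FOO is defined and equals 1,
   * and to 0 otherwise (undefined, empty, or any other value). */
  #define _pick_2nd(a, b, ...)   b
  #define _shifted(...)          _pick_2nd(__VA_ARGS__ 1, 0, )
  #define _test(x)               _shifted(_comma_if_1 ## x)
  #define _defined_to_1(x)       _test(x)

  /* "_comma_if_1" pasted with the value 1 expands to a comma, which shifts
   * the 1 into the second argument slot of _pick_2nd(). */
  #define _comma_if_11           ,

  /* usage examples */
  #define FOO 1
  #if _defined_to_1(FOO)          /* taken: FOO is defined and equals 1 */
  #define FOO_ENABLED
  #endif
  #if _defined_to_1(BAR)          /* not taken: BAR is not defined */
  #define BAR_ENABLED
  #endif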
2022-11-14 11:14:02 +01:00
Willy Tarreau
71de04134e IMPORT: slz: define and use a __fallthrough statement for switch/case
When the code is preprocessed first and compiled later, such as when
built under distcc, the "fall through" comments are dropped and warnings
are emitted. Let's use the alternative "fallthrough" attribute instead,
that is supported by versions of gcc and clang that also produce this
warning.

This is libslz upstream commit 0fdf8ae218f3ecb0b7f22afd1a6b35a4f94053e2
2022-11-14 11:14:02 +01:00
Dridi Boukelmoune
4bd53c397c IMPORT: slz: mention the potential header in slz_finish()
There may be 2 or 10 bytes sent respectively for zlib and gzip.

This is libslz upstream commit de1cac155ac730ba0491a6c866a510760c01fa9b
2022-11-14 11:14:02 +01:00
Willy Tarreau
eab4256a9c IMPORT: xxhash: update xxHash to version 0.8.1
This is the latest released version and a minor update on top of the
current one (0.8.0). It addresses a few build issues (some for which
patches were already backported), and particularly the fallthrough
issue by using an attribute instead of a comment.
2022-11-14 11:14:02 +01:00
Willy Tarreau
eedcea8b90 BUILD: debug: remove unnecessary quotes in HA_WEAK() calls
HA_WEAK() is supposed to take a symbol in argument, not a string, since
the asm statements it produces already quote the argument. Having it
quoted twice doesn't work on older compilers and was the only reason
why DEBUG_MEM_STATS didn't work on older compilers.
2022-11-14 11:12:49 +01:00
Amaury Denoyelle
24e9961a8f MINOR: cli: define usermsgs print context
CLI 'add server' handler relies on usermsgs_ctx to display errors in
internal function on CLI output. This may be also extended to other
handlers.

However, to avoid cluttering stderr from other contexts, usermsgs_ctx must
be reset when it is not needed anymore. This operation cannot be
conducted in the CLI parse handler as display happens after it.

To achieve this, define new CLI states CLI_ST_PRINT_UMSG /
CLI_ST_PRINT_UMSGERR. Their principle is nearly identical to the states
used for dynamic message printing.
2022-11-10 16:42:47 +01:00
Amaury Denoyelle
56f50a03b7 CLEANUP: cli: rename dynamic error printing state
Rename CLI_ST_PRINT_FREE to CLI_ST_PRINT_DYNERR.

Most notably, this highlights that this is reserved for error printing.

This is done to ensure consistency between CLI_ST_PRINT/CLI_ST_PRINT_DYN
and CLI_ST_PRINT_ERR/CLI_ST_PRINT_DYNERR. The name is also consistent
with the function cli_dynerr() which activates it.
2022-11-10 16:42:47 +01:00
William Lallemand
960fb74cae MEDIUM: ssl: {ca,crt}-ignore-err can now use error constant name
The ca-ignore-err and crt-ignore-err directives are now able to use the
openssl X509_V_ERR constant names instead of the numerical values.

This allows a configuration to survive an OpenSSL upgrade, because the
numerical ID can change between versions. For example
X509_V_ERR_INVALID_CA was 24 in OpenSSL 1 and is 79 in OpenSSL 3.

The list of errors must be updated when a new major OpenSSL version is
released.
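
For illustration, a hedged sketch of the kind of name-to-constant table this
requires (only a few entries shown; the real list must cover every X509_V_ERR
code of the supported OpenSSL versions, and the actual parsing code differs):

  #include <string.h>
  #include <openssl/x509_vfy.h>

  static const struct { const char *name; int err; } x509_err_names[] = {
          { "X509_V_ERR_CERT_HAS_EXPIRED",          X509_V_ERR_CERT_HAS_EXPIRED },
          { "X509_V_ERR_INVALID_CA",                X509_V_ERR_INVALID_CA },
          { "X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN", X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN },
  };

  static int x509_err_from_name(const char *name)
  {
          size_t i;

          for (i = 0; i < sizeof(x509_err_names) / sizeof(x509_err_names[0]); i++)
                  if (strcmp(x509_err_names[i].name, name) == 0)
                          return x509_err_names[i].err;
          return -1;  /* unknown name: a numeric value may still be accepted */
  }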
2022-11-10 13:28:37 +01:00
Remi Tricot-Le Breton
9b25982716 BUG/MEDIUM: ssl: Verify error codes can exceed 63
The CRT and CA verify error codes were stored in 6 bits each in the
xprt_st field of the ssl_sock_ctx meaning that only error codes up to 63
could be stored. Likewise, the ca-ignore-err and crt-ignore-err options
relied on two unsigned long longs that were used as bitfields for all
the ignored error codes. On the latest OpenSSL 1.1.1 and with OpenSSL 3
and newer, verify errors have exceeded this value so these two storages
must be increased. The error codes will now be stored on 7 bits each and
the ignore-err bitfields are replaced by a big enough array and
dedicated bit get and set functions.
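
A minimal sketch of such a storage (type and function names here are
hypothetical, not the ones actually added):

  #include <stdint.h>

  /* one bit per X509_V_ERR code; 7-bit codes fit in 128 bits */
  #define VFY_ERR_CODES 128
  typedef uint64_t ign_errs_t[VFY_ERR_CODES / 64];

  static inline void ign_err_set(ign_errs_t errs, unsigned int err)
  {
          errs[err / 64] |= 1ULL << (err % 64);
  }

  static inline int ign_err_get(const ign_errs_t errs, unsigned int err)
  {
          return (errs[err / 64] >> (err % 64)) & 1ULL;
  }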

It can be backported on all stable branches.

[wla: let it be tested a little while before backport]
Signed-off-by: William Lallemand <wlallemand@haproxy.org>
2022-11-10 11:45:48 +01:00
Ilya Shipitsin
4a689dad03 CLEANUP: assorted typo fixes in the code and comments
This is 32nd iteration of typo fixes
2022-10-30 17:17:56 +01:00
Amaury Denoyelle
735b44f5df MINOR: quic: add counter for interrupted reception
Add a new counter "quic_rxbuf_full". It is incremented each time
quic_sock_fd_iocb() is interrupted on a full buffer.

This should help to debug github issue #1903. It is suspected that
QUIC receiver buffers are full, which in turn causes quic_sock_fd_iocb()
to be called repeatedly, resulting in high CPU consumption.
2022-10-27 18:35:42 +02:00
Amaury Denoyelle
bbb1c68508 BUG/MINOR: quic: fix subscribe operation
Subscribing was not properly designed between the quic-conn and quic MUX
layers. Align this with the other haproxy components: the <subs> field is
moved from the MUX to the quic-conn structure. All mention of the qcc MUX
is cleaned up in quic_conn_subscribe()/quic_conn_unsubscribe().

Thanks to this change, ACK reception notification has been simplified.
It's now unnecessary to check for the MUX existence before waking it.
Instead, if the quic-conn <subs> field is set, just wake up the upper layer
tasklet without involving the MUX. This should probably be extended to
other parts of the quic-conn code.

This should be backported up to 2.6.
2022-10-26 18:18:26 +02:00
Frédéric Lécaille
36d1565640 MINOR: peers: Support for peer shards
Add "shards" new keyword for "peers" section to configure the number
of peer shards attached to such secions. This impact all the stick-tables
attached to the section.
Add "shard" new "server" parameter to configure the peers which participate to
all the stick-tables contents distribution. Each peer receive the stick-tables updates
only for keys with this shard value as distribution hash. The "shard" value
is stored in ->shard new server struct member.
cfg_parse_peers() which is the function which is called to parse all
the lines of a "peers" section is modified to parse the "shards" parameter
stored in ->nb_shards new peers struct member.
Add srv_parse_shard() new callback into server.c to pare the "shard"
parameter.
Implement stksess_getkey_hash() to compute the distribution hash for a
stick-table key as the 64-bits xxhash of the key concatenated to the stick-table
name. This function is called by stksess_setkey_shard(), itself
called by the already implemented function which create a new stick-table
key (stksess_new()).
Add ->idlen new stktable struct member to store the stick-table name length
to not have to compute it each time a stick-table key hash is computed.
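
For illustration, a simplified, self-contained sketch of this distribution
hash (not the actual code; the include path and the reduction to a shard
number are assumptions):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define XXH_INLINE_ALL
  #include "xxhash.h"

  /* 64-bit xxhash of the key followed by the table name, reduced to a shard */
  static int key_to_shard(const void *key, size_t keylen,
                          const char *table, size_t tablelen,
                          unsigned int nb_shards)
  {
          char buf[512];

          if (keylen + tablelen > sizeof(buf))
                  return -1;    /* sketch only: keep the concatenation simple */
          memcpy(buf, key, keylen);
          memcpy(buf + keylen, table, tablelen);
          return XXH64(buf, keylen + tablelen, 0) % nb_shards;
  }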
2022-10-24 10:55:53 +02:00
Amaury Denoyelle
7941ead3aa MINOR: quic: display unknown error sendto counter on stat page
This patch completes the previous incomplete commit. The new counter
sendto_err_unknown is now displayed on the stats page/CLI show stats.

This is related to github issue #1903.

This should be backported up to 2.6.
2022-10-24 10:52:59 +02:00
Amaury Denoyelle
1d9f170edd MINOR: quic: do not crash on unhandled sendto error
Remove ABORT_NOW() statement on unhandled sendto error. Instead use a
dedicated counter sendto_err_unknown to report these cases.

If this counter is seen to increase, strace can be used to find the
errno value:
  $ strace -p $(pidof haproxy) -f -e trace=sendto -Z

This should be backported up to 2.6.

This should help to debug github issue #1903.
2022-10-24 10:18:44 +02:00
Amaury Denoyelle
176174f7e4 BUG/MINOR: mux-quic: complete flow-control for uni streams
Max stream data was neither enforced nor respected for local/remote uni
streams. Previously, qcs instances incorrectly reused the limit defined
for bidirectional ones.

This is now fixed. Two fields are added to the qcc connection structure:
* the local flow-control value to enforce on remote uni streams
* the remote flow-control value to respect on local uni streams

These two values can be reused to properly initialize the msd field of a
qcs instance in qcs_new(). The rest of the code is similar.

This must be backported up to 2.6.
2022-10-21 17:31:18 +02:00
Aurelien DARRAGON
e951c3435c MINOR: list: adding MT_LIST_APPEND_LOCKED macro
adding a new mt macro: MT_LIST_APPEND_LOCKED.

This macro may be used to append an item to an existing
list, like MT_LIST_APPEND.

But here the item will be forced into locked/busy state
prior to appending, so that it is already referenced
in the list while still preventing concurrent accesses
until we decide to unlock it.

The macro returns a struct mt_list "np", which is needed
at unlock time by the regular MT_LIST_UNLOCK_ELT macro.
2022-10-21 16:26:27 +02:00
Aurelien DARRAGON
18c284c126 DOC/MINOR: list: fixing MT_LIST_LOCK_ELT macro documentation
MT_LIST_LOCK_ELT macro was documented with an ambiguous
usage restriction, implying that concurrent list deletion
was not supported.

But it seems that either the code has evolved, or the comment is
wrong because the locking behavior implemented here is exactly
the same one used in MT_LIST_DELETE, and no such restriction is
described for MT_LIST_DELETE.

I made some tests to make sure concurrent MT_LIST_DELETE (or deletion
from mt_list_for_each_entry_safe) doesn't cause unexpected results.

At the present time, this macro is not used, this fix only
targets upcoming developments that might rely on this.

No backport needed.
2022-10-21 16:26:27 +02:00
Aurelien DARRAGON
bcaa401646 MINOR: list: fixing typo in MT_LIST_LOCK_ELT
A minor typo was made in MT_LIST_LOCK_ELT, preventing
haproxy from compiling if MT_LIST_LOCK_ELT is
used in the code.

Today, the macro is unused, and that's the reason why
the typo has remained unnoticed for such a long time.

Fixing it so it can be used in upcoming developments.

No backport required.
2022-10-21 16:26:27 +02:00
William Lallemand
bb581423b3 BUG/MEDIUM: httpclient/lua: crash when the lua task timeout before the httpclient
When the lua task finishes before the httpclients that are associated with
it, there is a risk that an httpclient tries to task_wakeup() the lua
task which does not exist anymore.

To fix this issue the httpclients used in a lua task are stored in a
list, and they are destroyed at the end of the lua task.

Must be backported in 2.5 and 2.6.
2022-10-20 18:47:15 +02:00
Amaury Denoyelle
deb7c87f55 MINOR: quic: define first packet flag
The treatment of a received packet differs depending on whether it is the
first one of the encapsulating datagram or not. Previously, this was
indicated via a function argument. Simplify this by defining a new Rx
packet flag named QUIC_FL_RX_PACKET_DGRAM_FIRST.

This change does not have a functional impact. It will simplify the API
when qc_lstnr_pkt_rcv() is broken into several functions: their number of
arguments will be reduced thanks to this patch.

This should be backported up to 2.6.
2022-10-19 18:12:56 +02:00
Amaury Denoyelle
845169da58 MINOR: quic: extend pn_offset field from quic_rx_packet
The pn_offset field was only set if header protection could not be removed.
Extend the usage of this field: it is now always set on packet
parsing in qc_lstnr_pkt_rcv().

This change helps to clean up the API of Rx functions by removing
unnecessary variables and function arguments.

This change has no functional impact. It is part of a refactoring
series on qc_lstnr_pkt_rcv(). The objective is to facilitate the
integration of the FD-owned socket patches.

This should be backported up to 2.6.
2022-10-19 18:12:56 +02:00
Amaury Denoyelle
0eae57273b MINOR: quic: add version field on quic_rx_packet
Add a new version field to the quic_rx_packet structure. This is set during
header parsing in the qc_lstnr_pkt_rcv() function.

This change has no functional impact. It is part of a refactoring
series on qc_lstnr_pkt_rcv(). The objective is to facilitate the
integration of the FD-owned socket patches.

This should be backported up to 2.6.
2022-10-19 18:12:56 +02:00
Willy Tarreau
f5a0c8abf5 MEDIUM: quic: respect the threads assigned to a bind line
Right now the QUIC thread mapping derives the thread ID from the CID
by dividing by global.nbthread. This is a problem because this makes
QUIC work on all threads and ignores the "thread" directive on the
bind lines. In addition, only 8 bits are used, which is no longer
compatible with the up to 4096 threads we may have in a configuration.

Let's modify it this way:
  - the CID now dedicates 12 bits to the thread ID
  - on output we continue to place the TID directly there.
  - on input, the value is extracted. If it corresponds to a valid
    thread number of the bind_conf, it's used as-is.
  - otherwise it's used as a rank within the current bind_conf's
    thread mask so that in the end we still get a valid thread ID
    for this bind_conf.

The extraction function now requires a bind_conf in order to get the
group and thread mask. It was better to use bind_confs now as the goal
is to make them support multiple listeners sooner or later.
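
A standalone illustration of this selection rule (not the actual extraction
function; helpers and bit widths are simplified, and the mask is assumed
non-empty and limited to 64 threads):

  /* <cid_tid>: the 12-bit thread ID carried in the CID
   * <thr_mask>: the bind_conf's enabled-thread mask
   * <nbthread>: total number of configured threads
   */
  static unsigned int cid_to_tid(unsigned int cid_tid, unsigned long thr_mask,
                                 unsigned int nbthread)
  {
          unsigned int rank, i;

          if (cid_tid < nbthread && (thr_mask & (1UL << cid_tid)))
                  return cid_tid;  /* valid thread for this bind_conf: use as-is */

          /* otherwise use the value as a rank among the enabled threads */
          rank = cid_tid % (unsigned int)__builtin_popcountl(thr_mask);
          for (i = 0; ; i++)
                  if ((thr_mask & (1UL << i)) && !rank--)
                          return i;
  }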
2022-10-13 18:08:05 +02:00
William Lallemand
eba6a54cd4 MINOR: logs: startup-logs can use a shm for logging the reload
When compiled with USE_SHM_OPEN=1 the startup-logs are now able to use
an shm which is used to keep the logs when switching to mworker wait
mode. This allows keeping the logs of a failed reload.

When allocating the startup-logs at the first start of the process, haproxy
will do a shm_open() with a unique path using the PID of the process; the
file is unlinked immediately so we don't leave unwelcome files behind. The fd
resulting from this shm is stored in the HAPROXY_STARTUPLOGS_FD
environment variable so it can be mmapped again when switching to wait
mode.

When forking children, the process copies the mmap to a mallocated
ring so we never share the same memory section between the master and
the workers. When switching to wait mode, the shm is not used anymore as
it is also copied to a mallocated structure.

This allows using the "show startup-logs" command over the master CLI
to get the logs of the latest startup or reload. This way the logs of
the latest failed reload are also kept.

This is only activated on the linux-glibc target for now.
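
A hedged, self-contained sketch of the shm_open()/unlink pattern described
above (names and error handling simplified; not the actual startup-logs code):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* create a PID-unique shm, unlink it at once so no file is left behind,
   * and return the fd (to be exported, e.g. via HAPROXY_STARTUPLOGS_FD,
   * and mmap()ed again after the re-exec for wait mode) */
  static int startup_logs_shm(size_t size)
  {
          char path[64];
          int fd;

          snprintf(path, sizeof(path), "/hap_startup_logs_%d", (int)getpid());
          fd = shm_open(path, O_RDWR | O_CREAT | O_EXCL, 0600);
          if (fd < 0)
                  return -1;
          shm_unlink(path);
          if (ftruncate(fd, size) < 0) {
                  close(fd);
                  return -1;
          }
          return fd;
  }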
2022-10-13 16:50:22 +02:00
William Lallemand
35df34223b MINOR: buffers: split b_force_xfer() into b_cpy() and b_force_xfer()
Split the b_force_xfer() into b_ncat() and b_force_xfer().

The previous b_force_xfer() implementation was basically a copy with a
b_del() on the src buffer. Keep this implementation as b_ncat(), and
make b_force_xfer() simply call b_ncat() + b_del().
2022-10-13 16:45:28 +02:00
William Lallemand
9e4ead3095 MINOR: ring: ring_cast_from_area() cast from an allocated area
Cast a unified ring + storage area to a ring, without
reinitializing the data buffer. Only the waiters and the lock are
reinitialized.

It helps retrieve a previously allocated ring, from an mmap for
example.
2022-10-13 16:45:28 +02:00
Amaury Denoyelle
1cba8d60f3 CLEANUP: quic: improve naming for rxbuf/datagrams handling
QUIC datagrams are read from a random thread. They are then redispatched
to the connection thread according to the first packet DCID. These
operations are implemented through a special buffer designed to avoid
locking.

Refactor this code with the following changes :
* <rxbuf> type is renamed <quic_receiver_buf>. Its list element is also
  renamed to highlight its attach point to a receiver.
* <quic_dgram> and <quic_receiver_buf> definitions are moved to
  quic_sock-t.h. This helps to reduce the size of quic_conn-t.h.
* <quic_dgram> list elements are renamed to highlight their attach point
  into a <quic_receiver_buf> and a <quic_dghdlr>.

This should be backported up to 2.6.
2022-10-13 11:06:48 +02:00
Amaury Denoyelle
8c4d062d25 CLEANUP: quic: remove unused rxbufs member in receiver
rxbuf is the structure used to store QUIC datagrams and redispatch them
to the connection thread.

Each receiver manages a list of rxbufs. This was stored both as an array
and an mt_list. Currently, only the mt_list is needed, so the <rxbufs>
member is removed from the receiver structure.

This should be backported up to 2.6.
2022-10-13 11:05:41 +02:00
Frédéric Lécaille
e1a49cfd4d MINOR: quic: Split the secrets key allocation in two parts
Implement quic_tls_secrets_keys_alloc()/quic_tls_secrets_keys_free() to allocate
the memory for only one direction (RX or TX).
Modify ha_quic_set_encryption_secrets() to call these functions for one of these
directions (or both). From now on we can rely on the value of the secret keys
to know whether they were derived.
Remove the QUIC_FL_TLS_SECRETS_SET flag which is no longer useful.
Consequently, the secrets are dumped by the traces only if derived.

Must be backported to 2.6.
2022-10-13 10:12:03 +02:00
Frédéric Lécaille
4aa7d8197a BUG/MINOR: quic: Stalled 0RTT connections with big ClientHello TLS message
This issue was reproduced with the -Q picoquic client option to split a big
ClientHello message into two Initial packets, and haproxy as server without
any knowledge of any previous 0RTT session (restarted after a first 0RTT
session). The received 0RTT packets were removed from their queue when the
second Initial packet was parsed, and the QUIC handshake state never
progressed and remained at Initial state.

To avoid such situations, after having treated some Initial packets we always
check if there are 0RTT packets to parse and we never remove them from their
queue. This will be done after the handshake is completed or upon idle timeout
expiration.

Also add more traces to be able to analyze the handshake progression.

Tested with ngtcp2 and picoquic

Must be backported to 2.6.
2022-10-13 10:12:03 +02:00
Frédéric Lécaille
9f9263ed13 MINOR: quic: Use a non-contiguous buffer for RX CRYPTO data
Implement quic_get_ncbuf() to dynamically allocate a new ncbuf to be attached to
any quic_cstream struct which needs such a buffer. Note that there is no quic_cstream
for 0RTT encryption level. quic_free_ncbuf() is added to release the memory
allocated for a non-contiguous buffer.

Modify qc_handle_crypto_frm() to call this function and allocate an ncbuf for
crypto data which are not received in order. The crypto data which are received in
order are not buffered but provided to the TLS stack (calling qc_provide_cdata()).

Modify qc_treat_rx_crypto_frms(), which is called after having provided the
in-order received crypto data to the TLS stack, to provide again the remaining
crypto data which have been buffered, if possible (if they are in order). Each time
buffered CRYPTO data are consumed, we try to release the memory allocated for
the non-contiguous buffer (ncbuf).
Also move rx.crypto.offset quic_enc_level struct member to rx.offset quic_cstream
struct member.

Must be backported to 2.6.
2022-10-13 10:12:03 +02:00
Frédéric Lécaille
7e3f7c47e9 MINOR: quic: New quic_cstream object implementation
Add a new quic_cstream struct definition to implement the CRYPTO data stream.
This is a simplification of the qcs object (QUIC streams) for the CRYPTO data
without any information about the flow control. They are not attached to any
tree, but to a QUIC encryption level, one per encryption level except for
the early data encryption level (for 0RTT). A stream descriptor is also allocated
for each CRYPTO data stream.

Must be backported to 2.6
2022-10-13 10:12:03 +02:00
Willy Tarreau
d114f4a68f MEDIUM: checks: spread the checks load over random threads
The CPU usage pattern was found to be high (5%) on a machine with
48 threads and only 100 servers checked every second. That was
supposed to be only 100 connections per second, which should be very
cheap. It was figured that, due to the check tasks unbinding from any
thread when going back to sleep, they're queued into the shared queue.

Not only does this require manipulating the global queue lock, but in
addition it means that all threads have to check the global queue
before going to sleep (hence take a lock again) to figure out how long
to sleep, and that they would all sleep only for the shortest amount
of time until the next check: one would pick it and all the other ones
would go back to sleep waiting for the next check.

That's perfectly visible in time-to-first-byte measurements. A quick
test consisting in retrieving the stats page in CSV over a 48-thread
process checking 200 servers every 2 seconds shows the following tail:

  percentile   ttfb(ms)
  99.98        2.43
  99.985       5.72
  99.99       32.96
  99.995     82.176
  99.996     82.944
  99.9965    83.328
  99.997      83.84
  99.9975    84.288
  99.998      85.12
  99.9985    86.592
  99.999         88
  99.9995    89.728
  99.9999   100.352

One solution could consist in forcefully binding checks to threads at
boot time, but that's annoying, will cause trouble for dynamic servers
and may cause some skew in the load depending on some server patterns.

Instead here we take a different approach. A check remains bound to its
thread for as long as possible, but upon every wakeup, the thread's load
is compared with another random thread's load. If it's found that that
other thread's load is less than half of the current one's, the task is
bounced to that thread. In order to prevent that new thread from doing
the same, we set a flag "CHK_ST_SLEEPING" that indicates that it just
woke up and we're bouncing the task only on this condition.

Tests have shown that the initial load was very unfair before, with a few
checks threads having a load of 15-20 and the vast majority having zero.
With this modification, after two "inter" delays, the load is either zero
or one everywhere when checks start. The same test shows a CPU usage that
significantly drops, between 0.5 and 1%. The same latency tail measurement
is much better, roughly 10 times smaller:

  percentile   ttfb(ms)
  99.98        1.647
  99.985       1.773
  99.99        4.912
  99.995        8.76
  99.996        8.88
  99.9965      8.944
  99.997       9.016
  99.9975      9.104
  99.998       9.224
  99.9985      9.416
  99.999         9.8
  99.9995      10.04
  99.9999     10.432

In fact one difference here is that many threads work while in the past
they were waking up and going back to sleep after having perturbed the
shared lock. Thus it is anticipated that this will scale way more smoothly
than before. Under strace it's clearly visible that all threads are
sleeping for the time it takes to relaunch a check, there's no more
thundering herd wakeups.

However it is also possible that in some rare cases, such as very short
check intervals smaller than a scheduler's timeslice (such as 4ms),
some users might have benefited from the work being concentrated on
fewer threads and would instead observe a small increase of apparent
CPU usage due to more total threads waking up even if that's for less
work each and less total work. That's visible with 200 servers at 4ms,
where show activity shows that a few threads were overloaded and others
were doing nothing. It's not a problem, though, as in practice checks are
not supposed to eat much CPU nor to wake up fast enough to represent a
significant load anyway, and the main issue they could have been
causing (aside from the global lock) is an increase in last-percentile
latency.
2022-10-12 21:49:30 +02:00
Christopher Faulet
c8db114afc MINOR: flags/mux-fcgi: Decode FCGI connection and stream flags
The new functions fconn_show_flags() and fstrm_show_flags() decode the flags
state into a string, and are used by dev/flags:

$ ./dev/flags/flags fconn 0x3100
fconn->flags = FCGI_CF_GET_VALUES | FCGI_CF_KEEP_CONN | FCGI_CF_MPXS_CONNS

$ ./dev/flags/flags fstrm 0x3300
fstrm->flags = FCGI_SF_WANT_SHUTW | FCGI_SF_WANT_SHUTR | FCGI_SF_OUTGOING_DATA | FCGI_SF_BEGIN_SENT
2022-10-12 17:10:41 +02:00
Christopher Faulet
3965aa7494 REORG: mux-fcgi: Extract flags and enums into mux_fcgi-t.h
The same was performed for the H2 and H1 multiplexers. FCGI connection and
stream flags are moved in a dedicated header file. It will be mainly used to
be able to decode mux-fcgi flags from the flags utility.

In this patch, we move the flags and enums to mux_fcgi-t.h, as well as the
two state decoding inline functions.
2022-10-12 17:10:37 +02:00
Willy Tarreau
dbae89e09c MEDIUM: stick-table: always use atomic ops to requeue the table's task
We're generalizing the change performed in previous commit "MEDIUM:
stick-table: requeue the expiration task out of the exclusive lock"
to stktable_requeue_exp() so that it can also be used by callers of
__stktable_store(). At the moment there's still no visible change
since it's still called under the write lock. However, the previous
code in stktable_touch_with_exp() was updated to use this function.
2022-10-12 14:19:05 +02:00
Willy Tarreau
8d3c3336f9 MEDIUM: stick-table: make stksess_kill_if_expired() avoid the exclusive lock
stream_store_counters() calls stksess_kill_if_expired() for each active
counter. And this one takes an exclusive lock on the table before
checking if it has any work to do (hint: it almost never has since it
only wants to delete expired entries). However a lock is still needed for
now to protect the ref_cnt, but we can do it atomically under the read
lock.

Let's change the mechanism. Now what we do is to check out of the lock
if the entry is expired. If it is, we take the write lock, expire it,
and decrement the refcount. Otherwise we just decrement the refcount
under a read lock. With this change alone, the config based on 3
trackers without the previous patches saw a 2.6x improvement, but here
it doesn't yet change anything because some heavy contention remains
on the lookup part.
2022-10-12 14:19:05 +02:00
Willy Tarreau
9f5cb435b6 MINOR: stick-table: move the write lock inside stktable_touch_with_exp()
Taking the write lock prior to entering that function is a problem
because this function is full of conditions that most of the time can
lead to eliminating the lock.

This commit first moves the write lock inside the function and passes
the extra argument required to implement stktable_touch_remote() and
stktable_touch_local(). It also renames the function to remove the
underscores since there's no other variant and it's exported under
this name (probably an old rename that was not propagated). The code
was stressed under 48 threads using 3 trackers on the same table. It
already shows a tiny 3% improvement from 187k to 193k rps.
2022-10-12 14:19:05 +02:00
Willy Tarreau
76642223f0 MEDIUM: stick-table: switch the table lock to rwlock
Right now a spinlock is used, but most accesses are for reads, so let's
switch the lock to an rwlock and switch all accesses to exclusive locks
for now. There should be no visible difference at this point.
2022-10-12 14:19:05 +02:00
Willy Tarreau
f6a42c3a37 MINOR: freq_ctr: use the thread's local time whenever possible
Right now when dealing with freq_ctr updates, we're using the process-
wide monotonic time, and accessing it is expensive since every thread
needs to update it, so this adds some contention. However we don't need
it all the time: the thread's local time is most of the time strictly
equal to the global time, and may be off by one millisecond when the
global time is switched to the next one by another thread, and in this
case we don't want to use the local time because it would risk causing
a rotation of the counter. But that's precisely the condition we're
already relying on for the slow path!

What this patch does is to add a check for the period against the
local time prior to anything else, and immediately return after
updating the counter if still within the period, otherwise fall back
to the existing code. Given that the function starts to inflate a bit,
it was split between s very short inline part that does the hot path,
and the slower fallback that's in a cold function. It was measured that
on a 24-CPU machine it was called ~0.003% of the time.

The resulting improvement sits between 2 and 3% at 500k req/s tracking
an http_req_rate counter.
2022-10-12 14:19:05 +02:00
Willy Tarreau
b13044cc1a MINOR: plock: support disabling exponential back-off
The new macro PLOCK_DISABLE_EBO may be defined to disable exponential
backoff. This can be useful to more easily spot functions that cause
contention. In this case the CPU will be spent inside the functions
themselves instead of the pl_wait_unlock_{long,int}() functions, making
them easier to spot using "perf top" even if that causes a significant
degradation of the thread scalability.
2022-10-12 14:19:05 +02:00
Willy Tarreau
cab054bbf9 CLEANUP: quic/receiver: remove the now unused tx_qring list
The tx_qrings[] and tx_qring_list in the receiver are not used
anymore since commit f2476053f ("MINOR: quic: replace custom buf on Tx
by default struct buffer"), the only place where they're referenced
was in quic_alloc_tx_rings_listener(), which by the way implies that
these were not even freed on exit.

Let's just remove them. This should be backported to 2.6 since the
commit above also was.
2022-10-11 08:40:38 +02:00
Amaury Denoyelle
97ecc7a8ea MEDIUM: quic: retrieve frontend destination address
Retrieve the frontend destination address for a QUIC connection. This
address is retrieved from the first received datagram and then stored in
the associated quic-conn.

This feature relies on IP_PKTINFO or affiliated flags support on the
socket. This flag is set for each QUIC listener in
sock_inet_bind_receiver(). To retrieve the destination address,
recvfrom() has been replaced by the recvmsg() syscall. This operation and
the parsing of the msghdr structure have been extracted into a wrapper,
quic_recv().
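
For reference, a standalone sketch of the recvmsg()/IP_PKTINFO technique
(IPv4 only, simplified; this is not quic_recv(), and it assumes the socket
was prepared with setsockopt(fd, IPPROTO_IP, IP_PKTINFO, ...)):

  #define _GNU_SOURCE
  #include <netinet/in.h>
  #include <string.h>
  #include <sys/socket.h>

  /* receive one datagram and report the local destination address in <dst> */
  static ssize_t recv_with_dst(int fd, void *buf, size_t len,
                               struct sockaddr_in *peer, struct in_addr *dst)
  {
          char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
          struct iovec iov = { .iov_base = buf, .iov_len = len };
          struct msghdr msg = {
                  .msg_name = peer, .msg_namelen = sizeof(*peer),
                  .msg_iov = &iov, .msg_iovlen = 1,
                  .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
          };
          struct cmsghdr *cmsg;
          ssize_t ret = recvmsg(fd, &msg, 0);

          if (ret < 0)
                  return ret;
          for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
                  if (cmsg->cmsg_level == IPPROTO_IP && cmsg->cmsg_type == IP_PKTINFO) {
                          struct in_pktinfo pi;

                          memcpy(&pi, CMSG_DATA(cmsg), sizeof(pi));
                          *dst = pi.ipi_addr;  /* the frontend (local) address */
                  }
          }
          return ret;
  }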

This change is useful to finalize the implementation of the 'dst' sample
fetch. As such, quic_sock_get_dst() has been edited to return the local
address from the quic-conn. As a best effort, if the local address is not
available because the kernel does not support IP_PKTINFO, the address of
the listener is returned instead.

This should be backported up to 2.6.
2022-10-10 11:48:27 +02:00
Amaury Denoyelle
2ed840015f MINOR: quic: limit usage of ssl_sock_ctx in favor of quic_conn
Continue on the cleanup of QUIC stack and components.

quic_conn internally uses an ssl_sock_ctx to handle the mandatory TLS QUIC
integration. However, this is merely a convenience, and it is not
equivalent to the stackable ssl xprt layer in the context of HTTP/1 or 2.

To better emphasize this, ssl_sock_ctx usage in quic_conn has been
removed wherever it is not necessary: namely in functions not related
to TLS. The quic_conn struct now contains its own wait_event for the tasklet
quic_conn_io_cb().

This should be backported up to 2.6.
2022-10-05 11:08:32 +02:00
Willy Tarreau
922a907926 MINOR: fd: add a new function to only raise RLIMIT_NOFILE
In issue #1866 a problem was reported under docker, by which a user cannot
lower the number of FDs needed. It looks like a restriction imposed by this
environment, but it results in an error while it ought not to in the
case of shrinking.

This patch adds a new function raise_rlim_nofile() that takes the desired
new setting, compares it to the current one, and only calls setrlimit() if
one of the values in the new setting is larger than the older one. As such
it will continue to emit warnings and errors in case of failure to raise
the limit but will never shrink it.
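
A hedged sketch of this behaviour (the real function's prototype and error
reporting may differ):

  #include <sys/resource.h>

  /* only call setrlimit() when <lim> would raise the current limits */
  static int raise_rlim_nofile(const struct rlimit *lim)
  {
          struct rlimit cur;

          if (getrlimit(RLIMIT_NOFILE, &cur) != 0)
                  return -1;
          if (lim->rlim_cur <= cur.rlim_cur && lim->rlim_max <= cur.rlim_max)
                  return 0;     /* shrinking or equal: leave the limit alone */
          return setrlimit(RLIMIT_NOFILE, lim);
  }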

This patch is only preliminary to another one, but will have to be
backported where relevant (likely only 2.6).
2022-10-04 08:38:47 +02:00
Amaury Denoyelle
92fa63f735 CLEANUP: quic: create a dedicated quic_conn module
The xprt_quic module was too large and did not reflect the true architecture,
in contrast to the other protocols in haproxy.

Extract the code related to the XPRT layer and keep it under the xprt_quic
module. This code should only contain a simple API to communicate between the
QUIC lower layer and the connection/MUX.

The vast majority of the code has been moved into a new module named
quic_conn. This module is responsible for the implementation of the QUIC
lower layer. Conceptually, it overlaps with the kernel's TCP implementation
when comparing the QUIC and HTTP/1-2 stacks of haproxy.

This should be backported up to 2.6.
2022-10-03 16:25:17 +02:00