Commit Graph

6306 Commits

Author SHA1 Message Date
Amaury Denoyelle
b143723411 REORG: mux-quic: rename stream initialization function
Rename both qcc_open_stream_local/remote() functions to
qcc_init_stream_local/remote(). This change is purely cosmetic. It will
reduce the ambiguity with the soon-to-be-implemented OPEN states for
QCS instances.
2022-07-11 16:24:03 +02:00
Willy Tarreau
481edaceb8 BUILD: debug: silence warning on gcc-5
In 2.6, 8a0fd3a36 ("BUILD: debug: work around gcc-12 excessive
-Warray-bounds warnings") disabled some warnings that were reported
around the BUG() statement. But the -Wnull-dereference warning
isn't known to gcc-5, as it only arrived in gcc-6, which makes gcc-5
complain loudly that it doesn't know this directive. Let's just
condition this one to gcc-6.
2022-07-10 14:13:48 +02:00
Christopher Faulet
ca7218aaf0 MINOR: http: Add function to detect default port
http_is_default_port() can be used to test if a port is a default HTTP/HTTPS
port. A scheme may be specified. In this case, it is used to detect default
ports: 80 for "http://" and 443 for "https://". Otherwise, with no scheme, both
are considered default ports.
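For illustration, a minimal standalone sketch of this behaviour (the real
helper works on haproxy's ist strings; the names and signature below are
purely illustrative):

    #include <string.h>

    /* sketch: a NULL scheme means "no scheme specified" */
    static int is_default_port_sketch(const char *scheme, const char *port)
    {
        if (!scheme)    /* no scheme: both defaults are accepted */
            return !strcmp(port, "80") || !strcmp(port, "443");
        if (!strcmp(scheme, "http://"))
            return !strcmp(port, "80");
        if (!strcmp(scheme, "https://"))
            return !strcmp(port, "443");
        return 0;
    }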
2022-07-06 17:54:03 +02:00
Christopher Faulet
658f971621 MINOR: http: Add function to get port part of a host
The http_get_host_port() function can be used to get the port part of a host. It
will be used to get the port of a URI authority or a Host header
value. This function only looks for a port starting from the end of the
host. It is the caller's responsibility to call it with a valid host value. An
indirect string is returned.
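As an illustration only, a simplified standalone version of this scan (a
plain pointer into <host> stands in for the returned indirect string):

    #include <ctype.h>
    #include <string.h>

    /* sketch: returns a pointer to the port digits inside <host>
     * ("8080" in "example.com:8080"), or NULL when no port is found;
     * as said above, the caller must pass a valid host value.
     */
    static const char *get_host_port_sketch(const char *host)
    {
        const char *p = host + strlen(host);

        while (p > host && isdigit((unsigned char)p[-1]))
            p--;
        if (p > host && p[-1] == ':' && *p)
            return p;
        return NULL;
    }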
2022-07-06 17:54:03 +02:00
Amaury Denoyelle
3f39b40fe0 MINOR: mux-quic: rename qcs flag FIN_RECV to SIZE_KNOWN
Rename QC_SF_FIN_RECV to the more generic name QC_SF_SIZE_KNOWN. This
aligns better with QUIC RFC 9000, which uses the "Size Known" state
definition. This change is purely cosmetic.
2022-07-05 16:18:27 +02:00
Amaury Denoyelle
a509ffb505 MEDIUM: mux-quic: refactor streams opening
Review the whole API used to access/instantiate qcs.

A public function qcc_open_stream_local() is available to the
application protocol layer. It allows a local stream to be easily
opened. The next available ID is automatically assigned.

For remote streams, qcc_open_stream_remote() has been implemented. It
will automatically take care of allocating streams in a linear way
according to the ID. This function is called via qcc_get_qcs(), which can
be used for each qcc_recv*() operation. For the moment, it is only used
for STREAM frames via qcc_recv(), but soon it will be implemented for
other frame types which can also be used to open a new stream.

qcs_new() and qcs_free() have been restricted to the QUIC MUX only as
they are now reserved for internal usage.

This change is a pure refactoring and should not have any noticeable
impact. It clarifies the developer's intent and helps ensure that a
stream is not automatically opened when not desired.
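To illustrate the linear allocation of remote streams, here is a
hypothetical sketch (all types and names are illustrative; QUIC spaces
stream IDs of a given type by 4, see RFC 9000):

    struct qcs;                                /* opaque in this sketch */
    extern struct qcs *open_stream(unsigned long long id);
    extern struct qcs *lookup_stream(unsigned long long id);

    struct qcc_sketch {
        unsigned long long next_remote_id;     /* lowest unopened remote ID */
    };

    /* sketch of the qcc_get_qcs() idea: a frame for stream <id>
     * implicitly opens every not-yet-seen remote stream below it
     */
    static struct qcs *get_qcs_sketch(struct qcc_sketch *qcc,
                                      unsigned long long id)
    {
        while (qcc->next_remote_id <= id) {
            open_stream(qcc->next_remote_id);
            qcc->next_remote_id += 4;          /* same-type IDs step by 4 */
        }
        return lookup_stream(id);              /* guaranteed to exist now */
    }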
2022-07-05 16:18:27 +02:00
Amaury Denoyelle
321fa7733c REORG: mux-quic: reorganize flow-control fields
<qcc.cl_bidi_r> is used to implement STREAM ID flow control enforcement.
Move it together with all fields related to this operation, separate from
the MAX STREAM DATA calculation.
2022-07-05 11:20:02 +02:00
Amaury Denoyelle
a441ec9c7a CLEANUP: mux-quic: do not export qc_get_ncbuf
qc_get_ncbuf() is only used internally: thus its prototype is not
required in the QUIC MUX include file.
2022-07-05 11:06:52 +02:00
Emeric Brun
36d9097cf3 MINOR: fd: Add BUG_ON checks on fd_insert()
This patch adds two BUG_ON checks to fd_insert(), verifying that the fd
has been correctly re-initialized in the fdtab before a new insert.

It will raise a BUG if we try to insert the same fd multiple times
without an intermediate fd_delete().

First one checks that the owner for this fd in fdtab was reset to NULL.

Second one checks that the state flags for this fd in fdtab were reset
to 0.
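
For the record, a standalone illustration of these two checks over a
simplified fdtab model (assert() stands in for haproxy's real BUG_ON()
macro here):

    #include <assert.h>
    #include <stddef.h>

    #define BUG_ON(cond) assert(!(cond))

    struct fdtab_entry { void *owner; unsigned int state; };
    static struct fdtab_entry fdtab[1024];

    static void fd_insert_sketch(int fd, void *owner, unsigned int state)
    {
        BUG_ON(fdtab[fd].owner != NULL); /* owner must have been reset */
        BUG_ON(fdtab[fd].state != 0);    /* state flags must have been reset */
        fdtab[fd].owner = owner;
        fdtab[fd].state = state;
    }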

This patch could be backported to versions >= 2.4.
2022-07-05 05:18:51 +02:00
Willy Tarreau
1229ef312d MINOR: wdt: do not rely on threads_to_dump anymore
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
2022-07-01 19:26:35 +02:00
Willy Tarreau
a2b8ed4b44 MINOR: thread: add is_thread_harmless() to know if a thread already is harmless
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
2022-07-01 19:26:35 +02:00
Willy Tarreau
598cf3f22e MAJOR: threads: change thread_isolate to support inter-group synchronization
thread_isolate() and thread_isolate_full() were relying on a set of thread
masks for all threads in different states (rdv, harmless, idle). This cannot
work anymore when the number of threads increases beyond LONGBITS so we need
to change the mechanism.

What is done here is to have a counter of requesters and the number of the
current isolated thread. Threads which want to isolate themselves increment
the request counter and wait for all threads to be marked harmless (or idle)
by scanning all groups and watching the respective masks. This is possible
because threads cannot escape once they discover this counter, unless they
also want to isolate and possibly pass first. Once all threads are harmless,
the requesting thread tries to self-assign the isolated thread number, and
if it fails it loops back to checking all threads. If it wins it's guaranteed
to be alone, and can drop its harmless bit, so that other competing threads
go back to the loop waiting for all threads to be harmless. The benefit of
proceeding this way is that there's very little write contention on the
thread number (none during work), hence no cache line moves between caches,
thus frozen threads do not slow down the isolated one.

Once it's done, the isolated thread resets the thread number (hence lets
another thread take the place) and decrements the requester count, thus
possibly releasing all harmless threads.
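
The mechanism can be sketched with plain C11 atomics as follows (all
names are illustrative and the per-group scan of the harmless masks is
reduced to a stub):

    #include <stdatomic.h>

    #define NO_THREAD (~0u)

    static atomic_uint rdv_requests;                /* isolation requesters */
    static atomic_uint isolated_thread = NO_THREAD; /* winner's thread ID */

    extern int all_threads_harmless(void);          /* scans group masks */
    extern void set_harmless(unsigned tid);
    extern void clr_harmless(unsigned tid);

    static void thread_isolate_sketch(unsigned tid)
    {
        atomic_fetch_add(&rdv_requests, 1);
        set_harmless(tid);                   /* don't block the others */
        while (1) {
            unsigned none = NO_THREAD;

            while (!all_threads_harmless())
                ;                            /* wait for every thread */
            if (atomic_compare_exchange_strong(&isolated_thread, &none, tid))
                break;                       /* won: we will run alone */
        }
        clr_harmless(tid);                   /* losers loop back above */
    }

    static void thread_release_sketch(void)
    {
        atomic_store(&isolated_thread, NO_THREAD); /* let another one in */
        atomic_fetch_sub(&rdv_requests, 1);        /* may release everyone */
    }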

With this change there's no more need for any global mask to synchronize
any thread, and we only need to loop over a number of groups to check
64 threads at a time per iteration. As such, tinfo's threads_want_rdv
could be dropped.

This was tested with 64 threads spread into 2 groups, running 64 tasks
(from the debug dev command), 20 "show sess" (thread_isolate()), 20
"add server blah/blah" (thread_isolate()), and 20 "del server blah/blah"
(thread_isolate_full()). The load remained very low (limited by external
socat forks) and no stuck or starved threads were found.
2022-07-01 19:15:15 +02:00
Willy Tarreau
ef422ced91 MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure what thread is the first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.

Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
2022-07-01 19:15:15 +02:00
Willy Tarreau
03f9b35114 MEDIUM: tinfo: add a dynamic thread-group context
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).

Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.

There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
2022-07-01 19:15:15 +02:00
Willy Tarreau
22b2a24eb2 CLEANUP: thread: remove thread_sync_release() and thread_sync_mask
This function was added in 2.0 when reworking the thread isolation
mechanism to make it more reliable. However it is fundamentally
incompatible with the full isolation mechanism provided by
thread_isolate_full() since that one will wait for all threads to
become idle while the former will wait for all threads to finish
waiting, causing a deadlock.

Given that it's not used, let's just drop it entirely before it gets
used by accident.
2022-07-01 19:15:15 +02:00
Willy Tarreau
cce203aae5 MINOR: thread: add a new all_tgroups_mask variable to know about active tgroups
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
2022-07-01 19:15:15 +02:00
Willy Tarreau
377e37a80f MINOR: tinfo: add the mask of enabled threads in each group
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity to call it threads_enabled
and to clarify which threads are counted there (in case we later allow
some to be stopped).
2022-07-01 19:15:14 +02:00
Willy Tarreau
60fe4a95a2 MINOR: tinfo: replace the tgid with tgid_bit in tgroup_info
Now that the tgid is accessible from the thread, it's pointless to have
it in the group, and it was only set but never used. However we'll soon
frequently need the mask corresponding to the group ID, and the risk of
getting it wrong with the +1, or by shifting 1 instead of 1UL, is significant,
so let's store the tgid_bit there.
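
As a reminder of the failure mode this avoids (plain C, nothing
haproxy-specific): int is 32 bits on our usual 64-bit targets, so
shifting a plain 1 by 31 or more is undefined:

    unsigned long bad  = 1 << 40;    /* undefined: shifts a 32-bit int */
    unsigned long good = 1UL << 40;  /* correct: builds a 64-bit mask */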
2022-07-01 19:15:14 +02:00
Willy Tarreau
66ad98a772 MINOR: tinfo: add the tgid to the thread_info struct
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
2022-07-01 19:15:14 +02:00
Willy Tarreau
e7475c8e79 MEDIUM: tasks/fd: replace sleeping_thread_mask with a TH_FL_SLEEPING flag
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.

The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.

This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():

   frontend test
        bind :8888 maxconn 10 thread 1-2
        rate-limit sessions 5

Alternatively, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
2022-07-01 19:15:14 +02:00
Willy Tarreau
dce4ad755f MEDIUM: thread: add a new per-thread flag TH_FL_NOTIFIED to remember wakeups
Right now when an inter-thread wakeup happens, we first check whether the
thread was asleep, and if so we wake the poller up and remove its bit from
the sleeping mask. That's not very clean, since the sleeping mask cannot be
entirely trusted: a thread that's about to wake up will already have
its sleeping bit removed.

This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that a
thread was notified to wake up. It's cleared before checking the task lists
last, so that new wakeups can be considered again (since wake_thread() is
only used to notify about task wakeups and FD polling changes). This way
we do not need to modify a remote thread's sleeping mask anymore. As such
wake_thread() now only tests and sets the TH_FL_NOTIFIED flag but doesn't
clear sleeping anymore.
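
In sketch form, the resulting wake_thread() logic looks like this
(structure and names simplified): a single atomic fetch-and-or both sets
the flag and tells whether the thread still needs the expensive kick:

    #include <stdatomic.h>

    #define TH_FL_NOTIFIED_SK 0x1

    struct th_ctx_sketch { _Atomic unsigned int flags; };

    extern void kick_poller(struct th_ctx_sketch *th); /* e.g. pipe write */

    static void wake_thread_sketch(struct th_ctx_sketch *th)
    {
        unsigned int prev = atomic_fetch_or(&th->flags, TH_FL_NOTIFIED_SK);

        if (!(prev & TH_FL_NOTIFIED_SK))
            kick_poller(th);     /* only the first notifier pays the cost */
    }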
2022-07-01 19:15:14 +02:00
Willy Tarreau
058b2c1015 MINOR: poller: centralize poll return handling
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
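
In sketch form the shared sequence looks like this (the helper names
below are placeholders for the steps listed above):

    extern void update_wall_clock(int wait_time, int status);
    extern void thread_harmless_end_sk(void);
    extern void clear_sleeping_flag_sk(void);

    /* sketch: every poller calls this right after its polling syscall */
    static void fd_leaving_poll_sketch(int wait_time, int status)
    {
        update_wall_clock(wait_time, status);  /* clock + idle accounting */
        thread_harmless_end_sk();              /* may touch shared state again */
        clear_sleeping_flag_sk();              /* no longer marked sleeping */
    }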
2022-07-01 19:15:14 +02:00
Willy Tarreau
bdcd32598f MINOR: thread: only use atomic ops to touch the flags
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use only atomic ops to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
2022-07-01 19:15:14 +02:00
Willy Tarreau
8e079cdd44 MINOR: thread: move the flags to the shared cache line
The thread flags were once believed to be local to the thread, but as
it stands, even the STUCK flag is shared since it's looked at by the
watchdog. As such we'll need to use atomic ops to manipulate them, and
likely to move them into the shared area.

This patch only moves the flag into the shared area so that we can later
decide whether it's best to leave them there or to move them back to the
local area. Interestingly, some tests have shown a 3% better performance
on dequeuing with this, while they're not used by other threads yet, so
there are definitely alignment effects that might change over time.
2022-07-01 19:15:14 +02:00
Willy Tarreau
f3efef4d60 MINOR: thread: make wake_thread() take care of the sleeping threads mask
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used for
that. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
2022-07-01 19:15:14 +02:00
Willy Tarreau
319d136ff9 MEDIUM: task: use regular eb32 trees for the run queues
Since we don't mix tasks from different threads in the run queues
anymore, we don't need to use the eb32sc_ trees and we can switch
to the regular eb32 ones. This uses cheaper lookup and insert code,
and a 16-thread test on the queues shows a performance increase
from 570k RPS to 585k RPS.
2022-07-01 19:15:14 +02:00
Willy Tarreau
c958c70ec8 MINOR: task: replace global_tasks_mask with a check for tree's emptiness
This bit field used to be a per-thread cache of the result of the last
lookup of the presence of a task for each thread in the shared runqueue.
Since we now know that each thread has its own shared runqueue, a test of
emptiness is now sufficient to decide whether or not the shared tree
has a task for the current thread. Let's just remove this mask.
2022-07-01 19:15:14 +02:00
Willy Tarreau
da195e8aab MINOR: task: remove grq_total and use rq_total instead
grq_total was only used to know how many tasks were being queued in the
global runqueue for stats purposes, and that was transferred to the per
thread rq_total counter once assigned. We don't need this anymore since
we know where they are, so let's just directly update rq_total and drop
that one.
2022-07-01 19:15:14 +02:00
Willy Tarreau
b17dd6cc19 MEDIUM: task: replace the global rq_lock with a per-rq one
There's no point having a global rq_lock now that we have one shared RQ
per thread, let's have one lock per runqueue instead.
2022-07-01 19:15:14 +02:00
Willy Tarreau
6f78038d72 MEDIUM: task: move the shared runqueue to one per thread
Since we only use the shared runqueue for tasks assigned to
known threads, let's move that runqueue to each of these threads. The
goal will be to arrange an N*(N-1) mesh instead of a central contention
point.

The global_rqueue_ticks had to be dropped (for good) since we'll now
use the per-thread rqueue_ticks counter for both trees.

A few points to note:
  - the rq_lock still remains the global one for now so there should not
    be any gain in doing this, but should this trigger any regression, it
    is important to detect whether it's related to the lock or to the tree.

  - there's no more reason for using the scope-based version of the ebtree
    now, we could switch back to the regular eb32_tree.

  - it's worth checking if we still need TASK_GLOBAL (probably only to
    delete a task in one's own shared queue maybe).
2022-07-01 19:15:14 +02:00
Willy Tarreau
a4fb79b4a2 MINOR: task: make rqueue_ticks atomic
The runqueue ticks counter is per-thread and wasn't initially meant to
be shared. We'll soon have to share it so let's make it atomic. It's
only updated when waking up a task, and no performance difference was
observed. It was moved into the thread_ctx struct so that it doesn't
pollute the local cache line when it's later updated by other threads.
2022-07-01 19:15:14 +02:00
Willy Tarreau
fc5de15baa CLEANUP: task: remove the now unused TASK_GLOBAL flag
TASK_GLOBAL was exclusively used by task_unlink_rq(), as such it can be
dropped.
2022-07-01 19:15:14 +02:00
Willy Tarreau
3961608f63 CLEANUP: task: remove the unused task_unlink_rq()
This function stopped being used before 2.4 because either the task is
dequeued by the scheduler itself, which knows where to find it, or it's
killed by any thread, in which case task_kill() must be used as only this
one is safe.

It's difficult to say whether task_unlink_rq() is still safe, but once
the lock moves to a thread declared in the task itself, it will be even
more difficult to keep it safe.

Let's just remove it now before someone reuses it and causes trouble.
2022-07-01 19:15:14 +02:00
Willy Tarreau
eed3911a54 MINOR: task: replace task_set_affinity() with task_set_thread()
The latter passes a thread ID instead of a mask, making the code simpler.
2022-07-01 19:15:14 +02:00
Willy Tarreau
159e3acf5d MEDIUM: task: remove TASK_SHARED_WQ and only use t->tid
TASK_SHARED_WQ was set upon task creation and never changed afterwards.
Thus if a task was created to run anywhere (e.g. a check or a Lua task),
all its timers would always pass through the shared timers queue with a
lock. Now we know that tid<0 indicates a shared task, so we can use that
to decide whether or not to use the shared queue. The task might be
migrated using task_set_affinity() but it's always dequeued first so
the check will still be valid.
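
The resulting decision is tiny; in sketch form (structures simplified):

    struct wait_queue;                 /* opaque in this sketch */
    struct task_sketch { int tid; };   /* tid < 0: may run anywhere */

    static struct wait_queue *queue_for(const struct task_sketch *t,
                                        struct wait_queue *shared_wq,
                                        struct wait_queue *local_wq)
    {
        /* only thread-less tasks pay for the locked shared queue */
        return (t->tid < 0) ? shared_wq : local_wq;
    }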

Not only this removes a flag that's difficult to keep synchronized with
the thread ID, but it should significantly lower the load on systems with
many checks. A quick test with 5000 servers and fast checks that were
saturating the CPU shows that the check rate increased by 20% (hence the
CPU usage dropped by 17%). It's worth noting that run_task_lists() almost
no longer appears in perf top now.
2022-07-01 19:15:14 +02:00
Willy Tarreau
1f4bf7215a MEDIUM: task: only keep task_new_*() and drop task_new()
As previously advertised in comments, the mask-based task_new() is now
gone. The low-level function is now task_new_on() which takes a thread
number or a negative value for "any thread", which is turned to zero
for thread-less builds since there's no shared WQ in this case. The
task_new_here() and task_new_anywhere() functions were adjusted
accordingly.
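
Usage, in sketch form (prototypes are assumed from the description
above):

    struct task;
    struct task *task_new_on(int thr);     /* thr < 0: any thread */
    struct task *task_new_here(void);      /* the calling thread */
    struct task *task_new_anywhere(void);  /* unbound, shared WQ */

    static void example(void)
    {
        struct task *bound = task_new_on(3);       /* thread 3 only */
        struct task *local = task_new_here();
        struct task *any   = task_new_anywhere();  /* tid < 0 inside */

        (void)bound; (void)local; (void)any;
    }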
2022-07-01 19:15:14 +02:00
Willy Tarreau
cb8542755e MEDIUM: applet: only keep appctx_new_*() and drop appctx_new()
This removes the mask-based variant so that from now on the low-level
function becomes appctx_new_on() and it takes either a thread number or
a negative value for "any thread". This way we can use task_new_on() and
task_new_anywhere() instead of task_new() which will soon disappear.
2022-07-01 19:15:14 +02:00
Willy Tarreau
0ad00befc1 CLEANUP: task: remove thread_mask from the struct task
It was not used anymore since everything moved to ->tid, so let's
remove it.
2022-07-01 19:15:14 +02:00
Willy Tarreau
29ffe26733 MAJOR: task: use t->tid instead of ffsl(t->thread_mask) to take the thread ID
At several places we need to figure the ID of the first thread allowed
to run a task. Till now this was performed using my_ffsl(t->thread_mask)
but since we now have the thread ID stored into the task, let's use it
instead. This is tagged major because it starts to assume that tid<0 is
strictly equivalent to atleast2(thread_mask), and that as such, the
current thread is always among the allowed ones.
2022-07-01 19:15:14 +02:00
Willy Tarreau
5b8e054732 MEDIUM: task/debug: move the ->thread_mask integrity checks to ->tid
Let's make sure the new ->tid field is always correct instead of checking
the thread mask.
2022-07-01 19:15:14 +02:00
Willy Tarreau
6ef52f4479 MEDIUM: task: add and preset a thread ID in the task struct
The tasks currently rely on a mask but do not have an assigned thread ID,
contrary to tasklets. However, in practice they're either running on a
single thread or on any thread, so that it will be worth simplifying all
this in order to ease the transition to the thread groups.

This patch introduces a "tid" field in the task struct, that's either
the number of the thread the task is attached to, or a negative value
if the task is not bound to a thread (i.e. its mask is all_threads_mask).

The new ID is only set and updated but not used yet.
2022-07-01 19:15:14 +02:00
Willy Tarreau
66ef2c3af6 CLEANUP: config: remove unused proc_mask()
The function was used to return the mask of enabled processes, but it
now always returns 1 and is not called anymore. Let's drop it.
2022-07-01 19:15:14 +02:00
Willy Tarreau
252754c745 MINOR: tinfo: make tid temporarily still reflect global ID
For now we still set tid_bit to (1UL << tid) because FDs will not
work with more than one group without this, but once FDs start to
adopt local masks this must change to thr->ltid_bit.
2022-07-01 19:14:42 +02:00
Emeric Brun
f41a3f6762 MINOR: fd: add a new FD_DISOWN flag to prevent from closing a deleted FD
Some FDs might be offered to some external code (external libraries)
which will deal with them until they close them. As such we must not
close them upon fd_delete() but we need to delete them anyway so that
they do not appear anymore in the fdtab. This used to be handled by
fd_remove() before 2.3 but we don't have this anymore.

This patch introduces a new flag FD_DISOWN to let fd_delete() know that
the core doesn't own the fd and it must not be closed upon removal from
the fd_tab. This way it's totally unregistered from the poller but still
open.
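
In sketch form, the expected use before handing an fd over (the flag
name comes from this patch, everything else below is illustrative):

    struct fdtab_entry_sk { unsigned int state; };
    extern struct fdtab_entry_sk fdtab_sk[];
    extern void fd_delete(int fd);

    #define FD_DISOWN_SK 0x8000U    /* illustrative bit value */

    static void hand_off_fd(int fd)
    {
        fdtab_sk[fd].state |= FD_DISOWN_SK; /* the core no longer owns it */
        fd_delete(fd);                      /* unregistered, but kept open */
    }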

This patch must be backported to branches >= 2.3 because it will be
needed to fix a bug affecting SSL async. It should be adapted for 2.3
because state flags were stored in a different way there (via bits in the
structure).
2022-07-01 17:41:40 +02:00
Amaury Denoyelle
e0a92a7e56 MINOR: ncbuf: implement ncb_is_fragmented()
Implement a new status function for ncbuf. It quickly reports whether
a buffer contains data in a fragmented way, i.e. with gaps in between
or at the start of the buffer.

To summarize, a buffer is considered non-fragmented in the following
cases:
- a null or empty buffer
- a full buffer
- a buffer containing exactly one data block at the beginning, followed
  by a gap until the end.
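
In sketch form, over a deliberately simplified model of the buffer (the
real ncbuf encodes its blocks differently):

    /* simplified model: only the first data block and whether anything
     * follows the first gap are tracked
     */
    struct ncb_sketch {
        unsigned int first_off;       /* offset of the first data block */
        unsigned int first_len;       /* its length, 0 if buffer is empty */
        int          data_after_gap;  /* more data after the first gap? */
    };

    static int ncb_is_fragmented_sketch(const struct ncb_sketch *b)
    {
        if (!b->first_len)
            return 0;                 /* null or empty buffer */
        if (b->first_off)
            return 1;                 /* gap at the start of the buffer */
        return b->data_after_gap;     /* one block, a gap, then data? */
    }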
2022-07-01 15:54:23 +02:00
Frédéric Lécaille
649b3fd9aa MINOR: quic: Increase the QUIC connections RX buffer size (up to 64Kb)
With ~1500 byte QUIC datagrams, we can handle fewer than 200 datagrams,
which is less than the default maxpollevents value. This should reduce
the chances of filling up the connections' RX buffers, as reported by
Tristan in GH #1737.

Must be backported to 2.6.
2022-06-30 14:34:32 +02:00
Frédéric Lécaille
1ca14950e6 MINOR: quic: Duplicated QUIC_RX_BUFSZ definition
This macro is already defined in src/quic_sock.c which is the correct place.

Must be backported to 2.6
2022-06-30 14:24:04 +02:00
Frédéric Lécaille
45a16295e3 MINOR: quic: Add new stats counter to diagnose RX buffer overrun
Remove the call to qc_list_all_rx_pkts(), which prints messages on stderr
during RX buffer overruns, and add a new counter for the number of packets
dropped because of such events.

Must be backported to 2.6
2022-06-30 14:24:04 +02:00
Frédéric Lécaille
ad548b54a7 MINOR: task: Add tasklet_wakeup_after()
We want to be able to schedule a tasklet onto a thread so that it runs after
the current tasklet is done. What we have to do is to insert this tasklet at
the head of the thread's task list. Furthermore, we would like to serialize
the tasklets: they must run in the same order as the order in which they
have been scheduled. This is implemented by passing a list of tasklets as a
parameter (see the <head> parameter) which must be reused for subsequent
calls.
_tasklet_wakeup_after_on() is implemented to accomplish this job.
tasklet_wakeup_after_on() and tasklet_wakeup_after() are only wrapper macros
around _tasklet_wakeup_after_on(). tasklet_wakeup_after_on() does exactly
the same thing as _tasklet_wakeup_after_on() without having to pass the file
name and line number as parameters (useful when DEBUG_TASK is enabled).
tasklet_wakeup_after() also hides the thread parameter, which is taken from
<tl>, the tasklet's thread ID.
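
Usage, in sketch form (the signature is assumed here from the
description: the head returned by one call must be passed back to the
next one to keep the tasklets ordered):

    struct list;
    struct tasklet;
    extern struct list *tasklet_wakeup_after(struct list *head,
                                             struct tasklet *tl);

    static void run_in_order(struct tasklet *a, struct tasklet *b)
    {
        struct list *head = NULL;    /* NULL: start after current tasklet */

        head = tasklet_wakeup_after(head, a); /* <a> runs right after us */
        head = tasklet_wakeup_after(head, b); /* <b> runs after <a> */
    }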
2022-06-30 14:24:04 +02:00
Frédéric Lécaille
628e89cfae BUILD: quic+h3: 32-bit compilation errors fixes
In GH #1760 (which is marked as being a feature), there were compilation
errors on macOS which could be reproduced on Linux when building 32-bit code
(with gcc's -m32 option). Most of them were due to mixed variable types in
the QUIC_MIN macro, or to using the size_t type in place of uint64_t.

Must be backported to 2.6.
2022-06-24 12:13:53 +02:00