Released version 2.7-dev2 with the following main changes :
- BUG/MINOR: qpack: fix build with QPACK_DEBUG
- MINOR: h3: handle errors on HEADERS parsing/QPACK decoding
- BUG/MINOR: qpack: abort on dynamic index field line decoding
- MINOR: qpack: properly handle invalid dynamic table references
- MINOR: task: Add tasklet_wakeup_after()
- BUG/MINOR: quic: Dropped packets not counted (with RX buffers full)
- MINOR: quic: Add new stats counter to diagnose RX buffer overrun
- MINOR: quic: Duplicated QUIC_RX_BUFSZ definition
- MINOR: quic: Improvements for the datagrams receipt
- CLEANUP: h2: Typo fix in h2_unsubscribe() traces
- MINOR: quic: Increase the QUIC connections RX buffer size (up to 64Kb)
- CLEANUP: mux-quic: adjust comment on qcs_consume()
- MINOR: ncbuf: implement ncb_is_fragmented()
- BUG/MINOR: mux-quic: do not signal FIN if gap in buffer
- MINOR: fd: add a new FD_DISOWN flag to prevent from closing a deleted FD
- BUG/MEDIUM: ssl/fd: unexpected fd close using async engine
- MINOR: tinfo: make tid temporarily still reflect global ID
- CLEANUP: config: remove unused proc_mask()
- MINOR: debug: remove mask support from "debug dev sched"
- MEDIUM: task: add and preset a thread ID in the task struct
- MEDIUM: task/debug: move the ->thread_mask integrity checks to ->tid
- MAJOR: task: use t->tid instead of ffsl(t->thread_mask) to take the thread ID
- MAJOR: task: replace t->thread_mask with 1<<t->tid when thread mask is needed
- CLEANUP: task: remove thread_mask from the struct task
- MEDIUM: applet: only keep appctx_new_*() and drop appctx_new()
- MEDIUM: task: only keep task_new_*() and drop task_new()
- MINOR: applet: always use task_new_on() on applet creation
- MEDIUM: task: remove TASK_SHARED_WQ and only use t->tid
- MINOR: task: replace task_set_affinity() with task_set_thread()
- CLEANUP: task: remove the unused task_unlink_rq()
- CLEANUP: task: remove the now unused TASK_GLOBAL flag
- MINOR: task: make rqueue_ticks atomic
- MEDIUM: task: move the shared runqueue to one per thread
- MEDIUM: task: replace the global rq_lock with a per-rq one
- MINOR: task: remove grq_total and use rq_total instead
- MINOR: task: replace global_tasks_mask with a check for tree's emptiness
- MEDIUM: task: use regular eb32 trees for the run queues
- MEDIUM: queue: revert to regular inter-task wakeups
- MINOR: thread: make wake_thread() take care of the sleeping threads mask
- MINOR: thread: move the flags to the shared cache line
- MINOR: thread: only use atomic ops to touch the flags
- MINOR: poller: centralize poll return handling
- MEDIUM: polling: make update_fd_polling() not care about sleeping threads
- MINOR: poller: update_fd_polling: wake a random other thread
- MEDIUM: thread: add a new per-thread flag TH_FL_NOTIFIED to remember wakeups
- MEDIUM: tasks/fd: replace sleeping_thread_mask with a TH_FL_SLEEPING flag
- MINOR: tinfo: add the tgid to the thread_info struct
- MINOR: tinfo: replace the tgid with tgid_bit in tgroup_info
- MINOR: tinfo: add the mask of enabled threads in each group
- MINOR: debug: use ltid_bit in ha_thread_dump()
- MINOR: wdt: use ltid_bit in wdt_handler()
- MINOR: clock: use ltid_bit in clock_report_idle()
- MINOR: thread: use ltid_bit in ha_tkillall()
- MINOR: thread: add a new all_tgroups_mask variable to know about active tgroups
- CLEANUP: thread: remove thread_sync_release() and thread_sync_mask
- MEDIUM: tinfo: add a dynamic thread-group context
- MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups
- MAJOR: threads: change thread_isolate to support inter-group synchronization
- MINOR: thread: add is_thread_harmless() to know if a thread already is harmless
- MINOR: debug: mark oneself harmless while waiting for threads to finish
- MINOR: wdt: do not rely on threads_to_dump anymore
- MEDIUM: debug: make the thread dumper not rely on a thread mask anymore
- BUILD: debug: fix build issue on clang with previous commit
- BUILD: debug: re-export thread_dump_state
- BUG/MEDIUM: threads: fix incorrect thread group being used on soft-stop
- BUG/MEDIUM: thread: check stopping thread against local bit and not global one
- MINOR: proxy: use tg->threads_enabled in hard_stop() to detect stopped threads
- BUILD: Makefile: Add Lua 5.4 autodetect
- CI: re-enable gcc asan builds
- MEDIUM: mworker: set the iocb of the socketpair without using fd_insert()
- MINOR: fd: Add BUG_ON checks on fd_insert()
- CLEANUP: mworker: rename mworker_pipe to mworker_sockpair
- CLEANUP: mux-quic: do not export qc_get_ncbuf
- REORG: mux-quic: reorganize flow-control fields
- MINOR: mux-quic: implement accessor for sedesc
- MEDIUM: mux-quic: refactor streams opening
- MINOR: mux-quic: rename qcs flag FIN_RECV to SIZE_KNOWN
- MINOR: mux-quic: emit FINAL_SIZE_ERROR on invalid STREAM size
- BUG/MINOR: peers/config: always fill the bind_conf's argument
- BUG/MEDIUM: peers/config: properly set the thread mask
- CLEANUP: bwlim: Set pointers to NULL when memory is released
- BUG/MINOR: http-check: Preserve headers if not redefined by an implicit rule
- BUG/MINOR: http-act: Properly generate 103 responses when several rules are used
- BUG/MEDIUM: thread: mask stopping_threads with threads_enabled when checking it
- CLEANUP: thread: also remove a thread's bit from stopping_threads on stop
- BUG/MINOR: peers: fix possible NULL dereferences at config parsing
- BUG/MINOR: http-htx: Fix scheme based normalization for URIs with userinfo
- MINOR: http: Add function to get port part of a host
- MINOR: http: Add function to detect default port
- BUG/MEDIUM: h1: Improve authority validation for CONNECT request
- MINOR: http-htx: Use new HTTP functions for the scheme based normalization
- BUG/MEDIUM: http-fetch: Don't fetch the method if there is no stream
- REGTESTS: filters: Fix CONNECT request in random-forwarding script
- MEDIUM: mworker/systemd: send STATUS over sd_notify
- BUG/MINOR: mux-h1: Be sure to commit htx changes in the demux buffer
- BUG/MEDIUM: http-ana: Don't wait to have an empty buf to switch in TUNNEL state
- BUG/MEDIUM: mux-h1: Handle connection error after a synchronous send
- MEDIUM: epoll: don't synchronously delete migrated FDs
- BUILD: debug: silence warning on gcc-5
- BUILD: http: silence an uninitialized warning affecting gcc-5
- BUG/MEDIUM: mux-quic: fix server chunked encoding response
- REORG: mux-quic: rename stream initialization function
- MINOR: mux-quic: rename stream purge function
- MINOR: mux-quic: add traces on frame parsing functions
- MINOR: mux-quic: implement qcs_alert()
- MINOR: mux-quic: filter send/receive-only streams on frame parsing
- MINOR: mux-quic: do not ack STREAM frames on unrecoverable error
- MINOR: mux-quic: support stream opening via MAX_STREAM_DATA
- MINOR: mux-quic: define basic stream states
- MINOR: mux-quic: use stream states to mark as detached
- MEDIUM: mux-quic: implement RESET_STREAM emission
- MEDIUM: mux-quic: implement STOP_SENDING handling
- BUG/MEDIUM: debug: fix possible hang when multiple threads dump at once
- BUG/MINOR: quic: fix closing state on NO_ERROR code sent
- CLEANUP: quic: clean up include on quic_frame-t.h
- MINOR: quic: define a generic QUIC error type
- MINOR: mux-quic: support app graceful shutdown
- MINOR: mux-quic/h3: prepare CONNECTION_CLOSE on release
- MEDIUM: quic: send CONNECTION_CLOSE on released MUX
- CLEANUP: mux-quic: move qc_release()
- MINOR: mux-quic: send one last time before release
- MINOR: h3: store control stream in h3c
- MINOR: h3: implement graceful shutdown with GOAWAY
- BUG/MINOR: threads: produce correct global mask for tgroup > 1
- BUG/MEDIUM: cli/threads: make "show threads" more robust on applets
- BUG/MINOR: thread: use the correct thread's group in ha_tkillall()
- BUG/MINOR: debug: enter ha_panic() only once
- BUG/MEDIUM: debug: fix parallel thread dumps again
- MINOR: cli/streams: show a stream's tgid next to its thread ID
- DEBUG: cli: add a new "debug dev deadlock" expert command
- MINOR: cli/activity: add a thread number argument to "show activity"
- CLEANUP: applet: remove the obsolete command context from the appctx
- MEDIUM: config: remove deprecated "bind-process" directives from frontends
- MEDIUM: config: remove the "process" keyword on "bind" lines
- MINOR: listener/config: make "thread" always support up to LONGBITS
- CLEANUP: fd: get rid of the __GET_{NEXT,PREV} macros
- MEDIUM: debug/threads: make the lock debugging take tgroups into account
- MEDIUM: proto: stop protocols under thread isolation during soft stop
- MEDIUM: poller: program the update in fd_update_events() for a migrated FD
- MEDIUM: poller: disable thread-groups for poll() and select()
- MINOR: thread: remove MAX_THREADS limitation
- MEDIUM: cpu-map: replace the process number with the thread group number
- MINOR: mworker/threads: limit the mworker sockets to group 1
- MINOR: cli/threads: always bind CLI to thread group 1
- MINOR: fd/thread: get rid of thread_mask()
- MEDIUM: task/thread: move the task shared wait queues per thread group
- MINOR: task: move the niced_tasks counter to the thread group context
- DOC: design: add some thoughts about how to handle the update_list
- MEDIUM: conn: make conn_backend_get always scan the same group
- MAJOR: fd: remove pending updates upon real close
- MEDIUM: fd/poller: make the update-list per-group
- MINOR: fd: delete unused updates on close()
- MINOR: fd: make fd_insert() apply the thread mask itself
- MEDIUM: fd: add the tgid to the fd and pass it to fd_insert()
- MINOR: cli/fd: show fd's tgid and refcount in "show fd"
- MINOR: fd: add functions to manipulate the FD's tgid
- MINOR: fd: add fd_get_running() to atomically return the running mask
- MAJOR: fd: grab the tgid before manipulating running
- MEDIUM: fd/poller: turn polled_mask to group-local IDs
- MEDIUM: fd/poller: turn update_mask to group-local IDs
- MEDIUM: fd/poller: turn running_mask to group-local IDs
- MINOR: fd: make fd_clr_running() return the previous value instead
- MEDIUM: fd: make thread_mask now represent group-local IDs
- MEDIUM: fd: make fd_insert() take local thread masks
- MEDIUM: fd: make fd_insert/fd_delete atomically update fd.tgid
- MEDIUM: fd: quit fd_update_events() when FD is closed
- MEDIUM: thread: change thread_resolve_group_mask() to return group-local values
- MEDIUM: listener: switch bind_thread from global to group-local
- MINOR: fd: add fd_reregister_all() to deal with boot-time FDs
- MEDIUM: fd: support stopping FDs during starting
- MAJOR: pollers: rely on fd_reregister_all() at boot time
- MAJOR: poller: only touch/inspect the update_mask under tgid protection
- MEDIUM: fd: support broadcasting updates for foreign groups in updt_fd_polling
- CLEANUP: threads: remove the now unused all_threads_mask and tid_bit
- MINOR: config: change default MAX_TGROUPS to 16
- BUG/MEDIUM: tools: avoid calling dlsym() in static builds
Since 2.4 with commit 64192392c ("MINOR: tools: add functions to retrieve
the address of a symbol"), we can resolve symbols. However, some old glibc
versions crash in dlsym() when the program is statically built.
Fortunately even on these old libs we can detect lack of support by
calling dlopen(NULL). Normally it returns a handle to the current
program, but on a static build it returns NULL. This is sufficient to
refrain from calling dlsym() (which will be of very limited use anyway),
so we check this once at boot and use the result when needed.
This may be backported to 2.4. On stable versions, be careful to place
the init code inside an if/endif guard that checks for DL support.
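Purely as an illustration, the detection boils down to something like the
following standalone sketch (not HAProxy's actual code; names are invented):

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>

    static int dl_usable; /* set once at boot, then only read */

    /* On a static build with an old glibc, dlopen(NULL) returns NULL instead
     * of a handle to the current program; remember the result once at boot.
     */
    static void detect_dl_support(void)
    {
        void *h = dlopen(NULL, RTLD_NOW);

        dl_usable = (h != NULL);
        if (h)
            dlclose(h);
    }

    static void *lookup_symbol(const char *name)
    {
        if (!dl_usable)
            return NULL; /* refrain from calling dlsym() on static builds */
        return dlsym(RTLD_DEFAULT, name);
    }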
This allows nbtgroups > 1 to be declared in the config without
recompiling. The theoretical limit is 64, though we'd rather not push
it too far for now as some structures might be enlarged to be indexed
per group. Let's start with 16 groups max, allowing to experiment with
dual-socket machines suffering from up to 8 loosely coupled L3 caches.
It's a good start and doesn't engage us too far.
Since these are not used anymore, let's now remove them. Given the
number of places where we're using ti->ltid_bit, maybe an equivalent
might be useful though.
We're still facing the situation where it's impossible to update an FD
for a foreign group. That's of particular concern when disabling/enabling
listeners (e.g. pause/resume on signals) since we don't decide which thread
gets the signal and it needs to process all listeners at once.
Fortunately, not that much is unprotected in FDs. This patch adds a test for
tgid's equality in updt_fd_polling() so that if a change is applied for a
foreign group, then it's detected and taken care of separately. The method
consists in forcing the update on all bound threads in this group, adding it
to the group's update_list, and sending a wake-up as would be done for a
remote thread in the local group, except that this is done by grabbing a
reference to the FD's tgid.
Thanks to this, SIGTTOU/SIGTTIN now work for nbtgroups > 1 (after that was
temporarily broken by "MEDIUM: fd/poller: make the update-list per-group").
With thread groups and group-local masks, the update_mask cannot be
touched nor even checked if it may change below us. In order to avoid
this, we have to grab a reference to the FD's tgid before checking the
update mask. The operations are cheap enough so that we don't notice it
in performance tests. This is expected because the risk of meeting a
reassigned FD during an update remains very low.
It's worth noting that the tgid cannot be trusted during startup nor
during soft-stop since that may come from anywhere at the moment. Since
soft-stop runs under thread isolation we use that hint to decide whether
or not to check that the FD's tgid matches the current one.
The modification is applied to the 3 thread-aware pollers, i.e. epoll,
kqueue, and evports. Also one poll_drop counter was missing for shared
updates, though it might be hard to trigger it.
With this change applied, thread groups are usable in benchmarks.
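For illustration only, the access pattern described above looks roughly
like the sketch below; the iteration helpers are hypothetical and the
field accesses are simplified, only fd_grab_tgid()/fd_drop_tgid() and the
ti->ltid_bit / update_mask fields are the ones discussed in this series:

    int fd;

    /* update_list_first()/next() are hypothetical iteration helpers */
    for (fd = update_list_first(); fd >= 0; fd = update_list_next(fd)) {
        if (!fd_grab_tgid(fd, ti->tgid))
            continue;              /* FD closed/reassigned to another group */

        if (!(fdtab[fd].update_mask & ti->ltid_bit)) {
            fd_drop_tgid(fd);      /* nothing for this thread here */
            continue;
        }

        /* ... safely reprogram polling for this FD ... */
        fd_drop_tgid(fd);
    }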
The poller-specific thread init code now uses that new function to
safely register boot events. This ensures that we don't register an
event for another group and that we properly deal with parallel
thread startup.
It's only done for thread-aware pollers; there's no point in using
that in poll/select, though it should work there as well.
There's a nasty case during boot, which is the master process. It stops
all listeners from the main thread, and as such we're seeing calls to
fd_delete() from a thread that doesn't match the FD's mask, but more
importantly from a group that doesn't match either. Fortunately this
happens in a process that doesn't see the threads creation, so the FDs
are left intact in the table and we can overwrite the tgid there.
The approach is ugly; it probably shows that we should use a dummy
value for the tgid during boot, one that would be replaced once the FDs
migrate to their target, but we also need a way to make sure not to
miss them. Also that doesn't solve the possibility of closing a
listener at run time from the wrong thread group.
At boot the pollers are allocated for each thread and they need to
reprogram updates for all FDs they will manage. This code is not
trivial, especially when trying to respect thread groups, so we'd
rather avoid duplicating it.
Let's centralize this into fd.c with this function. It avoids closed
FDs, those whose thread mask doesn't match the requested one or whose
thread group doesn't match the requested one, and performs the update
if required under thread-group protection.
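A rough sketch of the idea, not the actual prototype (the fd_tgid()
accessor in particular is a hypothetical shorthand):

    /* Re-program polling updates for all FDs of group <tgrp> handled by
     * the threads in <mask>, skipping closed and non-matching FDs.
     */
    void fd_reregister_all(int tgrp, unsigned long mask)
    {
        int fd;

        for (fd = 0; fd < global.maxsock; fd++) {
            if (!fdtab[fd].owner)
                continue;                     /* closed or never opened */
            if (tgrp && fd_tgid(fd) != tgrp)  /* hypothetical accessor */
                continue;                     /* belongs to another group */
            if (!(fdtab[fd].thread_mask & mask))
                continue;                     /* not handled by these threads */
            updt_fd_polling(fd);              /* update under tgid protection */
        }
    }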
It requires both adapting the parser and changing the algorithm used to
redispatch incoming traffic so that local thread IDs may always
be used.
The internal structures now only reference thread group IDs and
group-local masks which are compatible with those now used by the
FD layer and the rest of the code.
It used to turn group+local to global but now we're doing the exact
opposite as we want to stick to group-local masks. This means that
"thread 3-4" might very well emit what "thread 2/1-2" used to emit
till now for 2 groups and 4 threads. This is needed because we'll
have to support group-local thread masks in receivers.
However the rest of the code (receivers) is not ready yet for this,
so using this code with more than one thread group will definitely
break some bindings.
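As an illustration, assuming the "thread-groups" global directive and 4
threads split into 2 groups, the two binds below would now designate the
same threads (keeping in mind the warning above about multi-group bindings
at this stage):

    global
        nbthread 4
        thread-groups 2          # 2 groups of 2 threads: 1/1-2 and 2/1-2

    frontend fe
        # both lines now resolve to group 2, local threads 1-2:
        bind :8080 thread 3-4
        bind :8443 thread 2/1-2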
The IOCB might have closed the FD itself, so it's not an error to
have fd.tgid==0 or anything else, nor to have a null running_mask.
In fact there are different conditions under which we can leave the
IOCB, all of them enumerated in the code's comments: the FD is still
valid and used, hence has its running bit; the FD was closed but not yet
reassigned, thus running==0; or the FD was closed and reassigned, hence
the tgid differs and running becomes irrelevant, just like all other
masks. For
this reason we have no other solution but to try to grab the tgid on
return before checking the other bits. In practice it doesn't represent
a big cost, because if the FD was closed and reassigned, it's instantly
detected and the bit is immediately released without blocking other
threads, and if the FD wasn't closed this doesn't prevent it from
being migrated to another thread. In the worst case a close by another
thread after a migration will be postponed till the moment the running
bit is cleared, which is the same as before.
These functions need to set/reset the FD's tgid but when they're called
there may still be wakeups on other threads that discover late updates
and have to touch the tgid at the same time. As such, it is not possible
to just read/write the tgid there. It must only be done using operations
that are compatible with what other threads may be doing.
As we're using inc/dec on the refcount, it's safe to AND the area to zero
the lower part when resetting the value. However, in order to set the
value, there's no other choice but fd_claim_tgid() which will assign it
only if possible (via a CAS). This is convenient in the end because it
protects the FD's masks from being modified by late threads, so while
we hold this refcount we can safely reset the thread_mask and a few other
elements. A debug test for non-null masks was added to fd_insert() as it
must not be possible to face this situation thanks to the protection
offered by the tgid.
fd_insert() was already given a thread group ID and a global thread mask.
Now we're changing the few callers to pass the group-local thread mask
instead. It's passed directly into the FD's thread mask. Just like for
the previous commit, it must not change anything when a single group is
configured.
With the change that was started on other masks, the thread mask was
still not fully converted, sometimes being used as a global mask and
sometimes as a local one. This finishes the code modifications so that
the mask is always considered as a group-local mask. This doesn't
change anything as long as there's a single group, but is necessary
for groups 2 and above since it's used against running_mask and so on.
It's an AND so it destroys information, and because of this there's one
call site where we have to perform two reads to know the previous value
before changing it. With a fetch-and-and instead, we can know in a single
operation whether the bit was previously present, which is more efficient.
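As a generic illustration of the difference (plain C11 atomics, not
HAProxy's own wrappers):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* clear <bit> in <mask> and report whether it was previously set */
    static bool clr_bit_fetch(_Atomic unsigned long *mask, unsigned long bit)
    {
        /* one atomic op: the returned value is the mask before the AND */
        unsigned long prev = atomic_fetch_and(mask, ~bit);

        return prev & bit;
    }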
From now on, the FD's running_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
From now on, the FD's update_mask only refers to local thread IDs. However,
there remains a limitation: in updt_fd_polling(), we temporarily have to
check and set shared FDs against .thread_mask, which still contains global
ones. As such, nbtgroups > 1 may break (but this is not yet supported without
special build options).
This changes the meaning of each bit in the polled_mask so that
now each bit represents a local thread ID for the current group instead
of a global thread ID. As such, all tests now apply to ltid_bit instead
of tid_bit.
No particular check was made to verify that the FD's tgid matches the
current one because there should be no case where this is not true. A
check was added in epoll's __fd_clo() to confirm it never differs unless
expected (soft stop under thread isolation, or master in starting mode
going to exec mode), but that doesn't prevent the job from being done: it
only consists in checking which of the group's threads are still polling
this FD and removing them.
Some atomic loads were added at the various locations, and most repetitive
references to polled_mask[fd].xx were replaced with a local copy, making
the code much clearer.
We now grab a reference to the FD's tgid before manipulating the
running_mask so that we're certain it corresponds to our own group
(hence bits), and we drop it once we've set the bit. For now there's
no measurable performance impact in doing this, which is great. The
lock can be observed by perf top as taking a small share of the time
spent in fd_update_events(), itself taking no more than 0.28% of CPU
under 8 threads.
However due to the fact that the thread groups are not yet properly
spread across the pollers and the thread masks are still wrong, this
will trigger some BUG_ON() in fd_insert() after a few tens of thousands
of connections when threads other than those of group 1 are reached,
and this is expected.
The running mask is only valid if the tgid is the expected one. This
function takes a reference on the tgid before reading the running mask,
so that both are checked at once. It returns either the mask or zero if
the tgid differs, thus providing a simple way for a caller to check if
it still holds the FD.
The FD's tgid is refcounted and must be atomically manipulated. Function
fd_grab_tgid() will increase the refcount but only if the tgid matches the
one in argument (likely the current one). fd_claim_tgid() will be used to
self-assign the tgid after waiting for its refcount to reach zero.
fd_drop_tgid() will be used to drop a temporarily held tgid. All of these
are needed to prevent an FD from being reassigned to another group, either
when inspecting/modifying the running_mask, or when checking for updates,
in order to be certain that the mask being seen corresponds to the desired
group. Note that once at least one bit is set in the running mask of an
active FD, it cannot be closed, thus not migrated, thus the reference does
not need to be held long.
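To make the semantics concrete, here is a small standalone sketch of such a
refcounted tag (lower 16 bits: tgid, upper 16 bits: refcount) using C11
atomics; it only mirrors the behaviour described here and is not HAProxy's
actual implementation:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TGID_MASK  0xffffU      /* tgid lives in the lower 16 bits   */
    #define REFC_UNIT  0x10000U     /* one refcount unit (upper 16 bits) */

    /* take a reference, but only if the stored tgid matches <tgid> */
    static bool grab_tgid(_Atomic uint32_t *word, uint32_t tgid)
    {
        uint32_t old = atomic_fetch_add(word, REFC_UNIT);

        if ((old & TGID_MASK) == tgid)
            return true;
        atomic_fetch_sub(word, REFC_UNIT);  /* wrong group: undo and fail */
        return false;
    }

    /* release a previously taken reference */
    static void drop_tgid(_Atomic uint32_t *word)
    {
        atomic_fetch_sub(word, REFC_UNIT);
    }

    /* self-assign <tgid>: only succeeds while nobody holds a reference;
     * on success the caller holds one reference to the new tgid.
     */
    static bool claim_tgid(_Atomic uint32_t *word, uint32_t tgid)
    {
        uint32_t expected = 0;      /* unassigned state assumed to be zero */

        return atomic_compare_exchange_strong(word, &expected,
                                              REFC_UNIT + tgid);
    }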
The file descriptors will need to know the thread group ID in addition
to the mask. This extends fd_insert() to take the tgid, and will store
it into the FD.
In the FD, the tgid is stored as a combination of the tgid on the lower 16
bits and a refcount on the higher 16 bits. This makes it possible to know
when the tgid and the running mask can really be trusted. If the refcount
is higher than 1, it indicates that another thread might be in the process
of updating these values.
Since a closed FD must necessarily have a zero refcount, a test was
added to fd_insert() to make sure that it is the case.
It's a bit ugly to see that half of the callers of fd_insert() have to
apply all_threads_mask themselves to the bit field they're passing,
because usually it comes from a listener that may have other bits set.
Let's make the function apply the mask itself.
After a poller's ->clo() was called to completely terminate operations
on an FD, there's no reason for keeping updates on this FD, so if any
updates were already programmed it would be nice if we could delete them.
Tests show that __fd_clo() is called roughly half of the time with the
last FD from the local update list, which possibly makes sense if a close
has to appear after a polling change resulting from an incomplete read or
the end of a send().
We can detect this and remove the last entry, which gives less work to
do during the update() call, and eliminates most of the poll_drop_fd
event reports.
Note that while tempting, this must not be backported because it's only
safe to be done now that fd_delete_orphan() clears the update mask as
we need to be certain not to miss it:
- if the update mask is kept set with no entry, we can miss future
  updates;
- if the update mask is cleared too fast, it may result in failure
  to add a shared event.
The update-list needs to be per-group because its inspection is based
on a mask and we need to be certain when scanning it if a mask is for
the same thread or another one. Once per-group there's no doubt about
it, even if the FD's polling changes, the entry remains valid. It will
be needed to check the tgid though.
Note that a soft-stop or pause/resume might not necessarily work here
with tgroups>1, because the operation might be delivered to a thread
that doesn't belong to the group and whose update mask will not reflect
one that is interesting here. We can't do better at this stage.
Dealing with long-lasting updates that outlive a close() is always
going to be quite a problem, not because of the thread that will
discover such updates late, but mostly due to the shared update_list
that will have an entry on hold making it difficult to reuse it, and
requiring that the fd's tgid is changed and the update_mask reset
from a safe location.
After careful inspection, it turns out that all our pollers that support
automatic event removal upon close() do not need any extra bookkeeping,
and that poll and select that use an internal representation already
provide a poller->clo() callback that is already used to update the
local event. As such, it is already safe to reset the update mask and
to remove the event from the shared list just before the final close,
because nothing remains to be done with this FD by the poller.
Doing so considerably simplifies the handling of updates, which will
only have to be inspected by the pollers, while the writers can continue
to consider that the entries are always valid. Another benefit is that
it will be possible to reduce contention on the update_list by just
having one update_list per group (left to be done later if needed).
We don't want to pick idle connections from another thread group; this
would be very slow as it forces undesirable data to be shared.
This patch makes sure that we start seeking from the current thread
group's threads only and loops over that range exclusively.
It's worth noting that the next_takeover pointer remains per-server
and will bounce when multiple groups use it at the same time. But we
preserve the perturbation by applying a modulo when retrieving it,
so that when groups are of the same size (most common case), the
index will not even change. At this time it doesn't seem worth
storing one index per group in servers, but that might be an option
if any contention is detected later.
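A minimal sketch of the retrieval described above; the tg->base / tg->count
fields are approximations of the group descriptor's layout:

    /* confine the shared per-server index to the local group's thread range */
    unsigned int base  = tg->base;     /* first global thread of this group */
    unsigned int count = tg->count;    /* number of threads in this group   */
    unsigned int idx   = base + srv->next_takeover % count;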
This one is only used as a hint to improve scheduling latency, so there
is no more point in keeping it global since each thread group handles
its own run queue.
Their migration was postponed for convenience only, but now is the time
to have the shared wait queues per thread group and not just per process,
otherwise the WQ lock alone uses a huge amount of CPU.
Since commit d2494e048 ("BUG/MEDIUM: peers/config: properly set the
thread mask") there must not remain any single case of a receiver that
is bound nowhere, so there's no need anymore for thread_mask().
A test is added in fd_insert() to make sure this doesn't happen by
accident, but the function itself was removed and its rare uses were
replaced with the original value of the bind_thread mask.
When using multiple groups, the stats socket starts to emit errors and
it's not natural to have to touch the global section just to specify
"thread 1/all". Let's pre-attach these sockets to thread group 1. This
will cause errors when trying to change the group but this really is not
a problem for now as thread groups are not enabled by default. This will
make sure configs remain portable and may possibly be relaxed later.
As a side effect of commit 34aae2fd1 ("MEDIUM: mworker: set the iocb of
the socketpair without using fd_insert()"), a config may now refuse to
start if there are multiple groups configured because the default bind
mask may span over multiple groups, and it is not possible to force it
to work differently.
Let's just assign thread group 1 to the master<->worker sockets so that
the thread bindings automatically resolve to a single group. The same was
done for the master side of the socket even if it's not used. It will
avoid being forgotten in the future.
The principle remains the same, but instead of having a single process
and ignoring extra ones, now we set the affinity masks for the respective
threads of all groups.
The doc was updated with a few extra examples.
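An illustrative example of the new group/thread addressing (CPU numbers and
directive layout are arbitrary):

    global
        nbthread 8
        thread-groups 2
        # pin the 4 threads of each group to distinct CPU sets
        cpu-map 1/1-4 0-3
        cpu-map 2/1-4 4-7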
These old legacy pollers are not designed for this. They're still
using a shared list of events for all threads, this will not scale at
all, so there's no point in enabling thread-groups there. Modern
systems have epoll, kqueue or event ports and do not need these ones.
We arrange to fail at boot time, but only when thread-groups > 1, so
that existing setups will remain unaffected.
If there's a compelling reason for supporting thread groups with these
pollers in the future, the rework should not be too hard, it would just
consume a lot of memory to have an fd_evts[] array per thread, but that
is doable.
When an FD is migrated, all pollers program an update. That's useless
code duplication, and when thread groups are supported, this will
require an extra round of locking just to verify the update_mask on
return. Let's just program the update directly from fd_update_events()
as it already does for closed FDs; this becomes more logical.
protocol_stop_now() is called from do_soft_stop_now() running on any
thread that received the signal. The problem is that it will call some
listener handlers to close the FD, resulting in an fd_delete() being
called from the wrong group. That's not clean and we cannot even rely
on the thread mask to show up.
One interesting long-term approach could be to have kill queues for
FDs, and maybe we'll need them in the long run. However that doesn't
work well for listeners in this situation.
Let's simply isolate ourselves during this instant. We know we'll be
alone dealing with the close and that the FD will be instantly deleted
since not in use by any other thread. It's not the cleanest solution
but it should last long enough without causing trouble.
Since we have to use masks to verify owners/waiters, we have no other
option but to have them per group. This definitely inflates the size
of the locks, but this is only used for extreme debugging anyway so
that's not dramatic.
Thus as of now, all masks in the lock stats are local bit masks, derived
from ti->ltid_bit. Since at boot ltid_bit might not be set, we just take
care of this situation (since some structs are initialized under lock
during boot), and use bit 0 from group 0 only.
They were initially made to deal with both the cache and the update list
but there's no cache anymore and keeping them for the update list adds a
lot of obfuscation that is really not desired. Let's get rid of them now.
Their purpose was simply to get a pointer to fdtab[fd].update.{,next,prev}
in order to perform atomic tests and modifications. The offset passed in
argument to the functions (fd_add_to_fd_list() and fd_rm_from_fd_list())
was the offset of the ->update field in fdtab, and as it's not used anymore
it was removed. This also removes a number of casts, though those used by
the atomic ops have to remain since only scalars are supported.
It was deprecated, marked for removal in 2.7 and was already emitting a
warning, let's get rid of it. Note that we've kept the keyword detection
to suggest using "thread" instead.
This was already causing a deprecation warning and was marked for removal
in 2.7, now it happens. An error message indicates this doesn't exist
anymore.
The "ctx" and "st2" parts in the appctx were marked for removal in 2.7
and were emulated using memcpy/memset etc for possible external code.
Let's remove this now.
The output of "show activity" can be so large that the output is visually
unreadable on a screen. Let's add an option to filter on the desired
column (actually the thread number), use "0" to report only the first
column (aggregated/sum/avg), and use "-1", the default, for the normal
detailed dump.
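Illustrative CLI usage (the column semantics are as described above):

    show activity       # default (-1): normal detailed dump, all threads
    show activity 0     # only the first column (aggregated/sum/avg)
    show activity 3     # only the column of thread 3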
This command will create the requested number of tasks competing on a
lock, resulting in triggering the watchdog and crashing the process.
This will help stress the watchdog and inspect the lock debugging parts.
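Illustrative CLI usage in expert mode (the task count argument follows from
the description above and is assumed to be a plain integer):

    expert-mode on
    debug dev deadlock 8    # spawn 8 tasks competing on a lock; expect a
                            # watchdog panic shortly afterwards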
The previous attempt to fix thread dumps in commit 672972604 ("BUG/MEDIUM:
debug: fix possible hang when multiple threads dump at once") still had
some shortcomings. Sometimes parallel dumps are jerky essentially due to
the way that threads synchronize on startup and end. In addition the risk
of waiting forever for a stopped thread exists, and panics happening in
parallel to thread dumps are no more reliable either.
This commit revisits the state transitions so that all threads may request
a dump in parallel, that all of them wait for each other in the handler,
and that one thread is responsible for counting all the others and checking
that the total matches the number of active threads.
Then for stopping there's a finishing phase that all threads wait for so
that none quits this area too early. Given that we now know the number of
participants to the dump, we can let them each decrement the counter when
leaving so that another dump may only start after the last participant
has completely left.
Now many thread dumps in parallel run fine, and so do panics. No
backport is needed as this was the result of the changes for thread
groups.