Commit Graph

Christopher Faulet
0eab050b04 BUG/MINOR: http-htx: Fix scheme based normalization for URIs with userinfo
The scheme-based normalization does not properly handle the URI's userinfo,
if any. First, the authority parser is not called with the "no_userinfo"
parameter set. Then, the userinfo is skipped during the URI normalization.

This patch must be backported as far as 2.4.
2022-07-06 17:54:02 +02:00
William Lallemand
1d93217a05 BUG/MINOR: peers: fix possible NULL dereferences at config parsing
Patch 49f6f4b ("BUG/MEDIUM: peers: fix segfault using multiple bind on
peers sections") introduced possible NULL dereferences when parsing the
peers configuration.

Fix the issue by checking the return value of bind_conf_uniq_alloc().

This patch should be backported as far as 2.0.
2022-07-06 14:40:11 +02:00
Willy Tarreau
ad92fdf196 CLEANUP: thread: also remove a thread's bit from stopping_threads on stop
As much as possible, we should take care not to leave bits from stopped
threads in shared thread masks. This can avoid issues like the one addressed
by the previous fix and will also make debugging less confusing.
2022-07-06 10:19:46 +02:00
Willy Tarreau
f34a3fa33d BUG/MEDIUM: thread: mask stopping_threads with threads_enabled when checking it
When soft-stopping, there's a comparison between stopping_threads and
threads_enabled to make sure all threads are stopped, but this is not
correct and is racy since the threads_enabled bit is removed when a
thread is stopped but not its stopping_threads bit. The consequence is
that depending on timing, when stopping, if the first stopping thread
is fast enough to remove its bit from threads_enabled, the other threads
will see that stopping_threads doesn't match threads_enabled anymore and
will wait forever. As such the mask must be applied to stopping_threads
during the test. This issue was introduced in recent commit ef422ced9
("MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups"),
no backport is needed.
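
As an illustration, the corrected wait loop could look like the sketch
below (a minimal sketch using the variable names from this message, not
the exact code):

  /* only consider stopping bits of threads that are still enabled */
  while ((_HA_ATOMIC_LOAD(&stopping_threads) & threads_enabled)
         != threads_enabled)
          ha_thread_relax();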
2022-07-06 10:19:46 +02:00
Christopher Faulet
4c3d3d2a68 BUG/MINOR: http-act: Properly generate 103 responses when several rules are used
When several "early-hint" rules are used, we try, as far as possible, to
merge links into the same 103-early-hints response. However, it only works
if there are no ACLs. If an "early-hint" rule is not executed, an invalid
response is generated: the EOH block or the start-line may be missing,
depending on the rule order.

To fix the bug, we use the transaction status code, which is unused at this
stage. Thus, it is set to 103 when a 103-early-hints response is in
progress, and it is reset when the response is forwarded. In addition, the
response is forwarded if the next rule is an "early-hint" rule with an
ACL. This way, the response is always valid.

This patch must be backported as far as 2.2.
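
For illustration, a configuration mixing unconditional and conditional
"early-hint" rules (a hypothetical example, not taken from a report):

  frontend www
      bind :80
      http-request early-hint link "</style.css>;rel=preload;as=style"
      http-request early-hint link "</app.js>;rel=preload;as=script" if { path_beg /app }
      default_backend servers

With this fix, the 103 response stays valid whether or not the second
rule is executed.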
2022-07-06 09:37:43 +02:00
Christopher Faulet
4c8e58def6 BUG/MINOR: http-check: Preserve headers if not redefined by an implicit rule
When an explicit "http-check send" rule is used, if it is the first one, it
is merged with the implicit rule created by the "option httpchk" statement.
The opposite is also true. The idea is to have only one send rule with the
merged info. It means info defined in the second rule overrides those
defined in the first one. However, if an element is not defined in the
second rule, it must be ignored, thus keeping the info from the first rule.
It works as expected for the method, the uri and the request version. But
it is not true for the header list.

For instance, with the following statements, a x-forwarded-proto header is
added to healthcheck requests:

  option httpchk
  http-check send meth GET hdr x-forwarded-proto https

while by inverting the statements, no extra headers are added:

  http-check send meth GET hdr x-forwarded-proto https
  option httpchk

Now the old header list is overridden if the new one is not empty.

This patch should fix issue #1772. It must be backported as far as 2.2.
2022-07-06 09:35:13 +02:00
Christopher Faulet
f0196f4f71 CLEANUP: bwlim: Set pointers to NULL when memory is released
Calls to free() are replaced by ha_free(). Otherwise, the pointers are
explicitly set to NULL after a release. There is no issue here, but it
could help during debugging sessions.
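
For reference, ha_free() is essentially a free-and-clear helper; a
simplified sketch (the real macro carries a few extra debugging guards):

  #define ha_free(x) do { free(*(x)); *(x) = NULL; } while (0)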
2022-07-06 09:34:54 +02:00
Willy Tarreau
d2494e0489 BUG/MEDIUM: peers/config: properly set the thread mask
The peers didn't have their bind_conf thread mask nor group set, because
they're still not part of the global proxy list. Till 2.6 it seems this does
not have any visible impact, since most listener-oriented operations pass
through thread_mask() which detects null masks and turns them to
all_threads_mask. But starting with 2.7 it becomes a problem, as null masks
won't be permitted anymore.

This patch duplicates (yes, sorry) the loop that applies to the frontend's
bind_conf, though it is simplified (no sharding, etc).

As the code is right now, it simply seems impossible to trigger the second
(and largest) part of the check when leaving thread_resolve_group_mask()
on success, so it looks like it might be removed.

No backport is needed, unless a report in 2.6 or earlier mentions an issue
with a null thread_mask.
2022-07-05 19:10:26 +02:00
Willy Tarreau
8d158132bd BUG/MINOR: peers/config: always fill the bind_conf's argument
Some generic frontend errors mention the bind_conf by its name as
"bind '%s'", but if this is used on peers "bind" lines it shows
"(null)" because the argument is set to NULL in the call to
bind_conf_uniq_alloc() instead of passing the argument. Fortunately
that's trivial to fix.

This may be backported to older versions.
2022-07-05 19:06:47 +02:00
Amaury Denoyelle
bf91e3922b MINOR: mux-quic: emit FINAL_SIZE_ERROR on invalid STREAM size
Add a check on the stream size when the stream is in the "Size Known"
state. In this case, a STREAM frame cannot change the stream size. If this
is not respected, a CONNECTION_CLOSE with FINAL_SIZE_ERROR will be emitted,
as specified in RFC 9000.
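
A minimal sketch of such a check (field and helper names are assumptions
for illustration, not the exact code):

  /* in "Size Known" state, a STREAM frame must not move the final size */
  if ((qcs->flags & QC_SF_SIZE_KNOWN) &&
      offset + len > qcs->rx.offset_max) {
          qcc_emit_cc(qcc, QC_ERR_FINAL_SIZE_ERROR);
          return 1;
  }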
2022-07-05 16:44:01 +02:00
Amaury Denoyelle
3f39b40fe0 MINOR: mux-quic: rename qcs flag FIN_RECV to SIZE_KNOWN
Rename QC_SF_FIN_RECV to the more generic name QC_SF_SIZE_KNOWN. This
better aligns with RFC 9000, which uses the "Size Known" state
definition. This change is purely cosmetic.
2022-07-05 16:18:27 +02:00
Amaury Denoyelle
a509ffb505 MEDIUM: mux-quic: refactor streams opening
Review the whole API used to access/instantiate qcs.

A public function qcc_open_stream_local() is available to the
application protocol layer. It allows a local stream to be opened
easily; the ID is automatically assigned to the next available one.

For remote streams, qcc_open_stream_remote() has been implemented. It
automatically takes care of allocating streams in a linear way
according to the ID. This function is called via qcc_get_qcs(), which
can be used for each qcc_recv*() operation. For the moment, it is only
used for STREAM frames via qcc_recv(), but soon it will be implemented
for other frame types which can also be used to open a new stream.

qcs_new() and qcs_free() have been restricted to the QUIC MUX only as
they are now reserved for internal usage.

This change is a pure refactoring and should not have any noticeable
impact. It clarifies the developer's intent and helps to ensure that a
stream is not automatically opened when not desired.
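
In practice the application layer is expected to use the API roughly as
below (signatures are assumptions for illustration):

  /* local stream: the next available ID is assigned automatically */
  struct qcs *qcs = qcc_open_stream_local(qcc, 1 /* bidi */);

  /* remote stream: opened on first access, allocating linearly up to <id> */
  struct qcs *rqcs = qcc_get_qcs(qcc, id);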
2022-07-05 16:18:27 +02:00
Amaury Denoyelle
3abeb57909 MINOR: mux-quic: implement accessor for sedesc
Implement a function <qcs_sc> to easily access the stconn associated
with a QCS. It takes care of qcs.sd, which may be NULL, for example for
unidirectional streams.

It is expected that in the future, when implementing
STOP_SENDING/RESET_STREAM, the stconn must be notified about the event.
This accessor will make it easy to test whether the stconn is
instantiated or not.
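
A plausible shape for the accessor (a sketch; the real field layout may
differ):

  static inline struct stconn *qcs_sc(const struct qcs *qcs)
  {
          return qcs->sd ? qcs->sd->sc : NULL;
  }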
2022-07-05 11:22:15 +02:00
Amaury Denoyelle
321fa7733c REORG: mux-quic: reorganize flow-control fields
<qcc.cl_bidi_r> is used to implement STREAM ID flow control enforcement.
Move it together with all fields related to this operation, separated from
the MAX STREAM DATA calculation.
2022-07-05 11:20:02 +02:00
Amaury Denoyelle
a441ec9c7a CLEANUP: mux-quic: do not export qc_get_ncbuf
qc_get_ncbuf() is only used internally: thus its prototype is not required
in the QUIC MUX include file.
2022-07-05 11:06:52 +02:00
William Lallemand
2ee490f613 CLEANUP: mworker: rename mworker_pipe to mworker_sockpair
The function mworker_pipe_register_per_thread() is called this way
because the master first used pipes instead of socketpairs.

Rename mworker_pipe_register_per_thread() to
mworker_sockpair_register_per_thread() in order to be more consistent.

Also update a comment inside the function.
2022-07-05 09:06:04 +02:00
Emeric Brun
36d9097cf3 MINOR: fd: Add BUG_ON checks on fd_insert()
This patch adds two BUG_ON checks to fd_insert(), verifying that the fd
has been correctly re-initialized in the fdtab before a new insert.

It will raise a BUG if we try to insert the same fd multiple times
without an intermediate fd_delete().

The first one checks that the owner for this fd in fdtab was reset to NULL.
The second one checks that the state flags for this fd in fdtab were reset
to 0.

This patch could be backported to versions >= 2.4.
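
The two checks boil down to the following sketch at the top of
fd_insert():

  BUG_ON(fdtab[fd].owner != NULL);  /* owner must have been reset */
  BUG_ON(fdtab[fd].state != 0);     /* state flags must have been reset */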
2022-07-05 05:18:51 +02:00
William Lallemand
34aae2fd12 MEDIUM: mworker: set the iocb of the socketpair without using fd_insert()
The worker was previously changing the iocb of the socketpair in the
worker to mworker_accept_wrapper(). However, it was done using
fd_insert() instead of directly changing the callback in the
fdtab[].iocb pointer.

This patch cleans this part up by removing the call to fd_insert().

It also stops setting tid_bit on the thread mask; the socketpair will be
handled by any thread from now on.
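
In short, instead of a full re-insertion, the callback swap is now a
direct assignment (sketch):

  fdtab[fd].iocb = mworker_accept_wrapper;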
2022-07-05 05:18:51 +02:00
Ilya Shipitsin
cfba1f93af CI: re-enable gcc asan builds
For some unclear reason, ASAN builds were limited to clang only. Let us
enable them for gcc as well.
2022-07-04 17:28:58 +02:00
Christian Ruppert
3214b44702 BUILD: Makefile: Add Lua 5.4 autodetect
This patch is based on:
https://www.mail-archive.com/haproxy@formilux.org/msg39689.html
Thanks to Callum Farmer!

Signed-off-by: Christian Ruppert <idl0r@qasl.de>
2022-07-04 17:28:48 +02:00
Willy Tarreau
7509ec369a MINOR: proxy: use tg->threads_enabled in hard_stop() to detect stopped threads
Let's rely on tg->threads_enabled there to detect running threads. We
should probably have a dedicated function for this in order to simplify
the code and avoid the risk of using the wrong group ID.
2022-07-04 14:09:39 +02:00
Willy Tarreau
24cfc9f76e BUG/MEDIUM: thread: check stopping thread against local bit and not global one
Commit ef422ced9 ("MEDIUM: thread: make stopping_threads per-group and add
stopping_tgroups") moved the stopping_threads mask to per-group, but one
test in the loop preserved its global value instead, resulting in stopping
threads never sleeping on stop and eating 100% CPU until all were stopped.

No backport is needed.
2022-07-04 14:09:39 +02:00
Willy Tarreau
291f6ff885 BUG/MEDIUM: threads: fix incorrect thread group being used on soft-stop
Commit 377e37a80 ("MINOR: tinfo: add the mask of enabled threads in each
group") forgot -1 on the tgid, thus the group was not always correctly
tested, which is visible only when running with more than one group. No
backport is needed.
2022-07-04 13:37:31 +02:00
Willy Tarreau
89ed89e895 BUILD: debug: re-export thread_dump_state
Building with threads and without thread dump (e.g. macos, freebsd)
warns that thread_dump_state is unused. This happened in fact with
recent commit 1229ef312 ("MINOR: wdt: do not rely on threads_to_dump
anymore"). The solution would be to mark it unused, but after a
second thought, it can be convenient to keep it exported to help
debug crashes, so let's export it again. It's just not referenced in
include files since it's not needed outside.
2022-07-01 21:18:03 +02:00
Willy Tarreau
039972b4e5 BUILD: debug: fix build issue on clang with previous commit
Since the thread_dump_state type changed to uint, the old value in the
CAS needs to be the same as well.
2022-07-01 19:37:42 +02:00
Willy Tarreau
00c27b50c0 MEDIUM: debug: make the thread dumper not rely on a thread mask anymore
The thread mask is too short to dump more than 64 bits. Thus here we're
using a different approach with two counters, one for the next thread ID
to dump (which always exists, as it's looked up), and the second one for
the number of threads done dumping. This allows to dump threads in ascending
order then to let them wait for all others to be done, then to leave without
the risk of an overlapping dump until the done count is null again.

This allows to remove threads_to_dump which was the last non-FD variable
using a global thread mask.
2022-07-01 19:31:39 +02:00
Willy Tarreau
1229ef312d MINOR: wdt: do not rely on threads_to_dump anymore
This flag is not needed anymore as we're already marking the waiting
threads as harmless, thus the thread's bit is already covered by this
information. The variable was unexported.
2022-07-01 19:26:35 +02:00
Willy Tarreau
f7afdd910b MINOR: debug: mark oneself harmless while waiting for threads to finish
The debug_handler() function waits for other threads to join, but does
not mark itself as harmless, so if at the same time another thread tries
to isolate, this may deadlock. In practice this does not happen as the
signal is received during epoll_wait() hence under harmless mode, but
it could possibly happen under other conditions.

In order to improve this, while waiting for other threads to join, we're
now marking the current thread as harmless, as it's doing nothing but
waiting for the other ones. This way another harmless waiter will be able
to proceed. It's valid to do this since we're not doing anything else in
this loop.

One improvement could be to also check for the thread being idle and
marking it idle in addition to harmless, so that it can even release a
full isolation requester. But that really doesn't look worth it.
2022-07-01 19:26:35 +02:00
Willy Tarreau
a2b8ed4b44 MINOR: thread: add is_thread_harmless() to know if a thread already is harmless
The harmless status is not re-entrant, so sometimes for signal handling
it can be useful to know if we're already harmless or not. Let's add a
function doing that, and make the debugger use it instead of manipulating
the harmless mask.
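
A plausible shape for the helper (a sketch based on the per-group masks
introduced in this series):

  static inline int is_thread_harmless(void)
  {
          return !!(HA_ATOMIC_LOAD(&tg_ctx->threads_harmless) & ti->ltid_bit);
  }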
2022-07-01 19:26:35 +02:00
Willy Tarreau
598cf3f22e MAJOR: threads: change thread_isolate to support inter-group synchronization
thread_isolate() and thread_isolate_full() were relying on a set of thread
masks for all threads in different states (rdv, harmless, idle). This cannot
work anymore when the number of threads increases beyond LONGBITS so we need
to change the mechanism.

What is done here is to have a counter of requesters and the number of the
current isolated thread. Threads which want to isolate themselves increment
the request counter and wait for all threads to be marked harmless (or idle)
by scanning all groups and watching the respective masks. This is possible
because threads cannot escape once they discover this counter, unless they
also want to isolate and possibly pass first. Once all threads are harmless,
the requesting thread tries to self-assign the isolated thread number, and
if it fails it loops back to checking all threads. If it wins it's guaranteed
to be alone, and can drop its harmless bit, so that other competing threads
go back to the loop waiting for all threads to be harmless. The benefit of
proceeding this way is that there's very little write contention on the
thread number (none during work), hence no cache line moves between caches,
thus frozen threads do not slow down the isolated one.

Once it's done, the isolated thread resets the thread number (hence lets
another thread take the place) and decrements the requester count, thus
possibly releasing all harmless threads.

With this change there's no more need for any global mask to synchronize
any thread, and we only need to loop over a number of groups to check
64 threads at a time per iteration. As such, tinfo's threads_want_rdv
could be dropped.

This was tested with 64 threads spread into 2 groups, running 64 tasks
(from the debug dev command), 20 "show sess" (thread_isolate()), 20
"add server blah/blah" (thread_isolate()), and 20 "del server blah/blah"
(thread_isolate_full()). The load remained very low (limited by external
socat forks) and no stuck nor starved thread was found.
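
The core of the algorithm can be modeled with just two shared words, as
in the standalone sketch below (names and structure are illustrative,
not HAProxy's exact code):

  #include <limits.h>
  #include <stdatomic.h>

  static atomic_uint rdv_requests;               /* threads requesting isolation */
  static atomic_uint isolated_thread = UINT_MAX; /* tid holding the isolation */

  void model_isolate(unsigned int tid, int ngrps,
                     atomic_ulong *harmless, atomic_ulong *enabled)
  {
          atomic_fetch_add(&rdv_requests, 1);
          /* (also mark ourselves harmless here), then: */
          while (1) {
                  /* wait for all enabled threads of all groups to be harmless */
                  for (int g = 0; g < ngrps; g++)
                          while ((atomic_load(&harmless[g]) & atomic_load(&enabled[g]))
                                 != atomic_load(&enabled[g]))
                                  ; /* relax */
                  /* try to self-assign the isolated thread number */
                  unsigned int expected = UINT_MAX;
                  if (atomic_compare_exchange_strong(&isolated_thread, &expected, tid))
                          break; /* won: drop our harmless bit, we are alone */
          }
  }

  void model_release(void)
  {
          atomic_store(&isolated_thread, UINT_MAX); /* let the next requester in */
          atomic_fetch_sub(&rdv_requests, 1);       /* may release harmless threads */
  }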
2022-07-01 19:15:15 +02:00
Willy Tarreau
ef422ced91 MEDIUM: thread: make stopping_threads per-group and add stopping_tgroups
Stopping threads need a mask to figure who's still there without scanning
everything in the poll loop. This means this will have to be per-group.
And we also need to have a global stopping groups mask to know what groups
were already signaled. This is used both to figure what thread is the first
one to catch the event, and which one is the first one to detect the end of
the last job. The logic isn't changed, though a loop is required in the
slow path to make sure all threads are aware of the end.

Note that for now the soft-stop still takes time for group IDs > 1 as the
poller is not yet started on these threads and needs to expire its timeout
as there's no way to wake it up. But all threads are eventually stopped.
2022-07-01 19:15:15 +02:00
Willy Tarreau
03f9b35114 MEDIUM: tinfo: add a dynamic thread-group context
The thread group info is not sufficient to represent a thread group's
current state as it's read-only. We also need something comparable to
the thread context to represent the aggregate state of the threads in
that group. This patch introduces ha_tgroup_ctx[] and tg_ctx for this.
It's indexed on the group id and must be cache-line aligned. The thread
masks that were global and that do not need to remain global were moved
there (want_rdv, harmless, idle).

Given that all the masks placed there now become group-specific, the
associated thread mask (tid_bit) now switches to the thread's local
bit (ltid_bit). Both are the same for nbtgroups 1 but will differ for
other values.

There's also a tg_ctx pointer in the thread so that it can be reached
from other threads.
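
A rough sketch of the new structure (the field list is an assumption for
illustration):

  /* aggregate, written state of all threads of a group; one entry per
   * group, cache-line aligned, indexed by group id
   */
  struct tgroup_ctx {
          ulong threads_want_rdv;   /* threads wanting a rendezvous */
          ulong threads_harmless;   /* threads currently harmless */
          ulong threads_idle;       /* threads currently idle */
          /* ... padded to a cache line ... */
  } THREAD_ALIGNED(64);

  extern struct tgroup_ctx ha_tgroup_ctx[MAX_TGROUPS];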
2022-07-01 19:15:15 +02:00
Willy Tarreau
22b2a24eb2 CLEANUP: thread: remove thread_sync_release() and thread_sync_mask
This function was added in 2.0 when reworking the thread isolation
mechanism to make it more reliable. However it is fundamentally
incompatible with the full isolation mechanism provided by
thread_isolate_full() since that one will wait for all threads to
become idle while the former will wait for all threads to finish
waiting, causing a deadlock.

Given that it's not used, let's just drop it entirely before it gets
used by accident.
2022-07-01 19:15:15 +02:00
Willy Tarreau
cce203aae5 MINOR: thread: add a new all_tgroups_mask variable to know about active tgroups
In order to kill all_threads_mask we'll need to have an equivalent for
the thread groups. The all_tgroups_mask does just this, it keeps one bit
set per enabled group.
2022-07-01 19:15:15 +02:00
Willy Tarreau
c6cf64bb5e MINOR: thread: use ltid_bit in ha_tkillall()
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_tkillall() needs
this.
2022-07-01 19:15:15 +02:00
Willy Tarreau
1e7f0d68b0 MINOR: clock: use ltid_bit in clock_report_idle()
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). clock_report_idle()
needs this.

This also implies not using all_threads_mask anymore but taking the mask
from the tgroup since it becomes relative now.
2022-07-01 19:15:15 +02:00
Willy Tarreau
adc1f52c92 MINOR: wdt: use ltid_bit in wdt_handler()
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). wdt_handler() needs
this.
2022-07-01 19:15:14 +02:00
Willy Tarreau
38d0712748 MINOR: debug: use ltid_bit in ha_thread_dump()
Since commit cc7a11ee3 ("MINOR: threads: set the tid, ltid and their bit
in thread_cfg") we ought not use (1UL << thr) to get the group mask for
thread <thr>, but (ha_thread_info[thr].ltid_bit). ha_thread_dump() needs
this.
2022-07-01 19:15:14 +02:00
Willy Tarreau
377e37a80f MINOR: tinfo: add the mask of enabled threads in each group
In order to replace the global "all_threads_mask" we'll need to have an
equivalent per group. Take this opportunity to call it threads_enabled
and to make clear which threads are counted there (in case we later allow
some to be stopped).
2022-07-01 19:15:14 +02:00
Willy Tarreau
60fe4a95a2 MINOR: tinfo: replace the tgid with tgid_bit in tgroup_info
Now that the tgid is accessible from the thread, it's pointless to have
it in the group, and it was only set but never used. However, we'll soon
frequently need the mask corresponding to the group ID, and the risk of
getting it wrong with the +1, or of shifting 1 instead of 1UL, is
significant, so let's store the tgid_bit there.
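
The point is to compute the error-prone expression once, at init time
(sketch; names are illustrative):

  tg->tgid_bit = 1UL << (tgid - 1);  /* neither "1 << tgid" nor a plain "1 <<" */

so that all users simply read the precomputed mask.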
2022-07-01 19:15:14 +02:00
Willy Tarreau
66ad98a772 MINOR: tinfo: add the tgid to the thread_info struct
At several places we're dereferencing the thread group just to catch
the group number, and this will become even more required once we start
to use per-group contexts. Let's just add the tgid in the thread_info
struct to make this easier.
2022-07-01 19:15:14 +02:00
Willy Tarreau
e7475c8e79 MEDIUM: tasks/fd: replace sleeping_thread_mask with a TH_FL_SLEEPING flag
Every single place where sleeping_thread_mask was still used was to test
or set a single thread. We can now add a per-thread flag to indicate a
thread is sleeping, and remove this shared mask.

The wake_thread() function now always performs an atomic fetch-and-or
instead of a first load then an atomic OR. That's cleaner and more
reliable.

This is not easy to test, as broadcast FD events are rare. The good
way to test for this is to run a very low rate-limited frontend with
a listener that listens to the fewest possible threads (2), and to
send it only 1 connection at a time. The listener will periodically
pause and the wakeup task will sometimes wake up on a random thread
and will call wake_thread():

   frontend test
        bind :8888 maxconn 10 thread 1-2
        rate-limit sessions 5

Alternately, disabling/enabling a frontend in loops via the CLI also
broadcasts such events, but they're more difficult to observe since
this is causing connection failures.
2022-07-01 19:15:14 +02:00
Willy Tarreau
dce4ad755f MEDIUM: thread: add a new per-thread flag TH_FL_NOTIFIED to remember wakeups
Right now when an inter-thread wakeup happens, we preliminary check if the
thread was asleep, and if so we wake the poller up and remove its bit from
the sleeping mask. That's not very clean since the sleeping mask cannot be
entirely trusted since a thread that's about to wake up will already have
its sleeping bit removed.

This patch adds a new per-thread flag (TH_FL_NOTIFIED) to remember that a
thread was notified to wake up. It's cleared before checking the task lists
last, so that new wakeups can be considered again (since wake_thread() is
only used to notify about task wakeups and FD polling changes). This way
we do not need to modify a remote thread's sleeping mask anymore. As such
wake_thread() now only tests and sets the TH_FL_NOTIFIED flag but doesn't
clear sleeping anymore.
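
The resulting logic in wake_thread() may be sketched as follows
(simplified, with assumed flag handling; wake_poller() is a hypothetical
helper standing for the actual poller wakeup):

  unsigned int flags = HA_ATOMIC_FETCH_OR(&ha_thread_ctx[thr].flags, TH_FL_NOTIFIED);

  /* only disturb the thread if it was sleeping and not yet notified */
  if ((flags & (TH_FL_SLEEPING | TH_FL_NOTIFIED)) == TH_FL_SLEEPING)
          wake_poller(thr);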
2022-07-01 19:15:14 +02:00
Willy Tarreau
555c192d14 MINOR: poller: update_fd_polling: wake a random other thread
When enabling an FD that's only bound to another thread, instead of
always picking the first one, let's pick a random one. This is rarely
used (enabling a frontend, or session rate-limiting period ending),
and has greater chances of avoiding that some obscure corner cases
could degenerate into a poorly distributed load.
2022-07-01 19:15:14 +02:00
Willy Tarreau
962e5ba72b MEDIUM: polling: make update_fd_polling() not care about sleeping threads
Till now, update_fd_polling() used to check if all the target threads were
sleeping, and only then would wake an owning thread up. This causes several
problems among which the need for the sleeping_thread_mask and the fact that
by the time we wake one thread up, it has changed.

This commit changes this by leaving it to wake_thread() to perform this
check on the selected thread, since wake_thread() is already capable of
doing this now. Concretely speaking, for updt_fd_polling() it will mean
performing one computation of an ffsl() before knowing the sleeping status
on a global FD state change (which is very rare and not important here,
as it basically happens after relaxing a rate-limit (i.e. once a second
at best) or after enabling a frontend from the CLI); thus we don't care.
2022-07-01 19:15:14 +02:00
Willy Tarreau
058b2c1015 MINOR: poller: centralize poll return handling
When returning from the polling syscall, all pollers have a certain
dance to follow, made of wall clock updates, thread harmless updates,
idle time management and sleeping mask updates. Let's have a centralized
function to deal with all of this boring stuff: fd_leaving_poll(), and
make all the pollers use it.
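
All pollers now end their loop with something along these lines (a
sketch; exact helper names may differ):

  static inline void fd_leaving_poll(int wait_time, int status)
  {
          clock_leaving_poll(wait_time, status);  /* wall clock & idle time */
          thread_harmless_end();                  /* no longer harmless */
          thread_idle_end();                      /* no longer idle */
          _HA_ATOMIC_AND(&sleeping_thread_mask, ~tid_bit); /* awake again */
  }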
2022-07-01 19:15:14 +02:00
Willy Tarreau
bdcd32598f MINOR: thread: only use atomic ops to touch the flags
The thread flags are touched a little bit by other threads, e.g. the STUCK
flag may be set by other ones, and they're watched a little bit. As such
we need to use atomic ops only to manipulate them. Most places were already
using them, but here we generalize the practice. Only ha_thread_dump() does
not change because it's run under isolation.
2022-07-01 19:15:14 +02:00
Willy Tarreau
8e079cdd44 MINOR: thread: move the flags to the shared cache line
The thread flags were once believed to be local to the thread, but as
it stands, even the STUCK flag is shared since it's looked at by the
watchdog. As such we'll need to use atomic ops to manipulate them, and
likely to move them into the shared area.

This patch only moves the flag into the shared area so that we can later
decide whether it's best to leave them there or to move them back to the
local area. Interestingly, some tests have shown a 3% better performance
on dequeuing with this, while they're not used by other threads yet, so
there are definitely alignment effects that might change over time.
2022-07-01 19:15:14 +02:00
Willy Tarreau
f3efef4d60 MINOR: thread: make wake_thread() take care of the sleeping threads mask
Almost every call place of wake_thread() checks for sleeping threads and
clears the sleeping mask itself, while the function is solely used
there. Let's move the check and the clearing of the bit inside the function
itself. Note that updt_fd_polling() still performs the check because its
rules are a bit different.
2022-07-01 19:15:14 +02:00
Willy Tarreau
3fdacdddaf MEDIUM: queue: revert to regular inter-task wakeups
Now that the inter-task wakeups are cheap, there's no point in using
task_instant_wakeup() anymore when dequeueing tasks. The use of the
regular task_wakeup() is sufficient there and will preserve a better
fairness: the test that went from 40k to 570k RPS now gives 580k RPS
(down from 585k RPS with previous commit). This essentially reverts
commit 27fab1dcb ("MEDIUM: queue: use tasklet_instant_wakeup() to
wake tasks").
2022-07-01 19:15:14 +02:00