Commit Graph

6954 Commits

Author SHA1 Message Date
Willy Tarreau
76d343d3d3 MINOR: time: replace calls to tv_ms_elapsed() with a linear subtract
Instead of operating on {sec, usec}, we now convert both operands to
ns, then subtract them and convert to ms. This is a first step towards
dropping timeval from these timestamps.

Interestingly, tv_ms_elapsed() and tv_ms_remain() are no longer used at
all and could be removed.
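
As an illustration of the principle (a minimal sketch, not the actual
haproxy helpers; the names below are assumptions):

    #include <stdint.h>
    #include <sys/time.h>

    /* convert a timeval to a 64-bit nanosecond counter */
    static inline uint64_t tv_to_ns(const struct timeval *tv)
    {
            return (uint64_t)tv->tv_sec * 1000000000ULL +
                   (uint64_t)tv->tv_usec * 1000ULL;
    }

    /* elapsed milliseconds between <from> and <now>: a single linear
     * subtract on ns counters instead of {sec, usec} arithmetic
     */
    static inline uint64_t ms_elapsed(const struct timeval *from,
                                      const struct timeval *now)
    {
            return (tv_to_ns(now) - tv_to_ns(from)) / 1000000ULL;
    }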
2023-04-28 16:08:08 +02:00
Willy Tarreau
591fa59da7 MINOR: time: add conversions to/from nanosecond timestamps
In order to ease the transition away from the timeval used in internal
timestamps, let's first create a few functions and macros to return a
counter from a timeval and conversely, as well as ease the conversions
to/from ns/us/ms/sec, to save the user from having to count zeroes and
to think about appending ULL in conversions.
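
A hedged sketch of what such conversion helpers can look like (the exact
haproxy macro names are not shown here; these are illustrative):

    #include <stdint.h>

    /* centralize zero-counting and the ULL suffix so callers cannot
     * get them wrong
     */
    #define SEC_TO_NS(s)   ((uint64_t)(s)  * 1000000000ULL)
    #define MS_TO_NS(ms)   ((uint64_t)(ms) * 1000000ULL)
    #define US_TO_NS(us)   ((uint64_t)(us) * 1000ULL)
    #define NS_TO_MS(ns)   ((uint64_t)(ns) / 1000000ULL)
    #define NS_TO_SEC(ns)  ((uint64_t)(ns) / 1000000000ULL)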
2023-04-28 16:08:08 +02:00
Christopher Faulet
81951f264e BUG/MINOR: stconn: Fix SC flags with same value
SC_FL_SND_NEVERWAIT and SC_FL_SND_EXP_MORE flags have the same value. It is
not critical because these flags are only used to know if the MSG_MORE flag
must be set on a send().

No backport needed.
2023-04-28 08:51:34 +02:00
Christopher Faulet
e99c43907c BUG/MEDIUM: spoe: Don't start new applet if there are enough idle ones
It is possible to start too many applets on a sporadic burst of events after
an inactivity period. It is due to the way we estimate whether a new applet
must be created or not. It is based on a frequency counter: we compare the
event processing rate against the number of events currently processed (in
progress or waiting to be processed). But we should also take the number of
idle applets into account.

We already track the number of idle applets, but it is global and not
per-thread. Thus we now also track the number of idle applets per-thread. It
is not a big deal because this fills a hole in the spoe_agent structure.
Thanks to this counter, we can refrain from creating applets if there are
enough idle ones to handle currently processed events.
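
The decision can be summarized by the following sketch (names and types
are illustrative assumptions, not the actual spoe structures):

    #include <stdbool.h>

    /* only start a new applet if the events to process exceed both the
     * recent processing rate and what the thread's idle applets can
     * already absorb
     */
    static bool spoe_need_new_applet(unsigned int events_pending,
                                     unsigned int processing_rate,
                                     unsigned int idle_applets)
    {
            if (events_pending <= idle_applets)
                    return false;
            return events_pending - idle_applets > processing_rate;
    }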

This patch should be backported to all stable versions.
2023-04-28 08:51:34 +02:00
Amaury Denoyelle
d6646dddcc MINOR: quic: finalize affinity change as soon as possible
During accept, a quic-conn is rebound to a new thread. This process is
done in two steps :
* first on the original thread via qc_set_tid_affinity()
* then on the newly assigned thread via qc_finalize_affinity_rebind()

Most quic_conn operations (I/O tasklet, task and quic_conn FD socket
read) are reactivated only after the second step. However, there is a
possibility that datagrams are handled before it via quic_dgram_parse()
when using listener sockets. This does not seem to cause any issue, but
it may cause unexpected behavior in the future.

To simplify this, qc_finalize_affinity_rebind() will be called both by
qc_xprt_start() and quic_dgram_parse(). Only one invocation will be
performed thanks to the new flag QUIC_FL_CONN_AFFINITY_CHANGED.
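
A minimal sketch of the guard (types simplified; the real function also
reactivates tasks, tasklet and FD reading):

    struct quic_conn { unsigned int flags; /* ... */ };
    #define QUIC_FL_CONN_AFFINITY_CHANGED 0x00000001u /* illustrative bit */

    static void qc_finalize_affinity_rebind(struct quic_conn *qc)
    {
            /* may be reached from qc_xprt_start() or quic_dgram_parse();
             * only the first caller performs the finalization
             */
            if (!(qc->flags & QUIC_FL_CONN_AFFINITY_CHANGED))
                    return;
            qc->flags &= ~QUIC_FL_CONN_AFFINITY_CHANGED;
            /* ... wake up tasks/tasklet, re-enable FD reading ... */
    }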

This should be backported up to 2.7.
2023-04-26 17:50:16 +02:00
Amaury Denoyelle
24962dd178 BUG/MEDIUM: mux-quic: do not emit RESET_STREAM for unknown length
Some HTX responses may not always contain an EOM block. For example this
is the case if the content-length header is missing from the HTTP server
response. Stream termination is thus signaled to the QUIC mux via the shutw
callback. However, this is interpreted unconditionally as an early
close by the mux, with a RESET_STREAM emission. Most of the time, QUIC
clients report this as an error.

To fix this, check if htx.extra is set to HTX_UNKOWN_PAYLOAD_LENGTH for
a qcs instance. If true, shutw will never be used to emit a
RESET_STREAM. Instead, the stream will be closed properly with a FIN
STREAM frame. If all data were already transferred, an empty STREAM frame
is sent.

This fix may help with GitHub issue #2004, where the Chrome browser stops
using QUIC after receiving RESET_STREAM frames.

This issue was reported by Vladimir Zakharychev. Thanks to him for his
help and testing. It was also reproduced locally using httpterm with the
query string "/?s=1k&b=0&C=1".

This should be backported up to 2.7.
2023-04-26 17:50:09 +02:00
Willy Tarreau
543e2544ca DEBUG: crash using an invalid opcode on aarch64 instead of an invalid access
On aarch64 there's also a guaranteed invalid instruction, called UDF,
which even supports an optional 16-bit immediate operand:

   https://developer.arm.com/documentation/ddi0596/2021-12/Base-Instructions/UDF--Permanently-Undefined-?lang=en

It's conveniently encoded as 4 zeroes (when the operand is zero). It's
unclear when support for it was added into GAS, if at all; even a
not-so-old 2.27 doesn't know about it. Let's byte-encode it.

Tested on an A72 and works as expected.
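
A sketch of the byte-encoding approach (assuming GCC-style inline asm;
the macro name is illustrative, not the actual haproxy one):

    /* UDF #0 encodes as four zero bytes on aarch64; byte-encode it so
     * that even an old assembler lacking the mnemonic accepts it
     */
    #define HA_CRASH_NOW() do { \
            __asm__ volatile(".byte 0x00, 0x00, 0x00, 0x00"); \
    } while (0)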
2023-04-25 19:53:39 +02:00
Willy Tarreau
77787ec9bc DEBUG: crash using an invalid opcode on x86/x86_64 instead of an invalid access
BUG_ON() calls currently trigger a segfault. This is more convenient
than abort() as it doesn't rely on any function call nor signal handler
and never causes non-unwindable stacks when opening cores. But it adds
quite some confusion in bug reports which are rightfully tagged "segv"
and do not instantly allow distinguishing real segvs (e.g. null derefs)
from code asserts.

Some CPU architectures offer various crashing methods. On x86 we have
INT3 (0xCC), which stops into the debugger, and UD0/UD1/UD2. INT3 looks
appealing but for whatever reason (maybe signal handling somewhere) it
loses the last call point in the stack, making backtraces unusable. UD2
has the merit of being only 2 bytes and causing an invalid instruction,
which almost never happens normally, so it's easily distinguishable.
Here it was defined as a macro so that the line number in the core
matches the one where the BUG_ON() macro is called, and the debugger
shows the last frame exactly at its calling point.
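
A sketch of the idea (the macro name is illustrative):

    /* UD2 is the two-byte sequence 0x0F 0x0B; keeping it in a macro
     * makes the faulting address, and thus the reported line, the
     * BUG_ON() call site itself
     */
    #define HA_CRASH_NOW() do { \
            __asm__ volatile("ud2"); \
    } while (0)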

E.g. when calling "debug dev bug":

Program terminated with signal SIGILL, Illegal instruction.
  #0  debug_parse_cli_bug (args=<optimized out>, payload=<optimized out>, appctx=<optimized out>, private=<optimized out>) at src/debug.c:408
  408             BUG_ON(one > zero);
  [Current thread is 1 (Thread 0x7f7a660cc1c0 (LWP 14238))]
  (gdb) bt
  #0  debug_parse_cli_bug (args=<optimized out>, payload=<optimized out>, appctx=<optimized out>, private=<optimized out>) at src/debug.c:408
  #1  debug_parse_cli_bug (args=<optimized out>, payload=<optimized out>, appctx=<optimized out>, private=<optimized out>) at src/debug.c:402
  #2  0x000000000061a69f in cli_parse_request (appctx=appctx@entry=0x181c0160) at src/cli.c:832
  #3  0x000000000061af86 in cli_io_handler (appctx=0x181c0160) at src/cli.c:1035
  #4  0x00000000006ca2f2 in task_run_applet (t=0x181c0290, context=0x181c0160, state=<optimized out>) at src/applet.c:449
2023-04-25 18:51:10 +02:00
Amaury Denoyelle
d5f03cd576 CLEANUP: quic: rename frame variables
Rename all frame variables with the suffix _frm. This helps to
differentiate frame instances from other internal objects.

This should be backported up to 2.7.
2023-04-24 15:35:22 +02:00
Amaury Denoyelle
888c5f283a CLEANUP: quic: rename frame types with an explicit prefix
Each frame type used in quic_frame union has been renamed with the
following prefix "qf_". This helps to differentiate frame instances from
other internal objects.

This should be backported up to 2.7.
2023-04-24 15:35:03 +02:00
Willy Tarreau
7310164b2c MINOR: listener: add a new global tune.listener.default-shards setting
This new setting accepts "by-process", "by-group" and "by-thread" and
will dictate how listeners will be sharded by default when nothing is
specified. While the default remains "by-process", "by-group" should be
much more efficient with many threads, while not changing anything for
single-group setups.
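
A minimal configuration sketch using the new setting:

    global
        tune.listener.default-shards by-group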
2023-04-23 09:46:15 +02:00
Willy Tarreau
f1003ea7fa MINOR: protocol: perform a live check for SO_REUSEPORT support
When testing if a protocol supports SO_REUSEPORT, we're now able to
verify if the OS really supports it. While it may be supported at
build time, it may possibly have been blocked in a container for
example, so we'd rather know what the situation really is.
2023-04-23 09:46:15 +02:00
Willy Tarreau
b073573c10 MINOR: sock: add a function to check for SO_REUSEPORT support at runtime
The new function _sock_supports_reuseport() will be used to check if a
protocol type supports SO_REUSEPORT or not. This will be useful to verify
that shards can really work.
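
The principle can be sketched as follows (illustrative, not the exact
haproxy implementation):

    #include <sys/socket.h>
    #include <unistd.h>

    /* create a scratch socket of the protocol's family/type and try to
     * set SO_REUSEPORT on it: build-time support doesn't guarantee the
     * kernel (or a container's policy) will accept it
     */
    static int sock_check_reuseport(int family, int type)
    {
            int one = 1, ret = 0;
            int fd = socket(family, type, 0);

            if (fd >= 0) {
                    ret = setsockopt(fd, SOL_SOCKET, SO_REUSEPORT,
                                     &one, sizeof(one)) == 0;
                    close(fd);
            }
            return ret;
    }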
2023-04-23 09:46:15 +02:00
Willy Tarreau
8a5e6f4cca MINOR: protocol: add a function to check if some features are supported
The new function protocol_supports_flag() checks the protocol flags
to verify if some features are supported, but will support being
extended to refine the tests. Let's use it to check for REUSEPORT.
2023-04-23 09:46:15 +02:00
Willy Tarreau
785b89f551 MINOR: protocol: move the global reuseport flag to the protocols
Some protocols support SO_REUSEPORT and others do not. Some have such a
limitation in the kernel, and others in haproxy itself (e.g. sock_unix
cannot support multiple bindings since each one will unbind the previous
one). Also it's really protocol-dependent and not just family-dependent
because on Linux, for some time, it was supported for TCP and not UDP.

Let's move the definition to the protocols instead. Now it's preset in
tcp/udp/quic when SO_REUSEPORT is defined, and is otherwise left unset.
The enabled() config condition test validates IPv4 (generally sufficient),
and -dR / noreuseport all protocols at once.
2023-04-23 09:46:15 +02:00
Willy Tarreau
65df7e028d MINOR: protocol: add a flags field to store info about protocols
We'll use these flags to know if some protocols are supported, and if
so, with what options/extensions. Reuseport will move there for example.
Two functions were added to globally set/clear a flag.
2023-04-23 09:46:15 +02:00
Willy Tarreau
da0d2cb698 MINOR: proxy: make proxy_type_str() recognize peers sections
Now proxy_type_str() will emit "peers section" when the mode is set to
peers, so as to ease sharing more code between peers and proxies.
2023-04-23 09:46:15 +02:00
Willy Tarreau
f6a8444f55 REORG: listener: move the bind_conf's thread setup code to listener.c
What used to be only two lines to apply a mask in a loop in
check_config_validity() grew into a 130-line block that performs deeply
listener-specific operations that do not have their place there anymore.
In addition it's worth noting that the peers code still doesn't support
shards nor being bound to more than one group, which is a second reason
for moving that code to its own function. Nothing was changed except
recreating the missing variables from the bind_conf itself (the fe only).
2023-04-23 09:46:15 +02:00
Willy Tarreau
4c538df28c CLEANUP: protocol: move the nb_receivers to plug a hole in protocol
This field forces an unaligned hole between two list heads. Let's move
it up where it will be more easily combined with other fields. In
addition, turn it to unsigned while it's still not used.
2023-04-23 09:46:15 +02:00
Willy Tarreau
798d6b4124 CLEANUP: protocol: move the l3_addrlen to plug a hole in proto_fam
There's a two-byte hole in proto_fam after sock_family, let's move the
l3_addrlen there as a ushort. Note that contrary to what the comment
says, it's still not used by hash algorithms, though it could be.
2023-04-23 09:46:15 +02:00
Willy Tarreau
df4051cd58 BUILD: proto_tcp: export the correct names for proto_tcpv[46]
The exported names were not correct (missing the 'v').
2023-04-23 09:46:15 +02:00
Willy Tarreau
968a4f34fc BUILD: sock_inet: forward-declare struct receiver
Including sock_inet.h without receiver-t.h causes build failures due to
struct receiver not being defined. Let's just forward-declare it.
2023-04-23 09:46:15 +02:00
Ilya Shipitsin
ccf8012f28 CLEANUP: assorted typo fixes in the code and comments
This is the 36th iteration of typo fixes.
2023-04-23 09:44:53 +02:00
Tim Duesterhus
3a8c63d48d MINOR: Make tasklet_free() safe to be called with NULL
Make this freeing function safe, like other freeing functions are as discussed
in GitHub issue #2126.
2023-04-23 00:28:25 +02:00
Willy Tarreau
ff18504d73 MINOR: listener: make sure to avoid ABA updates in per-thread index
One limitation of the current thread index mechanism is that if the
values are assigned multiple times to the same thread and the index
loops, it can match the old value again, which will not prevent a
competing thread from finishing its CAS and assigning traffic to a
thread that's not the optimal one. The probability is low but the
solution is simple enough and consists in implementing an update
counter in the high bits of the index to force a mismatch in this
case (assuming we don't try to cover for extremely unlikely cases
where the update counter loops while the index remains equal). So
let's do that. In order to improve the situation a little bit, we
now set the index to a ulong so that in 32 bits we have 8 bits of
counter and in 64 bits we have 40 bits.
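
A sketch of the packing (the 24-bit index width is an assumption for
illustration):

    #define IDX_MASK 0xFFFFFFUL /* low bits: thread index */

    /* combine the new index with an update counter held in the high
     * bits, bumped on every assignment, so two successive assignments
     * of the same index never compare equal in a CAS
     */
    static unsigned long pack_thread_idx(unsigned long prev, unsigned long idx)
    {
            return ((prev & ~IDX_MASK) + (IDX_MASK + 1)) | (idx & IDX_MASK);
    }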
2023-04-21 17:41:26 +02:00
Willy Tarreau
e6f5ab5afa MINOR: listener: make accept_queue index atomic
There has always been a race when checking the length of an accept queue
to determine which one is more loaded than another, because the head and
tail are read at two different moments. This is not required; we can merge
them as two 16-bit numbers inside a single 32-bit index that is always
accessed atomically. This way we read both values at once and always have
a consistent measurement.
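
A minimal sketch of the single-word encoding (illustrative):

    #include <stdint.h>

    /* head in the high 16 bits, tail in the low 16 bits, read in one
     * atomic load so the measurement is always consistent
     */
    static unsigned int accept_queue_len(const uint32_t *idx)
    {
            uint32_t v = __atomic_load_n(idx, __ATOMIC_ACQUIRE);
            return (uint16_t)((v >> 16) - (v & 0xFFFF)); /* wraps mod 2^16 */
    }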
2023-04-21 17:41:26 +02:00
Willy Tarreau
e4c36aa8a1 MINOR: receiver: add RX_F_MUST_DUP to indicate that an rx must be duped
The purpose of this new flag will be to mark that some listeners
duplicate their reference's FD instead of trying to set up a completely
new listener from scratch. This will be used when multiple groups want
to listen to the same socket, via multiple FDs.
2023-04-21 17:41:26 +02:00
Willy Tarreau
aae1810b4d MINOR: receiver: add a struct shard_info to store info about each shard
In order to create multiple receivers for one multi-group shard, we'll
need some more info about the shard. Here we store:
  - the number of groups (= number of receivers)
  - the number of threads (will be used for accept LB)
  - pointer to the reference rx (to get the FD and to find all threads)
  - pointers to the other members (to iterate over all threads)

For now since there's only one group per shard it remains simple. The
listener deletion code already takes care of removing the current
member from its shards list and moving others' reference to the last
one if it was their reference (so as to avoid O(n^2) updates during
ordered deletes).

Since the vast majority of setups will not use multi-group shards, we
try to save memory usage by only allocating the shard_info when it is
needed, so the principle here is that a receiver shard_info==NULL is
alone and doesn't share its socket with another group.

Various approaches were considered and tests show that the management
of the listeners during boot makes it easier to just attach to or
detach from a shard_info and automatically allocate it if it does not
exist, which is what is being done here.

For now the attach code is not called, but detach is already called
on delete.
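
A hedged sketch of the structure described above (field names and the
fixed members bound are assumptions):

    struct receiver; /* forward declaration */

    struct shard_info {
            unsigned int nbgroups;        /* number of receivers */
            unsigned int nbthreads;       /* used for accept LB */
            struct receiver *ref;         /* reference rx (owns the FD) */
            struct receiver *members[8];  /* other members, one per group */
    };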
2023-04-21 17:41:26 +02:00
Willy Tarreau
84fe1f479b MINOR: listener: support another thread dispatch mode: "fair"
This new algorithm for rebalancing incoming connections to multiple
threads is simpler: instead of considering the threads' load, it will
only cycle through all of them, offering a fair share of the traffic to
each thread. It may be well suited for short-lived connections but is
also convenient for very large thread counts where it's not always
certain that the least loaded thread will be found.
2023-04-21 17:41:26 +02:00
Willy Tarreau
6a4d48b736 MINOR: quic_sock: index li->per_thr[] on local thread id, not global one
There's a li_per_thread array in each listener for use with QUIC
listeners. Since thread groups were introduced, this array can be
allocated too large because global.nbthread entries are allocated for
each listener, while no more than MIN(nbthread,MAX_THREADS_PER_GROUP)
may be used by a single listener. This was because the global thread
ID is used as the index instead of the local ID (since a listener may
only be used by a single group). Let's just switch to local ID and
reduce the allocated size.
2023-04-21 17:41:26 +02:00
Willy Tarreau
77d37b07b1 MINOR: quic: support migrating the listener as well
When migrating a quic_conn to another thread, we may need to also
switch the listener if the thread belongs to another group. When
this happens, the freshly created connection will already have the
target listener, so let's just pick it from the connection and use
it in qc_set_tid_affinity(). Note that it will be the caller's
responsibility to guarantee this.
2023-04-21 17:41:26 +02:00
Aurelien DARRAGON
76e255520f MINOR: server: pass adm and op cause to srv_update_status()
Operational and administrative state change causes are not propagated
through srv_update_status(), instead they are directly consumed within
the function to provide additional info during the call when required.

Thus, there is no valid reason for keeping adm and op causes within the
server struct. We are wasting space and keeping unneeded complexity.

We now explicitly pass the change type (operational or administrative) and
the associated cause to srv_update_status() so that no extra storage is
needed since those values are only relevant from srv_update_status().
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
1746b56e68 MINOR: server: change srv_op_st_chg_cause storage type
This one is greatly inspired by "MINOR: server: change adm_st_chg_cause storage type".

While looking at current srv_op_st_chg_cause usage, it was clear that
the struct needed some cleanup, since some leftovers from asynchronous server
state change updates were left behind, resulting in some useless code
duplication and making the whole thing harder to maintain.

Two observations were made:

- by tracking down srv_set_{running, stopped, stopping} usage,
  we can see that the <reason> argument is always a fixed statically
  allocated string.
- check-related state change context (duration, status, code...) is
  not used anymore since srv_append_status() directly extracts the
  values from the server->check. This is pure legacy from when
  the state changes were applied asynchronously.

To prevent code duplication and useless string copies, and to make the
reason/cause more exportable, we store it as an enum now, and we provide the
srv_op_st_chg_cause() function to fetch the related description string.
HEALTH and AGENT causes (check related) are now explicitly identified to
make consumers like srv_append_op_chg_cause() able to fetch check info
from the server itself if they need to.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
f3b48a808e MINOR: server: srv_append_status refacto
srv_append_status() has become a swiss-knife function over time.
It is used from server code and also from checks code, with various
inputs and distinct code paths, making it very hard to guess the
actual behavior of the function (resulting string output).

To simplify the logic behind it, we're dividing it in multiple contextual
functions that take simple inputs and do explicit things, making them
more predictable and easier to maintain.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
9b1ccd7325 MINOR: server: change adm_st_chg_cause storage type
Even though it doesn't look like it at first glance, this is more like
a cleanup than an actual code improvement:

Given that srv->adm_st_chg_cause has been used to exclusively store
static strings ever since it was implemented, we make the choice to
store it as an enum instead of a fixed-size string within the server
struct.

This will allow saving some space in the server struct, and will make
it more easily exportable (ie: event handlers) because of the
reduced memory footprint during handling and the ability to later get
the corresponding human-readable message when it's explicitly needed.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
e9314fb7a7 MINOR: event_hdl: provide event->when for advanced handlers
For advanced async handlers only
(Registered using EVENT_HDL_ASYNC_TASK() macro):

event->when is provided as a struct timeval, fetched from the 'date'
haproxy global variable.

Thanks to 'when', related event consumers will be able to timestamp
events, even if they don't work in real-time or near real-time.
Indeed, unlike sync or normal async handlers, advanced async handlers
could purposely delay the consumption of pending events, which means
that the date wouldn't be accurate if computed directly from within
the handler.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
ebf58e991a MINOR: event_hdl: dynamically allocated event data members
Add the ability to provide a cleanup function for event data passed
via the publishing function.

One use case could be the need to provide valid pointers in the safe
section of the data struct.
The cleanup function will be automatically called with data (or a copy of
data) as argument when all handlers have consumed the event, which provides
an easy way to release some memory or decrement refcounts to resources that
were provided through the data struct.
data itself must not be freed by the cleanup function; it is handled
by the API.

This would allow passing large (allocated) data blocks through the data
struct while keeping data struct size under the EVENT_HDL_ASYNC_EVENT_DATA
size limit.

To do so, when publishing an event, where we would currently do:

        struct event_hdl_cb_data_new_family event_data;

        /* safe data, available from both sync and async contexts
	 * may not use pointers to short-living resources
	 */
        event_data.safe.my_custom_data = x;

        /* unsafe data, only available from sync contexts */
        event_data.unsafe.my_unsafe_data = y;

        /* once data is prepared, we can publish the event */
        event_hdl_publish(NULL,
                          EVENT_HDL_SUB_NEW_FAMILY_SUBTYPE_1,
                          EVENT_HDL_CB_DATA(&event_data));

We could do:

        struct event_hdl_cb_data_new_family event_data;

        /* safe data, available from both sync and async contexts
	 * may not use pointers to short-living resources,
	 * unless EVENT_HDL_CB_DATA_DM is used to ensure pointer
	 * consistency (ie: refcount)
	 */
        event_data.safe.my_custom_static_data = x;
	event_data.safe.my_custom_dynamic_data = malloc(1);

        /* unsafe data, only available from sync contexts */
        event_data.unsafe.my_unsafe_data = y;

        /* once data is prepared, we can publish the event */
        event_hdl_publish(NULL,
                          EVENT_HDL_SUB_NEW_FAMILY_SUBTYPE_1,
                          EVENT_HDL_CB_DATA_DM(&event_data, data_new_family_cleanup));

With data_new_family_cleanup func which would look like this:

      void data_new_family_cleanup(const void *data)
      {
              const struct event_hdl_cb_data_new_family *event_data = data;

              /* some data members require specific cleanup once the event
               * is consumed
               */
              free(event_data->safe.my_custom_dynamic_data);
              /* don't ever free data! it is not ours */
      }

Not sure if this feature will become relevant in the future, so I prefer not
to mention it in the doc for now.

But given that the implementation is trivial and does not put a burden
on the existing API, it's a good thing to have it there, just in case.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
147691fd83 CLEANUP: event_hdl: fix comment typo about _sync assertion
Fixing a comment relative to EVENT_HDL_ASSERT_SYNC macro where a
typo was made and the comment was lacking some context.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
363ef4daa7 CLEANUP: event_hdl: updating obsolete comment for EVENT_HDL_CB_DATA
EVENT_HDL_CB_DATA macro comments were not updated during the API
refactor, fixing that.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
8273bfc639 BUG/MINOR: event_hdl: don't waste 1 event subtype slot
The ESUB_INDEX(n) index macro is used exclusively with n > 0.
Fix it so that it starts numbering at 1 instead of 2.

This way, we don't waste a subtype slot in event_hdl_sub_type
struct, and we comply with the structure comments about max
supported event subtypes (currently set at 16).
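
The fix can be illustrated as follows (a sketch; the surrounding macro
machinery is omitted):

    /* with 1-based numbering, subtype n maps to bit (n - 1), so the
     * first subtype uses bit 0 and all 16 slots are usable
     */
    #define ESUB_INDEX(n) (1U << ((n) - 1))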

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
a63f4903c9 MINOR: server/event_hdl: prepare for upcoming refactors
This commit does nothing that ought to be mentioned, except that
it adds missing comments and slightly moves some function calls
out of "sensitive" code in preparation of some server code refactors.
2023-04-21 14:36:45 +02:00
Aurelien DARRAGON
d714213862 MINOR: server/event_hdl: add proxy_uuid to event_hdl_cb_data_server
Expose the proxy_uuid variable in the event_hdl_cb_data_server struct to
overcome the proxy_name fixed-length limitation.

proxy_uuid may be used by the handler to perform proxy lookups.
This should be preferred over lookups relying on proxy_name.
(proxy_name is suitable for printing / logging purposes but not for
ID lookups since it has a maximum fixed length)
2023-04-21 14:36:45 +02:00
Frédéric Lécaille
0ed94032b2 MINOR: quic: Do not allocate too many ack ranges
Limit the maximum number of ack ranges to QUIC_MAX_ACK_RANGES(32).

Must be backported to 2.6 and 2.7.
2023-04-19 11:36:54 +02:00
Frédéric Lécaille
4b2627beae BUG/MINOR: quic: Stop removing ACK ranges when building packets
Since this commit:

    BUG/MINOR: quic: Possible wrapped values used as ACK tree purging limit.

There are more chances that ack ranges may be removed from their trees when
building a packet. It is preferable to impose a limit on these trees. This
will be the subject of a next commit to come.

From now on, it is sufficient to stop deleting ack ranges from their trees.
Remove quic_ack_frm_reduce_sz() and quic_rm_last_ack_ranges() which were
there to do that.
Make qc_frm_len() support ACK frames and call it to ensure an ACK frame
may be added to a packet before building it.

Must be backported to 2.6 and 2.7.
2023-04-19 11:36:54 +02:00
Aurelien DARRAGON
2a9764baae CLEANUP: hlua: avoid confusion between internal timers and tick based timers
Not all hlua "time" variables use the same time logic.

hlua->wake_time relies on ticks since its meant to be used in conjunction
with task scheduling. Thus, it should be stored as a signed int and
manipulated using the tick api.
Adding a few comments about that to prevent mixups with hlua internal
timer api which doesn't rely on the ticks api.
2023-04-19 11:03:31 +02:00
Aurelien DARRAGON
da9503ca9a MEDIUM: hlua: reliable timeout detection
For non yieldable lua handlers (converters, fetches or yield
incompatible lua functions), current timeout detection relies on now_ms
thread local variable.

But within non-yieldable contexts, now_ms won't be updated unless we do
it ourselves (because we're momentarily stuck in the lua context, so we
won't re-enter the polling loop, which is responsible for clock updates).

To circumvent this, clock_update_date(0, 1) was manually performed right
before now_ms was read for the timeout checks.

But this fails to work consistently, because if no other concurrent
thread periodically runs clock_update_global_date(), which happens if
we're the only active thread (nbthread=1 or low traffic), our
clock_update_date() call won't reliably update our local now_ms variable.

Moreover, clock_update_date() is not the right tool for this anyway, as
it was initially meant to be used from the polling context.
Using it could have a negative impact on other threads relying on now_ms
to be stable (because clock_update_date() performs a global clock update
from time to time).

-> Introducing the hlua multipurpose timer, which is internally based on
now_cpu_time_fast() and provides per-thread consistent clock readings.

Thanks to this new hlua timer API, hlua timeout logic is less error-prone
and more robust.

This allows the timeout detection to work as expected for both yieldable
and non-yieldable lua handlers.
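
The gist of the timer can be sketched like this (simplified; the actual
hlua timer API differs):

    #include <stdint.h>

    uint64_t now_cpu_time_fast(void); /* companion commit, assumed in ns */

    struct hlua_timer {
            uint64_t start_ns;  /* per-thread cpu time at start */
            uint64_t budget_ns;
    };

    static void hlua_timer_start(struct hlua_timer *t, unsigned int budget_ms)
    {
            t->start_ns = now_cpu_time_fast();
            t->budget_ns = (uint64_t)budget_ms * 1000000ULL;
    }

    static int hlua_timer_expired(const struct hlua_timer *t)
    {
            return now_cpu_time_fast() - t->start_ns > t->budget_ns;
    }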

This patch depends on commit "MINOR: clock: add now_cpu_time_fast() function"

While this could theoretically be backported to all stable versions,
it is advisable to avoid backports unless we're confident enough,
since it could cause slight behavior changes (timing related) in
existing setups.
2023-04-19 11:03:31 +02:00
Aurelien DARRAGON
df188f145b MINOR: clock: add now_cpu_time_fast() function
Same as now_cpu_time(), but for fast queries (less accurate).
It relies on now_cpu_time(); now_mono_time_fast() is used
as a cache expiration hint to prevent now_cpu_time() from being
called too often, since it is known to be quite expensive.
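
The caching idea can be sketched as follows (illustrative):

    #include <stdint.h>

    uint64_t now_cpu_time(void);       /* precise but expensive */
    uint64_t now_mono_time_fast(void); /* cheap coarse source */

    /* re-read the expensive per-thread cpu clock only when the cheap
     * coarse monotonic clock shows that time has moved on
     */
    static uint64_t now_cpu_time_fast_sketch(void)
    {
            static __thread uint64_t cached_cpu_ns, last_mono_ns;
            uint64_t mono = now_mono_time_fast();

            if (mono != last_mono_ns) {
                    last_mono_ns = mono;
                    cached_cpu_ns = now_cpu_time();
            }
            return cached_cpu_ns;
    }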

Depends on commit "MINOR: clock: add now_mono_time_fast() function"
2023-04-19 11:03:31 +02:00
Aurelien DARRAGON
07cbd8e074 MINOR: clock: add now_mono_time_fast() function
Same as now_mono_time(), but for fast queries (less accurate).
It relies on a coarse clock source (also known as the fast clock source
on some systems).

It falls back to now_mono_time() if the coarse source is not supported on
the system.
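
The fallback logic can be sketched as follows (illustrative):

    #include <stdint.h>
    #include <time.h>

    static uint64_t now_mono_time_fast_sketch(void)
    {
            struct timespec ts;

    #ifdef CLOCK_MONOTONIC_COARSE
            if (clock_gettime(CLOCK_MONOTONIC_COARSE, &ts) == 0)
                    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    #endif
            /* coarse source unsupported: fall back to the precise one */
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }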
2023-04-19 11:03:31 +02:00
Amaury Denoyelle
0783a7b08e MINOR: listener: remove unneeded local accept flag
Remove the receiver RX_F_LOCAL_ACCEPT flag. This was used by QUIC
protocol before thread rebinding was supported by the quic_conn layer.

This should be backported up to 2.7 after the previous patch has also
been taken.
2023-04-18 17:09:34 +02:00
Amaury Denoyelle
739de3f119 MINOR: quic: properly finalize thread rebinding
When a quic_conn instance is rebound to a new thread, its tasks and
tasklet are destroyed and new ones created. Its socket is also migrated
to a new thread, which stops reception on it.

To properly reactivate a quic_conn after rebind, wake up its tasks and
tasklet if they were active before thread rebind. Also reactivate
reading on the socket FD. These operations are implemented on a new
function qc_finalize_affinity_rebind().

This should be backported up to 2.7 after a period of observation.
2023-04-18 17:09:02 +02:00
Amaury Denoyelle
25174d51ef MEDIUM: quic: implement thread affinity rebinding
Implement a new function qc_set_tid_affinity(). This function is
responsible for rebinding a quic_conn instance to a new thread.

This operation consists mostly of releasing existing tasks and tasklets
and allocating new instances on the new thread. If the quic_conn uses
its own socket, it is also migrated to the new thread. The migration
is finally completed by updating the CID TID to the new thread. After
this step, the connection is thus accessible to the new thread and
cannot be accessed anymore on the old one without risking a race condition.

To ensure rebinding is either done completely or not at all, tasks and
tasklets are pre-allocated before all operations. If this fails, an error
is returned and rebinding is not done.

To destroy the older tasklet, its context is set to NULL before wake up.
In I/O callbacks, a new function qc_process() is used to check context
and free the tasklet if NULL.

The thread rebinding can cause a race condition if the older thread's
quic_dghdlrs::dgrams list contains datagrams for the connection after
rebinding is done. To prevent this, quic_rx_pkt_retrieve_conn() always
checks if the packet CID is still associated to the current thread or
not. In the latter case, no connection is returned and the new thread is
returned instead, to allow redispatching the datagram to the new thread
in a thread-safe way.

This should be backported up to 2.7 after a period of observation.
2023-04-18 17:08:34 +02:00
Amaury Denoyelle
1304d19dee MINOR: quic: delay post handshake frames after accept
When QUIC handshake is completed on our side, some frames are prepared
to be sent :
* HANDSHAKE_DONE
* several NEW_CONNECTION_ID with CIDs allocated

This step was previously executed in quic_conn_io_cb() directly after
CRYPTO frames parsing. This patch delays it to be completed after
accept. Special care has been taken to ensure it is still functional
with 0-RTT activated.

For the moment, this patch should have no impact. However, when
quic_conn thread migration on accept is implemented, it will be
easier to remap only one CID to the new thread. New CIDs will be
allocated after migration on the new thread.

This should be backported up to 2.7 after a period of observation.
2023-04-18 17:08:28 +02:00
Amaury Denoyelle
a66e04338e MINOR: protocol: define new callback set_affinity
Define a new protocol callback set_affinity. This function is used
during listener_accept() to notify about a rebind on a new thread just
before pushing the connection on the selected thread queue. If the
callback fails, accept is done locally.

This change will be useful for protocols with state allocated before
accept is done. For the moment, only the QUIC protocol is concerned. This
will allow rebinding the quic_conn to a new thread depending on its
load.

This should be backported up to 2.7 after a period of observation.
2023-04-18 16:54:52 +02:00
Amaury Denoyelle
1e959ad522 MINOR: quic: remove TID encoding in CID
CIDs were moved from a per-thread list to a global list instance. The
TID encoding is thus not needed anymore.

This should be backported up to 2.7 after a period of observation.
2023-04-18 16:54:31 +02:00
Amaury Denoyelle
e83f937cc1 MEDIUM: quic: use a global CID trees list
Previously, quic_connection_id instances were stored in per-thread trees.
Datagrams were first dispatched to the correct thread using the encoded
TID before a tree lookup was done.

Remove these trees and replace them with a global list of 256 trees.
A CID uses the list index corresponding to its first byte.
On datagram dispatch, the CID is looked up in its tree and the TID is
retrieved using the new member quic_connection_id.tid. A read-write lock
protects each tree instance. With 256 entries, it is expected that
contention should be reduced.

A new structure quic_cid_tree serves as a tree container associated with
its read-write lock. An API is implemented to ensure lock safety for
insert/lookup/delete operations.
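
A minimal sketch of the container (assuming haproxy's ebtree and rwlock
primitives; details simplified):

    #define QUIC_CID_TREES 256

    struct quic_cid_tree {
            HA_RWLOCK_T lock;     /* per-tree read-write lock */
            struct eb_root root;  /* quic_connection_id nodes */
    };

    static struct quic_cid_tree quic_cid_trees[QUIC_CID_TREES];

    /* the tree is selected by the CID's first byte */
    static struct quic_cid_tree *quic_cid_tree_of(const unsigned char *cid)
    {
            return &quic_cid_trees[cid[0]];
    }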

This patch is a step forward to be able to break the affinity between a
CID and a TID encoded thread. This is required to be able to migrate a
quic_conn after accept to select thread based on their load.

This should be backported up to 2.7 after a period of observation.
2023-04-18 16:54:17 +02:00
Amaury Denoyelle
66947283ba MINOR: quic: remove TID ref from quic_conn
Remove <tid> member in quic_conn. This is moved to quic_connection_id
instance.

For the moment, this change has no impact. Indeed, qc.tid reference
could easily be replaced by tid as all of this work was already done on
the connection thread. However, it is planned to support quic_conn
thread migration in the future, so the removal of qc.tid will simplify this.

This should be backported up to 2.7.
2023-04-18 16:20:47 +02:00
Amaury Denoyelle
c2a9264f34 MINOR: quic: adjust quic CID derive API
ODCIDs are never stored in the CID tree. Instead, we store our generated
CID, which is directly derived from the ODCID using a hash function. This
operation is done via quic_derive_cid().

Previously, the generated CID was returned as a 64-bit integer. However,
this is cumbersome to convert to an array of bytes, which is the most
common CID representation. Adjust this by modifying the return type to a
quic_cid struct.

This should be backported up to 2.7.
2023-04-18 16:20:47 +02:00
Amaury Denoyelle
1a5cc19cec MINOR: quic: adjust Rx packet type parsing
qc_parse_hd_form() is the function used to parse the first byte of a
packet and return its type and version. Its API has been simplified with
the following changes :
* extra out parameters are removed (long_header and version). All infos
  are now stored directly in the quic_rx_packet instance
* a new dummy version is declared in the quic_versions array with a 0 number
  code. This can be used to match Version Negotiation packets.
* a new default packet type QUIC_PACKET_TYPE_UNKNOWN is defined, to be
  used as an initial value.

Also, the function has been exported to an include file. This will be
useful to reuse it in quic-sock to parse the first packet of a
datagram.

This should be backported up to 2.7.
2023-04-18 16:20:47 +02:00
Amaury Denoyelle
591e7981d9 CLEANUP: quic: rename quic_connection_id vars
Two different structs exists for QUIC connection ID :
* quic_connection_id which represents a full CID with its sequence
  number
* quic_cid which is just a buffer with a length. It is contained in the
  above structure.

To better differentiate them, rename all quic_connection_id variable
instances to "conn_id" by contrast to "cid" which is used for quic_cid.

This should be backported up to 2.7.
2023-04-18 16:20:47 +02:00
Amaury Denoyelle
90e5027e46 CLEANUP: quic: remove unused scid_node
Remove the unused scid_node member from the quic_conn structure. It was
prepared for QUIC backend support.

This should be backported up to 2.7.
2023-04-18 16:20:47 +02:00
Amaury Denoyelle
22a368ce58 CLEANUP: quic: remove unused QUIC_LOCK label
QUIC_LOCK label is never used. Indeed, lock usage is minimal on QUIC as
every connection is pinned to its own thread.

This should be backported up to 2.7.
2023-04-18 16:20:47 +02:00
Christopher Faulet
ca5309a9a3 MINOR: stconn: Add a flag to report EOS at the stream-connector level
SC_FL_EOS flag is added to report the end-of-stream at the SC level. It will
be used to distinguish the end of stream reported by the endpoint, via the
SE_FL_EOS flag, from the abort triggered by the stream, via the
SC_FL_ABRT_DONE flag.

In this patch, the flag is defined and is systematically tested everywhere
SC_FL_ABRT_DONE is tested. It should be safe because it is never set.
2023-04-17 17:41:28 +02:00
Christopher Faulet
a1d14a7c7f MINOR: stconn: Add a flag to ack endpoint errors at SC level
The flag SC_FL_ERROR is added to ack errors on the endpoint. When the
SE_FL_ERROR flag is detected on the SE descriptor, the corresponding flag is
set on the SC. The idea is to avoid, as far as possible, manipulating the SE
descriptor in upper layers, and to know when an error in the endpoint is
handled by the SC.

For now, this flag is only set and cleared but never tested.
2023-04-14 17:05:53 +02:00
Christopher Faulet
b2b1c3a6ea MINOR: channel/stconn: Replace sc_shutw() by sc_shutdown()
All references to a shutw are replaced by a shutdown. So sc_shutw() is
renamed sc_shutdown(). SC app ops functions are renamed accordingly.
2023-04-14 15:02:57 +02:00
Christopher Faulet
208c712b40 MINOR: stconn: Rename SC_FL_SHUTW in SC_FL_SHUT_DONE
Here again, it is just a flag renaming. In SC flags, there are no longer
shutdowns for writes, but just shutdowns.
2023-04-14 15:01:21 +02:00
Christopher Faulet
cfc11c0eae MINOR: channel/stconn: Replace sc_shutr() by sc_abort()
All references to a shutr are replaced by an abort. So sc_shutr() is renamed
sc_abort(). SC app ops functions are renamed accordingly.
2023-04-14 14:54:35 +02:00
Christopher Faulet
0c370eee6d MINOR: stconn: Rename SC_FL_SHUTR in SC_FL_ABRT_DONE
Here again, it is just a flag renaming. In SC flags, there are no longer
shutdowns for reads, but aborts. For now this flag is set when a read0 is
detected. It is of course not accurate. This will be changed later.
2023-04-14 14:51:22 +02:00
Christopher Faulet
df7cd710a8 MINOR: channel/stconn: Replace channel_shutw_now() by sc_schedule_shutdown()
After the flag renaming, it is now the turn of the channel function to be
renamed and moved into the SC scope. channel_shutw_now() is replaced by
sc_schedule_shutdown(). The request channel is replaced by the front SC and
the response by the back SC.
2023-04-14 14:49:45 +02:00
Christopher Faulet
e38534cbd0 MINOR: stconn: Rename SC_FL_SHUTW_NOW in SC_FL_SHUT_WANTED
Because shutdowns for reads are now considered as aborts, the shutdowns for
writes can now be considered as shutdowns. Here it is just a flag
renaming. SC_FL_SHUTW_NOW is renamed SC_FL_SHUT_WANTED.
2023-04-14 14:46:07 +02:00
Christopher Faulet
12762f09a5 MINOR: channel/stconn: Replace channel_shutr_now() by sc_schedule_abort()
After the flag renaming, it is now the turn of the channel function to be
renamed and moved into the SC scope. channel_shutr_now() is replaced by
sc_schedule_abort(). The request channel is replaced by the front SC and the
response by the back SC.
2023-04-14 14:08:49 +02:00
Christopher Faulet
573ead1e68 MINOR: stconn: Rename SC_FL_SHUTR_NOW in SC_FL_ABRT_WANTED
It is the first step in transforming shutdowns for reads into aborts for
the upper layer. This patch is quite simple, it is just a flag renaming.
2023-04-14 14:06:01 +02:00
Christopher Faulet
7eb837df4a MINOR: stream: Introduce stream_abort() to abort on both sides at the same time
The function stream_abort() should now be called when an abort is performed
on both channels at the same time.
2023-04-14 14:04:59 +02:00
Christopher Faulet
3db538ac2f MINOR: channel: Forward close to other side on abort
Most of calls to channel_abort() are associated to a call to
channel_auto_close(). Others are in areas where the auto close is the
default. So, it is now systematically enabled when an abort is performed on
a channel, as part of channel_abort() function.
2023-04-14 13:56:28 +02:00
Christopher Faulet
dbad8ec787 MINOR: stream: Uninline and export sess_set_term_flags() function
This function will be used to set termination flags on TCP streams from
outside of process_stream(). Thus, it must be uninlined and exported.
2023-04-14 12:13:09 +02:00
Frédéric Lécaille
fad0e6cf73 MINOR: quic: Add packet loss and maximum cc window to "show quic"
Add the number of packet losses and the maximum congestion control window
computed by the algorithms to "show quic".
Same thing for the traces of the existing congestion control algorithms.

Must be backported to 2.7 and 2.6.
2023-04-13 19:20:08 +02:00
Willy Tarreau
d30e82b9f0 MINOR: receiver: reserve special values for "shards"
Instead of artificially setting the shards count to MAX_THREADS when
"by-thread" is used, let's reserve special values for symbolic names
so that we can add more in the future. For now we use value -1 for
"by-thread", which requires turning the type to signed int, but it was
already used as such everywhere anyway.
2023-04-13 17:12:50 +02:00
Amaury Denoyelle
53fc98c3bc MINOR: fd: implement fd_migrate_on() to migrate on a non-local thread
fd_migrate_on() can be used to migrate an existing FD to any thread, even
one belonging to a different group from the current one and from the
caller's. All that is needed is to make sure the FD is still valid when
the operation is performed (which is the case when such operations happen).

This is potentially slightly expensive since it locks the tgid during the
delicate operation, but it is normally performed only from an owning
thread to offer the FD to another one (e.g. reassign a better thread upon
accept()).
2023-04-13 16:57:51 +02:00
Willy Tarreau
7b44c26e13 MINOR: fd: add a lock bit with the tgid
In order to permit to migrate FDs from one thread group to another,
we'll need to be able to set a TGID that is compatible with no other
thread group. Either we use a special value or we dedicate a special
bit. Given that we already have way more bits than needed, let's just
sacrifice the topmost one to serve as a lock bit, indicating the tgid
is not valid anymore. This will make all fd_grab_tgid() fail to grab
it.

The new fd_lock_tgid() function now tries to assign a locked tgid to
an idle FD, and fd_unlock_tgid() simply drops the lock bit, revealing
the target tgid.
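
A sketch of the principle (the word layout is an assumption):

    #include <stdint.h>

    #define FD_TGID_LOCK_BIT 0x8000u /* topmost bit of the tgid field */

    /* a locked tgid can never match a plain tgid value, so all
     * fd_grab_tgid() attempts fail while the migration is in progress
     */
    static inline void fd_lock_tgid_sketch(uint16_t *tgid, uint16_t desired)
    {
            __atomic_store_n(tgid, desired | FD_TGID_LOCK_BIT,
                             __ATOMIC_RELEASE);
    }

    static inline void fd_unlock_tgid_sketch(uint16_t *tgid)
    {
            __atomic_and_fetch(tgid, (uint16_t)~FD_TGID_LOCK_BIT,
                               __ATOMIC_RELEASE);
    }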

For now it's still unused so it must not have any effect.
2023-04-13 16:57:51 +02:00
Willy Tarreau
4d882bd800 MINOR: fd: optimize fd_claim_tgid() for use in fd_insert()
fd_claim_tgid() uses a CAS to set the desired TGID on the FD. It's only
called from fd_insert() where, in the vast majority of cases, the tgid
and refcount are zero before the call. However the loop was optimized
for the case where it was equal to the desired TGID, systematically
causing one extra round in the loop there. Better start assuming a
zero value.
2023-04-13 16:57:51 +02:00
Willy Tarreau
97da942ba6 MINOR: thread: keep a bitmask of enabled groups in thread_set
We're only checking for 0, 1, or >1 groups enabled there, and we'll soon
need to be more precise and know quickly which groups are non-empty.
Let's just replace the count with a mask of enabled groups. This will
allow to quickly spot the presence of any such group in a set.
2023-04-13 16:57:51 +02:00
William Lallemand
3f210970bf BUG/MINOR: stick_table: alert when type len has incorrect characters
Alert when the len argument of a stick table type contains incorrect
characters.

Replace atol by strtol.

Could be backported to all maintained versions.
2023-04-13 14:46:08 +02:00
Willy Tarreau
7f2b3f9431 BUILD: bug.h: add a warning in the base API when unsafe functions are used
Once in a while we introduce an sprintf() or strncat() function by
accident. These ones are particularly dangerous and must never ever
be used because the only way to use them safely is at least as
complicated, if not more, than their safe counterparts. By redefining
a few of these functions with an attribute_warning() we can deliver a
message to the developer who is tempted to use them. This commit does
it for strcat(), strcpy(), strncat(), sprintf(), vsprintf(). More could
come later if needed, such as strtok() and maybe a few others, but these
are less common.
2023-04-07 18:21:36 +02:00
Willy Tarreau
d499127148 MINOR: compiler: define a __attribute__warning() macro
__attribute__((deprecated)) is convenient to discourage the use of
something deprecated, but gcc >= 4.3 provides __attribute__((warning(x)))
which allows displaying a specific warning if something is used. This is
particularly convenient to give indications when some API parts need to
be adapted. Let's just define it as a macro that falls back to the older
deprecated attribute when not available.

It's supported on clang 14 as well but works differently and errors
out when redefined (while the main purpose precisely is to add such a
redefinition). Thus instead on clang we use deprecated(msg) which is
OK. See https://github.com/llvm/llvm-project/issues/56519
2023-04-07 18:14:28 +02:00
Willy Tarreau
988e19c607 BUILD: compiler: fix __equals_1() on older compilers
It appeared that __has_attribute() doesn't work on gcc 4.4 and older
because the concatenation of __has_attribute##x isn't resolved
before being passed to __equals_1(), which immediately concatenates it to
comma_for_one. We first need to pass it through an extra layer to resolve
this name to a value. The new version was tested with gcc 4.2 to 11.3.

This may be backported though it's pretty minor.
2023-04-07 18:14:28 +02:00
Olivier Houchard
ead43fe4f2 MEDIUM: compression: Make it so we can compress requests as well.
Add code so that compression can be used for requests as well.
New compression keywords are introduced :
"direction" specifies what we want to compress. Valid values are
"request", "response", or "both".
"type-req" and "type-res" define the content-types to be compressed for
requests and responses, respectively. "type" is kept as an alias for
"type-res" for backward compatibility.
"algo-req" specifies the compression algorithm to be used for requests.
Only one algorithm can be provided.
"algo-res" provides the list of algorithms that can be used to compress
responses. "algo" is kept as an alias for "algo-res" for backward
compatibility.
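
A hedged configuration sketch exercising the new keywords (the
content-types are chosen arbitrarily):

    frontend fe
        bind :8080
        compression direction both
        compression algo-req gzip
        compression algo-res gzip deflate
        compression type-req application/json
        compression type-res text/html text/plain
        default_backend be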
2023-04-07 00:49:17 +02:00
Olivier Houchard
dea25f51b6 MINOR: compression: Count separately request and response compression
Duplicate the compression counters, so that we have separate counters
for request and response compression.
2023-04-07 00:47:04 +02:00
Olivier Houchard
db573e9c58 MINOR: compression: Store algo and type for both request and response
Make provision for being able to store both compression algorithms and
content-types to compress for both requests and responses. For now only
the response ones are used.
2023-04-07 00:46:59 +02:00
Olivier Houchard
dfc11da561 MINOR: compression: Prepare compression code for request compression
Make provision for storing the compression algorithm and the compression
context twice: once for requests, and once for responses. Only the
response ones are used for now.
2023-04-07 00:46:55 +02:00
Olivier Houchard
3ce0f01b81 MINOR: compression: Make compression offload a flag
Turn compression offload into a flag in struct comp, instead of using
an int just for it.
2023-04-07 00:46:45 +02:00
Christopher Faulet
6bb26d41fe BUG/MINOR: http-ana: Use an unsigned integer for http_msg flags
In the commit 2954bcc1e (BUG/MINOR: http-ana: Don't switch message to DATA
when waiting for payload), the HTTP message flags were extended and don't
fit anymore in an unsigned char. So, we must use an unsigned integer now. It
is not a big deal because there was already a 6-byte hole in the structure,
just after the flags. Now, there is a 3-byte hole before.

This patch should fix the issue #2105. It is 2.8-specific, no backport
needed.
2023-04-06 08:58:45 +02:00
Amaury Denoyelle
15adc4cc4e MINOR: quic: remove address concatenation to ODCID
Previously, ODCIDs were concatenated with the client address. This was
done to prevent a collision between two endpoints which used the same
ODCID.

Thanks to the two previous patches, the first connection generated CID is
now directly derived from the client ODCID using a hash function which
uses the client source address for the same purpose. Thus, it is now
unneeded to concatenate the client address to the <odcid> quic-conn member.

This change allows simplifying the quic_cid structure management and
reducing its size, which is important as it is embedded several times in
various structures such as quic_conn and quic_rx_packet.

This should be backported up to 2.7.
2023-04-05 11:09:57 +02:00
Amaury Denoyelle
2c98209c1c MINOR: quic: remove ODCID dedicated tree
First connection CID generation has been altered. It is now directly
derived from the client ODCID since the previous commit :
  commit 162baaff7a
  MINOR: quic: derive first DCID from client ODCID

This patch removes the ODCID tree, which is now unneeded. On connection
lookup via CID, if a DCID is not found, the hash derivation is performed
for an INITIAL/0-RTT packet only. In case a client has used an ODCID
multiple times, this will allow retrieving our generated DCID in the
CID tree without storing the ODCID node.

The impact of these two combined patches is that they may slightly improve
haproxy's memory footprint by removing a tree node from the quic_conn
structure. The cpu calculation induced by hash derivation should only be
performed a few times per connection, as the client will start to
use our generated CID as soon as it receives it.

This should be backported up to 2.7.
2023-04-05 11:07:01 +02:00
Christopher Faulet
ffcffa8e93 MINOR: http-ana: Add a HTTP_MSGF flag to state the Expect header was checked
HTTP_MSGF_EXPECT_CHECKED is now set on the request message to know whether
the "Expect: " header, if any, was already handled. The flag is set from the
moment we try to handle the header to send a "100-continue" response,
whether it was found or not.

This way, when we are waiting for the request payload, thanks to this flag,
we only try to handle the "Expect: " header once. Before, it was performed
by changing the message state from BODY to DATA. But this has some side
effects and it is not accurate. So, it is better to rely on a flag to do so.
2023-04-05 10:33:32 +02:00
Aurelien DARRAGON
c84899c636 MEDIUM: hlua/event_hdl: initial support for event handlers
Now that the event handler API is pretty mature, we can expose it in
the lua API.

Introducing the core.event_sub(<event_types>, <cb>) lua function that
takes an array of event types <event_types> as well as a callback
function <cb> as argument.

The function returns a subscription <sub> on success.
Subscription <sub> allows you to manage the subscription from anywhere
in the script.
To this day only the sub->unsub method is implemented.

The following event types are currently supported:
  - "SERVER_ADD": when a server is added
  - "SERVER_DEL": when a server is removed from haproxy
  - "SERVER_DOWN": server states goes from up to down
  - "SERVER_UP": server states goes from down to up

As for the <cb> function: it will be called when one of the registered
event types occur. The function will be called with 3 arguments:
  cb(<event>,<data>,<sub>)

<event>: event type (string) that triggered the function.
(could be any of the types used in <event_types> when registering
the subscription)

<data>: data associated with the event (specific to each event family).

For "SERVER_" family events, server details such as server name/id/proxy
will be provided.
If the server still exists (not yet deleted), a reference to the live
server is provided to spare you from an additional lookup if you need
to have direct access to the server from lua.

<sub> refers to the subscription. In case you need to manage it from
within an event handler.
(It refers to the same subscription that the one returned from
core.event_sub())
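
A minimal lua usage sketch based on the description above (the exact
fields available in <data> are assumptions for illustration):

    core.event_sub({"SERVER_UP", "SERVER_DOWN"}, function(event, data, sub)
        -- log the state change; server details are provided in <data>
        core.Info("got " .. event .. " for server " .. data.name)
        -- the subscription can be managed from here if needed:
        -- sub:unsub()
    end)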

Subscriptions are per-thread: the thread that will be handling the
event is the one that performed the subscription using the
core.event_sub() function.

Each thread treats events sequentially, meaning that if you have,
let's say, SERVER_UP then SERVER_DOWN in a short timelapse, your
cb function will first be called with SERVER_UP, and once you're done
handling the event, your function will be called again with SERVER_DOWN.

This is to ensure event consistency when it comes to logging / triggering
logic from lua.

Your lua cb function may yield if needed, but you're advised to process
the event as fast as possible to prevent the event queue from growing.

To prevent abuses, if the event queue for the current subscription goes
over 100 unconsumed events, the subscription will pause itself
automatically for as long as it takes for your handler to catch up.
This would lead to events being missed, so a warning will be emitted in
the logs to inform you about that. This is not something you want to let
happen too often; it may indicate that you subscribed to an event that
is occurring too frequently and/or that your callback function is too
slow to keep up the pace, and you should review it.

If you want to do some parallel processing because your callback
functions are slow: you might want to create subtasks from lua using
core.register_task() from within your callback function to perform the
heavy job in a dedicated task and allow remaining events to be processed
more quickly.

Please check the lua documentation for more information.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
4e5e26641d MINOR: proxy: add findserver_unique_id() and findserver_unique_name()
Adding alternative findserver() functions to be able to perform a
unique match based on name or puid, leveraging the revision id (rid)
to make sure the function won't match a new server reusing the
same name or puid as the "potentially deleted" server we were initially
looking for.

For example, if you were in the position of finding a server based on
a given name provided to you by a different context:

Since dynamic servers were implemented, between the time the name was
picked and the time you perform the findserver() call, some dynamic
server deletions/additions could have been performed in the meantime.

In such cases, findserver() could return a new server that re-uses the
name of a previously deleted server. Depending on your needs, it could
be perfectly fine, but there are some cases where you want to lookup
the original server that was provided to you (if it still exists).
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
f751a97a11 MINOR: event_hdl: pause/resume for subscriptions
While working on event handling from lua, the need for a pause/resume
function to temporarily disable a subscription was raised.

We solve this by introducing the EHDL_SUB_F_PAUSED flag for
subscriptions.

The flag is set via _pause() and cleared via _resume(), and it is
checked prior to notifying the subscription in publish function.

Pause and resume functions are also available via lookups for
identified subscriptions.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
b4b7320a6a MINOR: event_hdl: add event_hdl_async_equeue_size() function
Use event_hdl_async_equeue_size() in advanced async task handler to
get the near real-time event queue size.

By near real-time, you should understand that the queue size is not
updated during element insertion/removal, but shortly before insertion
and shortly after removal, so the size should reflect the approximate
queue size at a given time but should definitely not be used as a
unique source of truth.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
9e98a27d6a MINOR: event_hdl: add event_hdl_async_equeue_isempty() function
Add event_hdl_async_equeue_isempty() to check if the event queue is
empty from an advanced async task handler.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
b289fd1420 MINOR: event_hdl: normal tasks support for advanced async mode
Advanced async mode (EVENT_HDL_ASYNC_TASK) provided full support for
custom tasklet registration.

Due to the similarities between tasks and tasklets, it may be useful
to use the advanced mode with an existing task (not a tasklet).
While the API did not explicitly disallow this usage, things would
get bad if we tried to wake up a task using tasklet_wakeup() to notify
the task about new events.

To make the API support both custom tasks and tasklets, we use the
TASK_IS_TASKLET() macro to call the proper waking function depending
on the task's type:

  - For tasklets: we use tasklet_wakeup()
  - For tasks: we use task_wakeup()

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00
Aurelien DARRAGON
ef6ca67176 BUG/MEDIUM: event_hdl: clean soft-stop handling
soft-stop was not explicitly handled in the event_hdl API.

Because of this, event_hdl was causing some leaks on deinit paths.
Moreover, a task responsible for handling events could require some
additional cleanups (ie: advanced async task), and as the task was not
protected against abort when soft-stopping, such cleanup could not be
performed unless the task itself implements the required protections,
which is not optimal.

Consider this new approach:
 'jobs' global variable is incremented whenever an async subscription is
 created to prevent the related task from being aborted before the task
 acknowledges the final END event.

 Once the END event is acknowledged and freed by the task, the 'jobs'
 variable is decremented, and the deinit process may continue (including
 the abortion of remaining tasks not guarded by the 'jobs' variable).

To do this, a new global mt_list is required: known_event_hdl_sub_list
This list tracks the known (initialized) subscription lists within the
process.

sub_lists are automatically added to the "known" list when calling
event_hdl_sub_list_init(), and are removed from the list with
event_hdl_sub_list_destroy().

This allows us to implement a global thread-safe event_hdl deinit()
function that is automatically called on soft-stop thanks to signal(0).
When event_hdl deinit() is initiated, we simply iterate against the known
subscription lists to destroy them.

event_hdl_subscribe_ptr() was slightly modified to make sure that a sub_list
may not accept new subscriptions once it is destroyed (removed from the
known list).
This can occur between the time the soft-stop is initiated (signal(0)) and
haproxy actually enters in the deinit() function (once tasks are either
finished or aborted and other threads already joined).

It is safe to destroy() the subscription list multiple times as long
as the pointer is still valid (ie: first on soft-stop when handling
the '0' signal, then from regular deinit() path): the function does
nothing if the subscription list is already removed.

We partially reverted "BUG/MINOR: event_hdl: make event_hdl_subscribe thread-safe"
since we can use parent mt_list locking instead of a dedicated lock to make
the check against duplicate subscription IDs.
(insert_lock is not useful anymore)

The check in itself is not changed, only the locking method.

sizeof(event_hdl_sub_list) slightly increases: from 24 bytes to 32 bytes,
due to the additional mt_list struct within it.

With that said, having a thread-safe list to store known subscription lists
is a good thing: it could help to implement additional management
logic for subscription lists and could be useful to add some stats or
debugging tools in the future.

If 68e692da0 ("MINOR: event_hdl: add event handler base api")
is being backported, then this commit should be backported with it.
2023-04-05 08:58:17 +02:00