The Lua mailers script now leverages the Queue class to implement a mailqueue.
Thanks to the mailqueue, emails are still sent asynchronously, but with
respect to their generation order (better ordering consistency).
That is, the previous script limitation (quoted below) is no longer true:
"
Current known script limitation: if multiple events are generated
simultaneously it is possible that emails could be received out of
order since emails are sent asynchronously using smtp_send_email()
and there is no sending queue. Relying on the email "date" should
help to know which email was generated first.
"
For a given server, email alerts will be sent in the same order as the
events were generated. However, this does not apply to events from two
distinct servers, since each server is tracked independently within
the lua script (one subscription per server, to make sure only relevant
server/event combinations are tracked and to prevent useless wakeups).
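For illustration, here is a minimal sketch of that per-server pattern (one
subscription and one queue per server); method and field names such as
srv:event_sub(), queue:push()/pop_wait() and data.name follow the Lua API
described in these patches, but the snippet is a simplified assumption, not
the actual mailers.lua code:

  local function track_server(srv)
    -- one dedicated queue per server keeps this server's alerts ordered
    local mailqueue = core.queue()
    -- consumer task: dequeues and sends alerts in generation order
    core.register_task(function()
      while true do
        local msg = mailqueue:pop_wait() -- yields until an item is available
        -- smtp_send_email(...) would be called here with 'msg'
      end
    end)
    -- producer: one subscription per server, only relevant events are tracked
    srv:event_sub({"SERVER_STATE", "SERVER_ADMIN", "SERVER_CHECK"},
      function(event, data)
        mailqueue:push(string.format("%s: %s", data.name, event))
      end)
  end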
Adding a new lua class: Queue.
This class provides a generic FIFO storage mechanism that may be shared
between multiple lua contexts to easily pass data between them, as stock
Lua doesn't provide easy methods for passing data between multiple
coroutines.
A new Queue object may be obtained using core.queue()
(it works like core.concat() for the Concat class).
Lua documentation was updated (including some usage examples)
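As a rough usage sketch (assuming the Queue object exposes push(), pop() and
a yielding pop_wait(), as described in the updated Lua documentation), a
queue shared between two tasks could look like this:

  local q = core.queue()
  -- producer task: pushes one item per second
  core.register_task(function()
    local i = 0
    while true do
      i = i + 1
      q:push("item #" .. i)
      core.msleep(1000)
    end
  end)
  -- consumer task: pop_wait() yields until an item is available, so items
  -- are dequeued in the exact order they were pushed
  core.register_task(function()
    while true do
      core.Info("dequeued " .. q:pop_wait())
    end
  end)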
Since mailers/healthcheckmail.vtc already requires lua to emulate the
SMTP server for the test, force it to use the lua mailers example script
to send email alerts, so that we no longer rely on the legacy tcpcheck
mailers implementation.
This is done by simply loading examples/mailers.lua (as a symlink) from the
haproxy config file.
Remaining in drain mode after removing one of the server's admin flags leads
to this message being generated:
"Server name/backend is leaving forced drain but remains in drain mode."
However this is not necessarily true: the server might just be leaving
MAINT with the IDRAIN flag set, so the report is incorrect in this case
(FDRAIN was not set, so it cannot be cleared).
To prevent confusion around this message and to comply with the code
comment above it, we remove the "leaving forced drain" wording to
make the report suitable for multiple transitions.
Since 3157222 ("MEDIUM: hlua/applet: Use the sedesc to report and detect
end of processing"), hlua_socket_handler() might spin loop if the hlua
socket is destroyed and some data was left unconsumed in the applet.
Prior to the above commit, the stream was explicitly KILLED
(when ctx->die == 1), so the applet couldn't spin-loop on unconsumed data.
But since the refactor this is no longer the case.
To prevent unconsumed data from waking the applet indefinitely, we now consume
pending data when any of the EOS|ERROR|SHR|SHW flags is set, as is done
everywhere else this check is performed in the code; it was probably
simply overlooked during the refactoring.
This bug is specific to 2.8, so no backport is needed.
Legacy mailers implemented using tcpchecks may now be replaced with a
pure lua implementation.
Simply loading the file "mailers.lua" using the lua-load directive is enough
to disable the legacy mailer implementation and make the lua script
subscribe to server events in order to generate messages and send email
alerts to the configured mailservers, according to the mailers configuration
specified by the user in the config file.
The lua-load-per-thread directive is also supported: the script will
automatically restrict itself to a single thread to prevent multiple emails
from being sent for the same event.
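For example, a script loaded with lua-load-per-thread might guard itself as
follows (illustrative sketch; it assumes core.thread exposes the current
thread number, as in the recent Lua API):

  -- when loaded via lua-load-per-thread, stay dormant on all threads but the
  -- first one, so a given server event does not produce one email per thread
  if core.thread > 1 then
    return
  end
  -- thread 1 only: register queues, subscriptions and sending tasks below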
The email sending itself is handled in lua by the smtp_send_email()
function from Thierry FOURNIER.
(see: https://www.arpalert.org/how-to-send-email.html)
The function was slightly adapted to send HELO instead of EHLO when
beginning the SMTP handshake to make it more compatible with basic SMTP
stacks and to comply with the legacy SMTP handshake performed in
mailers.c.
Authentication is not yet handled by smtp_send_email(), but it may be
further improved to do so.
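For reference, the kind of exchange such a function performs could be
sketched as follows with haproxy's Lua Socket class (core.tcp()); the
function name matches the script but the argument layout is an assumption,
and SMTP reply codes are not checked in this simplified version:

  local function smtp_send_email(server, port, domain, from, to, data)
    local smtp = core.tcp()
    if not smtp:connect(server, port) then
      return false, "connect failed"
    end
    -- read the server greeting first
    if not smtp:receive("*l") then return false, "no greeting" end
    -- send HELO (not EHLO) plus the envelope, reading one reply per command
    local cmds = { "HELO " .. domain,
                   "MAIL FROM:<" .. from .. ">",
                   "RCPT TO:<" .. to .. ">",
                   "DATA" }
    for _, cmd in ipairs(cmds) do
      smtp:send(cmd .. "\r\n")
      if not smtp:receive("*l") then return false, "no reply" end
    end
    smtp:send(data .. "\r\n.\r\nQUIT\r\n")
    smtp:close()
    return true
  end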
Note that existing lua libraries may also support sending emails (perhaps
even with authentication support); I did not do much research on this, so
I'm not aware of existing solutions at the time of writing this script.
The only restriction when using an external library is that its function
calls must not be blocking, since haproxy relies on lua executions being
yieldable so they can be rescheduled.
As long as the script complies with this limitation, it may be customized
or improved in many ways, including templating or making calls to API
services (e.g. triggering automation flows, sending SMS alerts...
you name it).
The purpose of this script is to provide a basic working replacement for
the legacy mailers implementation based on tcpchecks, which is planned for
removal in the future. The tcpcheck-based mailers are kind of a hack, hard
to maintain and not customizable, and thus not suitable as a long-term
solution.
Note: the content of email alerts sent using this script might slightly
differ from the legacy implementation (some optional info might be missing,
such as the server's dynamic check description, and included statistics such
as the number of currently active servers may appear out of sync) due to the
email generation now being performed asynchronously. However the output
format complies with the original one and essential information is
consistently reported.
Current known script limitation: if multiple events are generated
simultaneously it is possible that emails could be received out of
order since emails are sent asynchronously using smtp_send_email()
and there is no sending queue. Relying on the email "date" should
help to know which email was generated first.
Proxy mailers, which are configured using "email-alert" directives
in proxy sections of the configuration, are now exposed
directly in lua thanks to the proxy:get_mailers() method, which
returns an object describing the various mailers settings if email
alerts are configured for the given proxy (otherwise nil is returned).
Both the class and the proxy method were marked as LEGACY since this
feature relies on the mailers configuration, and the goal here is to provide
the last missing bits of information to lua scripts in order to make
them capable of sending email alerts instead of relying on the soon-to-be
deprecated mailers implementation based on checks (see src/mailers.c).
Lua documentation was updated accordingly.
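A hedged usage sketch (field names such as mailservers, smtp_from and
smtp_to are assumptions taken from the updated documentation):

  core.register_task(function()
    for name, px in pairs(core.proxies) do
      local mailers = px:get_mailers()
      if mailers ~= nil then
        core.Info(string.format("proxy %s: alerts from %s to %s", name,
                                tostring(mailers.smtp_from),
                                tostring(mailers.smtp_to)))
        for srvname, addr in pairs(mailers.mailservers) do
          core.Info("  mailserver " .. srvname .. " -> " .. addr)
        end
      end
    end
  end)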
Exposing a new hlua function, available from body or init contexts, that
forcefully disables the sending of email alerts even if mailers are
defined in the haproxy configuration.
This will help when sending emails directly from lua
(it prevents the legacy email sending from interfering with lua).
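A minimal sketch of how a script could use it from body context (the exact
function name is not spelled out above; core.disable_legacy_mailers() is
assumed here for illustration):

  -- let the lua script take over email alerts: the legacy (tcpcheck-based)
  -- mailers stop sending even though "email-alert" directives remain set
  core.disable_legacy_mailers()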
Exposing SERVER_CHECK event through the lua API.
New lua class named ServerEventCheck was added to provide additional
data for SERVER_CHECK event.
Lua documentation was updated accordingly.
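A hedged sketch of how a script might consume it (core.event_sub() and the
data layout, data.name and data.check, follow the updated documentation but
should be treated as assumptions here):

  core.event_sub({"SERVER_CHECK"}, function(event, data)
    -- data.check carries the ServerEventCheck payload (check result, health)
    core.Info("SERVER_CHECK for server " .. data.name)
  end)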
Adding a new event type: SERVER_CHECK.
This event is published when a server's check state ought to be reported.
(check status change or check result)
SERVER_CHECK event is provided as a server event with additional data
carrying relevant check's context such as check's result and health.
Adding a new SERVER event in the event_hdl API.
SERVER_ADMIN is implemented as an advanced server event.
It is published each time the administrative state changes.
(when s->cur_admin changes)
SERVER_ADMIN data is an event_hdl_cb_data_server_admin struct that
provides additional info related to the admin state change, but it can
be cast to a regular event_hdl_cb_data_server struct if the additional
info is not needed.
Reuse cb_data from STATE event to publish UP and DOWN events.
This saves some CPU time since the event is only constructed
once to publish STATE, STATE+UP or STATE+DOWN depending on the
state change.
Adding a new SERVER event in the event_hdl API.
SERVER_STATE is implemented as an advanced server event.
It is published each time the server's effective state changes.
(when s->cur_state changes)
SERVER_STATE data is an event_hdl_cb_data_server_state struct that
provides additional info related to the server state change, but it can
be cast to a regular event_hdl_cb_data_server struct if the additional
info is not needed.
Add a macro helper to publish server events to the global and the
per-server subscription lists at once, since all server events
support both subscription modes.
Server.get_pend_conn: number of pending connections to the server
Server.get_cur_sess: number of current sessions handled by the server
Lua documentation was updated accordingly.
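For instance (illustrative sketch; only the two method names come from this
patch, the surrounding task and loop are assumptions):

  core.register_task(function()
    while true do
      for pxname, px in pairs(core.proxies) do
        for svname, srv in pairs(px.servers) do
          local pend = srv:get_pend_conn()
          local sess = srv:get_cur_sess()
          if pend > 0 then
            core.Info(string.format("%s/%s: %d pending conns, %d sessions",
                                    pxname, svname, pend, sess))
          end
        end
      end
      core.sleep(5)
    end
  end)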
After a sending attempt, we check the opposite SC to see if it is waiting
for a minimum amount of free space to receive more data. If the condition is
met, it is unblocked. 0 is a special case where the SC is
unconditionally unblocked.
During fast-forward, if the SC is waiting for a minimum amount of free space
to receive more data and some data was sent, it is only unblocked if the
condition is met. 0 is a special case where the SC is unconditionally
unblocked.
If the opposite SC is waiting for a minimum amount of free space to receive
more data, it is only unblocked if the condition is met. 0 is a special case
where the opposite SC is always unblocked.
At the end of process_stream(), in sc_update_rx(), the SC is now unblocked
if it was waiting for room and the free space in the input buffer is large
enough. This patch should fix an issue with the compression filter that can
leave the channel's buffer empty while the endpoint is waiting for room to
progress. Indeed, in this case, because the buffer is empty, there is no
send attempt and no other way to unblock the SE.
This commit depends on following commits:
* MEDIUM: tree-wide: Change sc API to specify required free space to progress
* MINOR: stconn: Add a field to specify the room needed by the SC to progress
* MINOR: peers: Use the applet API to send message
* MINOR: stats: Use the applet API to write data
* MINOR: cli: Use applet API to write output message
It should fix a regression introduced with the commit 341a5783b
("BUG/MEDIUM: stconn: stop to enable/disable reads from streams via
si_update_rx").
It must be backported if and only if the commit above is also backported. It
has not been backported yet, so it is probably a good idea not to do so, to
avoid backporting too many changes.
sc_need_room() now takes the required free space to receive more data as
parameter. All calls to this function are updated accordingly. For now, this
value is set but not used. When we are waiting for a buffer, 0 is used. So
we expect to be unblocked ASAP. However this must be reviewed because
SC_FL_NEED_BUF is probably enough in this case and this flag is already set
if the input buffer allocation fails.
When the SC is blocked because it is waiting for room in the input buffer,
it will be responsible to specify the minimum free space required to
progress. In this commit, we only introduce the field in the stconn
structure that will be used to store this value. It is a signed value with
the following meaning:
* -1: The SC is waiting for room but not based on the buffer state. It
will be typically used during splicing when the pipe is full. In
this case, only a successful send can unblock the SC.
* >= 0: The minimum free space in the input buffer to unblock the SC. 0 is
a special value to specify the SC must be unblocked ASAP, by the
stream, at the end of process_stream() or when output data are
consumed on the opposite side.
The peers applet now uses the applet API to send messages instead of the
channel API. This way, it does not need to take care of requesting more room
if it fails to put data into the channel's buffer.
stats_putchk() is updated to use the applet API instead of the channel API
to write data. To do so, the appctx is passed as parameter instead of the
channel. This way, the applet does not need to take care of requesting more
room if it fails to put data into the channel's buffer.
Instead of using the channel API to write output messages from the CLI
applet, we use the applet API. This way, the applet does not need to take
care of requesting more room if it fails to put its message into the
channel's buffer.
This commit introduces the keyword "client-sigalgs" for the bind line,
which does the same as "sigalgs" but for the client authentication.
"ssl-default-bind-client-sigalgs" allows to set the default parameter
for all the bind lines.
This patch should fix issue #2081.
This patch introduces the "sigalgs" keyword for the bind line, which
allows configuring the list of server signature algorithms negotiated
during the handshake. It is also available as "ssl-default-bind-sigalgs" in
the default section.
This patch was originally written by Bruno Henc.
Previously it would re-dump all threads to the same trash if the output
buffer was full, which never happened since the trash is of the same size.
Now it dumps one thread, copies it to the buffer and yields until it can
continue. Showing 256 threads works as expected.
Currently large setups cannot dump all their threads because they're
first dumped to the trash buffer, then copied to stderr. Here we can
now change this: instead, we dump one thread at a time into the trash
and immediately send it to stderr. We also keep a copy into a local
trash chunk that's assigned to thread_dump_buffer so that a core file
still contains a copy of a large number of threads, which is generally
sufficient for the vast majority of situations.
It was verified that dumping 256 threads now produces ~55kB of output
and all of them are properly dumped.
The thread dump mechanism that is used by "show threads" and by the
panic dump is overly complicated due to an initial misdesign. It
first wakes all threads, then serializes their dumps, then releases
them, while taking extreme care not to face colliding dumps. In fact
this is not what we need and it reached a limit where big machines
cannot dump all their threads anymore due to buffer size limitations.
What is needed instead is to be able to dump *one* thread, and to let
the requester iterate on all threads.
That's what this patch does. It adds the thread_dump_buffer to the
struct thread_ctx so that the requester offers the buffer to the
thread that is about to be dumped. This buffer also serves as a lock.
A thread at rest has a NULL there, a valid pointer indicates the thread is
using it, and 0x1 (NULL+1) is used by the dumped thread to tell the
requester it's done. This makes sure that a given thread is dumped for
only one requester at a time. In addition to this, the calling thread decides
whether it accesses the thread by itself or via the debug signal
handler, in order to get a backtrace. This is much saner because the
calling thread is free to do whatever it wants with the buffer after
each thread is dumped, and there is no dependency between threads,
once they've dumped, they're free to continue (and possibly to dump
for another requester if needed). Finally, when the THREAD_DUMP
feature is disabled and the debug signal is not used, the requester
accesses the thread by itself like before.
For now we still have the buffer size limitation but it will be
addressed in future patches.
NS_TO_TV helper was implemented in 591fa59 ("MINOR: time: add conversions
to/from nanosecond timestamps")
Due to NS_TO_TV being implemented as a macro and not a function, we must
take extra care when manipulating user input.
In the current implementation, the 't' argument is not isolated
(parenthesized) within the macro.
Because of this, NS_TO_TV(1 + 1) will expand to:
((const struct timeval){ .tv_sec = 1 + 1 / 1000000000ULL, .tv_usec = (1 + 1 % 1000000000ULL) / 1000U })
Instead of:
((const struct timeval){ .tv_sec = 2 / 1000000000ULL, .tv_usec = (2 % 1000000000ULL) / 1000U })
As such, the NS_TO_TV usage in hlua_now() is currently incorrect and
results in unexpected values being passed to lua.
In this patch, we add an extra pair of parentheses around 't' in the
NS_TO_TV() macro to make it safe against such usages (that is, to ensure
proper argument expansion as if NS_TO_TV were implemented as a function).
This is a 2.8 specific bug, no backport needed.
When a client H2 stream is waiting for a tunnel establishment, it must state
that it expects data from the server. This is the second fix needed to address
regressions from commit 2722c04b ("MEDIUM: mux-h2: Don't expect data from
server as long as request is unfinished").
It is a 2.8-specific bug. No backport needed.
In 2.3, commit 471425f51 ("BUG/MINOR: debug: Don't dump the lua stack
if it is not initialized") introduced the possibility to emit an empty
line when there's no Lua info to dump. The problem is that doing this
on the CLI in "show threads" marks the end of the output, and it may
affect some external tools. We need to make sure that LFs are only
emitted if there's something on the line and that all lines properly
start with the prefix.
This may be backported as far as 2.0 since the commit above was
backported there.
With the change to the QUIC MUX local error API, the new flag QC_CF_ERRL is
now checked on qc_detach(). If set, the qcs instance is freed even though
the transfer is not finished. This should help to quickly release qcs
instances and eventually all MUX instance resources.
To further accelerate this, a specific check has been added in
qc_shutw(): it is skipped if the local error flag is set, to prevent noisy
reset stream invocations. In the same way, the QUIC MUX is not rescheduled on
qc_recv_buf() operations if the local error flag is set.
This should be backported up to 2.7.
If an error is detected at the MUX layer, all remaining stream endpoints
should be closed ASAP with an error set. This is now done by checking for
the QC_CF_ERRL flag in qc_wake_some_streams() and qc_send_buf(). To complete
this, qc_wake_some_streams() is called by qc_process() if needed.
This should help to quickly release streams as soon as a new error is
detected locally by the MUX or APP layer, which in turn allows the MUX
instance itself to be freed. Previously, the error would not have been
reported automatically until the transport-layer closure occurred on
CONNECTION_CLOSE emission.
This should be backported up to 2.7.
When a fatal error is detected by the QUIC MUX or H3 layer, the
connection should be closed with a CONNECTION_CLOSE with an error code
as the reason.
Previously, a direct call was made to the quic_conn layer to try to
close the connection. This API was adjusted to be more flexible. Now,
when an error is detected, the function qcc_set_error() is called. It
sets the flag QC_CF_ERRL with the error code stored by the MUX. The
connection will be closed soon, so most operations are not conducted
anymore. The connection is then finally closed during qc_send() via the
quic_conn layer if QC_CF_ERRL is set. This sets the flag QC_CF_ERRL_DONE,
which indicates that the MUX instance can be freed.
This model is cleaner and brings the following improvements:
- interaction with quic_conn layer for closure is centralized on a
single function
- CO_FL_ERROR is not set anymore. This was incorrect, as it should be
  reserved for errors reported by the transport layer, to stay consistent
  with other haproxy components. As a consequence, qcc_is_dead() has been
  adjusted to check for QC_CF_ERRL_DONE before releasing the MUX instance.
This should be backported up to 2.7.