During zero-copy data forwarding, the caller specifies the maximum amount
of data the producer may push. However, the HTML stats applet does not use
it and can fill all the free space in the buffer. This is especially an issue
when the consumer is limited by flow control, as with H2, because we may
emit too large a DATA frame in this case. It is especially visible with big
buffers (for instance 32kB).
In the early days of zero-copy data forwarding, the caller was responsible
for passing a properly resized buffer. This changed across the various
refactoring steps, but the HTML stats applet was not updated accordingly.
To fix the bug, the buffer used to dump the HTML page is resized to be sure
that not too much data is dumped.
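A minimal sketch of the idea, assuming the caller's budget is available in a
<max> variable (illustrative only; the real code must also account for the
buffer's head offset):

    /* cap the apparent buffer size so that the HTML dump cannot append
     * more than <max> extra bytes
     */
    buf->size = MIN(buf->size, b_data(buf) + max);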
This patch should solve the issue #2757. It must be backported to 3.0.
Issuing "debug dev counters" on the CLI will now scan all existing
counters, and report their count, type, location, function name, the
condition and an optional comment passed to the macro.
The command takes a number of arguments:
- "show": this is the default, it will just list the counters
- "reset": will reset the matching counters instead of listing them
- "all": by default, only non-zero counters are listed. With "all",
they are all listed
- "bug": restrict the reset or dump to counters of type "BUG" (BUG_ON usually)
- "chk": restrict the reset or dump to counters of type "CHK" (CHECK_IF)
- "cnt": restrict the reset or dump to counters of type "CNT" (COUNT_IF)
The types may be cumulated, and the options entered in any order. Here's
an example of the output of "debug dev counters show all bug":
Count Type Location function(): "condition" [comment]
0 BUG ring.h:114 ring_dup(): "max > ring_size(dst)"
0 BUG vecpair.h:223 vp_getblk_ofs(): "ofs >= v1->len + v2->len"
0 BUG buf.h:395 b_add(): "b->data + count > b->size"
0 BUG buf.h:106 b_room(): "b->data > b->size"
0 BUG task.h:328 _task_queue(): "(ulong)caller & 1"
0 BUG task.h:324 _task_queue(): "task->tid != tid"
0 BUG task.h:313 _task_queue(): "(ulong)caller & 1"
(...)
This is expected to be convenient combined with the use and abuse of
COUNT_IF() at select locations.
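For example, a purely illustrative counter (the condition and comment are
invented for the example) could be placed like this:

    /* count how often the send path fails unexpectedly, without killing
     * the process nor logging anything
     */
    COUNT_IF(ret < 0, "unexpected failure of the send path");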
This macro works exactly like BUG_ON() except that it never logs anything
nor crashes; it only implements an atomic counter that is incremented on
every call. This can be used to count a number of unlikely events that are
worth checking at run time on setups showing unusual and unreproducible
behaviors.
These macros do not always kill the process, and sometimes it would be
nice to know if some match or not, and how many times (especially for the
CHECK_IF one).
This commit adds a new section "dbg_cnt" made of structs that contain
function name, file name, line number, check type, condition and match
count. A new macro __DBG_COUNT() adds one to the counter, and is placed
inside _BUG_ON() and _BUG_ON_ONCE(). It's worth noting that the exact
type of the check is not very precise but in practice we don't care,
as most checks will cause the process to die anyway unless they're of
type _BUG_ON_ONCE() (used by CHECK_IF by default).
All of this is limited to !defined(USE_OBSOLETE_LINKER) because we're
creating a section, thus we need a modern linker to be able to scan
this section later. Doing so adds ~50kB to the executable due to the
~1266 BUG_ON() and others placed there. That's not huge in comparison
to the visibility it can provide.
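As a rough sketch of the mechanism (field names, ordering and macro arguments
are illustrative, not the exact implementation):

    struct dbg_counter {
        const char *func;
        const char *file;
        const char *cond;       /* stringified condition */
        const char *comment;    /* optional comment */
        unsigned long count;    /* atomically incremented match count */
        uint16_t line;
        uint8_t type;           /* BUG, CHK, CNT, ... */
    };

    #define __DBG_COUNT(cond, type, comment) do {                             \
        static struct dbg_counter __attribute__((section("dbg_cnt"), used))   \
            _cnt = { __func__, __FILE__, #cond, comment, 0, __LINE__, type }; \
        HA_ATOMIC_INC(&_cnt.count);                                           \
    } while (0)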
The BUG_ON() macros are made of two levels so as to resolve the condition
to a string. However this doesn't offer much flexibility for performing
other operations when the condition is validated, so let's adjust them so
that the condition is checked in the outer macro and the operations are
performed in the inner one.
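Schematically, the change looks like this (simplified; complain() stands in
for whatever reporting the real inner macro performs, and the real macros
carry more arguments):

    /* before: the inner macro stringifies and tests the condition */
    #define BUG_ON(cond)   _BUG_ON(cond, __FILE__, __LINE__)
    #define _BUG_ON(cond, file, line) \
            do { if (unlikely(cond)) complain(#cond, file, line); } while (0)

    /* after: the outer macro tests, the inner one only performs the
     * operations, leaving room to add e.g. __DBG_COUNT() next to the test
     */
    #define BUG_ON(cond) \
            do { if (unlikely(cond)) _BUG_ON(#cond, __FILE__, __LINE__); } while (0)
    #define _BUG_ON(cond, file, line)   complain(cond, file, line)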
A stream may be shut without any HTX EOM reported to announce a proper
closure. This is the case for QCS instances flagged with
QC_SF_UNKNOWN_PL_LENGTH. The shut is performed with an empty FIN emission
instead of a RESET_STREAM. This has been implemented since the following
patch:
24962dd178
BUG/MEDIUM: mux-quic: do not emit RESET_STREAM for unknown length
However, in case of HTTP/3, an empty FIN should only be sent after a
full message is emitted, which requires at least a HEADERS frame. If an
empty FIN is emitted without it, the client may interpret this as invalid
and close the connection. To prevent this, fall back to a RESET_STREAM
emission if no data was emitted on the stream.
This was reproduced using ngtcp2-client with 10% loss (-r 0.1) on a
remote host, with httpterm request "/?s=100k&C=1&b=0&P=400". An error
ERR_H3_FRAME_UNEXPECTED is returned by ngtcp2-client when the bug
occurs.
Note that this change is incomplete. The message validity depends solely
on the application protocol in use. As such, a new app_ops callback
should be implemented to ensure the stream is closed accordingly.
However, this first patch ensures that at least the HTTP/3 case is valid
while keeping the backport process minimal.
This should be backported up to 2.8.
An empty STREAM frame can be emitted by QUIC MUX to notify about a
delayed FIN when there is no data left to transmit. This requires a
tedious comparison on stream offset in qmux_ctrl_send() to ensure an
empty stream frame is not always considered as retransmitted, which is
necessary to locally close the QCS instance.
Simplify this by unsubscribing from the streamdesc layer when the QCS is
locally closed on FIN transmission notification. This prevents all
future retransmitted frames from being reported to the QCS instance,
especially any potentially retransmitted empty FIN.
Before this patch, when a wrong argument was provided in the configuration for
the mworker-max-reloads keyword, the parser showed the errors below on stderr:
[WARNING] (1820317) : config : parsing [haproxy.cfg:154] : (null)parsing [haproxy.cfg:154] : 'mworker-max-reloads' expects an integer argument.
When, by mistake, two arguments were provided instead of one, this also
triggered a buggy error message:
[ALERT] (1820668) : config : parsing [haproxy.cfg:154] : 'mworker-max-reloads' cannot handle unexpected argument '45'.
[WARNING] (1820668) : config : parsing [haproxy.cfg:154] : (null)
So, as 'mworker-max-reloads' is parsed in discovery mode by the master
process, let's now align its parser with all the others that may be called in
this mode. This way, when there are too many arguments or the argument isn't a
valid integer, we return proper error codes to the global section parser and
the messages are formatted properly.
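As a sketch, assuming the usual global keyword parser conventions with the
too_many_args() and memprintf() helpers (the signature and variable names are
simplified for the example):

    static int cfg_parse_mworker_max_reloads(char **args, int section_type,
                                             struct proxy *curpx,
                                             const struct proxy *defpx,
                                             const char *file, int line,
                                             char **err)
    {
        char *stop;
        long reloads;

        if (too_many_args(1, args, err, NULL))
            return -1;

        reloads = strtol(args[1], &stop, 10);
        if (!*args[1] || *stop != '\0' || reloads < 0) {
            memprintf(err, "'%s' expects an integer argument.", args[0]);
            return -1;
        }

        max_reloads = reloads;
        return 0;
    }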
This fix should be backported to all stable versions.
Initially we agreed to split builds into "latest" for the development branch
and a fixed 22.04 for stable branches. This got broken when the "latest" label
migrated from ubuntu-22 to ubuntu-24, because of the build cache: the cache key
is built using the runner label, and was not prepared for the same "latest"
cache from ubuntu 22 to be used on ubuntu 24. To make things clear, let's stick
explicitly to ubuntu 24.
PCRE2 is recommended; PCRE was chosen for no particular reason. GHA Ubuntu 22
images include both libs, but recent Ubuntu 24 images do not. Let us prepare
for Ubuntu 24.
Commit cf3fe1eed ("MINOR: mux-h2/traces: print the size of the DATA
frames") added the size of the DATA frame to the traces. Unfortunately
it uses ullong instead of ulong to cast a pointer, which breaks the
build on 32-bit platforms. Let's just switch it to ulong which works
on both.
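As a generic illustration of the portability issue (a standalone example, not
the actual trace statement from the commit):

    #include <stdio.h>

    typedef unsigned long ulong;          /* as in haproxy's compat types */
    typedef unsigned long long ullong;

    int main(void)
    {
        int x;

        /* (ullong)&x is 64-bit even on ILP32 targets, so it breaks builds
         * using -Werror when printed with a long format; (ulong) always
         * matches the pointer width
         */
        printf("%#lx\n", (ulong)&x);
        return 0;
    }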
One main problem with panic dumps is that they're filling the dumping
thread's trash, and that the global thread_dump_buffer is too small to
catch enough of them.
Here we're proceeding differently. When dumping threads for a panic, we're
passing the magic value 0x2 as the buffer, and it will instruct the target
thread to allocate its own buffer using get_trash_chunk() (which is signal
safe), so that each thread dumps into its own buffer. Then the thread will
wait for the buffer to be consumed, and will assign its own thread_dump_buffer
to it. This way we can simply dump all threads' buffers from gdb like this:
(gdb) set $t=0
while ($t < global.nbthread)
printf "%s\n", ha_thread_ctx[$t].thread_dump_buffer.area
set $t=$t+1
end
For now we make it wait forever since it's only called on panic and we
want to make sure the thread doesn't leave and continues to use that trash
buffer or do other nasty stuff. That way the dumping thread will make all
of them die.
This would be useful to backport to the most recent branches to help
troubleshooting. It backports well to 2.9, except for some trivial
context in tinfo-t.h for an updated comment. 2.8 and older would also
require TAINTED_PANIC. The following previous patches are required:
MINOR: debug: make mark_tainted() return the previous value
MINOR: chunk: drop the global thread_dump_buffer
MINOR: debug: split ha_thread_dump() in two parts
MINOR: debug: slightly change the thread_dump_pointer signification
MINOR: debug: make ha_thread_dump_done() take the pointer to be used
MINOR: debug: replace ha_thread_dump() with its two components
At the few places where we were calling ha_thread_dump(), we now call
ha_thread_dump_fill() separately, followed by ha_thread_dump_done() once
the data have been consumed.
This will allow the caller to decide whether to definitely clear the
pointer and release the thread, or to leave it unlocked so that it's
easy to analyse from the struct (the goal will be to use that in panic()
so that cores are easy to analyse).
Now the thread_dump_pointer is returned ORed with 1 once done, or NULL
when cancelled (for now no one cancels). The goal will be to permit
the callee to provide its own pointer.
The ha_thread_dump_fill() function now returns the buffer pointer that
was used (without OR 1) or NULL, for ease of use from the caller.
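Schematically, this is classic low-bit pointer tagging (a simplified sketch;
<ctx> stands for the target thread's context):

    /* publishing: set bit 0 to mean "dump done" while keeping the pointer */
    HA_ATOMIC_STORE(&ctx->thread_dump_buffer,
                    (struct buffer *)((ulong)buf | 0x1UL));

    /* consuming: strip the tag before dereferencing */
    ptr = HA_ATOMIC_LOAD(&ctx->thread_dump_buffer);
    if ((ulong)ptr & 0x1UL)
        buf = (struct buffer *)((ulong)ptr & ~0x1UL);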
We want to have a function to trigger the dump and another one to wait
for it to be completed. This will be important to permit panic dumps to
be done on local threads. For now this does not change anything, as the
function still calls the two new functions one after the other.
This variable is not very useful and is confusing anyway. It was mostly
used to detect that a panic dump was still in progress, but we can now
check mark_tainted() for this. The pointer was set to one of the dumping
thread's trash chunks. Let's temporarily continue to copy the dumps to
that trash, we'll remove it later.
Since mark_tainted() uses atomic ops to update the tainted status, let's
make it return the prior value, which will allow the caller to detect
if it's the first one to set it or not.
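A minimal sketch of the change, assuming the tainted state is kept in a
global word and using HAProxy's atomic helpers:

    static inline unsigned int mark_tainted(const enum tainted_flags flag)
    {
        return HA_ATOMIC_FETCH_OR(&tainted, flag);
    }

    /* a caller may then detect whether it is the first to panic: */
    if (!(mark_tainted(TAINTED_PANIC) & TAINTED_PANIC)) {
        /* first one to set it */
    }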
As mentioned in previous commit, b_peek_ofs() performs a wrapping check
but is often called with ofs == 0 as a constant. We can detect this case
with __builtin_const_p() so it makes sense to use it. A test shows a size
reduction of about 320 bytes, which is not much, but it happens in hot code
paths, and each 16 bytes reduction indicates an eliminated conditional
branch.
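A sketch of the trick on a simplified version of the function, assuming the
head index is always lower than the buffer size:

    static inline size_t b_peek_ofs(const struct buffer *b, size_t ofs)
    {
        size_t ret = b->head + ofs;

        /* when <ofs> is a compile-time zero, the sum cannot reach the
         * buffer size, so the wrapping check is elided entirely
         */
        if ((!__builtin_constant_p(ofs) || ofs) && ret >= b->size)
            ret -= b->size;
        return ret;
    }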
Some clear winners are ci_getblk_nc() (-48 bytes), h2c_dec_hdrs (-141B),
h1_copy_msg_data (-124B), tcpcheck_spop_expect_hello (-80B),
h1_parse_msg_data (-44B). These ones will definitely benefit from doing
less conditional jumps.
The function is an exact copy of b_peek_varint() with ofs==0 and doing a
b_del() at the end. We can simply call that other one and delete the
contents. It turns out that the code is bigger with this change because
b_peek_varint() passes its offset to b_peek() which performs a wrapping
check. When ofs==0 the wrapping cannot happen, but there's no real way
to tell that to the compiler. Instead conditioning the if() in b_peek()
with (!__builtin_constant_p(ofs) || ofs) does the job, but it's not worth
it at the moment since we have no users of b_get_varint() for now. Let's
just stick to the simple normal code.
Some large functions were moved to buf.c by commit ac66df4e2 ("REORG:
buffers: move some of the heavy functions from buf.h to buf.c"). However,
as found by Amaury, haring doesn't build anymore. Upon close inspection,
b_getblk_nc() isn't that big since it's very much inlinable, and a part
of its apparently large size comes from the BUG_ON_HOT() that were
implemented. Regarding b_peek_varint(), it doesn't have any dependency
and is used only at 4 places in the DNS code, so its loop will not have
big impacts, and the rest around can be optimised away by the compiler
so it remains relevant to keep it inlined. Also it can serve as a base
to deduplicate the code in b_get_varint().
No backport needed.
The ARGT_ID argument type may now be used to set a custom resolve
function in order to help resolve the argument string value. If the
custom resolve function is not set, the behavior is the same as for
type ARGT_STR.
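As a purely hypothetical illustration of the intent (the resolver below and
its registration details are invented for the example):

    /* maps the user-provided string to an internal numeric ID */
    static int resolve_my_id(struct arg *arg, char **err_msg)
    {
        if (strcmp(arg->data.str.area, "syslog") == 0) {
            arg->data.sint = 1;    /* resolved internal ID */
            return 1;
        }
        memprintf(err_msg, "unknown identifier '%s'", arg->data.str.area);
        return 0;
    }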
A useless BUG_ON() statement was left in a conditional block that already
checks that the condition cannot be met within the block. Remove the
useless BUG_ON().
"option forwarded" provides a convenient way to automatically insert
rfc7239 forwarded header to requests sent to servers.
On the other hand, manually crafting the header is quite complicated due
to specific formatting rules that must be followed as per rfc7239.
However, sometimes it may be necessary to craft the header manually, for
instance if it has to be conditional or based on parameters that "option
forwarded" doesn't provide. To ease this task, in this patch we implement
rfc7239_nn and rfc7239_np which are respectively meant to craft nodename:
nodeport values, specifically intended to manually build rfc7239 'for'
and 'by' header fields while ensuring rfc7239 compliancy.
Example:
# build RFC-compliant 7239 header:
http-request set-var-fmt(txn.forwarded) "for=\"%[ipv6(::1),rfc7239_nn]:%[str(8888),rfc7239_np]\";host=\"haproxy.org\";proto=http"
# check RFC-compliancy:
http-request set-var(txn.test) "var(txn.forwarded),debug(ok,stderr),rfc7239_is_valid,debug(ok,stderr)"
# stderr output:
# [debug] ok: type=str <for="[::1]:_8888";host="haproxy.org";proto=http>
# [debug] ok: type=bool <1>
See documentation for more info and examples.
This issue came with this commit:
f627b92 BUG/MEDIUM: quic: always validate sender address on 0-RTT
and could be easily reproduced with the picoquic QUIC client with the -Q
option, which splits a big ClientHello TLS message into two Initial datagrams.
A second condition must be fulfilled to reproduce this issue: picoquic
must not send the token provided by haproxy (NEW_TOKEN). To do that,
haproxy must be patched to prevent it from sending such tokens.
Under these conditions, if haproxy has enough time to reply to the first
Initial datagram, when it receives the second Initial datagram it sends a
Retry packet. Then the client ignores the Retry packet, as mentioned by
RFC 9000:
17.2.5.2. Handling a Retry Packet
A client MUST accept and process at most one Retry packet for each connection
attempt. After the client has received and processed an Initial or Retry packet
from the server, it MUST discard any subsequent Retry packets that it receives.
On its side, haproxy has closed the connection. When it receives the second
Initial datagram, it opens a new connection, but with Initial packets it
cannot decrypt (wrong ODCID), leaving the client without a response.
To fix this, as the aim of the token (NEW_TOKEN) sent by haproxy is to validate
the peer address, instead of closing the connection when no token was received
for a 0-RTT connection, one leaves this validation to the handshake process.
Indeed, the peer address is validated during the handshake when a valid
handshake packet is received by the listener. But as one does not want haproxy
to process 0-RTT data when no token was received, one does not accept the
connection before the successful handshake completion. In addition to this,
the 0-RTT packets are not released after successful handshake completion when
no token was received, to leave haproxy a chance to process this 0-RTT data in
this case (see quic_conn_io_cb()).
Must be backported as far as 2.9.
Tokens are sent when opening a connection, just after the handshake, to
be possibly reused by the peer for the next connection. They are used
to validate the peer address during 0-RTT connection openings.
But there is no reason to reserve this feature to 0-RTT connections.
This patch modifies quic_build_post_handshake_frames() to do so.
This bug came with this commit:
f627b92 BUG/MEDIUM: quic: always validate sender address on 0-RTT
If an error happens in quic_build_post_handshake_frames() during the
code executed for the NEW_TOKEN frame allocation, some frames could leak
because of the wrong label used to interrupt this function asap.
Replace the "goto leave" by "goto err" to deallocate such frames and fix
this issue.
Must be backported as far as 2.9.
When a filter is registered on the data, it means it may change the payload
length by rewriting data. This means consumers of the message cannot trust the
expected payload length announced by the producer. The commit 8bd835b2d2
("MEDIUM: filters/htx: Don't rely on HTX extra field if payload is filtered")
was pushed to solve this issue. When the HTTP payload of a message is filtered,
the extra field is set to 0 to be sure it will never be used by mistake by any
consumer. However, this is not enough.
Indeed, the filters must be called before forwarding some data. They cannot be
bypassed. But if a consumer is unable to flush the HTX message, some outgoing
data can remain blocked in the channel's buffer. If some new data are then
pushed because there is some room in the channel's buffer, the producer will
set the HTX extra field. At this stage, if the consumer is unblocked and can
send data again, it is possible to call it to forward the outgoing data blocked
in the channel's buffer before waking the stream up to filter new input data.
That is the purpose of the data fast-forwarding. In this case, the HTX extra
field will be seen by the consumer. It is unexpected and leads to undefined
behavior.
One consequence of this bug is to perform a wrong chunking on compressed
messages, leading to processing errors at the end of the message, reported as
"ID--" in logs.
To fix the bug, an HTX flag is added to state that the payload of the current
HTX message is altered. When this flag is set (HTX_FL_ALTERED_PAYLOAD), the HTX
extra field must not be trusted. And to keep things simple, when this flag is
set, the HTX extra field is automatically set to 0 when the HTX message is
loaded, in the htxbuf() function.
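A sketch of the htxbuf() adjustment (simplified from the real function):

    static inline struct htx *htxbuf(const struct buffer *buf)
    {
        struct htx *htx = (struct htx *)buf->area;

        if (!b_data(buf)) {
            htx->size = buf->size - sizeof(*htx);
            htx_reset(htx);
        }

        /* an altered payload invalidates the announced extra length */
        if (htx->flags & HTX_FL_ALTERED_PAYLOAD)
            htx->extra = 0;
        return htx;
    }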
It is probably the least intrusive way to fix the bug for now. But this part
must be reviewed to store the HTX message's meta-info outside of the message
itself.
This commit should solve the issue #2741. It must be backported as far as 2.9.
In the sc_notify() function, the consumer side of the SC is tested to verify
whether we must perform a shutdown on the endpoint. To do so, no output data
must be present in the buffer nor in the iobuf. However, there is a bug here:
the iobuf of the opposite SC is tested instead of the one of the current SC.
So a shutdown can be performed on the endpoint while there are still output
data in the iobuf that must be sent. Concretely, it can only be data blocked
in a pipe.
Because of this bug, data blocked in the pipe will never be sent. I've not
tested it, but I guess this may block the stream until the client or server
timeout expires.
This patch must be backported as far as 2.9.
If a parsing error is reported by the mux on the response payload, a proxy
error (PRXCOND) must be reported instead of a server abort (SRVCL). Because
of this bug, invalid responses may be reported as "SD--" or "SL--" in logs
instead of "PD--" or "PL--".
This patch must be backported to all stable versions.
When the stream is shut down, some tests are performed to know if the
connection must also be closed or not. There are trace messages for all
cases, except for the default one: Abort or close-mode. Thanks to this
patch, there is now a message in this case too.
Info about the SD iobuf is now dumped in trace messages when a stream send
event is processed. It is useful information for debugging zero-copy
forwarding issues.
When a send attempt is performed on the opposite side from sc_notify() and
all outgoing data are sent while a shut was scheduled, the SE is shut down
because we consider all data were sent and no more are expected. However,
here we must also be careful to have sent all pending data in the
iobuf. Indeed, some spliced data may be blocked. In this case, if the SE is
shut down, these data may be lost.
This patch should fix the original bug reported in #2749. It must be
backported as far as 2.9.
The proxy must be created in mworker mode, but only in the worker, not in
the master. The current code creates the proxy in both processes.
The patch simply checks that we are not in the master before starting the
ocsp-update pre-check.
No backport needed.
Since commit fe75c1e12d ("MEDIUM: startup: remove
MODE_MWORKER_WAIT") the MODE_MWORKER_WAIT constant disappeared. The
initialization of the default resolvers section was conditioned by this
constant.
The section must be created in mworker mode, but only in the worker, not in
the master. It was completely disabled in both the master and the worker,
which could break configurations using it, as well as the httpclient.
No backport needed.
Since commit fe75c1e12d ("MEDIUM: startup: remove
MODE_MWORKER_WAIT") the MODE_MWORKER_WAIT constant disappearded. The
initialization of the httpclient proxy was conditionned by this
constant.
The proxy must be created in mworker mode, but only in the worker not in
the master. It was currently completely disabled in both the master and
the worker provoking a NULL dereference upon httpclient usage.
No backport needed.
Latest patches on the mworker rework skipped the httpclient_proxy
creation by accident. This is not supposed to happen because haproxy is
supposed to stop when the proxy creation fails, but it shows a flaw in
the API.
When the httpclient_proxy, or the proxy passed as a parameter to
httpclient_new_from_proxy(), is NULL, it will be dereferenced, causing a
crash.
The patch simply makes httpclient_new() return NULL when the proxy is not
available.
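A sketch of the guard, assuming httpclient_new() simply wraps
httpclient_new_from_proxy():

    struct httpclient *httpclient_new(void *caller, enum http_meth_t meth,
                                      struct ist url)
    {
        /* refuse to create a client when the dedicated proxy could not
         * be set up during init
         */
        if (!httpclient_proxy)
            return NULL;

        return httpclient_new_from_proxy(httpclient_proxy, caller, meth, url);
    }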
Must be backported as far as 2.7.
Released version 3.1-dev10 with the following main changes :
- BUG/MAJOR: mux-quic: do not crash on empty STREAM frame emission
- BUG/MINOR: stats: Fix the name for the total number of streams created
- MINOR: quic: strengthen qc_release_frm()
- MEDIUM: quic: decount acknowledged data for MUX txbuf window
- MINOR: quic: implement dedicated type for out-of-order stream ACK
- MEDIUM: quic: merge contiguous/overlapping buffered ack stream range
- MEDIUM: quic: decount out-of-order ACK data range for MUX txbuf window
- MINOR: log: add do_log() logging helper
- MINOR: log: add do_log_parse_act() helper func
- MINOR: action: add do-log action
- REGTESTS: add some tests for 'do-log' action
- BUG/MEDIUM: hlua: make hlua_ctx_renew() safe
- BUG/MEDIUM: hlua: properly handle sample func errors in hlua_run_sample_{fetch,conv}()
- BUG/MINOR: quic: fix discarding of already stored out-of-order ACK
- BUG/MEDIUM: quic: properly decount out-of-order ACK on stream release
- MINOR: ssl: disable server side default CRL check with WolfSSL
- MEDIUM: sink: implement sink_find_early()
- MINOR: trace: postresolve sink names
- MINOR: sample: postresolve sink names in debug() converter
- BUG/MEDIUM: mux-quic: ensure timeout server is active for short requests
- MINOR: cfgparse: simulate long configuration parsing with force-cfg-parser-pause
- BUILD: cache: silence an uninitialized warning at -Og with gcc-12.2
- BUG/MINOR: mux-h2/traces: present the correct buffer for trailers errors traces
- MINOR: mux-h2/traces: print the size of the DATA frames
- CLEANUP: muxes: remove useless inclusion of ebmbtree.h
- REORG: buffers: move some of the heavy functions from buf.h to buf.c
- MINOR: buffer: add a buffer list type with functions
- MINOR: mux-h2: split the amount of rx data from the amount to ack
- MINOR: mux-h2: create and initialize an rx offset per stream
- MEDIUM: mux-h2: start to update stream when sending WU
- MEDIUM: mux-h2: start to introduce the window size in the offset calculation
- MINOR: mux-h2: count within a connection, how many streams are receiving data
- MINOR: mux-h2: allocate the array of shared rx bufs in the h2c
- MINOR: mux-h2: add rxbuf head/tail/count management for h2s
- MINOR: mux-h2: move H2_CF_WAIT_IN_LIST flag away from the demux flags
- MINOR: mux-h2: simplify the exit code in h2_rcv_buf()
- MINOR: mux-h2: simplify the wake up code in h2_rcv_buf()
- MINOR: mux-h2: clear up H2_CF_DEM_DFULL and H2_CF_DEM_SHORT_READ ambiguity
- MAJOR: mux-h2: make streams use the connection's buffers
- MAJOR: mux-h2: permit a stream to allocate as many buffers as desired
- MAJOR: mux-h2: make the rxbuf allocation algorithm a bit smarter
- MINOR: mux-h2: add tune.h2.be.rxbuf and tune.h2.fe.rxbuf global settings
- MEDIUM: mux-h2: change the default initial window to 16kB
- DOC: design-thoughts: add diagrams illustrating an rx win growth
- MEDIUM: mux-h2: rework h2_restart_reading() to differentiate recv and demux
- OPTIM: mux-h2: make h2_send() report more accurate wake up conditions
- OPTIM: mux-h2: try to continue reading after demuxing when useful
- OPTIM: mux-h2: use tasklet_wakeup_after() in h2s_notify_recv()
- MINOR: mux-h2/traces: add missing flags and proxy ID in traces
- MINOR: mux-h2/traces: add buffer-related info to h2s and h2c
- CI: cirrus-ci: bump FreeBSD image to 14-1
- REGTESTS: fix a reload race in abns_socket.vtc
- MINOR: activity/memprofile: always return "other" bin on NULL return address
- MINOR: quic: notify connection layer on handshake completion
- BUG/MINOR: stream: unblock stream on wait-for-handshake completion
- BUG/MEDIUM: quic: support wait-for-handshake
- BUG/MEDIUM: server: server stuck in maintenance after FQDN change
- BUG/MEDIUM: queue: make sure never to queue when there's no more served conns
- DEBUG: mux-h2/flags: add H2_CF_DEM_RXBUF & H2_SF_EXPECT_RXDATA for the decoder
- REGTESTS: cli: add delay 0.1 before connect to cli
- MINOR: startup: add O_CLOEXEC flag to open /dev/null
- MEDIUM: startup: move daemonization fork in init
- MINOR: startup: refactor "daemonization" fork
- MEDIUM: startup: move PID handling in init()
- MAJOR: mworker: move master-worker fork in init()
- BUG/MINOR: mworker: fix memory leak due to master-worker fork
- REORG: mworker: set nbthread=1 for master after fork
- MINOR: init: check MODE_MWORKER before creating master CLI
- REORG: mworker: move mworker_create_master_cli in master 'case'
- MEDIUM: startup: call chroot() if needed in one place
- MEDIUM: startup: do set_identity() if needed in one place
- MINOR: startup: only worker gets capabilities from bin
- CLEANUP: haproxy: rm no longer used mworker_reexec_waitmode
- MINOR: startup: rename exit_on_waitmode_failure to exit_on_failure
- MINOR: defaults: update MASTER_MAXCONN description
- MEDIUM: startup: remove MODE_MWORKER_WAIT
- MINOR: global: add MODE_DISCOVERY flag
- MEDIUM: cfgparse: add KWF_DISCOVERY keyword flag
- MEDIUM: cfgparse: call some parsers only in MODE_DISCOVERY
- MEDIUM: cfgparse-global: parse only KWF_DISCOVERY keywords in MODE_DISCOVERY
- MEDIUM: cfgparse: parse only "global" section in MODE_DISCOVERY
- MEDIUM: startup: introduce load_cfg and read_cfg
- MINOR: cfgparse: fix *thread keywords sensitive to global section position
- MINOR: mworker/cli: rename mworker_cli_proxy_new_listener
- MINOR: mworker/cli: rename and clean mworker_cli_sockpair_new
- MINOR: mworker/cli: create master CLI sockpair before fork
- MINOR: mworker/cli: create MASTER proxy before mcli listeners
- MINOR: mworker: add and set state PROC_O_INIT for new worker
- MEDIUM: mworker/cli: close child and parent fds, setup listeners
- MINOR: mworker: mworker_catch_sigchld: use fd_delete instead of close
- MINOR: startup: rename and adapt reexec_on_failure
- MINOR: mworker: add support for case when new worker dies
- MINOR: mworker: simplify the code that sets PROC_O_LEAVING
- MINOR: mworker/cli: add _send_status to support state transition
- MEDIUM: startup: split sending oldpids_sig logic for standalone and mworker modes
- MINOR: startup: split init() into separate initialization routines
- MINOR: startup: split main: add step_init_3
- MINOR: startup: simplify check for calling sock_get_old_sockets
- MINOR: startup: encapsulate sock_get_old_sockets in a function
- MINOR: startup: add bind_listeners
- MINOR: startup: split main: add step_init_4
- MINOR: startup: encapsulate master's code in run_master
- MINOR: startup: add read_cfg_in_discovery_mode
- MINOR: mworker: adapt exit_on_failure for master recovery mode
- MEDIUM: mworker: add support of master recovery mode
- MINOR: startup: add set_verbosity
- MEDIUM: mworker: block reloads
- MINOR: mworker: slow load status delivery if worker is starting
- MINOR: mworker: readapt program support in mworker_catch_sigchld
- MINOR: mworker: deserialize process list before read_cfg_in_discovery_mode
- MINOR: mworker: parse program only in MODE_DISCOVERY
- MINOR: cfgparse: add support for program section
- MINOR: startup: reintroduce program support
- MINOR: mworker-prog: stop old programs in mworker_ext_launch_all
- MINOR: mworker: reintroduce systemd support
- MINOR: mworker: report explicitly when worker exits due to max reloads
- MINOR: cfgparse-global: parse *env keywords in MODE_DISCOVERY
- MINOR: startup: reintroduce *env keywords support
- MINOR: startup: close devnullfd, when daemon mode is applied
In daemon mode, the daemonization fork now happens at the early init stage,
before parsing and applying the configuration, so we can't close
stdin/stdout/stderr immediately after forking. We keep them open until most
of the configuration, including chroot, is applied, in order to show alerts
if there are problems. To achieve this, /dev/null is opened just before
calling chroot(), and after the chroot block it's used to close stdin and
all standard outputs. At this point we no longer need the fd of /dev/null,
so we can close it as well.
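A minimal sketch of the sequence (error handling omitted):

    int devnullfd = open("/dev/null", O_RDWR | O_CLOEXEC);

    /* ... emit remaining startup alerts on the inherited stderr ... */

    if (global.chroot)
        chroot(global.chroot);

    /* past this point alerts are no longer needed: silence the std fds */
    dup2(devnullfd, STDIN_FILENO);
    dup2(devnullfd, STDOUT_FILENO);
    dup2(devnullfd, STDERR_FILENO);
    close(devnullfd);    /* the duplicated fds remain valid */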
The setenv/resetenv/presetenv/unsetenv keywords in the configuration modify
the process environment. In case of master-worker and programs, we need to
restore the initial process environment before reload, as the configuration
could change in between, and newly forked workers and programs should be
launched in the environment corresponding to this new configuration.
To achieve this, we back up the initial process environment before the first
configuration read, when the 'global' and 'program' sections are read. Then
we clean up the master process environment and restore the initial one from
the backup in mworker_reexec().
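A sketch of the backup step (simplified; restoring in mworker_reexec() would
clear the environment and re-add each saved entry):

    extern char **environ;
    static char **init_env;   /* snapshot taken before the first config read */

    static int backup_env(void)
    {
        int i, n = 0;

        while (environ[n])
            n++;
        init_env = calloc(n + 1, sizeof(*init_env));
        if (!init_env)
            return -1;
        for (i = 0; i < n; i++)
            init_env[i] = strdup(environ[i]);
        return 0;
    }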
The setenv/resetenv/presetenv/unsetenv keywords should be parsed by the master
process and by the worker, as some other master parameters could be enabled in
conditional blocks (.if ... .endif). To achieve this, let's tag the '*env'
keywords with the KWF_DISCOVERY flag.
It's convenient for testing and for usage to produce different warning
messages when the former worker exits due to the max reloads being exceeded,
and when it was terminated by the master.
Let's reintroduce systemd support in the refactored master-worker mode.
As of now, the master-worker fork happens during the early initialization
steps, and then the master process receives the "READY" status message from
the newly forked worker once it has successfully loaded. Let's propagate this
"READY" status message at this moment to systemd from the master process
context (_send_status()). We use the master process to send messages to
systemd, because it is the only process monitored by systemd.
In master recovery mode, we also need to send systemd the "READY" message,
but with the status "Reload failed". "READY" will signal to systemd that the
master process is still alive, because it doesn't exit in recovery mode and
keeps the existing worker. The status "Reload failed" will signal to the user
that something wrong has happened with the configuration. The same message
logic was originally used for the case when the worker fails to read its
configuration; see on_new_child_failure() for more details.
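Schematically, from the master's _send_status() path (haproxy ships its own
minimal sd_notify() implementation; the exact status strings here are
illustrative):

    /* the worker reported a successful load */
    sd_notify(0, "READY=1\nSTATUS=Ready.\n");

    /* master recovery mode: stay "ready" but flag the failed reload */
    sd_notify(0, "READY=1\nSTATUS=Reload failed!\n");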
This patch is part of a series to reintroduce program support in the new
master-worker architecture.
Now, after the refactoring of the master-worker mode, it's the master process
that stops the workers forked before the reload. The current worker no longer
sends USR1 or TERM signals to the previous one after binding the ports. This
behaviour is kept only for the standalone mode.
So, in the case of programs, it's up to the master process as well to stop
the programs which were launched before the reload. Let's do this in
mworker_ext_launch_all(), just before starting the new programs.
This patch is part of a series to reintroduce program support in the new
master-worker architecture.
Let's add the mworker_ext_launch_all() call here, before the master-worker
fork, to start external programs. We keep the order and the place of these
two forks (program and master-worker) the same as before the refactoring, in
order to avoid regressions.
This patch is part of a series to reintroduce program support in the new
master-worker architecture.
Programs are launched by the master, thus only the master process needs their
configuration. Therefore, the program section parser should be called only in
discovery mode, when the master parses its configuration.
The program section has a post-section parser. It should be called only in
discovery mode as well.