From Linux 5.17, anonymous regions can be named via prctl/PR_SET_VMA so
that caches can be identified when looking at the HAProxy process memory
mapping.
The most likely error is a lack of kernel support, so we simply ignore
it: if the naming fails, the mapping of the memory context ought to
still occur.
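A minimal sketch of the call involved (standalone example, not the
haproxy code; the mapping size and the name below are arbitrary):

  #include <sys/mman.h>
  #include <sys/prctl.h>
  #include <stdio.h>

  #ifndef PR_SET_VMA
  #define PR_SET_VMA            0x53564d41
  #define PR_SET_VMA_ANON_NAME  0
  #endif

  int main(void)
  {
          size_t size = 1 << 20;
          void *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (area == MAP_FAILED)
                  return 1;

          /* name the anonymous region; failures (e.g. kernel < 5.17) are
           * simply ignored, the mapping remains usable anyway */
          if (prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
                    (unsigned long)area, size,
                    (unsigned long)"cache:example") == -1)
                  perror("prctl(PR_SET_VMA)");   /* non-fatal by design */

          return 0;
  }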
Since 3.0-dev7 with commit 1a088da7c2 ("MAJOR: stktable: split the keys
across multiple shards to reduce contention"), building without threads
yields a warning about the shard not being used. This is because the
locks API does nothing with its arguments, and that is the only place
where the shard is used. We cannot modify the lock API to pretend to
consume its argument because quite often it's not even instantiated.
Let's just pretend we consume the shard using an explicit
ALREADY_CHECKED() statement instead. While we're at it, let's make sure
that XXH32() is not called when there is a single bucket!
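A hedged sketch of the idea (the macros below are simplified stand-ins,
not haproxy's real definitions):

  #include <string.h>

  #ifdef USE_THREAD
  #define SHARD_LOCK(s)      take_shard_lock(s)
  #else
  #define SHARD_LOCK(s)      do { } while (0)   /* ignores its argument */
  #endif

  /* stand-in for ALREADY_CHECKED(): pretend the value is consumed */
  #define ALREADY_CHECKED(p) do { (void)(p); } while (0)

  void stktable_touch_sketch(const char *key, unsigned int nb_buckets)
  {
          unsigned int shard = 0;

          /* only hash the key (XXH32() upstream) with several buckets */
          if (nb_buckets > 1)
                  shard = (unsigned int)strlen(key) % nb_buckets; /* toy hash */

          ALREADY_CHECKED(shard); /* silence set-but-unused without threads */
          SHARD_LOCK(shard);
          /* ... update the entry under the shard's lock ... */
  }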
No backport is needed.
Unlike the muxes, the applets have the responsibility to notify the SC if
they have more data to deliver to the stream. The same is done to notify
the SC that an applet must be woken up ASAP to continue some processing.
When an applet is woken up, we pretend it has no more data to deliver by
setting the SE_FL_HAVE_NO_DATA flag. If the applet removes this flag, we
must take care not to set it again just after. Otherwise, the applet may
remain blocked if there is no other condition to wake it up.
It is an issue for the applets using their own buffers because
SE_FL_HAVE_NO_DATA is erroneously set in the sc_applet_recv() function,
after the applet execution. For instance, it happens for the cli applet
when a huge map is cleared: no data is delivered to the stream, but we
pretend it is the case in order to clear the map in batches.
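A minimal sketch of the intended ordering (the types and helpers are
illustrative; only the SE_FL_HAVE_NO_DATA name comes from haproxy):

  #define SE_FL_HAVE_NO_DATA  0x0001u      /* stand-in value */

  struct sedesc_sketch {
          unsigned int flags;
  };

  /* Run one applet iteration: assume "no more data" only *before* calling
   * the applet. If the applet clears the flag to signal pending data, that
   * information must survive, otherwise nothing will wake it up again. */
  void sc_applet_recv_sketch(struct sedesc_sketch *sd,
                             void (*applet_io)(struct sedesc_sketch *))
  {
          sd->flags |= SE_FL_HAVE_NO_DATA;
          applet_io(sd);
          /* the bug was re-setting SE_FL_HAVE_NO_DATA here, after the call,
           * which discarded the applet's notification and left it blocked */
  }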
This patch should fix issue #2543. No backport needed.
Some flags are defined during statistics generation and output. They use
the prefix STAT_* which is also used for other purposes. Rename them
with the new prefix STAT_F_* to differentiate them from the other
usages.
Several unique names were used for different purposes in the statistics
implementation, which made the code difficult to understand.
* the stat/stats name is removed whenever a more specific name can be
  used
* field usage is restricted to purely refer to <struct field>, which
  represents a raw stat value
* the "line" naming is used to represent an array of <struct field>
Info is used to expose haproxy global metrics; it is similar to proxy
statistics and any other module. As such, rename the info indexes using
the SI_I_INF_* prefix. The info variable is also renamed to
stat_line_info.
Thanks to this, naming is now consistent between info and the other
statistics. It will help to integrate it as a "global" statistics
module.
Statistics were extended with the introduction of stats modules. This
mechanism allows various metrics to be exposed for several haproxy
components. As a consequence, some static variables were turned into
dynamic ones to be able to regroup all statistics definitions.
Rename these variables with more explicit names:
* stat_lines can be used to generate one line of statistics for any
  module using struct field as value
* metrics and metrics_len are used to store the description of the
  metrics indexed by module
Note that info is not integrated into the statistics module mechanism.
However, it could be done in the future to better reflect its purpose.
This commit is the first one of a series which adjusts the naming
convention for the stats module. The objective is to remove ambiguity and
better reflect how stats are implemented, especially since the
introduction of stats modules.
This patch renames elements related to proxy statistics. One of the main
changes is to rename the ST_F_* statistics index prefix to the new name
ST_I_PX_*. This removes the reference to "field", which represents
another concept in the stats module. In the same vein, the global
stat_fields variable is renamed to metrics_px.
This commit is part of a series to align counters usage between
frontends/listeners on one side and backends/servers on the other.
On the frontend side, "stot" is the total count of sessions for both
proxies and listeners. For proxies, the fe_counters <cum_sess> field is
correctly used. The bug is on listeners, where the <cum_conn> value is
returned, which instead counts connections. This commit fixes this by
returning the <cum_sess> counter value for the "stot" metric.
Along with this fix, take the opportunity to report "conn_tot" for
listeners using the <cum_conn> value, as for frontend proxies.
This commit fixes a bug but must not be backported as the stats output is
changed.
This commit is part of a series to align counters usage between
frontends/listeners on one side and backends/servers on the other.
The "stot" metric refers to the total number of sessions. On the backend
side, it is interpreted as a number of streams. Previously, this was
accounted for using the <cum_sess> be_counters field for servers, but
with <cum_conn> for backend proxies.
Adjust this by using <cum_sess> for both proxies and servers. As such,
the <cum_conn> field can be removed from be_counters.
Note that several diagnostic messages which report total frontend and
backend connections were adjusted to use <cum_sess>. However, this is
outdated and misleading information, as it reports a stream count on the
backend side. These messages should be fixed in a separate commit.
This should be backported to all stable releases.
This commit is the first one of a series which aims to align counters
usage between frontends/listeners on one side and backends/servers on
the other.
Remove the <down_trans> field from the proxy structure. Instead, use the
field of the same name from the be_counters structure, which is already
used for servers.
The 32-bit build was broken because of a wrong printf length modifier.
src/ssl_ckch.c:4144:66: error: format specifies type 'long' but the argument has type 'unsigned int' [-Werror,-Wformat]
4143 | memprintf(err, "parsing [%s:%d] : cannot parse '%s' value '%s', too long, max len is %ld.\n",
| ~~~
| %u
4144 | file, linenum, args[cur_arg], args[cur_arg + 1], sizeof(alias_name));
| ^~~~~~~~~~~~~~~~~~
src/ssl_ckch.c:4217:64: error: format specifies type 'long' but the argument has type 'unsigned int' [-Werror,-Wformat]
4216 | memprintf(err, "parsing [%s:%d] : cannot parse '%s' value '%s', too long, max len is %ld.\n",
| ~~~
| %u
4217 | file, linenum, args[cur_arg], args[cur_arg + 1], sizeof(alias_name));
| ^~~~~~~~~~~~~~~~~~
2 errors generated.
make: *** [Makefile:1034: src/ssl_ckch.o] Error 1
make: *** Waiting for unfinished jobs....
Replace %ld by %zd.
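A standalone illustration of the fix (not the haproxy code): sizeof()
yields a size_t, whose printf length modifier is 'z' on both 32-bit and
64-bit builds.

  #include <stdio.h>

  int main(void)
  {
          char alias_name[100];

          /* "%ld" expects a long; sizeof() is a size_t, so "%zd"/"%zu"
           * matches on every platform */
          printf("too long, max len is %zu.\n", sizeof(alias_name));
          return 0;
  }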
Should fix issue #2542.
Released version 3.0-dev8 with the following main changes :
- BUG/MINOR: cli: Don't warn about a too big command for incomplete commands
- BUG/MINOR: listener: always assign distinct IDs to shards
- BUG/MINOR: log: fix lf_text_len() truncate inconsistency
- BUG/MINOR: tools/log: invalid encode_{chunk,string} usage
- BUG/MINOR: log: invalid snprintf() usage in sess_build_logline()
- CLEANUP: log: lf_text_len() returns a pointer not an integer
- MINOR: quic: simplify qc_send_hdshk_pkts() return
- MINOR: quic: uniformize sending methods for handshake
- MINOR: quic: improve sending API on retransmit
- MINOR: quic: use qc_send_hdshk_pkts() in handshake IO cb
- MEDIUM: quic: remove duplicate hdshk/app send functions
- OPTIM: quic: do not call qc_send() if nothing to emit
- OPTIM: quic: do not call qc_prep_pkts() if everything sent
- BUG/MEDIUM: http-ana: Deliver 502 on keep-alive for fressh server connection
- BUG/MINOR: http-ana: Fix TX_L7_RETRY and TX_D_L7_RETRY values
- BUILD: makefile: warn about unknown USE_* variables
- BUILD: makefile: support USE_xxx=0 as well
- BUG/MINOR: guid: fix crash on invalid guid name
- BUILD: atomic: fix peers build regression on gcc < 4.7 after recent changes
- BUG/MINOR: debug: make sure DEBUG_STRICT=0 does work as documented
- BUILD: cache: fix non-inline vs inline declaration mismatch to silence a warning
- BUILD: debug: make DEBUG_STRICT=1 the default
- BUILD: pools: make DEBUG_MEMORY_POOLS=1 the default option
- CI: update the build options to get rid of unneeded DEBUG options
- BUILD: makefile: get rid of the config CFLAGS variable
- BUILD: makefile: allow to use CFLAGS to append build options
- BUILD: makefile: drop the SMALL_OPTS settings
- BUILD: makefile: move -O2 from CPU_CFLAGS to OPT_CFLAGS
- BUILD: makefile: get rid of the CPU variable
- BUILD: makefile: drop the ARCH variable and better document ARCH_FLAGS
- BUILD: makefile: extract ARCH_FLAGS out of LDFLAGS
- BUILD: makefile: move the fwrapv option to STD_CFLAGS
- BUILD: makefile: make the ERR variable also support 0
- BUILD: makefile: add FAILFAST to select the -Wfatal-errors behavior
- BUILD: makefile: extract -Werror/-Wfatal-errors from automatic CFLAGS
- BUILD: makefile: split WARN_CFLAGS from SPEC_CFLAGS
- BUILD: makefile: rename SPEC_CFLAGS to NOWARN_CFLAGS
- BUILD: makefile: do not pass warnings to VERBOSE_CFLAGS
- BUILD: makefile: also drop DEBUG_CFLAGS
- CLEANUP: makefile: make the output of the "opts" target more readable
- DOC: install: clarify the build process by splitting it into subsections
- BUG/MINOR: server: fix slowstart behavior
- BUG/MEDIUM: cache/stats: Handle inbuf allocation failure in the I/O handler
- MINOR: ssl: add the section parser for 'crt-store'
- DOC: configuration: Add 3.12 Certificate Storage
- REGTESTS: ssl: test simple case of crt-store
- MINOR: ssl: rename ckchs_load_cert_file to new_ckch_store_load_files_path
- MINOR: ssl/crtlist: alloc ssl_conf only when a valid keyword is found
- BUG/MEDIUM: stick-tables: fix the task's next expiration date
- CLEANUP: stick-tables: always respect the to_batch limit when trashing
- BUG/MEDIUM: peers/trace: fix crash when listing event types
- BUG/MAJOR: stick-tables: fix race with peers in entry expiration
- DEBUG: pool: improve decoding of corrupted pools
- REORG: pool: move the area dump with symbol resolution to tools.c
- DEBUG: pools: report the data around the offending area in case of mismatch
- MINOR: listener/protocol: add proto name in alerts
- MINOR: proto_quic: add proto name in alert
- BUG/MINOR: lru: fix the standalone test case for invalid revision
- DOC: management: fix typos
- CI: revert kernel addr randomization introduced in 3a0fc864
- MINOR: ring: clarify the usage of ring_size() and add ring_allocated_size()
- BUG/MAJOR: ring: use the correct size to reallocate startup_logs
- MINOR: ring: always check that the old ring fits in the new one in ring_dup()
- CLEANUP: ssl: remove dead code in cfg_parse_crtstore()
- MINOR: ssl: supports crt-base in crt-store
- MINOR: ssl: 'key-base' allows to load a 'key' from a specific path
- MINOR: net_helper: Add support for floats/doubles.
- BUG/MEDIUM: grpc: Fix several unaligned 32/64 bits accesses
- MINOR: peers: Split resync process function to separate running/stopping states
- MINOR: peers: Add 2 peer flags about the peer learn status
- MINOR: peers: Add flags to report the peer state to the resync task
- MINOR: peers: sligthly adapt part processing the stopping signal
- MINOR: peers: Add functions to commit peer changes from the resync task
- BUG/MINOR: peers: Report a resync was explicitly requested from a thread-safe manner
- BUG/MAJOR: peers: Update peers section state from a thread-safe manner
- MEDIUM: peers: Only lock one peer at a time in the sync process function
- MINOR: peer: Restore previous peer flags value to ease debugging
- BUG/MEDIUM: stconn: Don't forward channel data if input data must be filtered
- BUILD: cache: fix a build warning with gcc < 7
- BUILD: xxhash: silence a build warning on Solaris + gcc-5.5
- CI: reduce ASAN log redirection umbrella size
- CLEANUP: assorted typo fixes in the code and comments
- BUG/MEDIUM: evports: do not clear returned events list on signal
- MEDIUM: evports: permit to report multiple events at once
- MEDIUM: ssl: support aliases in crt-store
- BUG/MINOR: ssl: check on forbidden character on wrong value
- BUG/MINOR: ssl: fix crt-store load parsing
- BUG/MEDIUM: applet: Fix applet API to put input data in a buffer
- BUG/MEDIUM: spoe: Always retry when an applet fails to send a frame
- BUG/MEDIUM: peers: Fix exit condition when max-updates-at-once is reached
- BUILD: linuxcap: Properly declare prepare_caps_from_permitted_set()
- BUG/MEDIUM: peers: fix localpeer regression with 'bind+server' config style
- MINOR: peers: stop relying on srv->addr to find peer port
- MEDIUM: ssl: support a named crt-store section
- MINOR: stats: remove implicit static trash_chunk usage
- REORG: stats: extract HTML related functions
- REORG: stats: extract JSON related functions
- MEDIUM: ssl: crt-base and key-base local keywords for crt-store
- MINOR: stats: Get the right prototype for stats_dump_html_end().
- MAJOR: ssl: use the msg callback mecanism for backend connections
- MINOR: ssl: implement keylog fetches for backend connections
- BUG/MINOR: stconn: Fix sc_mux_strm() return value
- MINOR: mux-pt: Test conn flags instead of sedesc ones to perform a full close
- MINOR: stconn/connection: Move shut modes at the SE descriptor level
- MINOR: stconn: Rewrite shutdown functions to simplify the switch statements
- MEDIUM: stconn: Use only one SC function to shut connection endpoints
- MEDIUM: stconn: Explicitly pass shut modes to shut applet endpoints
- MEDIUM: stconn: Use one function to shut connection and applet endpoints
- MEDIUM: muxes: Use one callback function to shut a mux stream
- BUG/MINOR: sock: handle a weird condition with connect()
- BUG/MINOR: fd: my_closefrom() on Linux could skip contiguous series of sockets
- BUG/MEDIUM: peers: Don't set PEERS_F_RESYNC_PROCESS flag on a peer
- BUG/MEDIUM: peers: Fix state transitions of a peer
- MINOR: init: use RLIMIT_DATA instead of RLIMIT_AS
- CI: modernize macos matrix
Limiting the total allocatable process memory (VSZ) by setting the
RLIMIT_AS limit is no longer an effective way to restrict memory
consumption at run time.
We can see from the process memory map below that there are many holes
within the process VA space, which bumps its VSZ to 1.5G. These holes
exist for many reasons and can be explained first by the full
randomization of the system VA space, which is now usually enabled by
default in Linux kernels. There are always gaps around the process stack
area to trap overflows. Holes before and after shared libraries can be
explained by the fact that, on many architectures, libraries have a
'preferred' address to be loaded at; putting them elsewhere requires
relocation work, and probably some unshared pages. The repetitive 65380K
holes most probably correspond to the header that malloc has to allocate
in front of each claimed memory block. This header is used by malloc to
link allocated chunks together and for its internal bookkeeping.
$ sudo pmap -x -p `pidof haproxy`
127136: ./haproxy -f /home/haproxy/haproxy/haproxy_h2.cfg
Address Kbytes RSS Dirty Mode Mapping
0000555555554000 388 64 0 r---- /home/haproxy/haproxy/haproxy
00005555555b5000 2608 1216 0 r-x-- /home/haproxy/haproxy/haproxy
0000555555841000 916 64 0 r---- /home/haproxy/haproxy/haproxy
0000555555926000 60 60 60 r---- /home/haproxy/haproxy/haproxy
0000555555935000 116 116 116 rw--- /home/haproxy/haproxy/haproxy
0000555555952000 7872 5236 5236 rw--- [ anon ]
00007fff98000000 156 36 36 rw--- [ anon ]
00007fff98027000 65380 0 0 ----- [ anon ]
00007fffa0000000 156 36 36 rw--- [ anon ]
00007fffa0027000 65380 0 0 ----- [ anon ]
00007fffa4000000 156 36 36 rw--- [ anon ]
00007fffa4027000 65380 0 0 ----- [ anon ]
00007fffa8000000 156 36 36 rw--- [ anon ]
00007fffa8027000 65380 0 0 ----- [ anon ]
00007fffac000000 156 36 36 rw--- [ anon ]
00007fffac027000 65380 0 0 ----- [ anon ]
00007fffb0000000 156 36 36 rw--- [ anon ]
00007fffb0027000 65380 0 0 ----- [ anon ]
...
00007ffff7fce000 4 4 0 r-x-- [ anon ]
00007ffff7fcf000 4 4 0 r---- /usr/lib/x86_64-linux-gnu/ld-2.31.so
00007ffff7fd0000 140 140 0 r-x-- /usr/lib/x86_64-linux-gnu/ld-2.31.so
...
00007ffff7ffe000 4 4 4 rw--- [ anon ]
00007ffffffde000 132 20 20 rw--- [ stack ]
ffffffffff600000 4 0 0 --x-- [ anon ]
---------------- ------- ------- -------
total kB 1499288 75504 72760
This inflated VSZ makes it impossible to start an haproxy process with a
200M memory limit, set at its initialization stage via RLIMIT_AS. We
usually get such cryptic output on stderr in this case:
$ haproxy -m 200 -f haproxy_quic.cfg
(null)(null)(null)(null)(null)(null)
At the same time the process RSS (the memory really used) is only 75.5M.
So, to make process memory accounting more realistic, let's base the
memory limit set by the -m option on the RSS measurement and use
RLIMIT_DATA instead of RLIMIT_AS.
RLIMIT_AS was used before because earlier versions of haproxy always
allocated memory buffers for new connections, but data was not written
there immediately. So these buffers were not instantly counted in RSS,
but they were always counted in VSZ. Now we allocate new buffers only
when we are about to write some data to them immediately, so using
RLIMIT_DATA becomes more appropriate.
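A minimal sketch of applying such a limit, assuming a value in megabytes
like the one passed with -m (this is not the actual haproxy code):

  #include <sys/resource.h>
  #include <stdio.h>

  static int apply_mem_limit_mb(rlim_t megs)
  {
          struct rlimit lim;

          /* limit the data segment (heap) instead of the whole address
           * space, so unused VA holes no longer count against the limit */
          lim.rlim_cur = lim.rlim_max = megs * 1048576ULL;
          if (setrlimit(RLIMIT_DATA, &lim) == -1) {
                  perror("setrlimit(RLIMIT_DATA)");
                  return -1;
          }
          return 0;
  }

  int main(void)
  {
          return apply_mem_limit_mb(200) ? 1 : 0;  /* e.g. "haproxy -m 200" */
  }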
The commit 9425aeaffb ("BUG/MAJOR: peers: Update peers section state from a
thread-safe manner") introduced regressions about the state transitions
of a peer.
A peer may be in a connected, accepted or released state. Before, changes
for these states were performed synchronously. Since the commit above,
changes are mainly performed in the sync process task.
The first regression was about the released-then-accepted state
transition, called the renewed state. In reality this state was always
overwritten by the accepted state. After some review, the state was
simply removed, so the cleanup is always performed in the sync process
task before acknowledging the connected or accepted states.
Then, a wakeup of the peer applet was missing from the sync process task
after the ack of the connected or accepted states, blocking the applet.
Finally, when a peer is in the released, connected or accepted state, we
must take care to wait for the sync process task wakeup before trying to
receive or send messages.
This patch must only be backported if the above commit is backported.
The bug was introduced by commit 9425aeaffb ("BUG/MAJOR: peers: Update peers
section state from a thread-safe manner"). A peers flag was set on a peer
by mistake. Just remove it.
This patch must only be backported if the above commit is backported.
We got a detailed report analysis showing that our optimization,
consisting in using poll() to detect already closed FDs within a 1024
range, has an issue in the case where 1024 consecutive FDs are open
(hence do not report POLLNVAL) and none of them has any activity to
report. In this case poll() returns zero updates and we would just skip
the loop that inspects all the FDs to close the valid ones. One visible
effect is that the called programs might occasionally see some FDs being
exposed in the low range of their FD space, possibly making the process
run out of FDs when trying to open a file for example.
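A simplified sketch of the poll()-based logic with the fix applied (not
the actual my_closefrom() code): even when poll() reports zero events,
the range may still contain open FDs, so the close loop must run anyway.

  #include <poll.h>
  #include <unistd.h>

  void closefrom_range_sketch(int start)
  {
          struct pollfd fds[1024];
          int i;

          for (i = 0; i < 1024; i++) {
                  fds[i].fd = start + i;
                  fds[i].events = 0;
                  fds[i].revents = 0;
          }

          /* a zero return only means "no activity", not "no open FDs" */
          if (poll(fds, 1024, 0) < 0)
                  return;

          for (i = 0; i < 1024; i++) {
                  if (!(fds[i].revents & POLLNVAL)) /* POLLNVAL = closed */
                          close(fds[i].fd);
          }
  }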
Note that this is actually a fix for commit b8e602cb1b ("BUG/MINOR: fd:
make sure my_closefrom() doesn't miss some FDs"), which already faced a
more common form of this problem (an incomplete but non-empty set of FDs
reported).
This can be backported up to 2.0.
As reported in GitHub issue #2491, there's a very strange situation where
epoll_wait() appears to report EPOLLERR only (and not IN/OUT/HUP etc. as
normally happens with EPOLLERR), and when connect() is called again to
check the state of the ongoing connection, it returns EALREADY, basically
saying "no news, please wait". This obviously triggers a wakeup loop. For
now it has remained impossible to reproduce this issue outside of the
reporter's environment, but it is definitely a situation that is
impossible to get out of.
The workaround here is to address the lowest-level cause we can act on,
which is to avoid returning to wait if EPOLLERR was returned. Indeed, in
this case we know it will loop, so we must definitely take this one into
account. We only do that after connect() asks us to wait, so that a
properly established connection with a queued error at the end of an
exchange will not be diverted and will be handled as usual.
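A sketch of the principle only (the helper below is illustrative, not the
haproxy connect-check code):

  #include <errno.h>

  /* Decide what to do after retrying connect() on a pending connection:
   * returns 0 if established, 1 if we should wait for the poller, -1 on
   * error. If the poller already reported an error on the FD, waiting
   * again would just loop on EPOLLERR, so report a definite error. */
  int connect_check_sketch(int connect_ret, int poller_reported_error)
  {
          if (connect_ret == 0)
                  return 0;
          if (errno == EALREADY || errno == EINPROGRESS)
                  return poller_reported_error ? -1 : 1;
          return -1;
  }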
This should be backported to approximately all versions, at least as far
as 2.4 according to the reporter who observed it there.
Thanks to @donnyxray for their useful captures isolating the problem.
The mux-ops .shutr and .shutw callback functions are merged into a unique
function, called .shut. The shutdown mode is still passed as an argument,
and muxes are responsible for testing it. Concretely, the .shut() function
of each mux is now the content of the old .shutw() followed by the
content of the old .shutr().
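A hedged sketch of the resulting shape (the types and enum values are
illustrative, not haproxy's real mux API):

  enum shut_mode_sketch {
          SHUT_RD_SKETCH = 1 << 0,
          SHUT_WR_SKETCH = 1 << 1,
  };

  struct mux_stream_sketch;

  struct mux_ops_sketch {
          /* replaces the former .shutr()/.shutw() pair */
          void (*shut)(struct mux_stream_sketch *s, enum shut_mode_sketch mode);
  };

  void mux_pt_shut_sketch(struct mux_stream_sketch *s,
                          enum shut_mode_sketch mode)
  {
          (void)s;
          if (mode & SHUT_WR_SKETCH) {
                  /* body of the former .shutw() */
          }
          if (mode & SHUT_RD_SKETCH) {
                  /* body of the former .shutr() */
          }
  }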
The se_shutdown() function is now used to perform a shutdown on a
connection endpoint and on an applet endpoint. The same function is used
for both. The sc_conn_shut() function was removed and the appctx_shut()
function was updated to only deal with the applet stuff.
It is the same as the previous patch, but for applets. Here there is
already only one function. But with this patch, the appctx_shut() function
was modified to explicitly get the shutdown mode as a parameter. In
addition, appctx_shutw() was removed.
The SC API to perform shutdowns on connection endpoints was unified to have
only one function, sc_conn_shut(), with read/write shut modes passed
explicitly. It means sc_conn_shutr() and sc_conn_shutw() were removed. The
next step is to do the same at the mux level.
CO_SHR_* and CO_SHW_* modes are in fact used by the stream-connectors to
instruct the muxes how streams must be shut down. It is then the mux's
responsibility to decide if this must be propagated to the connection
layer or not. And in this case, the modes above are only tested to pass a
boolean (clean or not).
So, it is not consistent to still use connection-related modes for
information set at an upper layer and never used by the connection layer
itself.
These modes are thus moved to the sedesc level and merged into a single
enum. The idea is to add more modes, not necessarily mutually exclusive,
to pass more info to the muxes. For now, it is a one-for-one renaming.
In the .shutr and .shutw callback functions, we must rely on the
connection flags (CO_FL_SOCK_RD_SH/WR_SH) to decide to fully close the
connection, instead of using sedesc flags. In the end, for the PT
multiplexer, it is equivalent. But it is more logical and consistent this
way.
Since the beginning, this function returns a pointer to an appctx while
it should be a void pointer. It is the caller's responsibility to cast it
to the right type, the corresponding mux stream in this case.
However, it is not a big deal because this function is unused for now. Only
the unsafe one is used.
This patch must be backported as far as 2.6.
This patch implements the backend side of the keylog fetches.
The code was ready but needed the SSL message callbacks.
This could be used like this:
log-format "CLIENT_EARLY_TRAFFIC_SECRET %[ssl_bc_client_random,hex] %[ssl_bc_client_early_traffic_secret]\n
CLIENT_HANDSHAKE_TRAFFIC_SECRET %[ssl_bc_client_random,hex] %[ssl_bc_client_handshake_traffic_secret]\n
SERVER_HANDSHAKE_TRAFFIC_SECRET %[ssl_bc_client_random,hex] %[ssl_bc_server_handshake_traffic_secret]\n
CLIENT_TRAFFIC_SECRET_0 %[ssl_bc_client_random,hex] %[ssl_bc_client_traffic_secret_0]\n
SERVER_TRAFFIC_SECRET_0 %[ssl_bc_client_random,hex] %[ssl_bc_server_traffic_secret_0]\n
EXPORTER_SECRET %[ssl_bc_client_random,hex] %[ssl_bc_exporter_secret]\n
EARLY_EXPORTER_SECRET %[ssl_bc_client_random,hex] %[ssl_bc_early_exporter_secret]"
Backend SSL connections never used ssl_sock_msg_callbacks(), which
prevented the use of keylog on the server side.
The impact should be minimal, though it adds a major callback system for
protocol analysis, the same one used on frontend connections.
https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_msg_callback.html
The patch adds a call to SSL_CTX_set_msg_callback() in
ssl_sock_prepare_srv_ssl_ctx(), the same way it's done for bind lines in
ssl_sock_prepare_ctx().
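A hedged sketch of where the call lands (the surrounding code is
illustrative; SSL_CTX_set_msg_callback() is the documented OpenSSL API
and ssl_sock_msg_callbacks() is the haproxy callback mentioned above):

  #include <openssl/ssl.h>

  /* message callback driving protocol analysis (keylog relies on it) */
  static void msg_cb_sketch(int write_p, int version, int content_type,
                            const void *buf, size_t len, SSL *ssl, void *arg)
  {
          (void)write_p; (void)version; (void)content_type;
          (void)buf; (void)len; (void)ssl; (void)arg;
  }

  void prepare_srv_ssl_ctx_sketch(SSL_CTX *ctx)
  {
          /* same call as already done for "bind" lines in
           * ssl_sock_prepare_ctx() */
          SSL_CTX_set_msg_callback(ctx, msg_cb_sketch);
  }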
When the stats code was reorganized and the prototype of
stats_dump_html_end() was moved to its own header, it missed the function
arguments. Fix that.
This should fix issue #2540.
Add support for the crt-base and key-base local keywords for the
crt-store.
current_crtbase and current_keybase are filled with a copy of the global
keyword argument when a crt-store is declared, and updated with a new
path when the keywords are used in the crt-store section.
The ckch_conf_kws[] array was updated with &current_crtbase and
&current_keybase instead of the global_ssl ones so the parser can use
them.
These keywords must be used before any "load" line in a crt-store
section.
Example:
crt-store web
crt-base /etc/ssl/certs/
key-base /etc/ssl/private/
load crt "site3.crt" alias "site3"
load crt "site4.crt" key "site4.key"
frontend in2
bind *:443 ssl crt "@web/site3" crt "@web/site4.crt"
Extract the functions related to the HTML stats webpage from stats.c into
a new module named stats-html. This reduces stats.c to roughly half of
its original size.
A static variable, trash_chunk, was used as an implicit buffer in most
stats output functions. It was a one-line buffer used as temporary
storage before emitting to the final applet or CLI buffer.
Replace it with a buffer defined in the show_stat_ctx structure. This
allows it to be retrieved in most stats output functions. An additional
parameter was added for the functions where the context was not already
used. This renders the code cleaner and will allow stats.c to be split
into several source files.
As a result of the new member in show_stat_ctx, the per-command context
max size has increased. This forces an increase of APPLET_MAX_SVCCTX to
ensure the pool size is big enough. Increase it to 128 bytes, which
includes some extra room for the future.
This patch introduces the named crt-store section. A named crt-store
allows a scope to be added to the crt name.
For example, a crt named "foo.crt" in a crt-store named "web" will
result in a certificate called "@web/foo.crt".
Now that peers entirely rely on peer->srv for connection settings, and
that this was confirmed to work properly thanks to the previous commit,
let's finish what we started in f6ae258 ("MINOR: peers: rely on srv->addr
and remove peer->addr") and stop using srv->addr to find out the peer's
port; instead, rely on srv->svc_port as is already done for other proxy
types.
A dumb mistake was made in f6ae25858 ("MINOR: peers: rely on srv->addr
and remove peer->addr"). I completely overlooked the part where the bind
address settings are used as implicit server address settings when the
peers are declared using the new bind+server config style (which is the
new recommended method to declare peers, as it follows the same logic as
the one used in other proxy sections).
As such, the peers synchronization fails to work between the previous and
the new process (localpeer mechanism) upon reload when declaring peers
this way:
global
localpeer local
peers mypeers
bind 127.0.0.1:10001
server local
And one has to use the 'old' config style to make it work:
global
localpeer local
peers mypeers
peer local 127.0.0.1:10001
--
To fix the issue, let's explicitly set the server's addr:port according
to the bind address settings (only the first listener is considered) when
the local peer was declared using the 'bind+server' method.
No backport needed.
The expected arguments were not specified in the
prepare_caps_from_permitted_set() function declaration. It is an issue
for some compilers, for instance clang. And in any case, such a
declaration is unexpected and deprecated.
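A hedged illustration of the kind of fix involved (the argument list
below is made up for the example, it is not the real signature):

  /* "int f();" declares a function with unspecified arguments, not a
   * prototype, which clang flags as deprecated; spelling out the
   * expected arguments turns it into a real prototype. */
  int prepare_caps_from_permitted_set(int from_uid, int to_uid,
                                      const char *program_name);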
No backport needed, except if f0b6436f57 ("MEDIUM: capabilities: check
process capabilities sets") is backported.
When a peer applet is pushing updates, we limit the number of updates
sent at once via a global parameter so as not to spend too much time in
the applet. On interrupt, we requested more room in order to be woken up
quickly. However, this is only valid if something was pushed into the
buffer. Otherwise, with an empty buffer, if the stream itself is not
woken up, the applet also remains blocked, because there is no send
activity on the other side to unblock it.
In this case, instead of requesting more room, it is sufficient to state
that the applet has more data to send.
This patch must be backported as far as 2.6.
This bug is related to the previous one ("BUG/MEDIUM: spoe: Always retry
when an applet fails to send a frame"). The applet_putblk() function
returns -1 on error, and this should always be interpreted as a lack of
room in the buffer. However, in the SPOE, this was processed as an I/O
error.
This patch must be backported as far as 2.8.
applet_putblk() and friends were added to simplify applets. In 2.8, a fix
was pushed to handle all errors as a room error, because the vast
majority of applets didn't expect other kinds of errors. The API was
changed with the commit 389b7d1f7b ("BUG/MEDIUM: applet: Fix API for
function to push new data in channels buffer").
Unfortunately, and for an unknown reason, the fix totally failed. The
checks on channel functions were just wrong and inconsistent. The
applet_putblk() function is especially affected because the error is
returned but no flag is set on the SC to request more room. Because of
this bug, applets relying on it may be blocked, waiting for more room,
and never woken up.
It is an issue for the peer and spoe applets.
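A sketch of the expected contract (the structure and flag below are
illustrative): a failed put must both report the error and mark the SC as
waiting for room, so that a later send activity wakes the applet again.

  #include <string.h>

  #define NEED_ROOM_SKETCH  0x0001u       /* stand-in flag */

  struct sc_sketch {
          unsigned int flags;
          char buf[256];
          size_t used;
  };

  int applet_putblk_sketch(struct sc_sketch *sc, const void *blk, size_t len)
  {
          if (len > sizeof(sc->buf) - sc->used) {
                  /* not enough room: the bug was returning -1 *without*
                   * setting this flag, leaving the applet blocked forever */
                  sc->flags |= NEED_ROOM_SKETCH;
                  return -1;
          }
          memcpy(sc->buf + sc->used, blk, len);
          sc->used += len;
          return (int)len;
  }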
This patch must be backported as far as 2.8.
The crt-store load line parser relies on offsets of members of the
ckch_conf struct. However, the new "alias" keyword has an offset of -1
because it does not need to be used. The plan was to handle it that way
in the parser, but this wasn't supported yet. So -1 was still used in an
offset computation which was not used, but ASAN could see the problem.
This patch fixes the issue by using a signed type for the offset value,
so any negative value will be skipped. It also introduces a
PARSE_TYPE_NONE for the parser.
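A hedged sketch of the approach (the structures are simplified): keywords
of type PARSE_TYPE_NONE, or with a negative offset, never take part in
the pointer arithmetic.

  #include <stddef.h>
  #include <sys/types.h>

  enum parse_type_sketch { PARSE_TYPE_STR, PARSE_TYPE_NONE };

  struct ckch_kw_sketch {
          const char *name;
          enum parse_type_sketch type;
          ssize_t offset;               /* signed: -1 means "no member" */
  };

  void *member_ptr_sketch(void *conf, const struct ckch_kw_sketch *kw)
  {
          if (kw->type == PARSE_TYPE_NONE || kw->offset < 0)
                  return NULL;          /* "alias": nothing to fill here */
          return (char *)conf + kw->offset;
  }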
No backport needed.
The crt-store load line now allows an alias to be set. This alias is used
as the key in the ckch_tree instead of the certificate. This way, an
alias can be referenced in the configuration with the '@/' prefix.
This can only be defined with a crt-store.
Since the beginning in 2.0 the nevlist parameter was set to 1 before
calling port_getn(), which means that a single FD event will be reported
per polling loop. This is extremely inefficient, and all the code was
designed to use global.tune.maxpollevents. It looks like it's a leftover
of a temporary debugging change. No apparent issues were found by setting
it to a higher value, so better do that.
That code is not much used nowadays with Solaris disappearing from the
landscape, so even if this definitely was a bug, it's preferable not to
backport that fix as it could uncover other subtle bugs that were never
raised yet.
Since 2.0 with commit 0ba4f483d2 ("MAJOR: polling: add event ports
support (Solaris)"), the polling system on Solaris suffers from a
signal handling problem. It turns out that this API is very bizarre,
as reported events are automatically unregistered and their counter
is updated in the same variable that was used to pass the count on
input, making it difficult to handle certain error codes (how should
one handle ENOSYS for example?). And to complete everything, the API
is able to return both EINTR and an event if a signal is reported.
The code tries to deal with certain such cases (e.g. ETIME for timeout
can also report an event), otherwise it defaults to clearing the
event counter upon error. This has the effect that EINTR clears the
list of events, which are also automatically cleared from the set by
the system.
This is visible when using external checks where the SIGCHLD of the
leaving child causes a wakeup that ruins the event counter and causes
endless loops, apparently due to the queued inter-thread byte in the
pipe used to wake threads up that never gets removed in this case.
Note that extcheck would also deserve deeper investigation because it
can immediately re-trigger a check in such a case, which is not normal.
Removing the wiping of the nevlist variable fixes the problem.
This can be backported to all versions since it affects 2.0.