The documentation about -q seems wrong: it does not merely stop outputting
messages after startup, it disables all messages. It was always quiet, via
the stdio_quiet() function.
Must be backported to all stable versions.
Move the MODE_QUIET and MODE_VERBOSE tests into print_message() so we
always output to the startup-logs even with MODE_QUIET.
ha_warning(), ha_alert() and ha_notice() do not check MODE_QUIET and
MODE_VERBOSE anymore; the check is done just before the fprintf() in
print_message().
ha_alert(), ha_warning() and ha_notice() shouldn't check MODE_STARTING
for log emission. Let's remove the check.
This shouldn't change much since the stdio_quiet() function mutes the
output in main().
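Schematically, the new gating in print_message() looks like this (a
simplified sketch, not the exact code):

    /* the message is always appended to the startup logs first; the
     * quiet/verbose modes are only tested around the stderr output.
     */
    if (!(global.mode & MODE_QUIET) || (global.mode & (MODE_VERBOSE | MODE_STARTING)))
        fprintf(stderr, "%s", msg);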
The commit 08d7169f4 ("MINOR: stconn: Don't queue stream task in past in
sc_notify()") tried to fix issues with expiration dates set in the past for
the stream in sc_notify(). However, some cases remain where the stream
expiration date may already be expired before recomputing it. This happens
when an event is reported by the mux exactly when a timeout is triggered. In
this case, depending on the scheduling, the SC may be woken up before the
stream. In these cases, we hit the BUG_ON() preventing queuing in the past.
So, it remains unexpected to queue a task in the past. The BUG_ON() is
correct at this place. We must just avoid recomputing the stream expiration
date if it is already expired. At worst, the stream will be woken up for
nothing. But it is not really a big deal because it will only happen on
timeouts from time to time. It is so sporadic that we can ignore it from a
performance point of view.
This patch must be backported to 2.8. Be careful to remove the BUG_ON() in
2.8.
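The resulting logic in sc_notify() may be sketched like this (simplified;
the real code recomputes the date from both stream-connectors' timeouts):

    /* do not touch an already expired date, otherwise we would queue
     * in the past; at worst the stream wakes up for nothing, which is
     * rare enough to be harmless.
     */
    if (!tick_is_expired(task->expire, now_ms)) {
        task->expire = /* recomputed expiration date */ new_expire;
        task_queue(task);
    }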
This bug could be reproduced with the -dMfail option and was detected by
libasan.
During the TLS secrets allocation, when an allocation fails,
quic_tls_ctx_secs_free() is called. It resets the already initialized
secrets. Some secrets were detected as initialized when they were not, or
with an uninitialized length, which led to big "memset(0)" calls detected by
libasan.
Ensure that all the secrets are really initialized with correct lengths.
No need to be backported.
If a TX packet cannot be allocated (by qc_build_pkt()), as it may have to be
coalesced with another one, this leaves prepared but unsent data in the TX
buffer. Then haproxy crashes upon a BUG_ON() triggered by the next call to
qc_txb_release(). This may happen only during handshakes.
To fix this, qc_build_pkt() returns a new -3 error to detect such allocation
failures, which is from now on followed by a call to qc_purge_txbuf() to
send the prepared TX data and purge the TX buffer.
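The caller side may be sketched as follows (simplified; <ret> is
qc_build_pkt()'s return value):

    if (ret == -3) {
        /* allocation failure: flush the already prepared and coalesced
         * data so that qc_txb_release() does not find leftover data in
         * the TX buffer.
         */
        qc_purge_txbuf(qc, buf);
        goto leave;
    }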
Must be backported as far as 2.6.
This may happen during handshakes when Handshake packets cannot be coalesced
with a first Initial packet because of TX frame allocation failures (from
qc_build_frms()). This leads to too short (not padded) Initial packets being
sent. This is detected by a BUG_ON() in qc_send_ppkts().
To avoid this, a Handshake packet without the ack-eliciting frames which
should have been built by qc_build_frms() is built.
Must be backported as far as 2.6.
This may happen upon ack range allocation failures (from
quic_update_ack_ranges_list()). This can lead to empty trees of ack ranges
being used to build ACK frames, which is not good at all. Furthermore this is
detected by a BUG_ON() (in qc_do_build_pkt()).
To avoid this, simply update the acknowledgement state of the connection only
if quic_update_ack_ranges_list() succeeds, as it fails only in case of memory
allocation failures.
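In other words (a sketch, assuming the function returns 0 on allocation
failure):

    if (!quic_update_ack_ranges_list(qc, arngs, &ar))
        goto err; /* allocation failure: leave the ack state untouched */
    /* only now update the acknowledgement state of the connection */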
Must be backported as far as 2.6.
If the Handshake encryption level could not be allocated, this could lead
to Initial packets being sent because no Handshake CRYPTO frames were
generated. Furthermore, in such an allocation failure case, the connection
should be closed as soon as possible. This is done by making
ha_quic_set_encryption_secrets() return 0 upon an encryption level
allocation failure.
Also fix a typo in the trace in relation to this allocation failure.
No need to be backported.
When the idle timer expired while the mux was still present, this task was
not freed and was even requeued with a timer in the past.
Fix this issue by calling task_destroy() in this case. As the task is freed,
its handler must return NULL, setting the local <t> variable to NULL in all
cases.
Also ensure that this timer task is not armed again after having been
released, with a <return> statement in qc_idle_timer_do_rearm() when this is
the case.
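Schematically (a simplified sketch of the handler's new path):

    /* the mux is still alive so the connection cannot be released from
     * here, but the task must neither survive nor be requeued with a
     * date in the past.
     */
    task_destroy(t);
    t = NULL;   /* the handler returns <t>, hence NULL, in all cases */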
Must be backported as far as 2.6.
There was no reason not to release the TLS/SSL QUIC connection context as
soon as possible from quic_conn_release(), before allocating a "closing
connection" connection (quic_cc_conn struct).
This patch sets the handshake task in heavy task mode when receiving
out-of-order CRYPTO data which results in in-order buffered CRYPTO data. This
is done thanks to a non-contiguous buffer, from qc_handle_crypto_frm() after
having potentially buffered CRYPTO data in this buffer.
qc_treat_rx_crypto_frms() is no longer called from qc_treat_rx_pkts();
instead, this is where the task is set in heavy task mode. Consequently, it
is the job of qc_ssl_provide_all_quic_data() to directly call
qc_treat_rx_crypto_frms() to provide the in-order buffered CRYPTO data to the
TLS stack. As this function releases the non-contiguous buffer for the CRYPTO
data when possible, there is no need to do that from
qc_treat_rx_crypto_frms() anymore.
Add a new pool for the CRYPTO data frames received in order.
Add an ->rx.crypto_frms list to each encryption level to store such frames
when they are received in order from qc_handle_crypto_frm().
Also set the handshake task (qc_conn_io_cb()) in heavy task mode from this
function after having received such frames. When this task detects that it
is set in heavy mode, it calls the newly implemented
qc_ssl_provide_all_quic_data() function to provide the CRYPTO data to the
TLS stack.
Modify quic_conn_enc_level_uninit() to release these CRYPTO frames when
releasing the encryption level they are associated with.
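Schematically, the switch to heavy task mode looks like this (a simplified
sketch using the regular scheduler primitives):

    /* in qc_handle_crypto_frm(), after in-order CRYPTO data were
     * buffered: make the handshake tasklet run in heavy mode so that
     * qc_conn_io_cb() feeds the TLS stack via
     * qc_ssl_provide_all_quic_data().
     */
    HA_ATOMIC_OR(&qc->wait_event.tasklet->state, TASK_HEAVY);
    tasklet_wakeup(qc->wait_event.tasklet);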
The IOBUF_FL_EOI iobuf flag is now set by the producer to notify the
consumer that the end of input was reached. Thanks to this flag, we can
remove the ugly hack in h2_done_ff() that tested the opposite SE flags.
Of course, for now, it works and it is good enough. But we must keep in mind
that EOI is always forwarded from the producer side to the consumer side in
this case. If this changes, a new CO_RFL_ flag will have to be added to
instruct the producer whether it can forward EOI or not.
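Schematically (a simplified sketch, <sd> being the producer's sedesc):

    /* producer side: propagate the end of input into the iobuf */
    if (se_fl_test(sd, SE_FL_EOI))
        sd->iobuf.flags |= IOBUF_FL_EOI;

    /* consumer side, e.g. h2_done_ff(): no need to look at the
     * opposite SE flags anymore.
     */
    if (sd->iobuf.flags & IOBUF_FL_EOI) {
        /* complete the message (e.g. emit a frame with the ES flag) */
    }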
In the mux-to-mux data forwarding, we now try, as far as possible, to send
at least a full buffer. Of course, if the consumer side is congested or if
nothing more can be received, we leave. But the idea is to retry to
fast-forward data if less than a buffer was forwarded. It is only performed
for buffer fast-forwarding, not splicing.
The idea behind this patch is to optimise the forwarding when a first
forward was performed to complete a buffer with some existing data. In this
case, the amount of data forwarded is artificially limited because we are
using a non-empty buffer. But without this limitation, it is highly probable
that a full buffer could have been sent. And indeed, with an H2 client, a
significant improvement was observed during our tests.
To do so, the .done_fastfwd() callback function must be able to deal with
interim forwards. Especially for the H2 mux, the H2_SF_NOTIFIED flag must be
removed from the H2S on the last call only. Otherwise, the H2 stream can
block itself because it is in the send_list. The IOBUF_FL_INTERIM_FF iobuf
flag is used to notify the consumer it is not the last call. This flag is
then removed on the last call.
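For the H2 mux this roughly gives (sketch):

    /* h2_done_ff(): on interim calls, keep H2_SF_NOTIFIED so the
     * stream, still present in the send_list, does not block itself;
     * only drop it on the final call.
     */
    if (!(h2s->sd->iobuf.flags & IOBUF_FL_INTERIM_FF))
        h2s->flags &= ~H2_SF_NOTIFIED;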
In order to limit inter-thread contention on the global pool, in 2.9-dev3
with commit 7bf829ace ("MAJOR: pools: move the shared pool's free_list
over multiple buckets"), it was decided that if the selected bucket had
an empty free list, we would simply give up and fall back to the OS
allocator.
But this causes allocations to be made from the OS for certain threads, only
to be released to overloaded pools that are sent back to the OS. One visible
effect is that sending a lot of traffic using h2load with 100 parallel
streams over 100 connections causes 5-10k buffers to be allocated, and then
reducing the load to only 10 connections doesn't make these allocations go
down, just because some buckets are no longer visited.
Tests show that giving a second chance to pick another bucket in this case
is sufficient to visit all other buckets and recycle their pending objects.
Now "show pools", which started at 10k buffers with 100 connections, goes
down to about 150 with 1 connection and 100 streams in a fraction of a
second.
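The retry may be sketched like this (bucket selection and field names
simplified):

    /* if the selected bucket's free list is empty, give a second
     * chance to the next one before falling back to the OS allocator.
     */
    if (!pool->buckets[bucket].free_list) {
        bucket = (bucket + 1) % CONFIG_HAP_POOL_BUCKETS;
        if (!pool->buckets[bucket].free_list)
            return NULL; /* fall back to the OS allocator */
    }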
No backport is needed, as the issue is only in 2.9.
Since 2.9-dev3 with commit 7bf829ace ("MAJOR: pools: move the shared
pool's free_list over multiple buckets"), the global pool supports
multiple heads to reduce inter-thread contention. However, when
grabbing a freelist head fails because another thread is already
picking from it, we just skip to the next one and try again.
Unfortunately, it still maintains a bit of contention between thread pairs
when for some reason only a few threads are used. This may happen for
example when running on a 4- or 8-thread system and the two most active
threads end up on adjacent buckets.
A better and much simpler solution consists in visiting a random bucket
instead of the current one. Tests show that the CPU usage spent in
pool_refill_local_from_shared() drops at low numbers of connections (hence
threads).
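Schematically (statistical_prng_range() being the thread-local PRNG helper):

    /* pick a random bucket instead of the calling thread's own one, so
     * that all buckets get visited even with few active threads.
     */
    bucket = statistical_prng_range(CONFIG_HAP_POOL_BUCKETS);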
No backport is needed, as the issue is only in 2.9.
In 2.6-dev1, the method used to decide how many pool entries could be
released at once was revisited to support releases in batches. This was
done with commits 91a8e28f9 ("MINOR: pool: add a function to estimate
how many may be released at once") and 361e31e3f ("MEDIUM: pool: compute
the number of evictable entries once per pool").
The first commit takes care of the possible inconsistency between the
moment the allocated count and the used count are read, but unfortunately
fixed it the wrong way, by adjusting "used" to match "alloc" whenever it
was lower (i.e. almost always). This results in a nasty case which is that
as soon as the allocated value becomes higher than the estimated count of
needed entries, we end up returning pool->minavail, which causes very
small batches to be released, starting from commit 1513c5479 ("MEDIUM:
pools: release cached objects in batches").
The problem was further amplified in 2.9-dev3 with commit 7bf829ace
("MAJOR: pools: move the shared pool's free_list over multiple buckets")
because it now becomes possible for a thread to allocate from one bucket
and release into a few other different ones, causing an accumulation of
entries in that bucket.
The fix is trivial: simply adjust the alloc counter if the used one is
higher, before performing the computations.
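The corrected adjustment may be sketched as (field names simplified):

    alloc = HA_ATOMIC_LOAD(&pool->buckets[bucket].allocated);
    used  = HA_ATOMIC_LOAD(&pool->buckets[bucket].used);

    /* raise <alloc> to <used> instead of lowering <used> to <alloc>,
     * so the estimate of releasable entries is no longer collapsed to
     * pool->minavail.
     */
    if (used > alloc)
        alloc = used;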
This must be backported to 2.6.
The function returning the excess of events over the current period for a
target frequency (the overshoot) has a flaw if the inactivity period is too
long. In this case, the result may overflow. Instead of being negative, a
very high positive value is returned.
This function is used by the bandwidth limitation filter. It means that
after a long inactivity period, a huge burst may be detected while it should
not be.
In fact, the problem arises from the moment we're past the current period.
In this case, we should not report any overshoot and just get the number of
remaining events as usual.
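The guard may be sketched like this (names inspired by the freq_ctr code,
simplified):

    int elapsed = now_ms - ctr->curr_tick;

    /* fully past the current period: no overshoot to report, instead
     * of letting the subtraction wrap to a huge positive value.
     */
    if (elapsed < 0 || elapsed >= (int)period)
        return 0;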
This patch should be backported as far as 2.7.
It is now the turn of the H1 mux to be fixed to properly handle http-request
and http-keep-alive timeouts. It is quite surprising, but it has been broken
since 2.2. For idle connections on the client side, the smallest value
between the client timeout and the http-request/http-keep-alive timeout is
used, while the client timeout should only be used if the other ones are not
defined. So, if the client timeout is the smallest value, the keep-alive
timeout is not respected.
It is only an issue for idle client connections. The http-request timeout is
respected from the moment part of the next request has been received.
This patch should fix the issue #2334. It must be backported as far as 2.2.
But be careful during the backports: the H1 mux has evolved a lot since 2.2.
Add a special treatment for the IPV4 and IPV6 cases in the
table_process_entry_per_key() function so that the input string is parsed on
a best-effort basis (STR to pseudo type ADDR): the input format is first
considered over the table type, and then smp_to_stkey() does the type
conversion for us when needed.
This patch heavily depends on:
- "MEDIUM: stktable/cli: simplify entry key handling"
And optionally depends on:
- 72514a44 ("MEDIUM: tools/ip: v4tov6() and v6tov4() rework")
Make use of smp_to_stkey() in table_process_entry_per_key() to simplify key
handling and leverage automatic type conversions from the sample API.
One noticeable side effect is that integer input checks will be relaxed
given that the c_str2int() sample conv is more permissive than the
integrated table_process_entry_per_key() integer parser.
When an ipv4 key is used to filter a CLI command on a stick table
(clear/set/show table ...), the inetaddr_host()+htonl() combination was used
with no error checking.
Instead, we now use inet_pton(), which is what we use for ipv6 addresses
since b7c962b0c0 ("BUG/MINOR: stick-table/cli: Check for invalid ipv6 key").
Doing this allows us to easily check for parsing errors: we're trading off
some parsing efficiency to better catch input errors and ensure we get
similar behavior between ipv4 and ipv6 address handling.
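The ipv4 path now looks roughly like this (a sketch; <key_str> and the error
helper are simplified):

    struct in_addr addr4;

    /* strict parsing, mirroring the ipv6 path: inet_pton() returns 1
     * only for a valid dotted address, unlike the previous
     * inetaddr_host() best-effort conversion.
     */
    if (inet_pton(AF_INET, key_str, &addr4) != 1)
        return cli_err(appctx, "Invalid ipv4 key\n");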
This patch may be backported to all supported versions.
We must take care to release the H1 input buffer when it is emptied during
the fast-forwarding negotiation. Otherwise, it may be kept allocated for a
while, waiting for the next "normal" receive or the H1C release.
No backport needed.
Use the backend connect timeout when a new connection is instantiated for
rhttp. This ensures that if the connect operation fails after a certain
delay, the reverse_connect listener task is woken up. This allows freeing
the current connection and retrying a new connect.
As a consequence of this change, rev_process() may be woken up even if the
connection is not reported with CO_FL_ERROR. This happens if the timeout
fired before any network-reported issue. Connection freeing is adjusted, as
in this case the MUX instance is already allocated. Use the destroy callback
to release the MUX context prior to the connection itself.
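The release path may be sketched as (simplified):

    /* on connect timeout without CO_FL_ERROR, the MUX instance already
     * exists: release it (and the connection with it) through its
     * destroy callback instead of freeing the connection directly.
     */
    if (conn->mux)
        conn->mux->destroy(conn->ctx);
    else
        conn_free(conn);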
This patch is really useful as a side measure for a haproxy bug impacting
connect with SSL for both backend connections and active reverse connect.
This is caused by the delayed MUX allocation. An asynchronous connect error
detected at the socket layer is not notified to the upper layers. Currently,
only the connect timeout allows this failed connection to be released.
The commit d6d4abdc3 ("BUILD: mux-h1: Fix build without kernel splicing
support") introduced a regression. The kernel support for the underlying
XPRT is no longer checked, so it is possible to enable splicing for SSL
connections. This of course leads to a segfault.
This patch restores the test on the xprt rcv_pipe/snd_pipe functions.
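The restored condition is essentially (sketch):

    /* splicing may only be considered when the underlying xprt really
     * implements kernel splicing (e.g. not the SSL xprt).
     */
    if (conn->xprt->rcv_pipe && conn->xprt->snd_pipe) {
        /* kernel splicing is usable on this connection */
    }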
This patch should fix a crash reported by Tristan in #2095
(#issuecomment-1788949014). No backport needed.
QUIC connections are accounted inside the global sslconns counter. As with
QUIC actconn, it suffered from a similar issue if an intermediary allocation
failed inside qc_new_conn().
Fix this similarly by moving the increment operation inside qc_new_conn().
Increment and error paths are now centralized and much easier to validate.
The consequences are similar to the actconn fix: on memory allocation
failure, global sslconns may wrap, this time blocking any future QUIC or SSL
connections on the process.
This must be backported up to 2.6.
Since the following commit, quic_conn instances are accounted into
global actconn and compared against maxconn.
commit 7735cf3854
MEDIUM: quic: count quic_conn instance for maxconn
Increment is always done prior to real allocation to guarantee minimal
resource consumption. Special care is taken to ensure there will always
be one decrement operation for each increment. To help this, decrement
is centralized in quic_conn_release().
This behaves incorrectly in case of an intermediary allocation failure
inside qc_new_conn(). In this case, quic_conn_release() will decrement
actconn. Then, a NULL qc is returned in quic_rx_pkt_retrieve_conn(), which
will also decrement the counter on its own error code path.
To properly fix this, the actconn incrementation has been moved directly
inside qc_new_conn(). It is thus easier to cover every case:
* if an alloc failure occurs before or on pool_head_quic_conn, actconn is
decremented manually at the end of qc_new_conn()
* after this step, actconn will be decremented by quic_conn_release(),
either on intermediary alloc failure or on proper connection release
This bug happens on memory allocation failure, so it should be rare.
However, its impact is not negligible, as if the actconn counter wraps it
will block any future connection allocation for both QUIC and TCP.
One small downside of this change is that a CID is now always allocated
before the quic_conn, even if maxconn will be reached. However, this is
considered of minor importance compared to more robust code.
This must be backported up to 2.6.
Recent fixes have shown that the <lra> and <fsb> uses were not pretty clear.
So let's try to improve the documentation about these values, especially
when <lra> is updated and how to use it.
When an EOS or EOI is detected on the endpoint and when the event is
reported at the SC level, a read activity must be reported. It is not really
a big deal because these flags already inhibit any read timeout. But it is
consistent with the <lra> comment. In addition, no read activity is reported
on abort: it is an up-down event and it is not an event unblocking the
reads, so there is no reason to report a read activity.
This patch must be backported to 2.8.
In sc_need_room(), we compute the maximum room that can be requested to
restart reading, to be sure to be able to unblock the SC, at worst when the
buffer is emptied. Here, the buffer reserve is considered, but it is an
issue.
Counting the reserve can lead to a wicked bug with the H1 multiplexer, when
a small amount of data is found at the end of the HTX buffer. In this case,
to avoid wrapping, the H1 mux requests more room. It is an optimization to
be able to resync the buffer with the consumer side and to be able to
perform zero-copy transfers. However, if this amount of data is smaller than
the reserve and if the consumer is congested, we fall into a loop because
the wrong value is used to request more room. The H1 mux continues to
pretend there is not enough space in the buffer, while the effective
requested value is lower than the free space in the buffer. While the
consumer is congested and does not consume these data, there is no way to
stop the loop.
We can fix the function by removing the buffer reserve from the computation.
But it remains a dangerous decision to apply a max value on room_needed. It
is safer to require that the caller set a correct value. For now, this is
true. But in the end, it is totally unexpected to wait for more room than an
empty buffer can contain.
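The fixed computation may be sketched as (simplified):

    /* clamp on the whole buffer size, without subtracting the reserve;
     * it then becomes the caller's responsibility never to request
     * more room than an empty buffer can contain.
     */
    room_needed = MIN(room_needed, (int)global.tune.bufsize);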
This patch must be backported to 2.8.
A task must never be queued in the past. However, in sc_notify(), the stream
task, if not woken up, is queued. Thanks to previous fixes, the stream task
expiration date should be correct. But to prevent any issue, a BUG_ON() is
added to be sure it never happens. I guess a good idea could be to remove it
or change it to BUG_ON_HOT() for the final release.
When the receive or send expiration date of a stream-connector is retrieved,
we now automatically check whether it may expire. If not, TICK_ETERNITY is
returned.
The expiration dates of the frontend and backend stream-connectors are used
to compute the stream expiration date. This operation is performed at 2
places: at the end of process_stream() and in sc_notify() if the stream is
not woken up.
With this patch, there are no special changes for process_stream() because
it was already handled; it just makes things a little simpler. However, it
fixes sc_notify() by avoiding erroneously computing an expiration date in
the past. This greatly reduces stream wakeups when there is contention on
the consumer side.
The bug was introduced with the commit 8073094bf ("BUG/MEDIUM: stconn:
Always update stream's expiration date after I/O"). It was an error to
unconditionally set the stream expiration date, without testing blocking
conditions on both SCs.
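Schematically, the stream expiration date is now computed from the guarded
getters (a sketch; scf/scb are the front and back stream-connectors):

    /* each getter returns TICK_ETERNITY when its side may not expire,
     * so blocked sides no longer contribute a date in the past.
     */
    t->expire = tick_first(tick_first(sc_ep_rcv_ex(scf), sc_ep_snd_ex(scf)),
                           tick_first(sc_ep_rcv_ex(scb), sc_ep_snd_ex(scb)));
    task_queue(t);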
This patch must be backported to 2.8.
When data are directly forwarded from a mux to the opposite one, we must not
forget to report send activity when data are successfully sent, or to report
a blocked send when data are blocked. It is important because otherwise, if
the transfer is quite long, longer than the client or server timeout, an
error may be triggered because the write timeout is reached.
The H1, H2 and PT muxes are concerned. To fix the issue, the done_fastfwd()
callback now returns the amount of data consumed. This way it is possible to
update/reset the FSB data accordingly.
No backport needed.
Released version 2.9-dev9 with the following main changes :
- DOC: internal: filters: fix reference to entities.pdf
- BUG/MINOR: ssl: load correctly @system-ca when ca-base is define
- MINOR: lua: Add flags to configure logging behaviour
- MINOR: lua: change tune.lua.log.stderr default from 'on' to 'auto'
- BUG/MINOR: backend: fix wrong BUG_ON for avail conn
- BUG/MAJOR: backend: fix idle conn crash under low FD
- MINOR: backend: refactor insertion in avail conns tree
- DEBUG: mux-h2/flags: fix list of h2c flags used by the flags decoder
- BUG/MEDIUM: server/log: "mode log" after server keyword causes crash
- MINOR: connection: add conn_pr_mode_to_proto_mode() helper func
- BUG/MEDIUM: server: "proto" not working for dynamic servers
- MINOR: server: add helper function to detach server from proxy list
- DEBUG: add a tainted flag when ha_panic() is called
- DEBUG: lua: add tainted flags for stuck Lua contexts
- DEBUG: pools: detect that malloc_trim() is in progress
- BUG/MINOR: quic: do not consider idle timeout on CLOSING state
- MINOR: frontend: implement a dedicated actconn increment function
- BUG/MINOR: ssl: use a thread-safe sslconns increment
- MEDIUM: quic: count quic_conn instance for maxconn
- MEDIUM: quic: count quic_conn for global sslconns
- BUG/MINOR: ssl: suboptimal certificate selection with TLSv1.3 and dual ECDSA/RSA
- REGTESTS: ssl: update the filters test for TLSv1.3 and sigalgs
- BUG/MINOR: mux-quic: fix early close if unset client timeout
- BUG/MEDIUM: ssl: segfault when cipher is NULL
- BUG/MINOR: tcpcheck: Report hexstring instead of binary one on check failure
- MEDIUM: systemd: be more verbose about the reload
- MINOR: sample: Add fetcher for getting all cookie names
- BUG/MINOR: proto_reverse_connect: support SNI on active connect
- MINOR: proxy/stktable: add resolve_stick_rule helper function
- BUG/MINOR: stktable: missing free in parse_stick_table()
- BUG/MINOR: cfgparse/stktable: fix error message on stktable_init() failure
- MINOR: stktable: stktable_init() sets err_msg on error
- MINOR: stktable: check if a type should be used as-is
- MEDIUM: stktable/peers: "write-to" local table on peer updates
- CI: github: update wolfSSL to 5.6.4
- DOC: install: update the wolfSSL required version
- MINOR: server: Add parser support for set-proxy-v2-tlv-fmt
- MINOR: connection: Send out generic, user-defined server TLVs
- BUG/MEDIUM: pattern: don't trim pools under lock in pat_ref_purge_range()
- MINOR: mux-h2: always use h2_send() in h2_done_ff(), not h2_process()
- OPTIM: mux-h2: call h2_send() directly from h2_snd_buf()
- BUG/MINOR: server: remove some incorrect free() calls on null elements
In commit 6f4bfed3a ("MINOR: server: Add parser support for
set-proxy-v2-tlv-fmt") a few free() calls were made on the error path to an
element that was detected to be NULL. These don't have any effect; however,
there was one case of use-after-free at the end of srv_settings_cpy() that
was caught by gcc, due to attempting to free the element after freeing its
holder.
No backport is needed.
This allows full buffers to be eliminated very quickly and recycled much
faster, resulting in higher transfer rates and lower memory usage at the
same time. We just wake the tasklet up if it succeeded so that h2_process()
and friends are called to finalize what needs to be.
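Schematically (sketch):

    /* h2_done_ff(): data were forwarded directly; if something could
     * be sent, wake the H2 tasklet up so h2_process() and friends can
     * finalize what remains to be done.
     */
    if (sent)
        tasklet_wakeup(h2c->wait_event.tasklet);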
For regular buffer sizes, the performance level becomes quite close to the
one obtained with the zero-copy mechanism (zero-copy remains much faster
with non-default buffer sizes). The memory savings are huge with the default
buffer size: at 64c * 100 streams on a single thread, we used to forward
4.4 Gbps of traffic using 10400 buffers. After the change, the performance
reaches 5.9 Gbps with only 22-24 buffers, since they are quickly recycled.
That's a saving of 160 MB of RAM.
A concern was an increase in the number of syscalls, but this is not the
case: the numbers remained exactly the same before and after.
Some experiments were made to try to cork data and not send incomplete
buffers, and that always voided these changes. One explanation might be that
keeping a first buffer containing only headers frames is sufficient to
prevent a zero-copy of the data coming in a next snd_buf() call. This still
needs to be studied anyway.
By calling h2_process(), the code would theoretically make it possible
for a synchronous ->wake() call to provoke an indirect call to h2_snd_buf()
while we're in h2_done_ff(), which could be quite bad. The current
conditions do not permit it right now but this could easily break by
accident. Better use h2_send() and wake the task up if needed. Precise
performance tests showed no change.
There's a subtle issue that results from pat_ref_purge_range() trying
to release memory. Since commit 0d93a8186 ("MINOR: pools: work around
possibly slow malloc_trim() during gc") that was backported to 2.3,
trim_all_pools() now protects itself against concurrent malloc() and
free() by isolating itself. The problem is that pat_ref_purge_range()
must be called under a lock, which is precisely what's done in
cli_io_handler_clear_map(). Thus during a clearing of a map, if
another thread tries to access or update an entry in the same map, it
will wait for the ref->lock to be released, and trim_all_pools() will
wait for all threads to be harmless, thus causing a deadlock. Note
that disabling memory trimming cannot work around the problem here
because it's tested only under isolation.
The solution here consists in moving the call to trim_all_pools() to
the caller, out of the lock.
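Schematically (a sketch; lock type and purge arguments simplified):

    /* purge under the lock, but trim only once the lock was released,
     * to avoid the deadlock with threads waiting on ref->lock while
     * trim_all_pools() waits for all threads to be harmless.
     */
    HA_SPIN_LOCK(PATREF_LOCK, &ref->lock);
    done = pat_ref_purge_range(ref, from, to, budget);
    HA_SPIN_UNLOCK(PATREF_LOCK, &ref->lock);
    if (done)
        trim_all_pools();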
This must be backported as far as 2.4.
To follow up on the implementation of the new set-proxy-v2-tlv-fmt keyword
in the server, the connection is updated to use the previously allocated
TLVs. If no value was specified, we send out an empty TLV.
As the feature is fully working with this commit, documentation and a test
for server and default-server are added as well.
This commit introduces a generic server-side parsing of type-value pair
arguments and the allocation of a TLV list via a new keyword called
set-proxy-v2-tlv-fmt.
This allows us to 1) forward any TLV type with the help of fc_pp_tlv, and
2) more generally, send out any TLV type and value via a log-format
expression.
To have this fully working, the connection will need to be updated in a
follow-up commit to actually respect the new server TLV list.
default-server support has also been implemented.
In this patch, we add the possibility to declare on a table definition
("table" in peer section, or "stick-table" in proxy section) that we
want the remote/peer updates on that table to be pushed on a local
haproxy table in addition to the source table.
Consider this example:
|peers mypeers
| peer local 127.0.0.1:3334
| peer clust 127.0.0.1:3333
| table t1.local type string size 10m store server_id,server_key expire 30s
| table t1.clust type string size 10m store server_id,server_key write-to mypeers/t1.local expire 30s
With this setup, we consider that haproxy uses t1.local as a cache/local
table for read and write operations, and that t1.clust is a remote table
containing data processed from t1.local and similar tables from other
haproxy peers in a cluster setup. The t1.clust table will be used to refresh
the local/cache one via the "write-to" statement.
What will happen is that every time haproxy sees entry updates for the
t1.clust table, it will overwrite the t1.local table with fresh data and
update the entry expiration timer. If the t1.local entry doesn't exist yet
(key doesn't exist), it will automatically be created. Note that only types
that cannot be used for arithmetic ops will be handled, and this to prevent
processed values from the remote table from interfering with computations
based on values from the local table (i.e.: to prevent cumulative counters
from growing indefinitely).
"write-to" will only push supported types if they both exist in the source
and the target table. Be careful with server_id and server_key storage
because they are often declared implicitly when referencing a table in
sticking rules but it is required to declare them explicitly for them to
be pushed between a remote and a local table through "write-to" option.
Also note that the "write-to" target table should have the same type as
the source one, and that the key length should be strictly equal,
otherwise haproxy will raise an error due to the tables being
incompatibles. A table that is already being written to cannot be used
as a source table for a "write-to" target.
Thanks to this patch, it will now be possible to use sticking rules in a
peer cluster context by using a local table as a local cache which will be
automatically refreshed by one or multiple remote table(s).
This commit depends on:
- "MINOR: stktable: stktable_init() sets err_msg on error"
- "MINOR: stktable: check if a type should be used as-is"