The HTX functions used to add new HTX blocks to a message have been moved
to the header file so that they can be inlined in calling functions. These
functions are small enough to make inlining worthwhile.
A normalized URI is the internal term used to specify that a URI is stored
using the absolute form (scheme + authority + path). For now, it is only
used for H2 clients. It is the default and recommended format for H2
requests. However, it is unusual for H1 servers to receive such a URI. So
in this case, we only send the path of the absolute URI. This was performed
for H1 servers, but not for FCGI applications. This patch fixes the
difference.
Note that it is not a real bug, because FCGI applications should support
absolute URIs.
Note also that a normalized URI is only detected for H2 clients when a
request is received. There is no such test on the H1 side. This means an
absolute URI received from an H1 client will be sent without modification
to an H1 server or an FCGI application.
To make this possible, a dedicated function has been added to get the H1
URI. This function is called by the H1 and FCGI multiplexers when a request
is sent to a server.
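The idea, as a minimal sketch using the ist API; the function name and the
details are illustrative, not the actual helper added by this patch:

    #include <import/ist.h>

    /* Illustrative only: return the path component of an absolute-form
     * URI ("scheme://authority/path"), or the URI as-is when it is not
     * in absolute form. The real helper may differ.
     */
    static struct ist h1_uri_to_path(const struct ist uri)
    {
            const char *p = uri.ptr;
            const char *end = uri.ptr + uri.len;

            /* skip "scheme://" */
            while (p < end && *p != ':')
                    p++;
            if (end - p < 3 || p[1] != '/' || p[2] != '/')
                    return uri; /* origin form already: send as-is */
            p += 3;
            /* the path starts at the first '/' after the authority */
            while (p < end && *p != '/')
                    p++;
            return (p < end) ? ist2(p, end - p) : ist("/");
    }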
This patch should fix the issue #1232. It must be backported as far as 2.2.
Released version 2.4-dev17 with the following main changes :
    - MINOR: mux-pt/trace: Register a new trace source with its events
- BUG/MINOR: mux-pt: Fix a possible UAF because of traces in mux_pt_io_cb
- CI: travis: Drastically clean up .travis.yml
- CLEANUP: pattern: make all pattern tables read-only
- MINOR: trace: replace the trace() inline function with an equivalent macro
- MINOR: initcall: uniformize the section names between MacOS and other unixes
- CLEANUP: initcall: rename HA_SECTION to HA_INIT_SECTION
- MINOR: compiler: add macros to declare section names
- CLEANUP: initcall: rely on HA_SECTION_* instead of defining its own
- MINOR: global: declare a read_mostly section
- MINOR: fd: move a few read-mostly variables to their own section
- MINOR: epoll: move epoll_fd to read_mostly
- MINOR: kqueue: move kqueue_fd to read_mostly
- MINOR: pool: move pool declarations to read_mostly
- MINOR: threads: mark all_threads_mask as read_mostly
- MINOR: server: move idle_conn_task to read_mostly
- MINOR: protocol: move __protocol_by_family to read_mostly
- MINOR: pattern: make the pat_lru_seed read_mostly
- MINOR: trace: make trace sources read_mostly
- MINOR: freq_ctr: add a generic function to report the total value
- MEDIUM: freq_ctr: make read_freq_ctr_period() use freq_ctr_total()
- MEDIUM: freq_ctr: reimplement freq_ctr_remain_period() from freq_ctr_total()
- MINOR: freq_ctr: add the missing next_event_delay_period()
- MINOR: freq_ctr: unify freq_ctr and freq_ctr_period into freq_ctr
- MEDIUM: freq_ctr: replace the per-second counters with the generic ones
- MINOR: freq_ctr: add cpu_relax in the rotation loop of update_freq_ctr_period()
- MINOR: freq_ctr: simplify and improve the update function
- CLEANUP: time: remove the now unused ms_left_scaled
- MINOR: time: move the time initialization out of tv_update_date()
- MINOR: time: remove useless variable copies in tv_update_date()
    - MINOR: time: change the global timeval and the global tick at once
    - MEDIUM: time: make the clock offset global and not per-thread
- MINOR: atomic: reimplement the relaxed version of x86 BTS/BTR
- MINOR: trace: Add the checks as a possible trace source
    - MINOR: checks/trace: Register a new trace source with its events
- MINOR: hlua: Add function to release a lua function
- BUG/MINOR: hlua: Fix memory leaks on error path when registering a task
- BUG/MINOR: hlua: Fix memory leaks on error path when registering a converter
- BUG/MINOR: hlua: Fix memory leaks on error path when registering a fetch
- BUG/MINOR: hlua: Fix memory leaks on error path when parsing a lua action
- BUG/MINOR: hlua: Fix memory leaks on error path when registering an action
- BUG/MINOR: hlua: Fix memory leaks on error path when registering a service
- BUG/MINOR: hlua: Fix memory leaks on error path when registering a cli keyword
- BUG/MINOR: cfgparse/proxy: Fix some leaks during proxy section parsing
- BUG/MINOR: listener: Handle allocation error when allocating a new bind_conf
    - BUG/MINOR: cfgparse/proxy: Handle allocation errors during proxy section parsing
- MINOR: cfgparse/proxy: Group alloc error handling during proxy section parsing
- DOC: internals: update the SSL architecture schema
- BUG/MEDIUM: sample: Fix adjusting size in field converter
- MINOR: sample: add ub64dec and ub64enc converters
- CLEANUP: sample: align samples list in sample.c
- MINOR: ist: Add `istclear(struct ist*)`
- CI: cirrus: install "pcre" package
- MINOR: opentracing: correct calculation of the number of arguments in the args[]
- MINOR: opentracing: transfer of context names without prefix
- MINOR: sample: converter: Add mjson library.
- MINOR: sample: converter: Add json_query converter
- CI: travis-ci: enable weekly graviton2 builds
    - DOC: ssl: Certificate hot update only works on frontend certificates
- DOC: ssl: Certificate hot update works on server certificates
- BUG/MEDIUM: threads: Ignore current thread to end its harmless period
    - MINOR: threads: Only consider running threads to end a thread harmless period
- BUG/MINOR: checks: Set missing id to the dummy checks frontend
- MINOR: logs: Add support of checks as session origin to format lf strings
- BUG/MINOR: connection: Fix fc_http_major and bc_http_major for TCP connections
- MINOR: connection: Make bc_http_major compatible with tcp-checks
- BUG/MINOR: ssl-samples: Fix ssl_bc_* samples when called from a health-check
- BUG/MINOR: http-fetch: Make method smp safe if headers were already forwarded
- MINOR: tcp_samples: Add samples to get src/dst info of the backend connection
- MINOR: tcp_samples: Be able to call bc_src/bc_dst from the health-checks
- BUG/MINOR: http_htx: Remove BUG_ON() from http_get_stline() function
- BUG/MINOR: logs: Report the true number of retries if there was no connection
- BUILD: makefile: Redirect stderr to /dev/null when probing options
- MINOR: uri_normalizer: Add uri_normalizer module
- MINOR: uri_normalizer: Add `enum uri_normalizer_err`
- MINOR: uri_normalizer: Add `http-request normalize-uri`
- MINOR: uri_normalizer: Add a `merge-slashes` normalizer to http-request normalize-uri
- MINOR: uri_normalizer: Add a `dotdot` normalizer to http-request normalize-uri
    - MINOR: uri_normalizer: Add support for suppressing leading `../` for dotdot normalizer
- MINOR: uri_normalizer: Add a `sort-query` normalizer
- MINOR: uri_normalizer: Add a `percent-upper` normalizer
- MEDIUM: http_act: Rename uri-normalizers
- DOC: Add introduction to http-request normalize-uri
- DOC: Note that URI normalization is experimental
- BUG/MINOR: pools: maintain consistent ->allocated count on alloc failures
- BUG/MINOR: pools/buffers: make sure to always reserve the required buffers
- MINOR: pools: drop the unused static history of artificially failed allocs
- CLEANUP: pools: remove unused arguments to pool_evict_from_cache()
- MEDIUM: pools: move the cache into the pool header
- MINOR: pool: remove the size field from pool_cache_head
- MINOR: pools: rename CONFIG_HAP_LOCAL_POOLS to CONFIG_HAP_POOLS
- MINOR: pools: enable the fault injector in all allocation modes
- MINOR: pools: make the basic pool_refill_alloc()/pool_free() update needed_avg
- MEDIUM: pools: unify pool_refill_alloc() across all models
- CLEANUP: pools: re-merge pool_refill_alloc() and __pool_refill_alloc()
- MINOR: pools: call pool_alloc_nocache() out of the pool's lock
- CLEANUP: pools: move the lock to the only __pool_get_first() that needs it
- CLEANUP: pools: rename __pool_get_first() to pool_get_from_shared_cache()
- CLEANUP: pools: rename pool_*_{from,to}_cache() to *_local_cache()
- CLEANUP: pools: rename __pool_free() to pool_put_to_shared_cache()
- MINOR: tools: add statistical_prng_range() to get a random number over a range
- MINOR: pools: use cheaper randoms for fault injections
- MINOR: pools: move the fault injector to __pool_alloc()
- MINOR: pools: split the OS-based allocator in two
- MINOR: pools: always use atomic ops to maintain counters
- MINOR: pools: move pool_free_area() out of the lock in the locked version
- MINOR: pools: factor the release code into pool_put_to_os()
- MEDIUM: pools: make CONFIG_HAP_POOLS control both local and shared pools
- MINOR: pools: create unified pool_{get_from,put_to}_cache()
- MINOR: pools: evict excess objects using pool_evict_from_local_cache()
- MEDIUM: pools: make pool_put_to_cache() always call pool_put_to_local_cache()
- CLEANUP: pools: make the local cache allocator fall back to the shared cache
- CLEANUP: pools: merge pool_{get_from,put_to}_local_caches with generic ones
- CLEANUP: pools: uninline pool_put_to_cache()
- CLEANUP: pools: declare dummy pool functions to remove some ifdefs
- BUILD: pools: fix build with DEBUG_FAIL_ALLOC
- BUG/MINOR: server: make srv_alloc_lb() allocate lb_nodes for consistent hash
- CONTRIB: mod_defender: import the minimal number of includes
- CONTRIB: mod_defender: make the code build with the embedded includes
- CONTRIB: modsecurity: import the minimal number of includes
- CONTRIB: modsecurity: make the code build with the embedded includes
- CLEANUP: sample: Improve local variables in sample_conv_json_query
- CLEANUP: sample: Explicitly handle all possible enum values from mjson
- CLEANUP: sample: Use explicit return for successful `json_query`s
- CLEANUP: lists/tree-wide: rename some list operations to avoid some confusion
- CONTRIB: move spoa_example out of the tree
- BUG/MINOR: server: free srv.lb_nodes in free_server
- BUG/MINOR: logs: free logsrv.conf.file on exit
- BUG/MEDIUM: server: ensure thread-safety of server runtime creation
- MINOR: server: add log on dynamic server creation
- MINOR: server: implement delete server cli command
- CONTRIB: move spoa_server out of the tree
- CONTRIB: move modsecurity out of the tree
- BUG/MINOR: server: fix potential null gcc error in delete server
- BUG/MAJOR: mux-h2: Properly detect too large frames when decoding headers
- BUG/MEDIUM: mux-h2: Fix dfl calculation when merging CONTINUATION frames
- BUG/MINOR: uri_normalizer: Use delim parameter when building the sorted query in uri_normalizer_query_sort
- CLEANUP: uri_normalizer: Remove trailing whitespace
- MINOR: uri_normalizer: Add a `strip-dot` normalizer
- CONTRIB: move mod_defender out of the tree
- CLEANUP: contrib: remove the last references to the now dead contrib/ directory
- BUG/MEDIUM: config: fix cpu-map notation with both process and threads
- MINOR: config: add a diag for invalid cpu-map statement
- BUG/MINOR: mworker/init: don't reset nb_oldpids in non-mworker cases
- BUG/MINOR: mworker: don't use oldpids[] anymore for reload
- BUILD: makefile: fix the "make clean" target on strict bourne shells
- IMPORT: slz: import slz into the tree
- BUILD: compression: switch SLZ from out-of-tree to in-tree
- CI: github: do not build libslz any more
- CLEANUP: compression: remove calls to SLZ init functions
- BUG/MEDIUM: mux-h2: Properly handle shutdowns when received with data
- MINOR: cpuset: define a platform-independent cpuset type
- MINOR: cfgparse: use hap_cpuset for parse_cpu_set
- MEDIUM: config: use platform independent type hap_cpuset for cpu-map
- MINOR: thread: implement the detection of forced cpu affinity
- MINOR: cfgparse: support the comma separator on parse_cpu_set
- MEDIUM: cfgparse: detect numa and set affinity if needed
- MINOR: global: add option to disable numa detection
- BUG/MINOR: haproxy: fix compilation on macOS
- BUG/MINOR: cpuset: fix compilation on platform without cpu affinity
- MINOR: time: avoid unneeded updates to now_offset
- MINOR: time: avoid overwriting the same values of global_now
- CLEANUP: time: use __tv_to_ms() in tv_update_date() instead of open-coding
- MINOR: time: avoid u64 needlessly expensive computations for the 32-bit now_ms
- BUG/MINOR: peers: remove useless table check if initial resync is finished
- BUG/MEDIUM: peers: re-work connection to new process during reload.
- BUG/MEDIUM: peers: re-work refcnt on table to protect against flush
- BUG/MEDIUM: config: fix missing initialization in numa_detect_topology()
The error path of the NUMA topology detection introduced in commit
b56a7c89a ("MEDIUM: cfgparse: detect numa and set affinity if needed")
lacks an initialization resulting in possible crashes at boot. No
backport is needed since that was introduced in 2.4-dev.
In proxy.c, when the process is stopping, we try to flush table contents
using 'stktable_trash_oldest'. A check on the counter "table->syncing" was
made to verify that no resync was pending. But with multiple threads, this
counter may only be increased by another thread after some delay, so the
contents of some tables could be trashed too early and would not be pushed
to the new process (after a reload, some tables appeared reset while others
did not).
This patch renames the counter "table->syncing" to "table->refcnt", and the
counter is now increased during configuration parsing (when registering a
table to a peers section) to protect tables at runtime until the resync
with the new process has succeeded or failed.
The inc/dec operations are now performed using atomic operations because
multiple peers sections could refer to the same table in the future.
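The principle, as a hedged sketch (field names come from this message;
macro names and call details may differ from the actual patch):

    /* config parsing: one reference per peers section using the table */
    HA_ATOMIC_ADD(&table->refcnt, 1);

    /* once the resync with the new process has succeeded or failed */
    HA_ATOMIC_SUB(&table->refcnt, 1);

    /* stopping: only trash entries of tables nobody still refers to */
    if (!HA_ATOMIC_LOAD(&table->refcnt))
            stktable_trash_oldest(table, to_batch);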
This fix addresses GitHub issue #1216.
This patch should be backported to all branches with multi-thread support
(v >= 1.8).
The peers task handling the "stopping" state could wake up multiple times
in this state with WOKEN_SIGNAL: the connection to the local peer initiated
on the first processing was immediately shut down by the next processing of
the task, and the old process exited considering it was unable to connect.
This resulted in empty stick-tables after a reload.
This patch checks the 'PEERS_F_DONOTSTOP' flag to know whether the signal
was already considered, and whether the shutdown of the remote peers
connections is already done or a connection to the local peer must still
be established.
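A hedged sketch of the resulting logic (simplified, not the actual code):

    if (stopping) {
            if (!(peers->flags & PEERS_F_DONOTSTOP)) {
                    /* first wakeup while stopping: shut down the remote
                     * peers sessions and initiate the connection to the
                     * local peer of the new process */
                    peers->flags |= PEERS_F_DONOTSTOP;
            }
            /* later wakeups (e.g. WOKEN_SIGNAL) find the flag set and
             * must not shut the local peer connection down again */
    }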
This patch should be backported to all supported branches (v >= 1.6).
The old process checked each table's resync status even when the resync
process was finished. This behavior had no known impact except useless
processing, and was discovered while debugging another issue.
This patch could be backported to all supported branches (v >= 1.6), but
once again, it has no impact other than avoiding useless processing.
The compiler cannot guess that tv_sec or tv_usec might have unused parts,
so the multiply by 1000 and the divide by 1000 are both performed using
64-bit constants to stick to the common type. This is not needed since we
only keep the final 32 bits; let's help the compiler here by casting these
fields to uint. The tv_update_date() code is much cleaner (48 bytes smaller
in the CAS loop) as it avoids some register spilling at a location where
that's really unwanted.
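The idea in a nutshell (illustrative, not the exact patch; tv is a struct
timeval):

    /* 64-bit multiply and divide, only to throw the upper bits away: */
    now_ms = tv.tv_sec * 1000 + tv.tv_usec / 1000;

    /* only the low 32 bits are kept anyway, so 32-bit ops suffice: */
    now_ms = (uint)tv.tv_sec * 1000U + (uint)tv.tv_usec / 1000U;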
In tv_update_date(), we calculate the new global date based on the local
one. It's very likely that other threads will end up with the exact same
now_ms date (at 1 million wakeups/s it happens 99.9% of the time), and
even the microsecond was measured to remain unchanged ~70% of the time
with 16 threads, simply because sometimes another thread already updated
a more recent version of it.
In such cases, performing a CAS to the global variable requires a cache
line flush which brings nothing. By checking if the values changed before
writing, we can divide the number of writes to the global variables by
about 6, hence the overall contention.
In addition, it's worth noting that all threads will want to update at the
same time, so let's place a cpu relax call before trying again; this will
spread attempts apart.
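A hedged sketch of the pattern (the actual code differs in its details):

    uint old = HA_ATOMIC_LOAD(&global_now_ms);

    while (new_now_ms != old) {      /* skip the write when unchanged */
            if (HA_ATOMIC_CAS(&global_now_ms, &old, new_now_ms))
                    break;           /* old was reloaded by the CAS */
            __ha_cpu_relax();        /* spread concurrent attempts apart */
    }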
The time adjustment is very rare, even at high poll rates. Tests show that
only 0.2% of tv_update_date() calls require a change of offset. Such
concurrent writes to a shared variable have an important impact on future
loads, so let's only update the variable if it changed.
The compilation is currently broken on platforms without USE_CPU_AFFINITY
set. An error has been reported by the cygwin build of the CI.
This does not need to be backported.
In file included from include/haproxy/global-t.h:27,
from include/haproxy/global.h:26,
from include/haproxy/fd.h:33,
from src/ev_poll.c:22:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
32 | # error "No cpuset support implemented on this platform"
| ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
37 | CPUSET_REPR cpuset;
| ^~~~~~~~~~~
make: *** [Makefile:944: src/ev_poll.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from include/haproxy/global-t.h:27,
from include/haproxy/global.h:26,
from include/haproxy/fd.h:33,
from include/haproxy/connection.h:30,
from include/haproxy/ssl_sock.h:27,
from src/ssl_sample.c:30:
include/haproxy/cpuset-t.h:32:3: error: #error "No cpuset support implemented on this platform"
32 | # error "No cpuset support implemented on this platform"
| ^~~~~
include/haproxy/cpuset-t.h:37:2: error: unknown type name ‘CPUSET_REPR’
37 | CPUSET_REPR cpuset;
| ^~~~~~~~~~~
make: *** [Makefile:944: src/ssl_sample.o] Error 1
Fix the warning treated as error on the CI for the macOS compilation :
"src/haproxy.c:2939:23: error: unused variable 'set'
[-Werror,-Wunused-variable]"
This does not need to be backported.
Make NUMA detection optional with a global configuration statement
'no numa-cpu-mapping'. This can be used if the affinity applied by the
algorithm is not optimal. Also complete the documentation with this new
keyword.
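For example, in the global section:

    global
        no numa-cpu-mapping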
On process startup, the CPU topology of the machine is inspected. If a
multi-socket CPU machine is detected, automatically define the process
affinity on the first node with active cpus. This is done to prevent an
impact on the overall performance of the process in case the topology of
the machine is unknown to the user.
This step is not executed in the following conditions :
- a non-null nbthread statement is present
- a restrictive 'cpu-map' statement is present
- the process affinity is already restricted, for example via a taskset
call
For the record, benchmarks were executed on a machine with 2 CPUs
Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz. In both the clear and ssl
scenarios, the performance was sub-optimal without the automatic rebinding
on a single node.
Allow specifying multiple cpu ids/ranges in parse_cpu_set, separated by
commas. This is optional and must be activated by a parameter.
The comma support is disabled for the parsing of the 'cpu-map' config
statement. However, it will be useful to parse files in sysfs when
inspecting the CPU topology for NUMA automatic process binding.
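For example, on Linux, sysfs exposes exactly this format (output is
illustrative):

    $ cat /sys/devices/system/node/node0/cpulist
    0-7,16-23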
Create a function thread_cpu_mask_forced. Its purpose is to report whether
a restrictive cpu mask is active for the current process, for example due
to a taskset invocation. It is currently only implemented for the Linux
platform.
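A possible Linux implementation, as a sketch (the real function may
differ):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    /* returns non-zero if the process was started with a restricted CPU
     * affinity, e.g. under taskset
     */
    static int thread_cpu_mask_forced(void)
    {
            cpu_set_t mask;

            if (sched_getaffinity(0, sizeof(mask), &mask) != 0)
                    return 0;
            return CPU_COUNT(&mask) != sysconf(_SC_NPROCESSORS_ONLN);
    }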
Use the platform-independent type hap_cpuset for the cpu-map statement
parsing. This allows addressing CPU indexes greater than LONGBITS.
Update the documentation to reflect the removal of this limit except on
platforms without the cpu_set_t type or an equivalent.
Replace the unsigned long parameter with a hap_cpuset. This allows
addressing CPUs with an index greater than LONGBITS.
This function is used to parse the 'cpu-map' statement. However, at the
moment, the result is cast back to a long to store it in the global
structure. The next step is to replace the ulong in cpu_map in the global
structure with hap_cpuset.
This module can be used to manipulate cpu sets in a platform-agnostic way.
It uses the type cpu_set_t/cpuset_t if available on the platform, or falls
back to unsigned long, which de facto limits the maximum cpu index to
LONGBITS.
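A sketch of the idea (the real definitions live in
include/haproxy/cpuset-t.h and cover more platforms):

    #if defined(__linux__)
    # define CPUSET_REPR cpu_set_t
    #elif defined(__FreeBSD__)
    # define CPUSET_REPR cpuset_t
    #else
    # define CPUSET_REPR unsigned long /* caps cpu indexes at LONGBITS */
    #endif

    struct hap_cpuset {
            CPUSET_REPR cpuset;
    };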
The H2_CF_RCVD_SHUT flag is used to report that a read0 was encountered.
It is used by the H2 mux to properly handle shutdowns. However, this flag
is only set when no data are received. If the read0 is detected at the
socket level while some data are received, it is not handled. And because
the event was reported on the connection, any other read attempt is
blocked. In this case, we are unable to close the connection and release
the mux immediately; we must wait for the mux timeout to expire.
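A hedged sketch of the fix's principle (simplified, not the actual code):

    /* after a receive, catch the read0 even when data were received */
    ret = conn->xprt->rcv_buf(conn, h2c->xprt_ctx, &h2c->dbuf, max, 0);
    if (conn_xprt_read0_pending(conn))
            h2c->flags |= H2_CF_RCVD_SHUT; /* remember it for later */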
This patch should fix the issue #1231. It must be backported as far as 2.0.
As we now embed the library we don't need to support the older 1.0 API
any more, so we can remove the explicit calls to slz_make_crc_table()
and slz_prepare_dist_table().
Now that SLZ is merged, let's update the makefile and compression
files to use it. As a result, SLZ_INC and SLZ_LIB are neither defined
nor used anymore.
USE_SLZ is enabled by default ("USE_SLZ=default") and can be disabled
by passing "USE_SLZ=" or by enabling USE_ZLIB=1.
The doc was updated to reflect the changes.
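For example (the target name is the usual one; behavior as described
above):

    $ make TARGET=linux-glibc              # SLZ compression built in
    $ make TARGET=linux-glibc USE_ZLIB=1   # use ZLIB, disables SLZ
    $ make TARGET=linux-glibc USE_SLZ=     # build without SLZ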
SLZ is rarely packaged by distros and there have been complaints about
the CPU and memory usage of ZLIB, leading to some suggestions to better
address the issue by simply integrating SLZ into the tree (just 3 files).
See discussions below:
https://www.mail-archive.com/haproxy@formilux.org/msg38037.html
https://www.mail-archive.com/haproxy@formilux.org/msg40079.html
https://www.mail-archive.com/haproxy@formilux.org/msg40365.html
This patch does just this, after minor adjustments to these files:
- tables.h was renamed to slz-tables.h
- tables.h had the precomputed tables removed since they are not used here
- slz.c includes <import/slz*> instead of "slz*.h"
The slz commit imported here was b06c172 ("slz: avoid a build warning
with -Wimplicit-fallthrough"). No other change was performed to either SLZ
or haproxy at this point, so that this operation may be replicated if
needed for a future version.
As reported by @axinojolais in issue #1217, some older bourne shells do
not expand braces, so some files were not cleaned since the recent
splitting of the contrib/ subdir. Let's fix that by explicitly listing the
patterns to be cleared (which are in much smaller quantity now that contrib
was removed), and by grouping them with their respective dirs.
At some point, some recursive makefiles would probably help there.
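The root cause in two lines (bash vs. a strict POSIX sh):

    $ bash -c 'echo {a,b}.o'    # bash expands braces: prints "a.o b.o"
    $ sh -c 'echo {a,b}.o'      # strict sh does not: prints "{a,b}.o"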
Since commit 3f12887 ("MINOR: mworker: don't use children variable
anymore"), the oldpids array is not used anymore to generate the new -sf
parameters. So we don't need to set nb_oldpids to 0 during the first
start of the master process.
This patch fixes a bug when 2 master processes try to synchronize their
peers: there was a small chance that it wouldn't work because nb_oldpids
equals 0.
Should be backported as far as 2.0.
This bug affects the peers synchronisation code, which relies on the
nb_oldpids variable to synchronize the peer from the old PID.
In the case the process is not started in master-worker mode and tries to
synchronize using the peers, there is a small chance that it won't work
because nb_oldpids equals 0.
Fix the bug by setting the variable to 0 only in the master-worker case
when not reloading.
It could also be a problem when trying to synchronize the peers between
2 master processes, which should be fixed in another patch.
Bug exists since commit 8a361b5 ("BUG/MEDIUM: mworker: don't reuse PIDs
passed to the master").
Should be backported as far as 1.8.
The application of a cpu-map statement with both process and threads
is broken (P-Q/1 or 1/P-Q notation).
For example, before the fix, when using P-Q/1, proc_t1 would be updated.
Then it would be AND'ed with thread which is still 0 and thus does
nothing.
Another problem is when using 1/1[-Q]: thread[0] is defined. But if there
are multiple processes, every process will use this defined affinity even
though it should apply only to the 1st process.
The solution of this fix is a little bit too complex for my taste and
there is maybe a simpler one, but I did not wish to break the storage of
global.cpu_map, as it is quite painful to test all the use-cases. Besides,
this code will probably be cleaned up when multiprocess support is removed
in a future version.
Let's try to explain my logic.
* either haproxy runs in multiprocess or multithread mode. If on
multiprocess, we should consider proc_t1 (P-Q/1 notation). If on
multithread, we should consider thread (1/P-Q notation). However
during parsing, the final number of processes or threads is unknown,
thus we have to consider the two possibilities.
* there is a special case for the first thread / first process, which is
  present in both execution modes. And as a matter of fact the cpu-map 1
  and 1/1 notations represent the same thing; thus, thread[0] and
  proc_t1[0] represent the same thing. To solve this problem, only
  thread[0] is used for this special case.
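For example, the two notations this fix addresses look like this in a
configuration (values are illustrative):

    cpu-map 1-4/1 0      # P-Q/1: bind processes 1 to 4 to CPU 0
    cpu-map 1/1-4 0-3    # 1/P-Q: bind threads 1 to 4 of process 1
                         #        to CPUs 0 to 3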
This fix must be backported up to 2.0.
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to its
own repository with complete history here:
https://github.com/haproxytech/spoa-mod_defender
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory, which renders those
segments redundant.
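For example:

    /foo/./bar/./  ->  /foo/bar/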
See GitHub Issue #714.
Currently the delimiter is hardcoded as an ampersand (&), but the function
takes the delimiter as a parameter.
This patch replaces the hardcoded ampersand with the given delimiter.
When headers are split over several frames, the payloads of the HEADERS
and CONTINUATION frames are merged to form a unique HEADERS frame before
decoding the payload. To do so, the info about the current frame is
updated (dff, dfl...) with the info of the next one. Here there is a bug
when the frame length (dfl) is updated: we must add the next frame length
(hdr.dfl) and not only the amount of data found in the buffer (clen).
Because HEADERS frames are decoded in one pass, the dfl value is the whole
frame length or 0, nothing intermediary.
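In spirit, the fix boils down to this (names are taken from this message;
the surrounding code is elided):

    -    dfl += clen;      /* only the data currently in the buffer */
    +    dfl += hdr.dfl;   /* the full length of the next frame */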
This patch must be backported as far as 2.0.
In the function decoding the payload of HEADERS frames, an internal error
is returned if the frame length is too large: it cannot exceed the buffer
size. The same is true when headers are split over several frames. The
payloads of the HEADERS and CONTINUATION frames are merged and the overall
size must not exceed the buffer size.
However, there is a bug when the current frame is big enough to leave room
for only a part of the header of the next frame. Because, in this case, we
wait for more data to have the whole frame header, we don't properly
detect that the headers are too large to be stored in one buffer. In fact
the test to trigger this error is not accurate. When the buffer is full,
the error is reported if the frame length exceeds the amount of data in
the buffer. But in reality, an error must be reported when we are unable
to decode the current frame while the buffer is full, because in this case
we know there is no way to change this state.
When the bug happens, the H2 connection is woken up in a loop, consuming
all the CPU. But the traffic is not blocked for all that.
This patch must be backported as far as 2.0.
gcc still reports a potential null pointer dereference in the delete
server function, even with a BUG_ON before it. Remove the misleading NULL
check in the for loop, which should never happen.
This does not need to be backported.
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to its
own repository with complete history here:
https://github.com/haproxy/spoa-modsecurity
As previously mentioned, SPOA code has nothing to do in the haproxy core
since it does not depend on haproxy's version. This one was moved to its
own repository with complete history here:
https://github.com/haproxy/spoa-server
Implement a new CLI command 'del server'. It can be used to remove a
dynamically added server. Only servers in maintenance mode can be removed,
and only if they have no pending/active/idle connections.
Add a new reg-test for this feature. The scenario of the reg-test needs to
first add a dynamic server. It is then deleted, and a client is used to
ensure that the server is no longer reachable.
The management doc is updated with the new command 'del server'.
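Typical usage over the stats socket could look like this (the socket path
and backend/server names are examples):

    $ echo "disable server app/srv1" | socat stdio /var/run/haproxy.sock
    $ echo "del server app/srv1" | socat stdio /var/run/haproxy.sock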
cli_parse_add_server can be executed in parallel by several CLI
instances and so must be thread-safe. The critical points of the
function are :
- server duplicate detection
- insertion of the server in the proxy list
The mode of operation has been reversed: the server is first instantiated
and parsed, and the duplicate check has been moved to the end, just before
the insertion into the proxy list, under thread isolation. Thus the thread
safety is guaranteed, while the server allocation is kept outside of
locks/thread isolation.
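A hedged sketch of the resulting ordering (helper names are illustrative):

    srv = new_server(px);            /* allocate + parse, no lock held */

    thread_isolate();                /* block all other threads */
    if (!server_find_by_name(px, srv->id)) {
            /* no duplicate: insert into the proxy's server list */
            srv->next = px->srv;
            px->srv = srv;
    }
    thread_release();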
Config information has been added to the logsrv struct. The filename is
duplicated and should be freed on exit.
Introduced in the current release.
This does not need to be backported.