Released version 2.2-dev4 with the following main changes :
- MEDIUM: buffer: remove the buffer_wq lock
- MINOR: ssl: move find certificate chain code to its own function
- MINOR: ssl: resolve issuers chain later
- MINOR: ssl: resolve ocsp_issuer later
- MINOR: ssl/cli: "show ssl cert" command should print the "Chain Filename:"
- BUG/MINOR: h2: reject again empty :path pseudo-headers
- MINOR: wdt: always clear sigev_value to make valgrind happy
- MINOR: epoll: always initialize all of epoll_event to please valgrind
- BUG/MINOR: sample: Make sure to return stable IDs in the unique-id fetch
- BUG/MEDIUM: ssl: chain must be initialized with sk_X509_new_null()
- BUILD: cirrus-ci: suppress OS version check when installing packages
- BUG/MINOR: http_ana: make sure redirect flags don't have overlapping bits
- CLEANUP: fd: remove the FD_EV_STATUS aggregate
- CLEANUP: fd: remove some unneeded definitions of FD_EV_* flags
- MINOR: fd: merge the read and write error bits into RW error
- BUG/MINOR: dns: ignore trailing dot
- MINOR: contrib/prometheus-exporter: Add the last healthcheck duration metric
- BUG/MINOR: http-htx: Do case-insensitive comparisons on Host header name
- MINOR: mux-h1: Remove useless case-insensitive comparisons
- MINOR: rawsock: always mark the FD not ready when we're certain it happens
- MEDIUM: connection: make the subscribe() call able to wakeup if ready
- MEDIUM: connection: don't stop receiving events in the FD handler
- MEDIUM: mux-h1: do not blindly wake up the tasklet at end of request anymore
- BUG/MINOR: arg: don't reject missing optional args
- MINOR: tools: make sure to correctly check the returned 'ms' in date2std_log
- MINOR: debug: report the task handler's pointer relative to main
- BUG/MEDIUM: debug: make the debug_handler check for the thread in threads_to_dump
- MINOR: haproxy: export main to ease access from debugger
- MINOR: haproxy: export run_poll_loop
- MINOR: task: export run_tasks_from_list
- BUILD: tools: remove obsolete and conflicting trace() from standard.c
- MINOR: tools: add new function dump_addr_and_bytes()
- MINOR: tools: add resolve_sym_name() to resolve function pointers
- MINOR: debug: use resolve_sym_name() to dump task handlers
- MINOR: cli: make "show fd" rely on resolve_sym_name()
- MEDIUM: debug: add support for dumping backtraces of stuck threads
- MINOR: debug: call backtrace() once upon startup
- MINOR: ssl: add "ca-verify-file" directive
- BUG/MINOR: wdt: do not return an error when the watchdog couldn't be enabled
- BUILD: Makefile: include librt before libpthread
- MEDIUM: wdt: fall back to CLOCK_REALTIME if CLOCK_THREAD_CPUTIME is not available
- MINOR: wdt: do not depend on USE_THREAD
- MINOR: debug: report the number of entries in the backtrace
- MINOR: debug: improve backtrace() on aarch64 and possibly other systems
- MINOR: debug: use our own backtrace function on clang+x86_64
- MINOR: debug: dump the whole trace if we can't spot the starting point
- BUILD: tools: unbreak resolve_sym_name() on non-GNU platforms
- BUILD: tools: rely on __ELF__ not USE_DL to enable use of dladdr()
- CLEANUP: contrib/spoa_example: Fix several typos
- BUILD: makefile: do not modify the build options during make reg-tests
- BUG/MEDIUM: connection: stop polling for sending when the event is ready
- MEDIUM: stream-int: make sure to try to immediately validate the connection
- MINOR: tcp/uxst/sockpair: only ask for I/O when really waiting for a connect()
- MEDIUM: connection: only call ->wake() for connect() without I/O
- OPTIM: connection: disable receiving on disabled events when the run queue is too high
- OPTIM: mux-h1: subscribe rather than waking up at a few other places
- REGTEST: Add unique-id reg-test
- MINOR: stream: Add stream_generate_unique_id function
- MINOR: stream: Use stream_generate_unique_id
- BUG/MINOR: connection/debug: do not enforce !event_type on subscribe() anymore
- MINOR: ssl/cli: support crt-list filters
- MINOR: ssl: reach a ckch_store from a sni_ctx
- DOC: fix incorrect indentation of http_auth_*
- BUG/MINOR: ssl-sock: do not return an uninitialized pointer in ckch_inst_sni_ctx_to_sni_filters
- MINOR: debug: add CLI command "debug dev write" to write an arbitrary size
- MINOR: ist: Add `IST_NULL` macro
- MINOR: ist: Add `int isttest(const struct ist)`
- MINOR: ist: Add `struct ist istalloc(size_t)` and `void istfree(struct ist*)`
- CLEANUP: Use `isttest()` and `istfree()`
- MINOR: ist: Add `struct ist istdup(const struct ist)`
- MINOR: proxy: Make `header_unique_id` a `struct ist`
- MEDIUM: stream: Make the `unique_id` member of `struct stream` a `struct ist`
- OPTIM: startup: fast unique_id allocation for acl.
- DOC: configuration.txt: fix various typos
- DOC: assorted typo fixes in the documentation and Makefile
- BUG/MINOR: init: make the automatic maxconn consider the max of soft/hard limits
- BUG/MAJOR: proxy_protocol: Properly validate TLV lengths
- CLEANUP: proxy_protocol: Use `size_t` when parsing TLVs
- MINOR: buf: Add function to insert a string at an absolute offset in a buffer
- MINOR: htx: Add a function to return a block at a specific offset
- MINOR: htx: Use htx_find_offset() to truncate an HTX message
- MINOR: flt_trace: Use htx_find_offset() to get the available payload length
- BUG/MINOR: filters: Use filter offset to deduce the amount of forwarded data
- BUG/MINOR: filters: Forward everything if no data filters are called
- BUG/MEDIUM: cache/filters: Fix loop on HTX blocks caching the response payload
- BUG/MEDIUM: compression/filters: Fix loop on HTX blocks compressing the payload
- BUG/MINOR: http-ana: Reset request analysers on a response side error
- BUG/MINOR: lua: Abort when txn:done() is called from a Lua action
- BUG/MINOR: lua: Ignore the reserve to know if a channel is full or not
- MINOR: lua: Add function to know if a channel is a response one
- MINOR: lua: Stop using the lua txn in hlua_http_get_headers()
- MINOR: lua: Stop using the lua txn in hlua_http_rep_hdr()
- MINOR: lua: Stop using lua txn in hlua_http_del_hdr() and hlua_http_add_hdr()
- MINOR: lua: Remove the flag HLUA_TXN_HTTP_RDY
- MINOR: lua: Rename hlua_action_wake_time() to hlua_set_wake_time()
- BUG/MINOR: lua: Init the lua wake_time value before calling a lua function
- BUG/MINOR: http-rules: Return ACT_RET_ABRT to abort a transaction
- BUG/MINOR: http-rules: Preserve FLT_END analyzers on reject action
- BUG/MINOR: http-rules: Fix a typo in the reject action function
- MINOR: cache/filters: Initialize the cache filter when stream is created
- MINOR: compression/filters: Initialize the comp filter when stream is created
- BUG/MINOR: rules: Preserve FLT_END analyzers on silent-drop action
- BUG/MINOR: rules: Return ACT_RET_ABRT when a silent-drop action is executed
- BUG/MINOR: rules: Increment be_counters if backend is assigned for a silent-drop
- BUG/MINOR: http-rules: Abort transaction when a redirect is applied on response
- BUILD: buffer: types/{ring.h,checks.h} should include buf.h, not buffer.h
- BUILD: ssl: include mini-clist.h
- BUILD: global: must not include common/standard.h but only types/freq_ctr.h
- BUILD: freq_ctr: proto/freq_ctr needs to include common/standard.h
- BUILD: listener: types/listener.h must not include standard.h
- BUG/MEDIUM: random: initialize the random pool a bit better
- BUG/MEDIUM: random: implement per-thread and per-process random sequences
- Revert "BUG/MEDIUM: random: implement per-thread and per-process random sequences"
- BUILD: cirrus-ci: get rid of unstable freebsd images
- MINOR: tools: add 64-bit rotate operators
- BUG/MEDIUM: random: implement a thread-safe and process-safe PRNG
- MINOR: backend: use a single call to ha_random32() for the random LB algo
- BUG/MINOR: checks/threads: use ha_random() and not rand()
- MINOR: sample: make all bits random on the rand() sample fetch
- MINOR: tools: add a generic function to generate UUIDs
- DOC: fix typo about no-tls-tickets
- DOC: improve description of no-tls-tickets
- DOC: assorted typo fixes in the documentation
- CLEANUP: remove unused code in 'my_ffsl/my_flsl' functions
It was not obvious that this setting only affects TLS versions <= 1.2, and
we should also mention the security implications of session tickets here.
Signed-off-by: Bjoern Jacke <bjacke@samba.org>
We currently have two UUID generation functions, one for the sample
fetch and the other one in the SPOE filter. Both were a bit complicated
since they were made to support random() implementations returning an
arbitrary number of bits, and were throwing away 33 bits every 64. Now
we don't need this anymore, so let's have a generic function consuming
64 bits at once and use it as appropriate.
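For illustration only, a version-4 UUID can be built from two such 64-bit
draws roughly as follows. This is a hypothetical sketch, not the exact
generic function added here; rnd64 stands for any 64-bit generator such as
ha_random64():

    #include <stdint.h>
    #include <stdio.h>

    /* Build an RFC 4122 version-4 UUID string from two 64-bit randoms. */
    static void gen_uuid(char *dst, size_t size, uint64_t (*rnd64)(void))
    {
        uint64_t a = rnd64();
        uint64_t b = rnd64();

        snprintf(dst, size, "%8.8x-%4.4x-%4.4x-%4.4x-%12.12llx",
                 (unsigned int)(a >> 32),                      /* time_low */
                 (unsigned int)(a >> 16) & 0xffff,             /* time_mid */
                 ((unsigned int)a & 0x0fff) | 0x4000,          /* version 4 */
                 ((unsigned int)(b >> 48) & 0x3fff) | 0x8000,  /* variant 10 */
                 (unsigned long long)(b & 0xffffffffffffULL)); /* node */
    }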
The rand() sample fetch supports being limited to a certain range, but
it only uses 31 bits and scales them as requested, which means that when
the requested output range is larger than 31 bits, the least significant
one is not random and may even be constant.
Let's make use of the whole 32 bits now that we have access to them.
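As a sketch of the idea (not necessarily the exact code of the sample
fetch), scaling a full 32-bit random down to a requested range without
losing bits can be done with 64-bit arithmetic:

    #include <stdint.h>

    /* Scale a full 32-bit random to [0, range): the multiply/shift keeps
     * all 32 input bits significant, unlike the previous 31-bit scaling.
     */
    static uint32_t rand_in_range(uint32_t rnd32, uint32_t range)
    {
        return ((uint64_t)rnd32 * range) >> 32;
    }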
In order to honor spread_checks we currently call rand(), which is not
thread safe and whose internal state must never be turned to zero. Let's
use ha_random() instead. This is a complement to commit 52bf839394
("BUG/MEDIUM: random: implement a thread-safe and process-safe PRNG") and
may be backported with it.
For the random LB algorithm we need a random 32-bit hashing key that used
to be made of two calls to random(). Now we can simply perform a single
call to ha_random32() and get rid of the useless operations.
This is the replacement of the failed attempt to add thread safety and
per-process sequences of random numbers initially tried with commit
1c306aa84d ("BUG/MEDIUM: random: implement per-thread and per-process
random sequences").
This new version takes a completely different approach and doesn't try
to work around the horrible OS-specific and non-portable random API
anymore. Instead it implements "xoroshiro128**", a reputedly high-quality
random number generator and one of the many variants of xorshift. It passes
all quality tests and is described here:
http://prng.di.unimi.it/
While not cryptographically secure, it is fast and features a 2^128-1
period. It supports fast jumps, allowing the period to be cut into smaller
non-overlapping sequences, which we use here to support up to 2^32
processes, each having its own non-overlapping sequence of 2^96 numbers
(~7*10^28). This is enough to provide 1 billion randoms per second and per
process for 2200 billion years.
The implementation was made thread-safe either by using a double 64-bit
CAS on platforms supporting it (x86_64, aarch64) or by using a local
lock for the time needed to perform the shift operations. This ensures
that all threads pick numbers from the same pool so that it is not
needed to assign per-thread ranges. For processes we use the fast jump
method to advance the sequence by 2^96 for each process.
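For reference, the xoroshiro128** step itself is tiny. The sketch below
follows the public reference implementation from http://prng.di.unimi.it/;
the function names and the constant seed are illustrative only, and the
CAS/lock wrapper described above is omitted:

    #include <stdint.h>

    /* 128-bit PRNG state: must be seeded to anything but all zeroes. */
    static uint64_t s[2] = { 0x9e3779b97f4a7c15ULL, 0xbf58476d1ce4e5b9ULL };

    static inline uint64_t rotl64(uint64_t x, int k)
    {
        return (x << k) | (x >> (64 - k));
    }

    /* One xoroshiro128** step: returns 64 random bits and advances the
     * state. In HAProxy the state update is wrapped in a double-width CAS
     * or a small lock so that all threads share the same sequence.
     */
    static uint64_t xoroshiro128ss_next(void)
    {
        uint64_t s0 = s[0];
        uint64_t s1 = s[1];
        uint64_t result = rotl64(s0 * 5, 7) * 9;

        s1 ^= s0;
        s[0] = rotl64(s0, 24) ^ s1 ^ (s1 << 16); /* a=24, b=16 */
        s[1] = rotl64(s1, 37);                   /* c=37 */
        return result;
    }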
Before this patch, the following config:
    global
        nbproc 8

    frontend f
        bind :4445
        mode http
        log stdout format raw daemon
        log-format "%[uuid] %pid"
        redirect location /
Would produce this output:
a4d0ad64-2645-4b74-b894-48acce0669af 12987
a4d0ad64-2645-4b74-b894-48acce0669af 12992
a4d0ad64-2645-4b74-b894-48acce0669af 12986
a4d0ad64-2645-4b74-b894-48acce0669af 12988
a4d0ad64-2645-4b74-b894-48acce0669af 12991
a4d0ad64-2645-4b74-b894-48acce0669af 12989
a4d0ad64-2645-4b74-b894-48acce0669af 12990
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12987
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12992
82d5f6cd-f6c1-4f85-a89c-36ae85d26fb9 12986
(...)
And now produces:
f94b29b3-da74-4e03-a0c5-a532c635bad9 13011
47470c02-4862-4c33-80e7-a952899570e5 13014
86332123-539a-47bf-853f-8c8ea8b2a2b5 13013
8f9efa99-3143-47b2-83cf-d618c8dea711 13012
3cc0f5c7-d790-496b-8d39-bec77647af5b 13015
3ec64915-8f95-4374-9e66-e777dc8791e0 13009
0f9bf894-dcde-408c-b094-6e0bb3255452 13011
49c7bfde-3ffb-40e9-9a8d-8084d650ed8f 13014
e23f6f2e-35c5-4433-a294-b790ab902653 13012
There are multiple benefits to using this method. First, it no longer
depends on a non-portable API. Second, it is thread safe. Third, it is fast
and more proven than any hack we could attempt to work around the
deficiencies of the various implementations around.
This commit depends on previous patches "MINOR: tools: add 64-bit rotate
operators" and "BUG/MEDIUM: random: initialize the random pool a bit
better", all of which will need to be backported at least as far as
version 2.0. It no longer requires backporting the build fixes for the
circular include file dependency.
This reverts commit 1c306aa84d.
It breaks the build on all non-glibc platforms. I got confused by the
man page (which possibly is the most confusing man page I've ever read
about a standard libc function) and mistakenly understood that random_r
was portable, especially since it appears in latest freebsd source as
well but not in released versions, and with a slightly different API :-/
We need to find a different solution with a fallback. Among the
possibilities, we may reintroduce this one with a fallback relying on
locking around the standard functions, keeping fingers crossed for no
other library function to call them in parallel, or we may also provide
our own PRNG, which is not necessarily more difficult than working
around the totally broken up design of the portable API.
As mentioned in previous patch, the random number generator was never
made thread-safe, which used not to be a problem for health checks
spreading, until the uuid sample fetch function appeared. Currently
it is possible for two threads or processes to produce exactly the
same UUID. In fact it's extremely likely that this will happen for
processes, as can be seen with this config:
    global
        nbproc 8

    frontend f
        bind :4445
        mode http
        log stdout format raw daemon
        log-format "%[uuid] %pid"
        redirect location /
It typically produces this log:
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30645
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30641
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30644
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30639
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30646
07764439-c24d-4e6f-a5a6-0138be59e7a8 30645
07764439-c24d-4e6f-a5a6-0138be59e7a8 30639
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30643
07764439-c24d-4e6f-a5a6-0138be59e7a8 30646
b6773fdd-678f-4d04-96f2-4fb11ad15d6b 30646
551ce567-0bfb-4bbd-9b58-cdc7e9365325 30642
07764439-c24d-4e6f-a5a6-0138be59e7a8 30642
What this patch does is to use a distinct per-thread and per-process
seed to make sure the same sequences will not appear, and will then
extend these seeds by "burning" a number of randoms that depends on
the global random seed, the thread ID and the process ID. This adds
roughly 20 extra bits of randomness, resulting in 52 bits total per
thread and per process.
It only takes a few milliseconds to burn these randoms and given
that threads start with a different seed, we know they will not
catch each other. So these random extra bits are essentially added
to ensure randomness between boots and cluster instances.
This replaces all uses of random() with ha_random() which uses the
thread-local state.
This must be backported as far as 2.0 or any version having the
UUID sample-fetch function since it's the main victim here.
It's important to note that this patch, in addition to depending on
the previous one "BUG/MEDIUM: init: initialize the random pool a bit
better", also depends on the preceeding build fixes to address a
circular dependency issue in the include files that prevented it
from building. Part or all of these patches may need to be backported
or adapted as well.
Since the UUID sample fetch was created, some people noticed that in
certain virtualized environments they manage to get the exact same UUIDs
on different instances started at exactly the same moment. It turns
out that the randoms were only initialized to spread the health checks
originally, not to provide "clean" randoms.
This patch changes this and collects more randomness from various
sources, including existing randoms, /dev/urandom when available,
RAND_bytes() when OpenSSL is available, as well as the timing for such
operations, then applies a SHA1 on all this to keep a 160-bit random
seed available, 32 bits of which are passed to srandom().
It's worth mentioning that there's no clean way to pass more than 32
bits to srandom() as even initstate() provides an opaque state that
must absolutely not be tampered with since known implementations
contain state information.
At least this allows up to 4 billion different sequences from one boot to
another, which is not that bad.
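A rough sketch of the idea (hypothetical code, not the exact initialization
sequence) could look like this, hashing a few entropy sources and passing
32 bits of the digest to srandom():

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>
    #include <openssl/sha.h>   /* only when built with OpenSSL */

    static void init_random_seed(void)
    {
        unsigned char pool[64];
        unsigned char digest[SHA_DIGEST_LENGTH]; /* 160 bits */
        struct timeval tv;
        unsigned int seed;
        size_t len = 0;
        ssize_t ret;
        int fd;

        /* timing information always contributes a little bit */
        gettimeofday(&tv, NULL);
        memcpy(pool + len, &tv, sizeof(tv));
        len += sizeof(tv);

        /* /dev/urandom when available */
        fd = open("/dev/urandom", O_RDONLY);
        if (fd >= 0) {
            ret = read(fd, pool + len, 16);
            if (ret > 0)
                len += ret;
            close(fd);
        }

        /* keep a 160-bit seed, pass 32 bits of it to srandom() */
        SHA1(pool, len, digest);
        memcpy(&seed, digest, sizeof(seed));
        srandom(seed);
    }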
Note that the thread safety was still not addressed, which is another
issue for another patch.
This must be backported to all versions containing the UUID sample
fetch function, i.e. as far as 2.0.
buffer.h relies on proto/activity because it contains some code and not
just type definitions. It must not be included from types files. It
should probably also be split in two if it starts to include a proto.
This causes some circular dependencies at other places.
In the same way as for the request, when a redirect rule is applied the
transaction is aborted. This must be done by returning HTTP_RULE_RES_ABRT
from the http_res_get_intercept_rule() function.
No backport needed because on previous versions, the action return values are
not handled the same way.
Backend counters must be incremented only if a backend was already assigned to
the stream (when the stream exists). Otherwise, it means we are still on the
frontend side.
This patch may be backported as far as 1.6.
When an action interrupts a transaction, returning a response or not, it must
return the ACT_RET_ABRT value and not ACT_RET_STOP. ACT_RET_STOP is reserved to
stop the processing of the current ruleset.
No backport needed because on previous versions, the action return values are
not handled the same way.
When at least one filter is attached to a stream, FLT_END analyzers must be
preserved on request and response channels.
This patch should be backported as far as 1.7.
Since the HTX mode is the only mode to process HTTP messages, the stream is
created for a unique transaction. The keep-alive is handled at the mux level.
So, the compression filter can be initialized when the stream is created and
released with the stream. Concretely, the .channel_start_analyze and
.channel_end_analyze callback functions are replaced by the .attach and
.detach ones.
With this change, it is no longer necessary to call FLT_START_FE/BE and FLT_END
analysers for the compression filter.
Since the HTX mode is the only mode to process HTTP messages, the stream is
created for a unique transaction. The keep-alive is handled at the mux level.
So, the cache filter can be initialized when the stream is created and
released with the stream. Concretely, the .channel_start_analyze and
.channel_end_analyze callback functions are replaced by the .attach and
.detach ones.
With this change, it is no longer necessary to call FLT_START_FE/BE and FLT_END
analysers for the cache filter.
A typo was introduced by the commit c5bb5a0f2 ("BUG/MINOR: http-rules: Preserve
FLT_END analyzers on reject action").
This patch must be backported with the commit c5bb5a0f2.
When at least one filter is attached to a stream, FLT_END analyzers must be
preserved on request and response channels.
This patch should be backported as far as 1.8.
When an action interrupts a transaction, returning a response or not, it must
return the ACT_RET_ABRT value and not ACT_RET_DONE. ACT_RET_DONE is reserved to
stop the processing on the current channel but some analysers may still be
active. When ACT_RET_ABRT is returned, all analysers are removed, except FLT_END
if it is set.
No backport needed because on previous versions, the action return value was not
handled the same way.
It is stated in the comment that the return action returns ACT_RET_ABRT on
success. It is the right code to use to abort a transaction. ACT_RET_DONE must
be used when the message processing must be stopped. This does not mean the
transaction is interrupted.
No backport needed.
The wake_time of a lua context is now always set to TICK_ETERNITY when the
context is initialized and every time the execution of the lua stack is
started. It is mandatory to avoid setting an arbitrary wake_time when an
action yields.
No backport needed.
This flag was used in some internal functions to be sure the current stream is
able to handle HTTP content. It was introduced when the legacy HTTP code was
still there. Now, it is possible to rely on the stream's flags to be sure we have an
HTX stream.
So the flag HLUA_TXN_HTTP_RDY can be removed. Everywhere it was tested, it is
replaced by a call to the IS_HTX_STRM() macro.
This patch is mandatory to allow the support of the filters written in lua.
In these functions, the lua txn was not used. So it can be removed from the
function argument list.
This patch is mandatory to allow the support of the filters written in lua.
In this function, the lua txn was only used to retrieve the stream. But it can
be retrieved from the HTTP message, using its channel pointer. So, the lua txn can
be removed from the function argument list.
This patch is mandatory to allow the support of the filters written in lua.
In this function, the lua txn was only used to test if the HTTP transaction is
defined. But it is always used in a context where it is true. So, the lua txn
can be removed from the function argument list.
This patch is mandatory to allow the support of the filters written in lua.
The Lua function Channel.is_full() should not take care of the reserve because
it is not called from a producer (an applet for instance). From an action, it is
allowed to overwrite the buffer reserve.
This patch should be backported as far as 1.7. But it must be adapted for 1.8
and lower because there is no HTX on these versions.
When a lua action aborts a transaction calling txn:done() function, the action
must return ACT_RET_ABRT instead of ACT_RET_DONE. It is mandatory to
abort the message analysis.
This patch must be backported everywhere the commit 7716cdf45 ("MINOR: lua: Get
the action return code on the stack when an action finishes") was
backported. For now, no backport needed.
When an error occurred on the response side, request analysers must be reset. At
this stage, only AN_REQ_HTTP_XFER_BODY analyser remains, and possibly
AN_REQ_FLT_END, if at least one filter is attached to the stream. So it is safe
to remove the AN_REQ_HTTP_XFER_BODY analyser. An error was already handled and a
response was already returned to the client (or it was at least scheduled to be
sent). So there is no reason to continue to process the request payload. It may
cause some trouble for the filters because when an error occurs, data from
the request buffer are truncated.
This patch must be backported as far as 1.9, for the HTX part only. I don't know
if the legacy HTTP code is affected.
During the payload filtering, the offset is relative to the head of the HTX
message and not its first index. This index is the position of the first block
to (re)start the HTTP analysis. It must be used during HTTP analysis but not
during the payload forwarding.
So, from the compression filter point of view, when we loop on the HTX blocks to
compress the response payload, we must start from the head of the HTX
message. To ease the loop, we use the function htx_find_offset().
This patch must be backported as far as 2.0. It depends on the commit "MINOR:
htx: Add a function to return a block at a specific offset". So this one must
be backported first.
During the payload filtering, the offset is relative to the head of the HTX
message and not its first index. This index is the position of the first block
to (re)start the HTTP analysis. It must be used during HTTP analysis but not
during the payload forwarding.
So, from the cache point of view, when we loop on the HTX blocks to cache the
response payload, we must start from the head of the HTX message. To ease the
loop, we use the function htx_find_offset().
This patch must be backported as far as 2.0. It depends on the commit "MINOR:
htx: Add a function to return a block at a specific offset". So this one must
be backported first.
If a filter enables data filtering, in TCP or in HTTP, but does not define
the corresponding callback function (http_payload() or tcp_payload()), it
will be ignored. If all configured data filters do the same, we must be sure
to forward everything. Otherwise nothing will be forwarded at all.
This patch must be backported as far as 1.9.
When the tcp or http payload is filtered, it is important to use the filter
offset to deduce the amount of forwarded data because this offset may change
during the call to the callback function. So we should not rely on a local
variable defined before this call.
For now, existing HAProxy filters don't change this offset, so this bug may
only affect external filters.
This patch must be backported as far as 1.9.
The htx_find_offset() function may be used to look for a block at a specific
offset in an HTX message, starting from the message head. A compound result is
returned, an htx_ret structure, with the found block and the position of the
offset in the block. If the offset is outside of the HTX message, the returned
block is NULL.
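A hypothetical usage sketch for a payload filter restarting at a given
offset (the HTX helpers exist as named, but the surrounding loop, variables
and HAProxy headers are assumed from the context):

    /* look up the block containing <offset> bytes of payload, counted
     * from the head of the message */
    struct htx_ret htxret = htx_find_offset(htx, offset);
    struct htx_blk *blk = htxret.blk;
    uint32_t skip = htxret.ret;   /* position of <offset> in that block */

    while (blk) {
        if (htx_get_blk_type(blk) == HTX_BLK_DATA) {
            /* handle the block's data, ignoring the first <skip> bytes
             * of the very first block */
            skip = 0;
        }
        blk = htx_get_next_blk(htx, blk);
    }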
The b_insert_blk() function may now be used to insert a string, given a pointer
and the string length, at an absolute offset in a buffer, moving data between
this offset and the buffer's tail just after the end of the inserted string. The
buffer's length is automatically updated. This function supports wrapping.
Either the whole string is copied or nothing at all, so it returns 0 if there
is not enough space to perform the copy. Otherwise, the number of bytes copied
is returned.
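A hypothetical call (buffer, offset and inserted string are illustrative)
looks like this:

    /* insert "lang=fr; " at offset <off> within buffer <b>; data already
     * present between <off> and the tail is shifted to make room */
    size_t copied = b_insert_blk(&b, off, "lang=fr; ", 9);
    if (!copied) {
        /* not enough room: nothing was inserted at all */
    }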
This patch fixes PROXYv2 parsing when the payload of the TCP connection is
fused with the PROXYv2 header within a single recv() call.
Previously HAProxy ignored the PROXYv2 header length when attempting to
parse the TLV, possibly interpreting the first byte of the payload as a
TLV type.
This patch adds proper validation. It ensures that:
1. TLV parsing stops when the end of the PROXYv2 header is reached.
2. TLV lengths cannot exceed the PROXYv2 header length.
3. The PROXYv2 header ends together with the last TLV, not allowing for
"stray bytes" to be ignored.
A reg-test was added to ensure proper behavior.
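A minimal sketch of such a validation loop (hypothetical structure and
function names, not the actual PROXYv2 parser) shows how the rules above
translate into bounds checks:

    #include <stddef.h>
    #include <stdint.h>

    struct pp2_tlv {
        uint8_t type;
        uint8_t length_hi;
        uint8_t length_lo;
        /* followed by <length> bytes of value */
    };

    /* Walk the TLVs located between the address block and the end of the
     * PROXYv2 header. Returns 1 if they are well-formed, otherwise 0.
     */
    static int check_pp2_tlvs(const uint8_t *tlv_area, size_t remain)
    {
        while (remain > 0) {
            const struct pp2_tlv *tlv = (const struct pp2_tlv *)tlv_area;
            size_t len;

            /* rules 1 and 2: the TLV header and its value must fit
             * entirely within what remains of the PROXYv2 header */
            if (remain < sizeof(*tlv))
                return 0;
            len = ((size_t)tlv->length_hi << 8) | tlv->length_lo;
            if (remain - sizeof(*tlv) < len)
                return 0;

            tlv_area += sizeof(*tlv) + len;
            remain   -= sizeof(*tlv) + len;
        }
        /* rule 3: the last TLV ended exactly at the end of the header,
         * no stray bytes are left over */
        return 1;
    }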
This patch tries to find the sweet spot between a small and easily
backportable one, and a cleaner one that's more easily adaptable to
older versions, hence why it merges the "if" and "while" blocks which
causes a reindent of the whole block. It should be used as-is for
versions 1.9 to 2.1, the block about PP2_TYPE_AUTHORITY should be
dropped for 2.0 and the block about CRC32C should be dropped for 1.8.
This bug was introduced when TLV parsing was added. This happened in commit
b3e54fe387. This commit was first released
with HAProxy 1.6-dev1.
A similar issue was fixed in commit 7209c204bd.
This patch must be backported to HAProxy 1.6+.