The comments for match_word() in pattern.c mention that delimiters
at the start or end of the input string will be ignored, but this
is not mentioned in the documentation.
Backport to all supported versions.
this log could sometimes be a bit confusing (depending on the number, in
fact) when you read it (e.g. is it the number of active connections?) -
only trained eyes know that haproxy outputs a different log when closing
active connections while stopping.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Hostnames were limited to 62 characters, which is not RFC1035 compliant;
- the parsing loop should stop when above the maximum label length
- fix the label length test where d[i] was wrongly used
- simplify the whole function to avoid using two extra char* variables
this should fix github issue #387
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
Acked-by: Baptiste <bedis9@gmail.com>
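For illustration only, here is a small standalone sketch of the kind of
check involved; it is not the HAProxy function itself, just the RFC1035
limits (63 characters per label, 255 for the whole name) applied in a
single pass:

    #include <string.h>

    #define DNS_MAX_LABEL_SIZE 63
    #define DNS_MAX_NAME_SIZE  255

    /* returns 0 if <name> respects the RFC1035 length limits, -1 otherwise */
    static int validate_hostname(const char *name)
    {
        size_t total = strlen(name);
        size_t label = 0;
        size_t i;

        if (total == 0 || total > DNS_MAX_NAME_SIZE)
            return -1;

        for (i = 0; i < total; i++) {
            if (name[i] == '.') {
                if (label == 0)                    /* empty label, e.g. "a..b" */
                    return -1;
                label = 0;
                continue;
            }
            if (++label > DNS_MAX_LABEL_SIZE)      /* stop as soon as a label is too long */
                return -1;
        }
        return 0;
    }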
triggered by coverity; src_port is set earlier.
this should fix github issue #467
Fixes: 7fec02153712 ("MEDIUM: proxy_protocol: Convert IPs to v6 when
protocols are mixed")
This should be backported to 1.8.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
Reviewed-by: Tim Duesterhus <tim@bastelstu.be>
Many HTTP actions rely on <.arg.http> in the act_rule structure. Not all actions
use the log-format expression, but it must be initialized anyway. Otherwise,
HAProxy may crash during the deinit when the release function is called.
No backport needed. This patch should fix issue #468.
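As a purely generic illustration of why even unused fields must be
initialized (the real act_rule/http structures are more involved; the
types below are made up), the release path blindly walks the expression,
so the parser has to leave it in a sane state:

    #include <stdlib.h>

    struct lf_node { struct lf_node *next; };    /* stand-in for a log-format element */
    struct http_arg { struct lf_node *fmt; };    /* stand-in for <.arg.http> */

    static void release_http_action(struct http_arg *arg)
    {
        /* walking an uninitialized 'fmt' here is what makes the deinit crash */
        struct lf_node *node = arg->fmt;

        while (node) {
            struct lf_node *next = node->next;
            free(node);
            node = next;
        }
    }

    /* the fix: the parser always sets arg->fmt = NULL (or LIST_INIT on a
     * real list head), even for actions that never use the expression */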
In issue #465, we see that Coverity detected dead code in checks.c
which is in fact a missing parenthesis when building the connect() flags
following the API change in commit fdcb007ad8 ("MEDIUM: proto:
Change the prototype of the connect() method.").
The impact should be imperceptible as in the best case it may have
resulted in a missed optimization trying to save a syscall or to merge
outgoing packets.
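As a generic illustration of the precedence pitfall involved (the flag
names below are made up, not the real connect() API flags), '|' binds
tighter than '?:', so the unparenthesized form turns one branch into dead
code and loses the base flag:

    #define FLAG_DATA    0x01
    #define FLAG_DELACK  0x02

    /* buggy: parsed as (FLAG_DATA | delack) ? FLAG_DELACK : 0, which is
     * always FLAG_DELACK, so the ": 0" branch is dead and FLAG_DATA is lost */
    static unsigned int build_flags_buggy(int delack)
    {
        return FLAG_DATA | delack ? FLAG_DELACK : 0;
    }

    /* intended: always keep FLAG_DATA and add FLAG_DELACK conditionally */
    static unsigned int build_flags_fixed(int delack)
    {
        return FLAG_DATA | (delack ? FLAG_DELACK : 0);
    }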
It may be backported as far as 2.0 though it's not critical.
We're getting almost 100% failure rate recently due to the "slow" tests
never completing in time on Travis. Peers do not synchronize their data
within the expected delay, health checks time out due to the tested
agent not responding, etc. This adds a lot of noise and completely voids
the value of the build test.
Let's disable the slow tests to try to get back to a fully working state.
In ssl_sock_init(), if we fail to allocate the BIO, don't forget to free
the SSL *, or we'd end up with a memory leak.
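A minimal sketch of the fixed error path (simplified, not the verbatim
ssl_sock_init() code):

    #include <openssl/ssl.h>

    static int init_conn_ssl(SSL_CTX *ctx, BIO_METHOD *meth, SSL **out)
    {
        SSL *ssl = SSL_new(ctx);
        BIO *bio;

        if (!ssl)
            return -1;

        bio = BIO_new(meth);
        if (!bio) {
            SSL_free(ssl);     /* this free was missing, leaking the SSL * */
            return -1;
        }

        SSL_set_bio(ssl, bio, bio);
        *out = ssl;
        return 0;
    }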
This should be backported to 2.1 and 2.0.
As the server early data buffer is allocated in the middle of the loop
used to allocate the SSL session without being freed before retrying,
this leads to a memory leak.
To fix this, we move the section of code responsible for this early data buffer
allocation after the one responsible for allocating the SSL session.
Must be backported to 2.1 and 2.0.
When commit 477902bd2e made the conn_stream
allocation unconditional, it unfortunately moved the code doing the allocation
inside #if USE_OPENSSL, which means anybody compiling haproxy without
openssl wouldn't allocate any conn_stream, and would get a segfault later.
Fix that by moving the code that does the allocation outside #if USE_OPENSSL.
In process_stream(), when a client or a server abort is handled, the
corresponding listener's counter is incremented. But, we must be sure to have a
listener attached to the session. This bug was introduced by the commit
cff0f739e51.
Thanks to Fred for reporting this bug to me.
No need to backport this patch, except if commit cff0f739e51 is backported.
A stupid cut-and-paste bug was introduced in commit cff0f739e5. Backend
counters must of course be incremented on the stream's backend, not the
frontend.
No need to backport this patch, except if commit cff0f739e5 is backported.
A first patch was made during 2.0-dev to silence a bogus warning emitted
by gcc : dd1c8f1f72 ("MINOR: cfgparse: Add a cast to make gcc happier."),
but it happens it was not sufficient as the warning re-appeared on 32-bit
machines under gcc-8 and gcc-9 :
src/cfgparse.c: In function 'check_config_validity':
src/cfgparse.c:3642:33: warning: argument 1 range [2147483648, 4294967295] exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread, sizeof(*newsrv->idle_orphan_conns));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This warning doesn't trigger in other locations, and it immediately
vanishes if the previous or subsequent loops do not depend on
global.nbthread anymore, or if the field ordering of the struct server
changes! As discussed in the thread at:
https://www.mail-archive.com/haproxy@formilux.org/msg36107.html
playing with -Walloc-size-larger-than has no effect. And a minimal
reproducer could be isolated, indicating it's pointless to circle around
this one. Let's just cast nbthread to ushort so that gcc cannot make
this wrong detection. It's unlikely we'll use more than 65535 threads in
the near future anyway.
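A minimal, self-contained illustration of the cast (the structure and
variable names are stand-ins for the real fields):

    #include <stdlib.h>

    struct idle_conns { void *first; };

    int main(void)
    {
        int nbthread = 64;          /* stands in for global.nbthread */
        struct idle_conns *orphans;

        /* casting to a 16-bit type keeps gcc's inferred value range well
         * below the bogus 2GiB bound it warns about */
        orphans = calloc((unsigned short)nbthread, sizeof(*orphans));
        free(orphans);
        return 0;
    }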
This may be backported to older releases if they are also affected, at
least to ease the job of distro maintainers.
Thanks to Ilya for testing.
lua-prepend-path allows the administrator to specify a custom Lua library
path to load custom Lua modules that are useful within the context of HAProxy
without polluting the global Lua library folder.
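A hypothetical configuration snippet showing the intended use (paths are
placeholders):

    global
        # search a site-local directory before Lua's default package.path
        lua-prepend-path /usr/share/haproxy-lua/?.lua
        lua-load /etc/haproxy/scripts/example.lua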
While the H2 parser properly checks for the absence of anything but
"trailers" in the TE header field, we forget to check this when sending
the request to an H2 server. The problem is that an H2->H2 conversion
may keep "gzip" and fail on the next stage.
This patch makes sure that we only send "TE: trailers" if the TE header
contains the "trailers" token, otherwise it's dropped.
This fixes issue #464 and should be backported till 1.9.
Since commit 1b8e68e89a ("MEDIUM: stick-table: Stop handling stick-tables
as proxies."), a rule referencing the current proxy with no table leads
to the following error :
[ALERT] 023/071924 (16479) : Proxy 'px': unable to find stick-table '(null)'.
[ALERT] 023/071914 (16479) : Fatal errors found in configuration.
for a config like this one:
    backend px
        stick on src
This patch fixes it and should be backported as far as 2.0.
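For illustration, the same rule with the table spelled out explicitly,
which is what the implicit form is expected to resolve to:

    backend px
        stick on src table px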
A few places in health checks and stream-int on the send path were still
checking for this flag. Now we do not and instead we rely on snd_buf()
to report the error if any.
It's worth noting that all 3 real muxes still use CO_FL_SOCK_WR_SH and
CO_FL_ERROR interchangeably at various places to decide to abort and/or
free their data. This should be clarified and fixed so that only
CO_FL_ERROR is used, and this will render the error paths simpler and
more accurate.
The test was added before splice() and send() to make sure we never
accidentally send after a shutdown, because upper layers do not all
check and it's not their job to do it. In such a case we also set
errno to EPIPE so that the error can be accurately reported, e.g.,
in health checks.
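A simplified sketch of that guard (not the verbatim code; the flag test
stands in for the CO_FL_SOCK_WR_SH check on the connection):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static ssize_t guarded_send(int fd, int wr_shut_done, const void *buf, size_t len)
    {
        if (wr_shut_done) {     /* stands in for (conn->flags & CO_FL_SOCK_WR_SH) */
            errno = EPIPE;      /* accurate error for callers such as health checks */
            return -1;
        }
        return send(fd, buf, len, 0);
    }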
Just like with CO_FL_SOCK_RD_SH, we don't need to check for this flag too
early because conn_sock_send() already does it. No error was lost so it
was harmless, it was only useless code.
The handshake functions dedicated to proxy proto, netscaler and
socks4 all check for this flag before proceeding. This is wrong,
they must not do so; instead they must perform the call to recv() and then
report the close. The reason for this is that the current
construct managed to lose the CO_ER_CIP_EMPTY error code in case
the connection was already shut, thus causing a race condition
with some errors being reported correctly or as unknown depending
on the timing.
There are still leftovers from the pre-xprt_handshake era with lots
of places where I/O callbacks refrain from receiving/sending if they
see that a handshake is present. This needlessly duplicates the
subscribe calls as it will automatically be done by the underlying
xprt_handshake code when attempting the operation.
The only reason for still checking CO_FL_HANDSHAKE is when we decide
to instantiate xprt_handshake. This patch removes all other ones.
As mentioned in commit c192b0ab95 ("MEDIUM: connection: remove
CO_FL_CONNECTED and only rely on CO_FL_WAIT_*"), there is a lack of
consistency on which flags are checked among L4/L6/HANDSHAKE depending
on the code areas. A number of sample fetch functions only check for
L4L6 to report MAY_CHANGE, some places only check for HANDSHAKE and
many check both L4L6 and HANDSHAKE.
This patch starts to make all of this more consistent by introducing a
new mask CO_FL_WAIT_XPRT which is the union of L4/L6/HANDSHAKE and
reports whether the transport layer is ready or not.
All inconsistent call places were updated to rely on this one each time
the goal was to check for the readiness of the transport layer.
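Schematically, the new mask amounts to the following (illustrative
definition; the actual flag values are defined in the connection flags
header):

    /* transport layer not ready yet: L4/L6 connection or handshake pending */
    #define CO_FL_WAIT_XPRT  (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN | CO_FL_HANDSHAKE)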
Most places continue to check CO_FL_HANDSHAKE while in fact they should
check CO_FL_HANDSHAKE_NOSSL, which contains all handshakes but the one
dedicated to SSL renegotiation. In fact the SSL layer should be the
only one checking CO_FL_SSL_WAIT_HS, so as to avoid processing data
when a renegotiation is in progress, but other ones randomly include it
without knowing. And ideally it should even be an internal flag that's
not exposed in the connection.
This patch takes CO_FL_SSL_WAIT_HS out of CO_FL_HANDSHAKE, uses this flag
consistently all over the code, and gets rid of CO_FL_HANDSHAKE_NOSSL.
In order to limit the confusion that has accumulated over time, the
CO_FL_SSL_WAIT_HS flag, which indicates an ongoing SSL handshake
(possibly used by a renegotiation), was moved after the other ones.
As mentioned in c192b0ab95 ("MEDIUM: connection: remove CO_FL_CONNECTED
and only rely on CO_FL_WAIT_*"), si_cs_recv() currently does not propagate
CS_FL_EOS to CF_READ_NULL if CO_FL_WAIT_L4L6 is set, while this situation
doesn't exist anymore. Let's get rid of this confusing test.
We only add the Early-data header, or get ssl_fc_has_early to return 1, if
we haven't already done the SSL handshake, as otherwise we know the early
data were fine and there's no risk of replay attack. But to do so, we
wrongly checked CO_FL_HANDSHAKE; we have to check CO_FL_SSL_WAIT_HS instead,
as we don't care about the status of any other handshake.
This should be backported to 2.1, 2.0, and 1.9.
When deciding if we should add the Early-Data header, or if the sample fetch
should return
Commit 477902bd2e ("MEDIUM: connections: Get rid of the xprt_done
callback.") broke the master CLI for a very obscure reason. It happens
that short requests immediately terminated by a shutdown are properly
received, CS_FL_EOS is correctly set, but in si_cs_recv(), we refrain
from setting CF_SHUTR on the channel because CO_FL_CONNECTED was not
yet set on the connection since we've not passed again through
conn_fd_handler() and it was not done in conn_complete_session(). While
commit a8a415d31a ("BUG/MEDIUM: connections: Set CO_FL_CONNECTED in
conn_complete_session()") fixed the issue, such an accident may happen
again as the root cause is deeper and actually comes down to the fact
that CO_FL_CONNECTED is lazily set at various check points in the code
but not every time we drop one wait bit. It is not the first time we
face this situation.
Originally this flag was used to detect the transition between WAIT_*
and CONNECTED in order to call ->wake() from the FD handler. But since
at least 1.8-dev1 with commit 7bf3fa3c23 ("BUG/MAJOR: connection: update
CO_FL_CONNECTED before calling the data layer"), CO_FL_CONNECTED is
always synchronized against the two others before being checked. Moreover,
with the I/Os moved to tasklets, the decision to call the ->wake() function
is performed after the I/Os in si_cs_process() and equivalent, which don't
care about this transition either.
So in essence, checking for CO_FL_CONNECTED has become a lazy way to
check for (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN), but that always
relies on someone else having synchronized it.
This patch addresses it once and for all by killing this flag and only checking
the two others (for which a composite mask CO_FL_WAIT_L4L6 was added). This
revealed a number of inconsistencies that were purposely not addressed here
for the sake of bisectability:
- while most places do check both L4+L6 and HANDSHAKE at the same time,
some places like assign_server() or back_handle_st_con() and a few
sample fetches looking for proxy protocol do check for L4+L6 but
don't care about HANDSHAKE; these ones will probably fail on TCP
request session rules if the handshake is not complete.
- some handshake handlers do validate that a connection is established
at L4 but didn't clear CO_FL_WAIT_L4_CONN
- the ->ctl method of mux_fcgi, mux_pt and mux_h1 only checks for L4+L6
before declaring the mux ready while the snd_buf function also checks
for the handshake's completion. Likely the former should validate the
handshake as well and we should get rid of these extra tests in snd_buf.
- raw_sock_from_buf() would directly set CO_FL_CONNECTED and would only
later clear CO_FL_WAIT_L4_CONN.
- xprt_handshake would set CO_FL_CONNECTED itself without actually
clearing CO_FL_WAIT_L4_CONN, which could apparently happen only if
waiting for a pure Rx handshake.
- most places in ssl_sock that were checking CO_FL_CONNECTED don't need
to include the L4 check as an L6 check is enough to decide whether to
wait for more info or not.
It also becomes obvious when reading the test in si_cs_recv() that caused
the failure mentioned above that once converted it doesn't make any sense
anymore: having CS_FL_EOS set while still waiting for L4 and L6 to complete
cannot happen since for CS_FL_EOS to be set, the other ones must have been
validated.
Some of these parts will still deserve further cleanup, and some of the
observations above may induce some backports of potential bug fixes once
totally analyzed in their context. The risk of breaking existing stuff
is too high to blindly backport everything.
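For reference, the composite mask mentioned above only groups the two
connection-level wait bits (illustrative definition, actual values differ):

    #define CO_FL_WAIT_L4L6  (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN)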
ocsp_issuer is primarily set from ckch->chain when a PEM is loaded from a file,
but not set when PEM is loaded via CLI payload. Set ckch->ocsp_issuer in
ssl_sock_load_pem_into_ckch to fix that.
Should be backported in 2.1.
We can't just assume conn_create_mux() will be called and set CO_FL_CONNECTED
there: conn_complete_session() might be called synchronously if we're not using
SSL, so we have no choice but to set CO_FL_CONNECTED in there. This should fix
the recent breakage of the mcli reg tests.
When using the OCSP response, if the issuer of the response is in
the certificate chain, its address will be stored in ckch->ocsp_issuer.
However, since the ocsp_issuer could be filled from a separate file, this
pointer is free'd. The refcount of the X509 needs to be incremented to
avoid a double free if we free the ocsp_issuer AND the chain.
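A minimal sketch of the principle (not the actual HAProxy code): when the
same X509 ends up referenced both from the chain and from ocsp_issuer,
take an extra reference so that each owner can safely be freed:

    #include <openssl/x509.h>

    static void share_issuer(X509 *issuer_from_chain, X509 **ocsp_issuer_slot)
    {
        X509_up_ref(issuer_from_chain);         /* one reference per owner */
        *ocsp_issuer_slot = issuer_from_chain;  /* later, X509_free() on both is safe */
    }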
As reported in bug #447, gcc 9.2 invents impossible code paths and then
complains that we don't check for our pointers to be NULL... This code
path is not critical, better add the test to shut it up than try to
help it being less creative.
This code hasn't changed for a while, so it could help distros to
backport this to older releases.
objt_conn() may return NULL, though here we don't have this situation
anymore since the connection is always there, so let's simply switch
to the unchecked __objt_conn(). This addresses issue #454.
Coverity rightfully reported that it's pointless to test for "conn"
to be null while all code paths leading to it have already
dereferenced it. This addresses issue #461.
When using "set ssl cert" on the CLI, if we load a new PEM, the previous
sctl, issuer and OCSP response are still loaded. This doesn't make any
sense since they won't be usable with a new private key.
This patch frees the previous data.
Should be backported in 2.1.
When using multiple filters with "show table", it can be useful to
report which filter entry failed:
> show table MY_TABLE data.gpc0 gt 0 data.gpc0a lt 1000
Filter entry #2: Unknown data type
> show table MY_TABLE data.gpc0 gt 0 data.gpc0 lt 1000a
Filter entry #2: Require a valid integer value to compare against
We now also catch garbage data after the filter:
> show table MY_TABLE data.gpc0 gt 0 data.gpc0 lt 1000 data.gpc0 gt 1\
data.gpc0 gt 10 a
Detected extra data in filter, 16th word of input, after '10'
Even before the multi-filter feature we also silently accepted garbage
after the input, hiding potential bugs:
> show table MY_TABLE data.gpc0 gt 0 data.gpc0
or
> show table MY_TABLE data.gpc0 gt 0 a
In both cases, only first filter entry would be used, silently ignoring
extra filter entry or garbage data.
Last but not least, it is now possible to detect the multi-filter
feature from the cli with something like the following:
> show table MY_TABLE data.blah
Filter entry #1: Unknown data type
The xprt_done_cb callback was used to defer some connection initialization
until we're connected and the handshake is done. As it mostly consists of
creating the mux, instead of using the callback, introduce a conn_create_mux()
function that will just call conn_complete_session() for frontends, and
create the mux for backends.
In h2_wake(), make sure we call the wake method of the stream_interface,
as we no longer wake up the stream task.
In connect_server(), when creating a new connection for which we don't yet
know the mux (because it'll be decided by the ALPN), instead of associating
the connection to the stream_interface, always create a conn_stream. This way,
less special-casing is needed. Store the conn_stream in conn->ctx,
so that we can reach the upper layers if needed.
We don't properly check for missing data values for additional filter
entries, passing an out-of-bounds index to args[] and then the result to
strlen().
Introduced in commit 1a693fc2 ("MEDIUM: cli: Allow multiple filter
entries for "show table").
Last commit 1a693fc2fd ("MEDIUM: cli: Allow multiple filter entries for "show table"")
broke the build at two places:
src/stick_table.c: In function 'table_prepare_data_request':
src/stick_table.c:3620:33: warning: ordered comparison of pointer with integer zero [-Wextra]
src/stick_table.c: In function 'cli_io_handler_table':
src/stick_table.c:3763:5: error: 'for' loop initial declarations are only allowed in C99 mode
src/stick_table.c:3763:5: note: use option -std=c99 or -std=gnu99 to compile your code
make: *** [src/stick_table.o] Error 1
This patch fixes both. No backport needed.
"set ssl cert <filename> <payload>" CLI command should have the same
result as reload HAproxy with the updated pem file (<filename>).
Is not the case, DHparams/cert-chain is kept from the previous
context if no DHparams/cert-chain is set in the context (<payload>).
This patch should be backport to 2.1