When running haproxy -c, the cache parser tries to allocate the full size
of the cache. This can be a problem in an environment where the RAM is
limited.
This patch moves the cache allocation to the post_check callback, which
is not executed during a -c check.
This patch may be backported at least to 2.0 and 1.9. In 1.9, the callback
registration mechanism is not the same, so the patch will have to be adapted.
No need to backport it to 1.8, the code is probably too different.
When a stick counter is fetched, it is important that the requested counter
does not exceed (MAX_SESS_STKCTR - 1). There is actually no bug with a default
build because, by construction, MAX_SESS_STKCTR is defined to 3 and we know we
never exceed the max value: scN_* sample fetches are numbered from 0 to 2, and
for the other sample fetches the value is tested.
But there is a bug if MAX_SESS_STKCTR is set to a lower value, for instance 1.
In this case the counters sc1_* and sc2_* may be undefined.
This patch fixes the issue #330. It must be backported as far as 1.7.
When an error occurs in the function bind_parse_tls_ticket_keys() during the
configuration parsing, the opened file is not always closed. To fix the bug,
all errors are now caught at the same place, where all resources are released.
This patch fixes the bug #325. It must be backported as far as 1.7.
When using the master CLI with 'fd@', the master CLI proxy is stopped
during a reload. Unfortunately, if this is an inherited FD, it is closed
too, and the master CLI won't be able to bind again during the
re-execution, which leads the master to fall back to waitpid mode.
This patch prevents the inherited FDs in the master's listeners from being
closed during a proxy_stop().
This patch is mandatory to use the -W option in VTest versions that contain the
-mcli feature.
(86e65f1024)
Should be backported as far as 1.9.
If openssl 1.1.1 is used, commit c2aae74f0 mistakenly enables the DH automatic
feature from openssl instead of the ECDH automatic one. There is no impact on
the ECDH side because the feature is always enabled for that version, but
'tune.ssl.default-dh-param' was completely ignored for DH parameters.
This patch fixes the bug by calling 'SSL_CTX_set_ecdh_auto' instead of
'SSL_CTX_set_dh_auto'.
Currently some users may be using a 2048-bit DH parameter while thinking they
are using a 1024-bit one, and may therefore experience performance issues on
light hardware.
This patch warns the user if haproxy fails to configure the given DH
parameter. In this case, if the openssl version is > 1.1.0, haproxy lets
openssl automatically choose a default DH parameter. For other openssl
versions, the DH ciphers won't be usable.
A common cause of failure is the security level of openssl.cnf,
which can refuse a 1024-bit DH parameter for a 2048-bit key:
$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
This should be backported to any branch containing the commit c2aae74f0.
It requires all or part of the previous CLEANUP series.
This addresses github issue #324.
All bind keyword parsing messages were shown as alerts.
With this patch, if a message is flagged only with ERR_WARN and
not ERR_ALERT, it is shown with a [WARNING] label instead of [ALERT].
ssl_sock_load_dh_params used to return >0 or -1 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. An error message was added in the case of failure and the
comment was updated.
ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success
or failure. Make it return a set of ERR_* instead so that its callers
can transparently report its status. Given that its callers only used
to know about ERR_ALERT | ERR_FATAL, this is the only code returned
for now. And a comment was updated.
ckch_inst_new_load_store() and ckch_inst_new_load_multi_store() used to
return 0 or >0 to indicate success or failure. Make them return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And the comment was updated.
ssl_sock_load_ckchs() used to return 0 or >0 to indicate success or
failure even though this was not documented. Make it return a set of
ERR_* instead so that its callers can transparently report its status.
Given that its callers only used to know about ERR_ALERT | ERR_FATAL,
this is the only code returned for now. And a comment was added.
These functions were returning only 0 or 1 to indicate success or error,
and made it impossible to return a warning. Let's make them return error
codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since
this is the only code that was set on non-zero return value.
In addition some missing comments were added or adjusted around the
functions' return values.
This test launches an HAProxy process in master-worker mode with 'nbproc 4'.
It sends a "show info" to process 3 and verifies that the right
process replied.
This regtest depends on the support of the master CLI in VTest.
In mux_pt_io_cb(), instead of always calling the wake method, only do so
if nobody subscribed for receive. If we have a subscription, just wake the
associated tasklet up.
This should be backported to 1.9 and 2.0.
There's a small window where the mux_pt tasklet may be woken up, and thus
mux_pt_io_cb() get scheduled, and then the connection is attached to a new
stream. If this happens, don't do anything, and just let the stream know
by calling its wake method. If the connection had an error, the stream should
take care of destroying it by calling the detach method.
This should be backported to 2.0 and 1.9.
This reverts commit "BUG/MEDIUM: mux_pt: Make sure we don't have a
conn_stream before freeing.".
mux_pt_io_cb() is only used if we have no associated stream, so we will
never have a cs and there's no need to check that; we of course have to
destroy the mux in mux_pt_detach() if we have no associated session, or if
there's an error on the connection.
This should be backported to 2.0 and 1.9.
Tasklets may be woken up to run on the calling thread or by a specific thread
(the owner). But since we use a non-thread-safe mechanism when the calling
thread is also the owner, there may sometimes be collisions when two
threads decide to wake the same tasklet up at the same time and one of them
is the owner.
This is more of a matter of usage than code, in that a tasklet usually is
designed to be woken up and executed on the calling thread only (most cases)
or on a specific thread. Thus it is a property of the tasklet itself as this
solely depends on how the code is constructed around it.
This patch performs a small change to address this. By default tasklet_new()
creates a "local" tasklet, which will run on the calling thread, like in 2.0.
This is done by setting tl->tid to a negative value. If the caller wants the
tasklet to run exclusively on a specific thread, it just has to set tl->tid,
which is already what shared tasklet callers do anyway.
No backport is needed.
The idle cleanup tasks' masks are wrong for threads 32 to 64, which
causes the wrong thread to wake up and clean the connections that it
does not own, with a risk of crash or infinite loop depending on
concurrent accesses. For thread 32, any thread between 32 and 64 will
be woken up, but for threads 33 to 64, in fact threads 1 to 32 will
run the task instead.
This issue only affects deployments enabling more than 32 threads. While
it is not common in 1.9, where this has to be explicit and can easily be
dealt with by lowering the number of threads, it can be more common in
2.0 since by default the thread count is determined based on the number
of available processors, hence the MAJOR tag which is mostly relevant
to 2.x.
The problem was first introduced into 1.9-dev9 by commit 0c18a6fe3
("MEDIUM: servers: Add a way to keep idle connections alive.") and was
later moved to cfgparse.c by commit 980855bd9 ("BUG/MEDIUM: server:
initialize the orphaned conns lists and tasks at the end").
This patch needs to be backported as far as 1.9, with care as 1.9 is
slightly different there (uses idle_task[] instead of idle_conn_cleanup[]
like in 2.x).
The use of ~(1 << tid) to compute the sleeping_mask in tasklet_wakeup()
will result in breakage above 32 threads, because (1 << 31) sign-extends to
0xFFFFFFFF80000000, and higher values lead to theoretically undefined results
which in practice wrap from 0x1 to 0x80000000 again and indicate wrong
sleeping masks. It seems that the main visible effect may be extra latency on
some threads or short CPU loops on others.
No backport is needed.
On error, make sure we don't have a conn_stream before freeing the connection
and the associated mux context. Otherwise a stream will still reference
the connection, and attempt to use it.
If we still have a conn_stream, it will properly be freed when the detach
method is called, anyway.
This should be backported to 2.0 and 1.9.
In MT_LIST_BEHEAD(), explicitly set the next element of the prev to NULL,
instead of setting it to the prev of the next. If we only had one element,
then we'd set the next and the prev to the element itself, and thus it would
make the element appear to be outside any list.
There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and
fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans,
fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled
as time values, and by default, were divided by 1000 (because of an invalid
conversion from us to ms). To work around this bug and have the right value, the
argument "us" had to be specified.
So now, tcp info fetchers returning a counter don't support any argument
anymore. To not break old configurations, if an argument is provided, it is
ignored and a warning is emitted during the configuration parsing.
In addition, parameter validation is now performed during the configuration
parsing.
This patch must be backported as far as 1.7.
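As a sketch of the resulting configuration behaviour (the set-header lines are illustrative placements, only the fetch names come from the message above):

```
# counter fetcher: no argument is accepted anymore; a leftover one
# is ignored with a warning during configuration parsing
http-request set-header X-Retrans %[fc_retrans]

# time-value fetcher: the unit argument remains supported
http-request set-header X-RTT %[fc_rtt(us)]
```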
Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload")
fixes an issue where the /dev/random FD was leaked by OpenSSL upon a
reload in master worker mode. Indeed the FD was not flagged with
CLOEXEC.
The fix was checking if ssl_used_frontend or ssl_used_backend were set
to close the FD. This is wrong: indeed, the lua init code creates an SSL
server without increasing the backend count, so the deinit is never done
when SSL is not used in the configuration.
To reproduce the problem you just need to build haproxy with openssl and
lua, with an openssl which does not use the getrandom() syscall. No
openssl nor lua configuration is required for haproxy.
This patch must be backported as far as 1.8.
Fix issue #314.
The recent changes to address URI issues mixed with the recent fix to
stop caching absolute URIs have caused the cache not to cache H2 requests
anymore since these ones come with a scheme and authority. Let's unbreak
this by using absolute URIs all the time, now that we keep host and
authority in sync. So what is done now is that if we have an authority,
we take the whole URI as it is as the cache key. This covers H2 and H1
absolute requests. If no authority is present (most H1 origin requests),
then we prepend "https://" and the Host header. The reason for https://
is that most of the time we don't care about the scheme, but since about
all H2 clients use this scheme, at least we can share the cache between
H1 and H2.
No backport is needed since the breakage only affects 2.1-dev.
A lot of our chunk-based functions are able to work on a buffer pointer
but not on an ist. Instead of duplicating all of them to also take an
ist as a source, let's have a macro to make a temporary dummy buffer
from an ist. This will only result in structure field manipulations
that the compiler will quickly figure out how to eliminate with inline
functions; in other cases it will just use 4 words on the stack
before calling a function, instead of performing intermediary
conversions.
Occasionally some short-lived branches are pushed to help developers
rebase their work, these ones do not need to be built. This patch
explicitly lists "master" and "next" as the only two branches of
interest. It also adds a comment with the URL for the build status.
When a response generated by HAProxy is handled by the mux H1, if the
corresponding request has not fully been received, the close mode is
forced. Thus, the client is notified that the connection will certainly be
closed abruptly, without waiting for the end of the request.
The flag HTX_FL_PROXY_RESP is now set on responses generated by HAProxy,
excluding responses returned by applets and services. It is an informative flag
set by the applicative layer.
When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set on the
HTX start line because of a bug. During the headers parsing, the flag
H1_MF_XFER_LEN is never set on the h1m. But it was the condition to set
HTX_SL_F_XFER_LEN on the HTX start-line. Instead, we must only rely on the flags
H1_MF_CLEN or H1_MF_CHNK.
Because of this bug, it was impossible to keep a connection alive for a response
generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set when an error file
has a content length (chunked responses are unsupported at this stage) and the
connection may be kept alive if there is no connection header specified to
explicitly close it.
This patch must be backported to 2.0 and 1.9.
It currently is not possible to figure out the exact haproxy version from a
core file for the sole reason that the version is stored into a const
string and as such ends up in the .text section that is not part of a
core file. By turning them into variables we move them to the data
section and they appear in core files. In order to help finding them,
we just prepend an extra variable in front of them and we're able to
immediately spot the version strings from a core file:
$ strings core | fgrep -A2 'HAProxy version'
HAProxy version follows
2.1-dev2-e0f48a-88
2019/10/15
(These are haproxy_version and haproxy_date respectively). This may be
backported to 2.0 since this part is not supposed to impact anything but
the developer's time spent debugging.
246c024 ("MINOR: ssl: load the ocsp in/from the ckch") broke the loading
of OCSP files. The function ssl_sock_load_ocsp_response_from_file() was
not returning 0 upon success, which led to an error after the .ocsp was
read.
The error messages for OCSP in ssl_sock_load_crt_file_into_ckch() add a
double extension to the filename, which can be confusing. The messages
reference a .issuer.issuer file.
If the user-agent data contains special characters that are interpreted as
format specifiers by the vfprintf() function, haproxy crashes. The string
"%s %s %s" may be used as an example.
% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server
haproxy log:
00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)
gdb 'where' output:
#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>,
format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2 0x00007f7c014cfe89 in _IO_vsnprintf (
string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U",
maxlen=<optimized out>,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n")
at src/log.c:1477
#4 0x000055cb75845e0b in ha_wurfl_log (
message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70)
at src/wurfl.c:763
In case WURFL (actually HAProxy) is not compiled with the debug option
enabled (-DWURFL_DEBUG), this bug does not come to light.
This patch could be backported to every version supporting
ScientiaMobile's WURFL (as far as 1.7).
Absolute paths must be used, otherwise the requests are rejected by HAProxy
because of the recent changes. In addition, the configuration has been slightly
updated to remove warnings at startup.
As stated in RFC7230#5.4, a client must send a field-value for the Host header
that is identical to the authority if the target URI includes one. So, now, by
default, if the authority, when provided, does not match the value of the Host
header, an error is triggered. To mitigate this behavior, it is possible to
set the option "accept-invalid-http-request". In that case, an http error is
captured without interrupting the request parsing.
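For illustration, a request such as the following hypothetical one would now trigger an error by default, since the authority in the request-target does not match the Host header:

```
GET http://example.com/index.html HTTP/1.1
Host: other.example.org
```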
There is no reason for a client to send several Host headers; it may even be
considered a bug. However, it is totally invalid for them to have different
values. So now, in such a case, an error is triggered during the request
parsing. In addition, when several Host headers are found with the same value,
only the first instance is kept and the others are skipped.
When the option "accept-invalid-http-request" is enabled, some parsing errors
are ignored. But the position of the error is reported. In legacy HTTP mode,
such errors were captured. So, we now do the same in the H1 multiplexer.
If required, this patch may be backported to 2.0 and 1.9.
When an outgoing HTX message is formatted to a raw message, DATA blocks may be
split so as not to transfer more data than expected. But if the buffer is
almost full, the formatting is interrupted, leaving some unused free space in
the buffer, because the data are too large to be copied in one go.
Now, we transfer as much data as possible. When the message is chunked, we also
count the size used to encode the data.
When an outgoing HTX message is formatted to a raw message, if we fail to copy
the data of an HTX block into the output buffer, we mark it as full. Before,
it was only done by calling the function buf_room_for_htx_data(). But this
function is designed to optimize input processing.
This patch must be backported to 2.0 and 1.9.
When raw data are copied or appended in a chunk, the result must not exceed
the chunk size, but it can reach it: unlike the functions that copy or append
a string, there is no terminating null byte.
This patch must be backported as far as 1.8. Note in 1.8, the functions
chunk_cpy() and chunk_cat() don't exist.
In the htx_*_to_h1() functions, most of the time several calls to
chunk_memcat() are chained. The expected size is always compared to the
available room in the buffer to be sure the full copy will succeed. But this
is a bit risky because it relies on the fact that chunk_memcat() evaluates
the available room in the buffer in the same way as the HTX functions do.
And, unfortunately, it does not: a bug in chunk_memcat() always leaves a byte
unused in the buffer. So, for instance, when a chunk is copied into an almost
full buffer, the last CRLF may be skipped.
To fix the issue, we now rely on the result of chunk_memcat() only.
This patch must be backported to 2.0 and 1.9.
The SSL engines code was written below the OCSP #ifdef, which means the
engines code can't be built if OCSP is deactivated in the SSL lib.
Could be backported to every version since 1.8.