Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload")
fixes an issue where the /dev/random FD was leaked by OpenSSL upon a
reload in master-worker mode. Indeed, the FD was not flagged with
CLOEXEC.
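For reference, this is the generic POSIX way to flag a descriptor
close-on-exec (a minimal sketch, not the patch itself):

  #include <fcntl.h>

  /* mark <fd> close-on-exec so it is not leaked across execve()
   * on reload
   */
  static int set_cloexec(int fd)
  {
          int flags = fcntl(fd, F_GETFD);

          if (flags == -1)
                  return -1;
          return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
  }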
The fix checked whether ssl_used_frontend or ssl_used_backend was set
before closing the FD. This is wrong: the Lua init code creates an SSL
server without increasing the backend counter, so the deinit was never
done when SSL was not used in the configuration.
To reproduce the problem, you just need to build haproxy with OpenSSL
and Lua, using an OpenSSL which does not use the getrandom() syscall.
No OpenSSL nor Lua configuration is required in haproxy.
This patch must be backported as far as 1.8.
Fix issue #314.
The recent changes to address URI issues, mixed with the recent fix to
stop caching absolute URIs, have caused the cache not to cache H2
requests anymore, since these come with a scheme and authority. Let's
unbreak this by using absolute URIs all the time, now that we keep host
and authority in sync. So what is done now is that if we have an
authority, we take the whole URI as it is as the cache key. This covers
H2 and H1 absolute requests. If no authority is present (most H1 origin
requests), then we prepend "https://" and the Host header. The reason
for https:// is that most of the time we don't care about the scheme,
but since almost all H2 clients use this scheme, at least we can share
the cache between H1 and H2.
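For illustration, a hedged sketch of the key construction (hypothetical
helper, not the actual cache code):

  #include <stdio.h>

  /* absolute URIs are used verbatim as the cache key; origin-form
   * requests get "https://" and the Host header value prepended
   */
  static int build_cache_key(char *dst, size_t len, const char *uri,
                             const char *host, int has_authority)
  {
          if (has_authority)
                  return snprintf(dst, len, "%s", uri);
          return snprintf(dst, len, "https://%s%s", host, uri);
  }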
No backport is needed since the breakage only affects 2.1-dev.
A lot of our chunk-based functions are able to work on a buffer pointer
but not on an ist. Instead of duplicating all of them to also take an
ist as a source, let's have a macro to make a temporary dummy buffer
from an ist. This will only result in structure field manipulations
that the compiler will quickly figure out and eliminate with inline
functions; in other cases it will just use 4 words on the stack
before calling a function, instead of performing intermediary
conversions.
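A hedged sketch of the idea (the macro name is illustrative; the fields
follow HAProxy's struct buffer and struct ist):

  /* wrap an ist into a temporary dummy buffer so that chunk-based
   * functions can consume it without any copy nor duplication
   */
  #define TMP_BUF_FROM_IST(str) \
          ((struct buffer){ .area = (str).ptr, .size = (str).len, \
                            .data = (str).len, .head = 0 })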
Occasionally some short-lived branches are pushed to help developers
rebase their work; these do not need to be built. This patch
explicitly lists "master" and "next" as the two only branches of
interest. It also adds a comment with the URL for the build status.
When a response generated by HAProxy is handled by the mux H1, if the
corresponding request has not fully been received, the close mode is
forced. Thus, the client is notified that the connection will certainly
be closed abruptly, without waiting for the end of the request.
The flag HTX_FL_PROXY_RESP is now set on responses generated by HAProxy,
excluding responses returned by applets and services. It is an
informative flag set by the application layer.
When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set
on the HTX start-line because of a bug: during headers parsing, the flag
H1_MF_XFER_LEN is never set on the h1m, yet it was the condition used to
set HTX_SL_F_XFER_LEN. Instead, we must rely only on the flags
H1_MF_CLEN or H1_MF_CHNK.
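The corrected condition can be sketched as follows (illustrative):

  /* H1_MF_XFER_LEN is never set during headers parsing, so rely on
   * the framing flags only: a known content length or chunking
   * implies HTX_SL_F_XFER_LEN
   */
  if (h1m->flags & (H1_MF_CLEN | H1_MF_CHNK))
          flags |= HTX_SL_F_XFER_LEN;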
Because of this bug, it was impossible to keep a connection alive for a
response generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set
when an error file has a content length (chunked responses are
unsupported at this stage) and the connection may be kept alive if no
Connection header explicitly closes it.
This patch must be backported to 2.0 and 1.9.
It currently is not possible to figure out the exact haproxy version
from a core file, for the sole reason that the version is stored in
const strings which end up in the .text section, not included in core
files. By turning them into variables we move them to the data section
and they appear in core files. In order to help find them, we just
prepend an extra variable in front of them and we're able to
immediately spot the version strings from a core file:
$ strings core | fgrep -A2 'HAProxy version'
HAProxy version follows
2.1-dev2-e0f48a-88
2019/10/15
(These are haproxy_version and haproxy_date respectively.) This may be
backported to 2.0 since this change is not supposed to impact anything
but the developer's time spent debugging.
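A minimal sketch of the change (the marker variable's name is
illustrative):

  /* non-const globals land in .data, which is dumped in core files,
   * unlike string literals which end up in .text/.rodata; the marker
   * variable makes the strings easy to locate with strings/fgrep
   */
  char haproxy_version_here[] = "HAProxy version follows";
  char haproxy_version[]      = "2.1-dev2-e0f48a-88";
  char haproxy_date[]         = "2019/10/15";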
246c024 ("MINOR: ssl: load the ocsp in/from the ckch") broke the loading
of OCSP files. The function ssl_sock_load_ocsp_response_from_file() was
not returning 0 upon success, which led to an error after the .ocsp was
read.
The error messages for OCSP in ssl_sock_load_crt_file_into_ckch() add a
double extension to the filename, which can be confusing: the messages
reference a .issuer.issuer file.
If the User-Agent data contains format specifiers that are interpreted
by the vfprintf() function, haproxy crashes. The string "%s %s %s" may
be used as an example.
% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server
haproxy log:
00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)
gdb 'where' output:
#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>,
format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2 0x00007f7c014cfe89 in _IO_vsnprintf (
string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U",
maxlen=<optimized out>,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n",
args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5,
format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n")
at src/log.c:1477
#4 0x000055cb75845e0b in ha_wurfl_log (
message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70)
at src/wurfl.c:763
In case WURFL (actually HAProxy) is not compiled with the debug option
enabled (-DWURFL_DEBUG), this bug does not show up.
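For reference, the bug class and its usual fix, sketched (send_log()
shown with a fixed format string):

  /* vulnerable: user-controlled text reaches vfprintf() as the
   * format string, so "%s %s %s" in the User-Agent dereferences
   * random stack values
   */
  send_log(NULL, LOG_NOTICE, message);

  /* fixed: pass the untrusted data as an argument to a fixed format */
  send_log(NULL, LOG_NOTICE, "%s", message);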
This patch could be backported to every version supporting
ScientiaMobile's WURFL (as far as 1.7).
Absolute paths must be used, otherwise the requests are rejected by
HAProxy because of the recent changes. In addition, the configuration
has been slightly updated to remove warnings at startup.
As stated in RFC 7230#5.4, a client must send a field-value for the
Host header that is identical to the authority if the target URI
includes one. So now, by default, if the authority, when provided, does
not match the value of the Host header, an error is triggered. To
mitigate this behavior, it is possible to set the option
"accept-invalid-http-request". In that case, an HTTP error is captured
without interrupting the request parsing.
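A hedged sketch of the check (PR_O2_REQBUG_OK is assumed to be the
internal flag behind "accept-invalid-http-request"; names illustrative):

  /* reject when the authority and the Host header value differ,
   * unless the option is set, in which case the error is only
   * captured and parsing goes on
   */
  if (authority.len && !isteqi(authority, host)) {
          if (!(px->options2 & PR_O2_REQBUG_OK))
                  goto error;
          /* capture the error position and keep parsing */
  }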
There is no reason for a client to send several Host headers; it may
even be considered a bug. However, it is totally invalid for them to
have different values. So now, in such a case, an error is triggered
during request parsing. In addition, when several Host headers are
found with the same value, only the first instance is kept and the
others are skipped.
When the option "accept-invalid-http-request" is enabled, some parsing
errors are ignored but the position of the error is still reported. In
legacy HTTP mode, such errors were captured, so we now do the same in
the H1 multiplexer.
If required, this patch may be backported to 2.0 and 1.9.
When an outgoing HTX message is formatted to a raw message, DATA blocks
may be split so as not to transfer more data than expected. But if the
buffer is almost full, the formatting is interrupted, leaving some
unused free space in the buffer, because the data are too large to be
copied in one go.
Now, we transfer as much data as possible. When the message is chunked,
we also count the size used to encode the data.
When an outgoing HTX message is formatted to a raw message, if we fail
to copy the data of an HTX block into the output buffer, we mark it as
full. Before, this was only done by calling the function
buf_room_for_htx_data(), but this function is designed to optimize
input processing.
This patch must be backported to 2.0 and 1.9.
When raw data are copied or appended to a chunk, the result must not
exceed the chunk size, but it may reach it: unlike the functions that
copy or append a string, there is no terminating null byte.
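The boundary condition, sketched as a helper (illustrative, assuming
HAProxy's struct buffer with its area/size/data fields):

  #include <string.h>

  /* raw copies may completely fill the chunk since no trailing NUL
   * is added: the limit is "data + len <= size", not "< size"
   */
  static size_t chunk_append_raw(struct buffer *chk, const void *src,
                                 size_t len)
  {
          if (chk->data + len > chk->size)
                  return 0;                            /* would not fit */
          memcpy(chk->area + chk->data, src, len);     /* may reach size */
          chk->data += len;
          return len;
  }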
This patch must be backported as far as 1.8. Note that in 1.8, the
functions chunk_cpy() and chunk_cat() don't exist.
In the htx_*_to_h1() functions, several calls to chunk_memcat() are
usually chained, and the expected size is always compared to the
available room in the buffer to be sure the full copy will succeed. But
this is a bit risky, because it relies on chunk_memcat() evaluating the
available room in the buffer the same way the HTX functions do. And,
unfortunately, it does not: a bug in chunk_memcat() always leaves a
byte unused in the buffer. So, for instance, when a chunk is copied
into an almost full buffer, the last CRLF may be skipped.
To fix the issue, we now rely on the result of chunk_memcat() only.
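The pattern now looks like this (a hedged sketch, assuming
chunk_memcat() reports failure through its return value):

  /* stop on the first failed append instead of pre-computing the
   * available room with a separate, possibly diverging, estimate
   */
  if (!chunk_memcat(&tmp, data, len) ||
      !chunk_memcat(&tmp, "\r\n", 2))
          goto buffer_full;  /* mark the buffer full and retry later */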
This patch must be backported to 2.0 and 1.9.
The SSL engines code was written below the OCSP #ifdef, which means you
can't build the engines code if OCSP is deactivated in the SSL lib.
Could be backported to every version since 1.8.
A NULL dereference can occur when inserting SNIs: when checking for
duplicates, if there are already several sni_ctx with the same key.
Fix issue #321.
Don't try to load the files containing the issuer and the OCSP response
each time we generate an SSL_CTX.
The .ocsp and the .issuer are now loaded into the struct
cert_key_and_chain only once, then reused from this structure when
creating an SSL_CTX.
Don't try to load the file containing the sctl each time we generate an
SSL_CTX.
The .sctl is now loaded into the struct cert_key_and_chain only once,
then reused from this structure when creating an SSL_CTX.
Note that this now makes it possible to use the sctl with multi-cert
bundles.
$ echo -e "set ssl cert certificate.pem <<\n$(cat certificate2.pem)\n" | \
socat stdio /var/run/haproxy.stat
Certificate updated!
The operation is locked at the ckch level with a HA_SPINLOCK_T, which
prevents the ckch architecture (ckch_store, ckch_inst...) from being
modified at the same time. So you can't perform two certificate updates
at the same time from multiple CLI connections.
SNI trees are also locked with a HA_RWLOCK_T, so read operations are
blocked only during a certificate update.
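The locking order, sketched (lock labels and variable names are
illustrative):

  /* serialize the whole update on the ckch spinlock; take the SNI
   * tree write lock only around the final swap so that lookups are
   * blocked as briefly as possible
   */
  HA_SPIN_LOCK(CKCH_LOCK, &ckch_lock);
  /* ... build the new ckch_store/ckch_inst/sni_ctx set ... */
  HA_RWLOCK_WRLOCK(SNI_LOCK, &bind_conf->sni_lock);
  /* insert the new sni_ctxs in the trees, remove the old ones */
  HA_RWLOCK_WRUNLOCK(SNI_LOCK, &bind_conf->sni_lock);
  HA_SPIN_UNLOCK(CKCH_LOCK, &ckch_lock);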
Bundles are supported, but you need to update each file
(.rsa|.ecdsa|.dsa) independently. If a file is used in the configuration
as a bundle AND as a unique certificate, both will be updated.
Bundles, directories and crt-lists are supported; however, filters in
crt-lists are currently unsupported.
The code tries to allocate every SNI and certificate instance first, so
it can roll back the operation if that was unsuccessful.
If you have too many instances of the certificate (at least 20000 in my
tests on my laptop), the function can take too much time and be killed
by the watchdog. This will be fixed later. Also, with too many
certificates, it's possible that socat exits before the end of the
generation without displaying a message; consider increasing the socat
timeout in this case (-t2 for example).
The size of the certificate is currently limited by the maximum size of
a payload, which must fit in a buffer.
The ssl_sock_load_{multi}_ckchs() functions were renamed and modified:
- they allocate a ckch_inst and load the SNIs into it
- they return a ckch_inst or NULL
- the sni_ctxs are not added anymore to the SNI trees from there
- they were renamed ckch_inst_new_load_{multi}_store()
- a new ssl_sock_load_ckchs() function calls
ckch_inst_new_load_{multi}_store() and adds the sni_ctxs to the SNI
trees.
ssl_sock_load_multi_ckchs() is now able to fail without polluting the
bind_conf trees or leaking memory.
It is a prerequisite to loading certificates on the fly from the CLI.
The insertion of the sni_ctxs into the trees is done once everything
has been allocated correctly.
ssl_sock_load_ckchn() is now able to fail without polluting the
bind_conf trees or leaking memory.
It is a prerequisite to loading certificates on the fly from the CLI.
The insertion of the sni_ctxs into the trees is done once everything
has been allocated correctly.
In order to allow the creation of sni_ctxs at runtime, we need to split
the function to allow rollback: we must be able to allocate all the
required sni_ctxs before inserting them, so that the operation can be
rolled back if an allocation fails.
The function was split in 2 parts.
The first one, ckch_inst_add_cert_sni(), allocates a struct sni_ctx,
fills it with the right data and inserts it into the ckch_inst's list
of sni_ctxs.
The second takes every sni_ctx in the ckch_inst and inserts them into
the bind_conf's SNI tree, as sketched below.
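The resulting two-phase flow (signatures and names are illustrative):

  /* phase 1: allocate and fill each sni_ctx, linking it into the
   * ckch_inst; a failure here can still be rolled back by freeing
   * the instance and everything attached to it
   */
  if (ckch_inst_add_cert_sni(ctx, ckch_inst, kinfo, name, order) < 0)
          goto error;

  /* phase 2: nothing can fail anymore; walk the instance's list and
   * insert every sni_ctx into the bind_conf's SNI tree
   */
  list_for_each_entry(sni, &ckch_inst->sni_ctx, by_ckch_inst)
          ebst_insert(&bind_conf->sni_ctx, &sni->name);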
struct ckch_inst represents an instance of a certificate (ckch_node)
used in a bind_conf. Every sni_ctx created for one ckch_node in a
bind_conf is linked in this structure.
This patch allocates the ckch_inst for each bind_conf and inserts the
sni_ctxs into its linked list.
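A hedged sketch of the structure (the exact fields are illustrative):

  /* one instance of a certificate used by one bind_conf; it owns
   * every sni_ctx generated for this pair so that they can be
   * inserted, or rolled back, as a single unit
   */
  struct ckch_inst {
          struct ckch_store *ckch_store; /* the certificate storage */
          struct bind_conf *bind_conf;   /* the bind line using it */
          struct list sni_ctx;           /* sni_ctxs of this instance */
          struct list by_ckchs;          /* linked into the store */
  };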
The ssl_sock_populate_sni_keytypes_hplr() function does not return an
error upon an allocation failure.
The process would probably crash during the configuration parsing if
the allocation fails, since the function tries to copy some data into
the allocated memory.
This patch could be backported as far as 1.5.
This patch frees the sni_keytype nodes once the sni_ctxs have been
allocated in ssl_sock_load_multi_ckchn().
Could be backported to every version using the multi-cert SSL bundles.
The ssl_sock_add_cert_sni() function never returns an error when a
sni_ctx allocation fails. It silently ignores the problem and continues
trying to allocate the other SNIs.
It is unlikely that an SNI allocation will succeed after one failure,
and starting with a configuration missing some SNIs should be avoided.
To prevent any problem, we now return -1 upon an SNI allocation error
and stop the configuration parsing.
This patch must be backported to every version supporting the crt-list
SNI filters (as far as 1.5).
A ckch_store is a storage which contains a pointer to one or several
cert_key_and_chain structures.
This patch renames ckch_node to ckch_store, and ckch_n and ckchn to
ckchs.