This new sample fetch captures the cipher list offered by the client
during the client-hello phase of the SSL connection. This is useful for
fingerprinting the SSL connection.
BoringSSL doesn't support SSL_CTX_set_ssl_version. To remove this call, the
CTX creation is cleaned up to clarify what is happening. SSL_CTX_new is used
to match the original behavior, in this order: force-<method> according to the
method version, then the default method with no-<method> options.
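For illustration, the resulting ordering looks roughly like this (hedged
sketch using standard OpenSSL method names, not the actual patch;
force_tlsv12 is a hypothetical flag standing for "force-tlsv12" on the bind
line):

  if (force_tlsv12) {
      /* force-<method>: create the CTX directly with that method */
      ctx = SSL_CTX_new(TLSv1_2_server_method());
  } else {
      /* otherwise use the default method and apply the no-<method> options */
      ctx = SSL_CTX_new(SSLv23_server_method());
      SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv3);
  }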
The OPENSSL_NO_SSL3 error message is now emitted during force-sslv3 parsing
(as for force-tls*).
For CTX creation in the bind environment, all CTX settings related to the
initial ctx are aggregated into the ssl_sock_new_ctx function for clarity.
Tests with crt-list have shown that server_method, options and mode are
linked to the initial CTX (default_ctx): all ssl-options are linked to each
bind line and must be removed from crt-list.
Extract from RFC 6066:
"If the server understood the ClientHello extension but does not recognize
the server name, the server SHOULD take one of two actions: either abort the
handshake by sending a fatal-level unrecognized_name(112) alert or continue the
handshake. It is NOT RECOMMENDED to send a warning-level unrecognized_name(112)
alert, because the client's behavior in response to warning-level alerts is
unpredictable. If there is a mismatch between the server name used by the
client application and the server name of the credential chosen by the server,
this mismatch will become apparent when the client application performs the
server endpoint identification, at which point the client application will have
to decide whether to proceed with the communication."
Thanks to Roberto Guimaraes for the bug report, spotted with openssl-1.1.0.
This fix must be backported.
SSL verify and client_CA inherit from the initial ctx (default_ctx).
When a certificate is found, the SSL connection environment must be replaced by
the certificate configuration (via SSL_set_verify and SSL_set_client_CA_list).
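As a rough sketch (not the exact patch), once the certificate matching the
connection has been selected, the per-connection settings are applied with
the standard OpenSSL calls; conf_verify_mode and conf_ca_list stand for the
values attached to that certificate's configuration:

  /* override what was inherited from default_ctx, for this connection only */
  SSL_set_verify(ssl, conf_verify_mode, NULL /* or the existing verify callback */);
  if (conf_ca_list)
      SSL_set_client_CA_list(ssl, SSL_dup_CA_list(conf_ca_list));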
This patch uses BoringSSL's callback to analyse the ClientHello before any
handshake in order to extract key signature capabilities.
The certificate with the better signature (ECDSA before RSA) is chosen
transparently, if the client supports it. RSA and ECDSA certificates can
be declared in a row (in any order). This makes it possible to set
different ssl and filter parameters with crt-list.
In 1.4-dev5 when we started to implement keep-alive, commit a9679ac
("[MINOR] http: make the conditional redirect support keep-alive")
added a specific check to support keep-alive on redirect rules, but
only when the location would start with a "/", indicating the
client would come back to the same server.
But nowadays most applications put http:// or https:// in front of
each and every location, and continuing to perform a close there is
counter-efficient, especially when multiple objects are fetched at
once from a same origin which redirects them to the correct origin
(eg: after an http to https forced upgrade).
It's about time to get rid of this old trick as it causes more harm
than good in an era where persistent connections are omnipresent.
Special thanks to Ciprian Dorin Craciun for providing convincing
arguments with a pretty valid use case and proposing this draft
patch which addresses the issue he was facing.
This change, although not exactly a bug fix, should be backported
to 1.7 to adapt better to existing infrastructure.
Adrian Fitzpatrick reported that since commit f51658d ("MEDIUM: config:
relax use_backend check to make the condition optional"), typos like "of"
instead of "if" on use_backend rules are not properly detected. The reason
is that the parser only checks for "if" or "unless"; otherwise it considers
there's no keyword, making the rule unconditional.
This patch fixes it. It may reveal some rare config bugs for some people,
but will not affect valid configurations.
This fix must be backported to 1.7, 1.6 and 1.5.
Error in the HTTP parser. The function http_get_path() can
return NULL and this case is not caught in the code. So, we
try to dereference a NULL pointer, and a segfault occurs.
These two lines are useful to prevent the bug:
acl prevent_bug path_beg /
http-request deny if !prevent_bug
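The fix itself boils down to checking the return value; a minimal sketch
(not the exact patch):

  char *path = http_get_path(txn);
  if (!path)
      return 0; /* no path in the request: stop here instead of dereferencing NULL */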
This bug fix should be backported in 1.6 and 1.7
Commit 405ff31 ("BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL")
introduced a regression causing some random crashes apparently due to
memory corruption. The issue is the use of SSL_CTX_set_quiet_shutdown()
instead of SSL_set_quiet_shutdown(), making it use a different structure
and causing the flag to be put who-knows-where.
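For the record, the two calls operate on different structures, which is why
the flag ended up in the wrong place:

  /* per-context call: only affects SSL objects created from ctx afterwards,
   * and certainly must not be fed the connection's SSL pointer */
  SSL_CTX_set_quiet_shutdown(ctx, 1);

  /* per-connection call: this is the one intended here */
  SSL_set_quiet_shutdown(ssl, 1);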
Many thanks to Jarno Huuskonen who reported this bug early and who
bisected the issue to spot this patch. No backport is needed, this
is 1.8-specific.
Historically, http-response rules could not, on error, generate an HTTP
response during their evaluation. This possibility was "implicitly" added with
http-response redirect rules (51d861a4). But, at the time, replace-header rules
were kept untouched. When such a rule failed, the rules processing was just
stopped (like for an accept rule).
Conversely, when a replace-header rule fails on the request, it generates an
HTTP response (400 Bad Request).
With this patch, errors on replace-header rules are now handled in the same way
for HTTP requests and HTTP responses.
This patch should be backported in 1.7 and 1.6.
This is the same fix as the one concerning the redirect rules (0d94576c).
The buffer used to expand the <replace-fmt> argument must be protected to
prevent it from being overwritten during build_logline() execution (the
function used to expand the format string).
This patch should be backported in 1.7, 1.6 and 1.5. It relies on commit b686afd
("MINOR: chunks: implement a simple dynamic allocator for trash buffers") for
the trash allocator, which has to be backported as well.
Some users have experienced trouble using the compression filter when the
HTTP response body length is undefined. They complained about receiving
truncated responses.
In fact, the bug can be triggered if there is at least one filter attached to
the stream but none registered to analyze the HTTP response body. In this case,
when the body length is undefined, data should be forwarded without any
parsing. But, because of a wrong check, we were starting to parse them. Because
this was not expected, the end of the response was not correctly detected and
the response could be truncated. So now, we rely on the HAS_DATA_FILTER macro
instead of the HAS_FILTER one to decide whether to parse the HTTP response body
or not.
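Schematically, the corrected check looks like this (hedged sketch, macro name
as mentioned above, helper names purely illustrative):

  if (HAS_DATA_FILTER(s, chn)) {
      /* at least one filter registered for data: parse the body */
      parse_response_body(s, msg);
  } else {
      /* no data filter: forward the data untouched, even when the
       * body length is undefined */
      forward_body_without_parsing(s, msg);
  }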
Furthermore, in http_response_forward_body, the test to not forward the server
closure to the client has been updated to reflect conditions listed in the
associated comment.
And finally, in http_msg_forward_body, when the body length is undefined, we
continue parsing it until the server closes the connection, without any check
on filters. So filters can safely stop filtering data during the parsing.
This fix should be backported in 1.7
See 4b788f7d34
If we use the action "http-request redirect" with a Lua sample-fetch or
converter, and the Lua function calls one of the Lua log functions, the
header name is corrupted: it contains an extract of the last logged data.
This is due to an overwrite of the trash buffer, because its scope is not
respected in the "add-header" function. The scope of the trash buffer must
be limited to the function using it. The build_logline() function can
execute a lot of other functions which can use the trash buffer.
This patch fixes the usage of the trash buffer. It limits the scope of this
global buffer to the local function: we first build the header value using
build_logline(), and then we store the header name.
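A minimal sketch of the resulting pattern, assuming the alloc_trash_chunk()/
free_trash_chunk() helpers from commit b686afd (simplified, not the exact
patch; s and fmt stand for the stream and the configured format list):

  struct chunk *value = alloc_trash_chunk();

  if (!value)
      return -1;                 /* allocation may fail and must be checked */
  /* build the header value first, in its own reserved chunk, so nothing
   * called from build_logline() can clobber it */
  value->len = build_logline(s, value->str, value->size, fmt);
  /* ... only then write the header name and this value into the message ... */
  free_trash_chunk(value);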
Thanks to Jesse Schulman for the bug report.
This patch must be backported to 1.7, 1.6 and 1.5, and it relies
on commit b686afd ("MINOR: chunks: implement a simple dynamic allocator for
trash buffers") for the trash allocator, which has to be backported as well.
The trash buffers are becoming increasingly complex to deal with due to
the code's modularity allowing some functions to be chained and causing
the same chunk buffers to be used multiple times along the chain, possibly
corrupting each other. In fact the trash chunks were designed from scratch
explicitly not to survive a function call, but string manipulation makes
this impossible most of the time while not fulfilling the need for
reliable temporary chunks.
Here we introduce the ability to allocate a temporary trash chunk which
is reserved, so that it will not conflict with the trash chunks other
functions use, and will even support reentrant calls (eg: build_logline).
For this, we create a new pool which is exactly the size of a usual chunk
buffer plus the size of the chunk struct so that these chunks when allocated
are exactly the same size as the ones returned by get_trash_buffer(). The
allocation may fail, so the caller must check the result, and the caller is
also responsible for freeing these chunks.
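The allocator itself can be pictured like this (illustrative sketch following
the description above; the pool name and exact layout are assumptions):

  /* each pool entry holds the struct chunk immediately followed by its buffer */
  pool2_trash = create_pool("trash", sizeof(struct chunk) + global.tune.bufsize,
                            MEM_F_SHARED);

  struct chunk *alloc_trash_chunk(void)
  {
      struct chunk *chunk = pool_alloc2(pool2_trash);

      if (chunk)
          chunk_init(chunk, (char *)chunk + sizeof(struct chunk),
                     global.tune.bufsize);
      return chunk; /* may be NULL: the caller checks it and later frees it */
  }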
The code focuses on minimal changes and ease of reliable backporting
because it will be needed in stable versions in order to support next
patch.
UDP sockets used to send DNS queries are created before the fork happens and
this is a big problem because all the processes (in case of a
configuration starting multiple processes) share the same socket. Some
processes may consume responses dedicated to another one, some servers
may be disabled, some IPs changed, etc...
As a workaround, this patch closes the existing socket and creates a new
one after the fork() has happened.
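In each child this roughly amounts to (hedged sketch with hypothetical field
names; the real code walks the configured nameservers):

  if (ns->fd >= 0)
      close(ns->fd);      /* drop the socket inherited from the parent */
  ns->fd = socket(ns->addr.ss_family, SOCK_DGRAM, IPPROTO_UDP);
  /* ... then reconnect it to the nameserver and register it in the poller ... */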
[wt: backport this to 1.7]
The function dns_init_resolvers() is used to initialize the sockets used to
send DNS queries.
This patch gives the function the ability to close a socket before
re-opening it.
[wt: this needs to be backported to 1.7 for next fix]
This patch changes the names, prefixing them with a "_". So "end" becomes "_end".
Backward compatibility with the names without the "_" prefix is assured.
Alternatively, the keyword "end" can also be used like this: Map['end'].
Thanks to Robin H. Johnson for the bug report.
This should be backported to versions 1.6 and 1.7.
Right now not only are we limited to 8 bits, but it's mentioned nowhere
and the limit was already reached. In addition, pp_opts (proxy protocol
options) were set to 32 bits while only 3 are needed. So let's swap
these two and group them together to avoid leaving two holes in the
structure, saving 64 bits on 64-bit machines.
There's a test after a successful synchronous connect() consisting
in waking the data layer up asap if there's no more handshake.
Unfortunately this test is run before setting the CO_FL_SEND_PROXY
flag and before the transport layer adds its own flags, so it can
indicate a willingness to send data while it's not the case and it
will have to be handled later.
This has no visible effect except a useless call to a function in
case of health checks making use of the proxy protocol for example.
Additionally a corner case where EALREADY was returned and considered
equivalent to EISCONN was fixed so that it's considered equivalent to
EINPROGRESS given that the connection is not complete yet. But this
code should never return on the first call anyway so it's mostly a
cleanup.
This fix should be backported to 1.7 and 1.6 at least to avoid
headaches during some debugging.
While testing a tcp_fastopen related change, it appeared that in the rare
case where connect() can immediately succeed, we still subscribe to write
notifications on the socket, causing the conn_fd_handler() to immediately
be called and a second call to connect() to be attempted to double-check
the connection.
In fact this issue had already been met with unix sockets (which often
respond immediately) and partially addressed, but incorrectly, so another
patch will follow. But for TCP nothing was done.
The fix consists in removing the WAIT_L4_CONN flag if connect() succeeds
and to subscribe for writes only if some handshakes or L4_CONN are still
needed. In addition in order not to fail raw TCP health checks, we have
to continue to enable polling for data when nothing is scheduled for
leaving and the connection is already established, otherwise the caller
will never be notified.
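Roughly, the first part of the fix amounts to this (hedged sketch; the flag
and helper names are assumed to be the connection layer's usual ones):

  if (connect(fd, addr, addrlen) == 0)
      conn->flags &= ~CO_FL_WAIT_L4_CONN;   /* already established */

  /* subscribe for writes only if something still requires it */
  if (conn->flags & (CO_FL_WAIT_L4_CONN | CO_FL_HANDSHAKE))
      fd_want_send(fd);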
This fix should be backported to 1.7 and 1.6.
A recent patch to support BoringSSL caused this warning to appear on
OpenSSL 1.1.0 :
src/ssl_sock.c:3062:4: warning: statement with no effect [-Wunused-value]
It's caused by SSL_CTX_set_ecdh_auto() which is now only a macro testing
that the last argument is zero, and the result is not used here. Let's
just kill it for both versions.
Tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0. This fix may be backported
to 1.7 if the boringssl fix is as well.
This function was deprecated in 1.1.0 causing this warning :
src/ssl_sock.c:551:3: warning: 'RAND_pseudo_bytes' is deprecated (declared at /opt/openssl-1.1.0/include/openssl/rand.h:47) [-Wdeprecated-declarations]
The man page suggests using RAND_bytes() instead. While the return codes
differ, it turns out that the function was already misused and was
relying on RAND_bytes() return code instead.
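For reference, the contract the code now relies on is simply:

  unsigned char rnd[16];

  if (RAND_bytes(rnd, sizeof(rnd)) != 1) {
      /* RAND_bytes() returns 1 on success and 0 (or -1 on very old
       * versions) otherwise: treat anything else as a failure */
      return -1;
  }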
The patch was tested on 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This fix must be backported to 1.7 and the return code check should
be backported to earlier versions if relevant.
In 1.0.0, this function was replaced with ERR_remove_thread_state().
As of openssl 1.1.0, both are now deprecated and do nothing at all.
Thus we simply make this call do nothing in 1.1.0 to silence the
warning.
The change was tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This kills the following warning on 1.1.0 :
src/ssl_sock.c:7266:9: warning: 'ERR_remove_state' is deprecated (declared at /dev/shm/openssl-1.1.0b/include/openssl/err.h:247) [-Wdeprecated-declarations]
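The resulting behaviour can be summed up by a guard of this kind (sketch,
version thresholds approximate):

  #if OPENSSL_VERSION_NUMBER >= 0x1010000fL
      /* 1.1.0+: thread error state is cleaned up automatically, do nothing */
  #elif OPENSSL_VERSION_NUMBER >= 0x1000000fL
      ERR_remove_thread_state(NULL);
  #else
      ERR_remove_state(0);
  #endif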
This fix should be backported to 1.7.
After the code was ported to support 1.1.0, this one broke on 1.0.0 :
src/shctx.c:406: undefined reference to `SSL_SESSION_set1_id_context'
The function was indeed introduced only in 1.0.1. The build was validated
with 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
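The fix is essentially a version guard around the call (sketch; the arguments
are placeholders, and the function only exists since 1.0.1):

  #if OPENSSL_VERSION_NUMBER >= 0x1000100fL
      SSL_SESSION_set1_id_context(sess, sid_ctx, sid_ctx_len);
  #endif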
This fix must be backported to 1.7.
Limitations:
. force-ssl/tls is disabled (needs more work):
  it should be set earlier with SSL_CTX_new (SSL_CTX_set_ssl_version is removed).
. generate-certificates is disabled (needs more work):
  SSL_NO_GENERATE_CERTIFICATES is introduced to disable generate-certificates.
Also cleans up some #ifdefs and types related to the BoringSSL environment.
This change adds the possibility to change the agent-addr and agent-send
directives via the CLI/socket. Now you can replace servers and their
configuration without reloading/restarting the whole haproxy, so it's a step
in the no-reload/no-restart direction.
Depends on #e9602af - agent-addr is implemented there.
Can be backported to 1.7.
This directive adds the possibility to set a different address for agent-checks.
With this you can manage server status and weight from a central place.
Can be backported to 1.7.
crt-list is extended to support ssl configuration. You can now have
such a line in a crt-list <file>:
mycert.pem [npn h2,http/1.1]
Support includes the "npn", "alpn", "verify", "ca_file", "crl_file",
"ecdhe" and "ciphers" configuration and ssl options.
"crt-base" is also supported to fetch certificates.
When the stream's backend was defined, the request's analyzers flag was always
set to 0 if the stream had no listener. This bug was introduced with the filter
API but was never triggered (I think so).
Because of commit 5820a366, it is now possible to encounter it. For
example, this happens when the trace filter is enabled on a SPOE backend. The
fix is pretty trivial.
This fix must be backported to 1.7.
The previous version used an O(number of proxies)^2 algorithm to get the sum
of the maxconns of the frontends which reference a backend at least once.
This new version adds the frontend's maxconn number to the backend's
struct proxy member 'tot_fe_maxconn' when the backend name is resolved
for switching rules or the default_backend statement. At the end, each
backend's final fullconn is computed by looping only one time over all
proxies, O(n).
The load of a configuration using a large amount of backends (10 thousand)
without configured fullconn was reduced from several minutes to a few seconds.
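Conceptually (hedged sketch of the description above):

  /* when a use_backend rule or default_backend statement is resolved,
   * credit the frontend's maxconn to the target backend */
  be->tot_fe_maxconn += curproxy->maxconn;

  /* ... then a single O(n) pass over all proxies computes the default
   * fullconn of each backend from its tot_fe_maxconn ... */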
The output of whether prefer-server-ciphers is supported by OpenSSL
actually always shows yes in 1.8, because SSL_OP_CIPHER_SERVER_PREFERENCE
is redefined before the actual check in src/ssl_sock.c, since it was
moved there from src/haproxy.c.
Since this is not really relevant anymore as we don't support OpenSSL
< 0.9.7 anyway, this change just removes this output.
When haproxy is compiled without zlib or slz, the output of
haproxy -vv shows (null).
Make haproxy -vv output great again by providing the proper
information (which is what we did before).
This is for 1.8 only.
[wt: this one is in fact emulated using http-request deny. This
patch can thus be backported to 1.7, 1.6 and 1.5 so that users
of older versions do not add this keyword in their configs]
With BoringSSL:
SSL_set_shutdown: Assertion `(SSL_get_shutdown(ssl) & mode) == SSL_get_shutdown(ssl)' failed.
"SSL_set_shutdown causes ssl to behave as if the shutdown bitmask (see SSL_get_shutdown)
were mode. This may be used to skip sending or receiving close_notify in SSL_shutdown by
causing the implementation to believe the events already happened.
It is an error to use SSL_set_shutdown to unset a bit that has already been set.
Doing so will trigger an assert in debug builds and otherwise be ignored.
Use SSL_CTX_set_quiet_shutdown instead."
Change logic to not notify on SSL_shutdown when connection is not clean.
"X509_get_pubkey() attempts to decode the public key for certificate x.
If successful it returns the public key as an EVP_PKEY pointer with its
reference count incremented: this means the returned key must be freed
up after use."
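In practice every successful call must therefore be paired with an
EVP_PKEY_free(), e.g.:

  EVP_PKEY *pkey = X509_get_pubkey(crt);

  if (pkey) {
      /* ... inspect the key ... */
      EVP_PKEY_free(pkey); /* drop the reference taken by X509_get_pubkey() */
  }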
This prevents DNS from resolving IPv6-only servers in 1.7. Note, this
patch depends on the previous series :
1. BUG/MINOR: tools: fix off-by-one in port size check
2. BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
3. MEDIUM: server: split the address and the port into two different fields
4. MINOR: tools: make str2sa_range() return the port in a separate argument
5. MINOR: server: take the destination port from the port field, not the addr
6. MEDIUM: server: disable protocol validations when the server doesn't resolve
This fix (hence the whole series) must be backported to 1.7.
When a server doesn't resolve we don't know the address family so we
can't perform the basic protocol validations. However we know that we'll
ultimately resolve to AF_INET or AF_INET6 so the controls are OK. It is
important to proceed like this otherwise it will not be possible to start
with unresolved addresses.
Next patch will cause the port to disappear from the address field when servers
do not resolve so we need to take it from the separate field provided by
str2sa_range().
Keeping the address and the port in the same field causes a lot of problems,
specifically on the DNS part where we're forced to cheat on the family to be
able to keep the port. This causes some issues such as some families not being
resolvable anymore.
This patch first moves the service port to a new field "svc_port" so that the
port field is never used anymore in the "addr" field (struct sockaddr_storage).
All call places were adapted (there aren't that many).
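Schematically (hedged sketch of the new layout):

  struct server {
      /* ... */
      struct sockaddr_storage addr; /* address only: the port is no longer set here */
      unsigned int svc_port;        /* new field holding the configured service port */
      /* ... */
  };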
The DNS code is written so as to support AF_UNSPEC to decide on the
server family based on responses, but unfortunately snr_resolution_cb()
considers it as invalid causing a DNS storm to happen when a server
arrives with this family.
This situation is not supposed to happen as long as unresolved addresses
are forced to AF_INET, but this will change with the upcoming fixes and
it's possible that it's not granted already when changing an address on
the CLI.
This fix must be backported to 1.7 and 1.6.