This patch changes the names by prefixing them with a "_". So "end" becomes "_end".
Backward compatibility with names without the "_" prefix is preserved.
Alternatively, the keyword "end" can still be used like this: Map['end'].
Thanks to Robin H. Johnson for the bug report.
This should be backported to versions 1.6 and 1.7.
Right now not only are we limited to 8 bits, but this is mentioned nowhere
and the limit was already reached. In addition, pp_opts (proxy protocol
options) were set to 32 bits while only 3 are needed. So let's swap
these two and group them together to avoid leaving two holes in the
structure, saving 64 bits on 64-bit machines.
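For illustration only, a before/after sketch of the idea (the struct and
member layout here is made up, not the real haproxy structure):

    struct before {
            unsigned char flags;     /* 8 bits only, limit already reached */
            /* padding hole */
            void *a;
            unsigned int pp_opts;    /* 32 bits while 3 are enough */
            /* padding hole */
            void *b;
    };                               /* 32 bytes on a 64-bit machine */

    struct after {
            unsigned int flags;      /* room to grow past 8 bits */
            unsigned char pp_opts;   /* 3 bits are enough */
            /* single padding hole */
            void *a;
            void *b;
    };                               /* 24 bytes: 64 bits saved */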
There's a test after a successful synchronous connect() consisting
in waking the data layer up asap if there's no more handshake.
Unfortunately this test is run before setting the CO_FL_SEND_PROXY
flag and before the transport layer adds its own flags, so it can
indicate a willingness to send data while it's not the case and it
will have to be handled later.
This has no visible effect except a useless call to a function in
case of health checks making use of the proxy protocol for example.
Additionally a corner case where EALREADY was returned and considered
equivalent to EISCONN was fixed so that it's considered equivalent to
EINPROGRESS given that the connection is not complete yet. But this
error code should never be returned on the first call anyway, so it's
mostly a cleanup.
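For the record, a simplified sketch of the intended classification of
connect() results (not the literal haproxy code):

    #include <errno.h>
    #include <sys/socket.h>

    /* returns 1 when established, 0 when still pending, -1 on error */
    static int check_connect(int fd, const struct sockaddr *sa, socklen_t len)
    {
            if (connect(fd, sa, len) == 0 || errno == EISCONN)
                    return 1;                 /* connection established */
            if (errno == EINPROGRESS || errno == EALREADY)
                    return 0;                 /* still pending, poll for write */
            return -1;                        /* real error */
    }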
This fix should be backported to 1.7 and 1.6 at least to avoid
headaches during some debugging.
While testing a tcp_fastopen related change, it appeared that in the rare
case where connect() can immediately succeed, we still subscribe to write
notifications on the socket, causing the conn_fd_handler() to immediately
be called and a second call to connect() to be attempted to double-check
the connection.
In fact this issue had already been met with unix sockets (which often
respond immediately) and partially addressed, but the fix was incorrect so
another patch will follow. But for TCP nothing was done.
The fix consists in removing the WAIT_L4_CONN flag if connect() succeeds
and to subscribe for writes only if some handshakes or L4_CONN are still
needed. In addition, in order not to break raw TCP health checks, we have
to continue to enable polling for data when nothing is scheduled to be
sent and the connection is already established, otherwise the caller
will never be notified.
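A simplified sketch of the resulting logic (the flag and function names
approximate haproxy's internal API, and the fd/addr variables as well as
the surrounding code are heavily reduced):

    if (connect(fd, (struct sockaddr *)&addr, addrlen) == 0)
            conn->flags &= ~CO_FL_WAIT_L4_CONN;   /* already established */

    if (conn->flags & (CO_FL_HANDSHAKE | CO_FL_WAIT_L4_CONN))
            fd_want_send(fd);     /* something still pending, wait for writes */
    else
            fd_want_recv(fd);     /* established with nothing to send: poll
                                   * for data so health checks get notified */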
This fix should be backported to 1.7 and 1.6.
A recent patch to support BoringSSL caused this warning to appear on
OpenSSL 1.1.0 :
src/ssl_sock.c:3062:4: warning: statement with no effect [-Wunused-value]
It's caused by SSL_CTX_set_ecdh_auto() which is now only a macro testing
that the last argument is zero, and the result is not used here. Let's
just kill it for both versions.
Tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0. This fix may be backported
to 1.7 if the boringssl fix is as well.
This function was deprecated in 1.1.0 causing this warning :
src/ssl_sock.c:551:3: warning: 'RAND_pseudo_bytes' is deprecated (declared at /opt/openssl-1.1.0/include/openssl/rand.h:47) [-Wdeprecated-declarations]
The man page suggests using RAND_bytes() instead. While the return codes
differ, it turns out that the function was already misused and the caller
was already relying on RAND_bytes()'s return convention.
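A minimal sketch of the replacement and of the return-code convention being
relied upon (not the exact haproxy code):

    #include <openssl/rand.h>

    unsigned char rnd[16];

    if (RAND_bytes(rnd, sizeof(rnd)) != 1) {
            /* PRNG failure or not enough entropy: abort the operation */
    }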
The patch was tested on 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This fix must be backported to 1.7 and the return code check should
be backported to earlier versions if relevant.
In 1.0.0, this function was replaced with ERR_remove_thread_state().
As of openssl 1.1.0, both are now deprecated and do nothing at all.
Thus we simply make this call do nothing in 1.1.0 to silence the
warning.
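The principle, sketched as a hypothetical wrapper (the helper name is an
assumption, only the version guards matter):

    #include <openssl/err.h>

    static void cleanup_thread_err_state(void)
    {
    #if OPENSSL_VERSION_NUMBER >= 0x10100000L
            /* nothing to do: 1.1.0 cleans up thread error state itself */
    #elif OPENSSL_VERSION_NUMBER >= 0x10000000L
            ERR_remove_thread_state(NULL);
    #else
            ERR_remove_state(0);
    #endif
    }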
The change was tested with 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This kills the following warning on 1.1.0 :
src/ssl_sock.c:7266:9: warning: 'ERR_remove_state' is deprecated (declared at /dev/shm/openssl-1.1.0b/include/openssl/err.h:247) [-Wdeprecated-declarations]
This fix should be backported to 1.7.
After the code was ported to support 1.1.0, this one broke on 1.0.0 :
src/shctx.c:406: undefined reference to `SSL_SESSION_set1_id_context'
The function was indeed introduced only in 1.0.1. The build was validated
with 0.9.8, 1.0.0, 1.0.1, 1.0.2 and 1.1.0.
This fix must be backported to 1.7.
Limitations:
. force-ssl/tls is disabled (needs more work):
  it should be set earlier with SSL_CTX_new() (SSL_CTX_set_ssl_version() is removed)
. generate-certificates is disabled (needs more work):
  SSL_NO_GENERATE_CERTIFICATES is introduced to disable generate-certificates.
Also clean up some #ifdefs and types related to the BoringSSL environment.
This change adds the possibility to change the agent-addr and agent-send
directives from the CLI/stats socket. Now you can replace servers and their
configuration without reloading/restarting the whole haproxy, so it's a step
in the no-reload/no-restart direction.
Depends on #e9602af - agent-addr is implemented there.
Can be backported to 1.7.
This directive adds the possibility to set a different address for agent
checks. With this you can manage server status and weight from a central place.
Can be backported to 1.7.
crt-list is extended to support ssl configuration. You can now have
such a line in crt-list <file>:
mycert.pem [npn h2,http/1.1]
Supported settings include "npn", "alpn", "verify", "ca_file", "crl_file",
"ecdhe", "ciphers" and the ssl options.
"crt-base" is also supported to fetch certificates.
When the stream's backend was defined, the request's analyzers flag was always
set to 0 if the stream had no listener. This bug was introduced with the filter
API but never triggered (as far as I know).
Because of commit 5820a366, it is now possible to encounter it. For
example, this happens when the trace filter is enabled on a SPOE backend. The
fix is pretty trivial.
This fix must be backported to 1.7.
The previous version used an O(number of proxies)^2 algo to get the sum of
the number of maxconns of frontends which reference a backend at least once.
This new version adds the frontend's maxconn number to the backend's
struct proxy member 'tot_fe_maxconn' when the backend name is resolved
for switching rules or default_backend statements. At the end, each
backend's final fullconn is computed by looping only once over all the
proxies, i.e. O(n).
The load time of a configuration using a large number of backends (10
thousand) without a configured fullconn was reduced from several minutes
to a few seconds.
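In short, the accumulation works roughly like this (member names follow the
commit, the loop details and the global proxy list name are simplified):

    /* while resolving a switching rule or default_backend of frontend <fe>
     * to backend <be>, count <fe> once towards <be>'s potential load: */
    be->tot_fe_maxconn += fe->maxconn;

    /* then a single pass over all proxies sets the missing fullconn values
     * (haproxy documents the default as 10% of the incoming maxconn sum): */
    for (px = proxies; px; px = px->next)
            if (!px->fullconn)
                    px->fullconn = px->tot_fe_maxconn / 10;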
The output of whether prefer-server-ciphers is supported by OpenSSL
actually always shows yes in 1.8, because SSL_OP_CIPHER_SERVER_PREFERENCE
is redefined before the actual check in src/ssl_sock.c, since it was
moved there from src/haproxy.c.
Since this is not really relevant anymore as we don't support OpenSSL
< 0.9.7 anyway, this change just removes this output.
When haproxy is compiled without zlib or slz, the output of
haproxy -vv shows (null).
Make haproxy -vv output great again by providing the proper
information (which is what we did before).
This is for 1.8 only.
[wt: this one is in fact emulated using http-request deny. This
patch can thus be backported to 1.7, 1.6 and 1.5 so that users
of older versions do not add this keyword in their configs]
With BoringSSL:
SSL_set_shutdown: Assertion `(SSL_get_shutdown(ssl) & mode) == SSL_get_shutdown(ssl)' failed.
"SSL_set_shutdown causes ssl to behave as if the shutdown bitmask (see SSL_get_shutdown)
were mode. This may be used to skip sending or receiving close_notify in SSL_shutdown by
causing the implementation to believe the events already happened.
It is an error to use SSL_set_shutdown to unset a bit that has already been set.
Doing so will trigger an assert in debug builds and otherwise be ignored.
Use SSL_CTX_set_quiet_shutdown instead."
Change logic to not notify on SSL_shutdown when connection is not clean.
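For reference, the pattern recommended by the quoted documentation for a
no-notify shutdown looks like this (a sketch, not the exact change):

    /* mark both directions as already shut down so that SSL_shutdown()
     * returns without emitting a close_notify on a dirty connection */
    SSL_set_shutdown(ssl, SSL_get_shutdown(ssl) |
                          SSL_SENT_SHUTDOWN | SSL_RECEIVED_SHUTDOWN);
    SSL_shutdown(ssl);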
"X509_get_pubkey() attempts to decode the public key for certificate x.
If successful it returns the public key as an EVP_PKEY pointer with its
reference count incremented: this means the returned key must be freed
up after use."
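In practice that means the usual pattern is (a sketch, 'cert' being any
X509 pointer):

    EVP_PKEY *pkey = X509_get_pubkey(cert);

    if (pkey) {
            /* ... use the public key ... */
            EVP_PKEY_free(pkey);   /* drop the reference taken for us */
    }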
This prevents DNS from resolving IPv6-only servers in 1.7. Note, this
patch depends on the previous series :
1. BUG/MINOR: tools: fix off-by-one in port size check
2. BUG/MEDIUM: server: consider AF_UNSPEC as a valid address family
3. MEDIUM: server: split the address and the port into two different fields
4. MINOR: tools: make str2sa_range() return the port in a separate argument
5. MINOR: server: take the destination port from the port field, not the addr
6. MEDIUM: server: disable protocol validations when the server doesn't resolve
This fix (hence the whole series) must be backported to 1.7.
When a server doesn't resolve we don't know the address family so we
can't perform the basic protocol validations. However we know that we'll
ultimately resolve to AF_INET or AF_INET6, so these checks would hold anyway. It is
important to proceed like this otherwise it will not be possible to start
with unresolved addresses.
Next patch will cause the port to disappear from the address field when servers
do not resolve so we need to take it from the separate field provided by
str2sa_range().
Keeping the address and the port in the same field causes a lot of problems,
specifically on the DNS part where we're forced to cheat on the family to be
able to keep the port. This causes some issues such as some families not being
resolvable anymore.
This patch first moves the service port to a new field "svc_port" so that the
port field is never used anymore in the "addr" field (struct sockaddr_storage).
All call places were adapted (there aren't that many).
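Conceptually the struct now looks like this (heavily reduced sketch of
struct server):

    struct server {
            /* ... */
            struct sockaddr_storage addr;   /* address only, no port inside */
            unsigned int svc_port;          /* configured service port */
            /* ... */
    };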
The DNS code is written so as to support AF_UNSPEC to decide on the
server family based on responses, but unfortunately snr_resolution_cb()
considers it as invalid causing a DNS storm to happen when a server
arrives with this family.
This situation is not supposed to happen as long as unresolved addresses
are forced to AF_INET, but this will change with the upcoming fixes and
it's possible that it's not granted already when changing an address on
the CLI.
This fix must be backported to 1.7 and 1.6.
port_to_str() checks that the port size is at least 5 characters instead
of at least 6. While in theory it could permit a buffer overflow, it's
harmless because all callers have at least 6 characters here.
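The arithmetic behind the fix: a port can be up to 65535, i.e. 5 digits,
plus the terminating NUL, hence 6 bytes. A sketch of the corrected check
(variable names assumed, not the actual prototype):

    if (size < 6)
            return 0;                        /* no room for "65535" + '\0' */
    snprintf(str, size, "%u", port);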
This fix needs to be backported to 1.7, 1.6 and 1.5.
http-reuse should normally not be used in conjunction with the proxy
protocol or with "usesrc clientip". While there's nothing fundamentally
wrong with this, whenever these options are used, the server expects the
IP address to be the source address for all requests, which doesn't make
sense with http-reuse.
fc_rcvd_proxy : boolean
Returns true if the client initiated the connection with a PROXY protocol
header.
A flag is added on the struct connection if a PROXY header is successfully
parsed.
The older 'rsprep' directive allows modification of the status reason.
Extend 'http-response set-status' to take an optional string of the new
status reason.
http-response set-status 418 reason "I'm a coffeepot"
Matching updates in Lua code:
- AppletHTTP.set_status
- HTTP.res_set_status
Signed-off-by: Robin H. Johnson <robbat2@gentoo.org>
Commits 5f10ea3 ("OPTIM: http: improve parsing performance of long
URIs") and 0431f9d ("OPTIM: http: improve parsing performance of long
header lines") introduced a bug in the HTTP parser : when a partial
request is read, if the first part ends on an 8-byte boundary (4-byte
on 32-bit machines), the end lies in the header field value part, and
the buffer happened to contain a CR character exactly after the last block,
then the parser could be confused and read this CR character as being
part of the current request, then switch to a new state waiting for an
LF character. Then when the next part of the request appeared, it would
read the character following what was erroneously mistaken for a CR,
see that it is not an LF and fail on a bad request. In some cases, it
can even be worse and the header following the hole can be improperly
indexed causing all sort of unexpected behaviours like a content-length
being ignored or a header appended at the wrong position. The reason is
that there's no check on the end of parsing just after breaking out of the
loop.
One way to reproduce it is with this config :
global
stats socket /tmp/sock1 mode 666 level admin
stats timeout 1d
frontend px
bind :8001
mode http
timeout client 10s
redirect location /
And sending requests this way :
$ tcploop 8001 C P S:"$(dd if=/dev/zero bs=16384 count=1 2>/dev/null | tr '\000' '\r')"
$ tcploop 8001 C P S:"$(dd if=/dev/zero bs=16384 count=1 2>/dev/null | tr '\000' '\r')"
$ tcploop 8001 C P \
S:"GET / HTTP/1.0\r\nX-padding: 0123456789.123456789.123456789.123456789.123456789.123456789.1234567" P \
S:"89.123456789\r\n\r\n" P
Then a "show errors" on the socket will report :
$ echo "show errors" | socat - /tmp/sock1
Total events captured on [04/Jan/2017:15:09:15.755] : 32
[04/Jan/2017:15:09:13.050] frontend px (#2): invalid request
backend <NONE> (#-1), server <NONE> (#-1), event #31
src 127.0.0.1:59716, session #91, session flags 0x00000080
HTTP msg state 17, msg flags 0x00000000, tx flags 0x00000000
HTTP chunk len 0 bytes, HTTP body len 0 bytes
buffer flags 0x00808002, out 0 bytes, total 111 bytes
pending 111 bytes, wrapping at 16384, error at position 107:
00000 GET / HTTP/1.0\r\n
00016 X-padding: 0123456789.123456789.123456789.123456789.123456789.12345678
00086+ 9.123456789.123456789\r\n
00109 \r\n
This fix must be backported to 1.7.
Many thanks to Aleksey Gordeev and Axel Reinhold for providing detailed
network captures and configurations exhibiting the issue.
debug_hexdump() prints to the requested output stream (typically stdout
or stderr) a hex dump of the blob passed as argument. This is useful
to help debug binary protocols.
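For illustration, a self-contained stand-in showing what such a helper
typically does (the real debug_hexdump() prototype may differ):

    #include <stdio.h>

    static void hexdump_sketch(FILE *out, const void *buf, size_t len)
    {
            const unsigned char *p = buf;
            size_t i;

            for (i = 0; i < len; i++) {
                    if ((i & 15) == 0)
                            fprintf(out, "%08zx:", i);    /* line offset */
                    fprintf(out, " %02x", p[i]);
                    if ((i & 15) == 15 || i == len - 1)
                            fputc('\n', out);
            }
    }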
The stats page proudly displays "Updates (v1.5)". This version is inherited
from version.h which has not been updated since 1.5, so let's teach the
create-release script about it.
This must be backported to 1.7. 1.6 now uses the same script (externally)
for the release and will automatically benefit from it.
Error captures almost always report a state 26 (MSG_ERROR) making it
very hard to know what the parser was expecting. The reason is that
we have to switch to MSG_ERROR to trigger the dump, and then during
the dump we capture the current state which is already MSG_ERROR. With
this change we now copy the current state into an err_state field that
will be reported as the faulty state.
This patch looks a bit large because the parser doesn't update the
current state until it runs out of data so the current state is never
known when jumping to the error label! Thus the code had to be updated
to take copies of the current state before switching to MSG_ERROR based
on the switch/case values.
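The core of the change boils down to this pattern at each place where the
parser gives up (msg_state and err_state are the real http_msg fields, the
rest is simplified):

    msg->err_state = msg->msg_state;   /* e.g. MSG_RQURI, MSG_HDR_NAME... */
    msg->msg_state = HTTP_MSG_ERROR;   /* now trigger the error capture */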
As a bonus, it now shows the current state in human-readable form and
not only in numeric form ; in the past it was not an issue since it was
always 26 (MSG_ERROR).
At least now we can get exploitable invalid request/response reports :
[05/Jan/2017:19:28:57.095] frontend f (#2): invalid request
backend <NONE> (#-1), server <NONE> (#-1), event #1
src 127.0.0.1:39894, session #4, session flags 0x00000080
HTTP msg state MSG_RQURI(4), msg flags 0x00000000, tx flags 0x00000000
HTTP chunk len 0 bytes, HTTP body len 0 bytes
buffer flags 0x00908002, out 0 bytes, total 20 bytes
pending 20 bytes, wrapping at 16384, error at position 5:
00000 GET /\e HTTP/1.0\r\n
00017 \r\n
00019 \n
[05/Jan/2017:19:28:33.827] backend b (#3): invalid response
frontend f (#2), server s1 (#1), event #0
src 127.0.0.1:39718, session #0, session flags 0x000004ce
HTTP msg state MSG_HDR_NAME(17), msg flags 0x00000000, tx flags 0x08300000
HTTP chunk len 0 bytes, HTTP body len 0 bytes
buffer flags 0x80008002, out 0 bytes, total 59 bytes
pending 59 bytes, wrapping at 16384, error at position 31:
00000 HTTP/1.1 200 OK\r\n
00017 Content-length : 10\r\n
00038 \r\n
00040 0a\r\n
00044 0123456789\r\n
00056 0\r\n
This should be backported to 1.7 and 1.6 at least to help with bug
reports.
It is important to define analyzers (AN_REQ_* and AN_RES_*) in the same order
they are evaluated in process_stream. This order is really important because,
during analyzer evaluation, we run them from the lowest bit to the highest
one. This way, when an analyzer adds/removes another one during its
evaluation, we know whether it is located before or after it. So, when it adds
an analyzer which is located before it, we can switch to it immediately, even
if it has already been called once but removed since.
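A conceptual sketch of that evaluation loop (process_stream really uses an
explicit sequence of checks; run_one_analyzer() is a hypothetical stand-in):

    while (chn->analysers) {
            /* lowest set bit = analyzer that must run first */
            unsigned int bit = chn->analysers & -chn->analysers;

            /* the analyzer clears its own bit when done and may set a
             * lower bit to hand control back to an earlier stage */
            if (!run_one_analyzer(s, chn, bit))
                    break;            /* waiting for more data */
    }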
Over time, with the introduction of new analyzers, this order was broken. The
main problems come from the filter analyzers: we used values unrelated to
their evaluation order. Furthermore, we used the same values for request and
response analyzers.
So, to fix the bug, the filter analyzers have been split into 2 distinct lists
to have different analyzers for the request channel than those for the
response channel. And of course, we have moved them to the right place.
Some other analyzers have been reordered to respect the evaluation order:
* AN_REQ_HTTP_TARPIT has been moved just before AN_REQ_SRV_RULES
* AN_REQ_PRST_RDP_COOKIE has been moved just before AN_REQ_STICKING_RULES
* AN_RES_STORE_RULES has been moved just after AN_RES_WAIT_HTTP
Note that today we have 29 analyzers, all stored in a 32-bit bitfield. So we can
still add 4 more analyzers before having a problem. A good way to fend off the
problem for a while could be to have a different bitfield for request and
response analyzers.
[wt: all of this must be backported to 1.7, and part of it must be backported
to 1.6 and 1.5]
The registered output type for the sample fetches sc*_get_gpt0
is a boolean, but the value returned is an integer.
This patch fixes the default type to SINT in place of BOOL.
This patch should be backported to 1.6 and 1.7.
When using "option prefer-last-server", we may not always stay on
the same server if the balance algorithm tells us otherwise.
For example, the server may change in the following cases:
balance hdr()
balance rdp-cookie
balance source
balance uri
balance url_param
[wt: backport this to 1.7 and 1.6]
This adds support for the newest pcre2 library,
more secure than its older sibling at the cost of a
more complex API.
It works pretty similarly to the pcre part to keep
the overall change smooth, except:
- we define the string class supported at compile time.
- after matching, the ovec data is properly sized, although
we do not take advantage of it here.
- the lack of jit support is treated less 'dramatically'
as pcre2_jit_compile() in this case is a no-op.
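A minimal standalone example of the pcre2 calling convention mentioned
above (this is plain pcre2 usage, not haproxy's regex wrapper):

    #define PCRE2_CODE_UNIT_WIDTH 8   /* string class chosen at compile time */
    #include <pcre2.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            int errcode, rc;
            PCRE2_SIZE erroffset;
            const char *subject = "GET /index.html HTTP/1.0";
            pcre2_code *re;
            pcre2_match_data *md;

            re = pcre2_compile((PCRE2_SPTR)"GET (\\S+)", PCRE2_ZERO_TERMINATED,
                               0, &errcode, &erroffset, NULL);
            if (!re)
                    return 1;

            /* the match data (and thus the ovec) is sized from the pattern */
            md = pcre2_match_data_create_from_pattern(re, NULL);
            rc = pcre2_match(re, (PCRE2_SPTR)subject, strlen(subject),
                             0, 0, md, NULL);
            if (rc > 1) {
                    PCRE2_SIZE *ovec = pcre2_get_ovector_pointer(md);
                    printf("captured: %.*s\n",
                           (int)(ovec[3] - ovec[2]), subject + ovec[2]);
            }

            pcre2_match_data_free(md);
            pcre2_code_free(re);
            return 0;
    }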
In systemd mode (-Ds), the master haproxy process was waiting for each
child to exit in a specific order. If a process died when it was not its
turn, it would become a zombie process until every process had exited.
The master now waits for any process to exit, in any order.
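The principle of the fix, sketched (the wrapper's real loop does a bit more
bookkeeping, and handle_child_exit() is a hypothetical handler):

    #include <sys/types.h>
    #include <sys/wait.h>

    static void wait_for_children(void)
    {
            int status;
            pid_t pid;

            /* reap whichever child exits first instead of waiting in a
             * fixed pid order, so an out-of-order death never lingers
             * as a zombie */
            while ((pid = waitpid(-1, &status, 0)) > 0)
                    handle_child_exit(pid, status);
    }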
This patch should be backported to 1.7, 1.6 and 1.5.
Calling SSL_set_tlsext_host_name() on the current SSL ctx has no effect
if the session is being resumed because the hostname is already stored
in the session and is not advertised again in subsequent connections.
It's visible when enabling SNI and health checks at the same time because
checks do not send an SNI and regular traffic reuses the same connection,
resulting in no SNI being sent.
The only short-term solution is to reset the reused session when the
SNI changes compared to the previous one. It can make the server-side
performance suffer when SNIs are interleaved but it will work. A better
long-term solution would be to keep a small cache of a few contexts for
a few SNIs.
Now with SSL_set_session(ctx, NULL) it works. This needs to be double-
checked though. The man says that SSL_set_session() frees any previously
existing context. Some people report a bit of breakage when calling
SSL_set_session(NULL) on openssl 1.1.0a (freed session not reusable at
all though it's not an issue for now).
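Sketched, the workaround amounts to this before each outgoing handshake
(variable names are assumptions):

    /* if the SNI changed compared to the session we are about to reuse,
     * drop the cached session so the new name is really advertised */
    if (!prev_sni || strcmp(prev_sni, sni) != 0)
            SSL_set_session(ssl, NULL);
    SSL_set_tlsext_host_name(ssl, sni);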
This needs to be backported to 1.7 and 1.6.
According to nbsrv() documentation this fetcher should return "an
integer value corresponding to the number of usable servers".
In case the backend is disabled, none of its servers is usable, so I believe
the fetcher should return 0.
This patch should be backported to 1.7, 1.6, 1.5.