A bug lay in the parsing of DNS CNAME responses, leading HAProxy to
think the CNAME was improperly resolved in the response.
This should be backported into the 1.6 branch.
The status DNS_UPD_NAME_ERROR, returned by dns_get_ip_from_response
when the queried name can't be found in the response, was improperly
processed (it fell into the default case).
This led to a loop where HAProxy simply resent a new query as soon as
it got a response with this status, in the only case where such a
response is the very first one received by the process.
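A minimal sketch of the idea (hypothetical control flow, not the exact
patch): the status needs its own case so it stops taking the same path
as unknown errors.

    /* hedged sketch: give DNS_UPD_NAME_ERROR its own case so it no
     * longer falls into the default (resend) path */
    switch (status) {
    case DNS_UPD_NAME_ERROR:
        /* the queried name is absent from this response: handle it
         * explicitly instead of blindly resending the query */
        break;
    default:
        /* previously also reached for DNS_UPD_NAME_ERROR,
         * triggering an immediate resend and thus the loop */
        break;
    }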
This should be backported into the 1.6 branch.
It was accidentally discovered that limiting haproxy to 5000 MB leads to
an effective limit of 904 MB. This is because the computation for the
size limit is performed by multiplying rlimit_memmax by 1048576, and
the multiplication is performed on an int instead of a long or long
long. Just switch to 1048576ULL as is done at other places to fix this.
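For illustration, here is a minimal standalone program (not HAProxy
code) showing the truncation:

    #include <stdio.h>

    int main(void)
    {
        int rlimit_memmax = 5000;   /* memory limit in megabytes */

        /* both operands are int: the product wraps around 32 bits
         * (signed overflow, formally undefined, but this is what
         * commonly happens), yielding 947912704, i.e. 904 MB */
        long long bad = rlimit_memmax * 1048576;

        /* the ULL constant widens the multiplication to 64 bits,
         * yielding the expected 5242880000 bytes */
        long long good = rlimit_memmax * 1048576ULL;

        printf("bad=%lld good=%lld\n", bad, good);
        return 0;
    }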
This bug affects all supported versions and the backport is desired,
though it rarely affects users since few people apply memory limits.
When DEBUG_MEMORY_POOLS is used, we now use the link pointer at the end
of each pool item to store a pointer to the pool it belongs to, and we
check it during pool_free2() in order to serve four purposes :
- at any instant we can know what pool an object was allocated from
when examining memory, hence how we should possibly decode it ;
- it serves to detect double frees when they happen, as the pointer
cannot be valid after the element is linked into the pool ;
- it serves to detect if an element is released in the wrong pool ;
- it serves as a canary, to detect if some buffers experienced an
overflow before being released.
All these elements will definitely help better troubleshoot strange
situations, or at least confirm that certain conditions did not happen.
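As a standalone sketch of the principle (an illustration, not HAProxy's
actual pool code):

    #include <stdio.h>
    #include <stdlib.h>

    struct pool_head {
        size_t size;   /* payload size, tag pointer excluded */
    };

    /* the tag lives in the extra pointer at the end of each item */
    static struct pool_head **item_tag(struct pool_head *pool, void *item)
    {
        return (struct pool_head **)((char *)item + pool->size);
    }

    static void *pool_alloc_sketch(struct pool_head *pool)
    {
        void *item = malloc(pool->size + sizeof(void *));
        if (item)
            *item_tag(pool, item) = pool; /* "in use by this pool" */
        return item;
    }

    static void pool_free_sketch(struct pool_head *pool, void *item)
    {
        /* a mismatching tag reveals a double free, a release into
         * the wrong pool, or an overflow that trampled the canary */
        if (*item_tag(pool, item) != pool) {
            fprintf(stderr, "pool_free: corrupted or double-freed item\n");
            abort();
        }
        *item_tag(pool, item) = NULL; /* invalidate the tag */
        free(item); /* the real code would relink into the free list */
    }

    int main(void)
    {
        struct pool_head pool = { .size = 64 };
        void *obj = pool_alloc_sketch(&pool);
        pool_free_sketch(&pool, obj);
        return 0;
    }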
When debugging a core file, it's sometimes convenient to be able to
visit the released entries in the pools (typically last released
session). Unfortunately the first bytes of these entries are destroyed
by the link elements of the pool. And of course, most structures have
their most accessed elements at the beginning of the structure (typically
flags). Let's add a build-time option DEBUG_MEMORY_POOLS which allocates
an extra pointer in each pool item so that the link can be placed at the
end of the item instead of the beginning.
Sometimes analysing a core file isn't easy due to shared memory pools.
Let's add a build option to disable this. It's not enabled by default,
and it could be backported to older versions.
This commit adds support for setting a per-server maxconn from the stats
socket. The only really notable part of this commit is that we need to
check if maxconn == minconn before changing things, as this indicates
that we are NOT using dynamic maxconn. When we are not using dynamic
maxconn, we should update maxconn/minconn in lockstep.
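In sketch form (field and variable names are illustrative, not the
exact patch):

    /* apply a new maxconn from the CLI; <sv> is assumed to point to
     * the target struct server */
    if (sv->maxconn == sv->minconn) {
        /* static maxconn: keep both fields in lockstep so the
         * server stays in non-dynamic mode */
        sv->maxconn = sv->minconn = new_maxconn;
    }
    else {
        /* dynamic maxconn: only the upper bound moves */
        sv->maxconn = new_maxconn;
    }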
It was reported that an example was manipulating a "Referrer" header
instead of the well-known "Referer" one. Even if it's just an example
which doesn't break anything, the typo can be fixed.
The fix should be backported to the 1.4/1.5/1.6 branches.
The function 'EVP_PKEY_get_default_digest_nid()' was introduced in OpenSSL
1.0.0. So for older versions of OpenSSL compiled with SNI support, the
HAProxy compilation fails with the following error:
src/ssl_sock.c: In function 'ssl_sock_do_create_cert':
src/ssl_sock.c:1096:7: warning: implicit declaration of function 'EVP_PKEY_get_default_digest_nid'
if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
[...]
src/ssl_sock.c:1096: undefined reference to `EVP_PKEY_get_default_digest_nid'
collect2: error: ld returned 1 exit status
Makefile:760: recipe for target 'haproxy' failed
make: *** [haproxy] Error 1
So we must add an #ifdef to check the OpenSSL version (>= 1.0.0) before
using this function. It is used to get the default signature digest
associated with the private key used to sign generated X509 certificates.
It is called when the private key differs from EVP_PKEY_RSA, EVP_PKEY_DSA
and EVP_PKEY_EC. It should be enough for most cases.
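The guard looks roughly like this (a sketch; the exact version test in
the patch may differ, and the error label is illustrative):

    /* EVP_PKEY_get_default_digest_nid() only exists since OpenSSL
     * 1.0.0, so only call it there; <capkey> and <nid> come from the
     * surrounding function */
    #if OPENSSL_VERSION_NUMBER >= 0x1000000fL
        if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
            goto mkcert_error;
    #endif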
Basically, the ANY query type is ill-defined and shouldn't really be used
going forward. We can't guarantee that resolvers will do the 'legwork'
for us and actually resolve CNAMEs when we request the ANY query type.
Case in point (obfuscated, clearly):
PRODUCTION! ahayworth@secret-hostname.com:~$
dig @10.11.12.53 ANY api.somestartup.io
; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @10.11.12.53 ANY api.somestartup.io
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62454
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0
;; QUESTION SECTION:
;api.somestartup.io. IN ANY
;; ANSWER SECTION:
api.somestartup.io. 20 IN CNAME api-somestartup-production.ap-southeast-2.elb.amazonaws.com.
;; AUTHORITY SECTION:
somestartup.io. 166687 IN NS ns-1254.awsdns-28.org.
somestartup.io. 166687 IN NS ns-1884.awsdns-43.co.uk.
somestartup.io. 166687 IN NS ns-440.awsdns-55.com.
somestartup.io. 166687 IN NS ns-577.awsdns-08.net.
;; Query time: 1 msec
;; SERVER: 10.11.12.53#53(10.11.12.53)
;; WHEN: Mon Oct 19 22:02:29 2015
;; MSG SIZE rcvd: 242
HAProxy can't handle that response correctly.
Rather than try to build in support for resolving CNAMEs presented
without an A record in an answer section (which may be a valid
improvement further on), this change just skips the ANY query type
altogether. A and AAAA are much more well-defined and predictable.
Notably, this commit preserves the implicit "prefer IPv6" behavior.
Furthermore, using the ANY query type by default is a bad idea (from
Robin on HAProxy's ML):
Using ANY queries for this kind of stuff is considered by most people
to be a bad practice since besides all the things you named it can
lead to incomplete responses. Basically a resolver is allowed to just
return whatever it has in cache when it receives an ANY query instead
of actually doing an ANY query at the authoritative nameserver. Thus
if it only received queries for an A record before you do an ANY query
you will not get an AAAA record even if it is actually available since
the resolver doesn't have it in its cache. Even worse, if before it
only got MX queries, you won't get either A or AAAA.
Kim Seri reported that haproxy 1.6.0 crashes after a few requests
when a bind line has SSL enabled with more than one certificate. This
was caused by an insufficient condition to free generated certs during
ssl_sock_close() which can also catch other certs.
Christopher Faulet analysed the situation like this :
-------
First the LRU tree is only initialized when the SSL certs generation is
configured on a bind line. So, in most cases, it is NULL (which is not
the same thing as empty).
When the SSL certs generation is used, if the cache is not NULL, such a
certificate is pushed into the cache and there is no need to release it
when the connection is closed.
But the cache can be disabled in the configuration. So in that case, we
must free the generated certificate when the connection is closed.
So here, we really have a bug. Here is the buggy part:
3125) if (conn->xprt_ctx) {
3126) #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
3127)         if (!ssl_ctx_lru_tree && objt_listener(conn->target)) {
3128)                 SSL_CTX *ctx = SSL_get_SSL_CTX(conn->xprt_ctx);
3129)                 if (ctx != objt_listener(conn->target)->bind_conf->default_ctx)
3130)                         SSL_CTX_free(ctx);
3131)         }
3132) #endif
3133)         SSL_free(conn->xprt_ctx);
3134)         conn->xprt_ctx = NULL;
3135)         sslconns--;
3136) }
The check on line 3127 is not enough to determine if this is a
generated certificate or not. Because ssl_ctx_lru_tree is NULL,
generated certificates, if any, must be freed. But here ctx should also
be compared to all SNI certificates and not only to default_ctx. Because
of this bug, when an SNI certificate is used for a connection, it is
erroneously freed when this connection is closed.
-------
Christopher provided this reliable reproducer :
----------
global
    tune.ssl.default-dh-param 2048
    daemon

listen ssl_server
    mode tcp
    bind 127.0.0.1:4443 ssl crt srv1.test.com.pem crt srv2.test.com.pem
    timeout connect 5000
    timeout client 30000
    timeout server 30000
    server srv A.B.C.D:80
You just need to generate 2 SSL certificates with 2 CNs (here
srv1.test.com and srv2.test.com).
Then, when doing SSL requests with the first CN, there is no problem. But
with the second CN, it should segfault on the 2nd request.
openssl s_client -connect 127.0.0.1:4443 -servername srv1.test.com // OK
openssl s_client -connect 127.0.0.1:4443 -servername srv1.test.com // OK
But,
openssl s_client -connect 127.0.0.1:4443 -servername srv2.test.com // OK
openssl s_client -connect 127.0.0.1:4443 -servername srv2.test.com // KO
-----------
A long discussion led to the following proposal which this patch implements :
- the cert is generated. It gets a refcount = 1.
- we assign it to the SSL. Its refcount becomes two.
- we try to insert it into the tree. The tree will handle its freeing
using SSL_CTX_free() during eviction.
- if we can't insert into the tree because the tree is disabled, then
we have to call SSL_CTX_free() ourselves, so we'd rather do it
immediately. This more closely mimics the case where the cert
is added to the tree and immediately evicted by concurrent activity
on the cache.
- we never have to call SSL_CTX_free() during ssl_sock_close() because
the SSL session relies on openssl doing the right thing based on
the refcount alone.
- thus we never need to know how the cert was created since the
SSL_CTX_free() is either guaranteed or already done for generated
certs, and this protects other ones against any accidental call to
SSL_CTX_free() without having to track where the cert comes from.
This patch also reduces the inter-dependence between the LRU tree and
the SSL stack, so it should cause less sweating to migrate to threads
later.
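The resulting lifecycle can be sketched as follows (helper names are
illustrative; only the refcount handling matters):

    SSL_CTX *ctx = generate_cert(servername); /* refcount = 1 (ours)  */
    SSL_set_SSL_CTX(ssl, ctx);                /* refcount = 2 (SSL's) */

    if (ssl_ctx_lru_tree) {
        /* the tree takes over our reference and will call
         * SSL_CTX_free() itself upon eviction */
        insert_into_lru(ssl_ctx_lru_tree, ctx);
    }
    else {
        /* tree disabled: drop our reference immediately, exactly as
         * if the cert had been inserted and evicted right away */
        SSL_CTX_free(ctx);
    }

    /* ssl_sock_close() never calls SSL_CTX_free() anymore: a later
     * SSL_free(ssl) releases the SSL's reference and openssl frees
     * the cert once the refcount reaches zero */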
This bug is specific to 1.6.0, as it was introduced after dev7 by
this fix :
d2cab92 ("BUG/MINOR: ssl: fix management of the cache where forged certificates are stored")
Thus a backport to 1.6 is required, but not to 1.5.
Susheel Jalali reported a confusing bug in the namespaces implementation.
If namespaces are enabled at build time (USE_NS=1) and *no* namespace
is used at all in the whole config file, my_socketat() returns -1 and
all socket bindings fail. This is because of a wrong condition in this
function. A possible workaround consists in creating some namespaces.
The function which parses a DNS response buffer did not properly move a
pointer when reading a packet whose records do not use DNS "message
compression" techniques.
Thanks to Øyvind Johnsen for the help provided during the troubleshooting
session.
doc/haproxy-{en,fr}.txt have been removed recently but they were still
referenced in the Makefile. Many other documents have also been
added. Instead of hard-coding a list of documents to install, install
all those in doc/ with some exceptions:
- coding-style.txt is more for developers
- gpl.txt and lgpl.txt are usually present at other places (and I would
have to remove them in the Debian packaging, less work for me)
The documentation in the subdirectories is not installed as it is more
targeted to developers.
Commit 44aed90ce1 moved the stats socket
documentation from config to management, but the remaining references to
section 9.2 were not updated; this updates them to be less confusing.
Signed-off-by: Kevin Decherf <kevin@kdecherf.com>
Released version 1.6.0 with the following main changes :
- BUG/MINOR: Handle interactive mode in cli handler
- DOC: global section missing parameters
- DOC: backend section missing parameters
- DOC: stats parameters available in frontend
- MINOR: lru: do not allocate useless memory in lru64_lookup
- BUG/MINOR: http: Add OPTIONS in supported http methods (found by find_http_meth)
- BUG/MINOR: ssl: fix management of the cache where forged certificates are stored
- MINOR: ssl: Release Servers SSL context when HAProxy is shut down
- MINOR: ssl: Read the file used to generate certificates in any order
- MINOR: ssl: Add support for EC for the CA used to sign generated certificates
- MINOR: ssl: Add callbacks to set DH/ECDH params for generated certificates
- BUG/MEDIUM: logs: fix time zone offset format in RFC5424
- BUILD: Fix the build on OSX (htonll/ntohll)
- BUILD: enable build on Linux/s390x
- BUG/MEDIUM: lua: direction test failed
- MINOR: lua: fix a spelling error in some error messages
- CLEANUP: cli: ensure we can never double-free error messages
- BUG/MEDIUM: lua: force server-close mode on Lua services
- MEDIUM: init: support more command line arguments after pid list
- MEDIUM: init: support a list of files on the command line
- MINOR: debug: enable memory poisonning to use byte 0
- BUILD: ssl: fix build error introduced by recent commit
- BUG/MINOR: config: make the stats socket pass the correct proxy to the parsers
- MEDIUM: server: implement TCP_USER_TIMEOUT on the server
- DOC: mention the "namespace" options for bind and server lines
- DOC: add the "management" documentation
- DOC: move the stats socket documentation from config to management
- MINOR: examples: update haproxy.spec to mention new docs
- DOC: mention management.txt in README
- DOC: remove haproxy-{en,fr}.txt
- BUILD: properly report when USE_ZLIB and USE_SLZ are used together
- MINOR: init: report use of libslz instead of "no compression"
- CLEANUP: examples: remove some obsolete and confusing files
- CLEANUP: examples: remove obsolete configuration file samples
- CLEANUP: examples: fix the example file content-sw-sample.cfg
- CLEANUP: examples: update sample file option-http_proxy.cfg
- CLEANUP: examples: update sample file ssl.cfg
- CLEANUP: tests: move a test file from examples/ to tests/
- CLEANUP: examples: shut up warnings in transparent proxy example
- CLEANUP: tests: removed completely obsolete test files
- DOC: update ROADMAP to remove what was done in 1.6
- BUG/MEDIUM: pattern: fixup use_after_free in the pat_ref_delete_by_id
A number of config files were present in the tests/ directory which
would either test features that are easier to test using more recent
files, or test obsolete features. All of them emit tons of useless
warnings, and instead of fixing them, it's better to remove them since
they have never been used in the last 10 years or so.
The remaining files may still emit warnings and require some fixing but
they provide some value for some tests.
This removes the obsolete CTTPROXY configuration, the tarpit example,
and the pre-content switching example involving 3 layers and cookie
rewriting to emulate the use_backend feature... (9 years old).
Some files are totally obsolete. The Formilux init scripts and packaging
scripts for haproxy 1.1.21 should go. Linux 2.4 kernel patch to enable
epoll() on EOLed RHEL3 should go. The tuning script is incomplete and
only suited to older kernels, better stop shipping this one.
This doc explains how to start/stop haproxy, what signals are used
and a few debugging tricks. It's far from being complete but should
already help a number of users.
The stats part will be taken from the config doc.
This is equivalent to commit 2af207a ("MEDIUM: tcp: implement tcp-ut
bind option to set TCP_USER_TIMEOUT") except that this time it works
on the server side. The purpose is to detect dead server connections
even when checks are rare, disabled, or after a soft reload (since
checks are disabled there as well), and to ensure client connections
will get killed faster.
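At the socket level, the option boils down to something like this
sketch (Linux-specific, available since kernel 2.6.37):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* abort the connection if transmitted data remains unacknowledged
     * for more than <timeout_ms> milliseconds */
    static int set_tcp_user_timeout(int fd, unsigned int timeout_ms)
    {
        return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                          &timeout_ms, sizeof(timeout_ms));
    }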
Baptiste reported a segfault when the "id" keyword was passed on the
"stats socket" line. The problem is related to the fact that the stats
parser stats_parse_global() passes curpx instead of global.stats_fe to
the keyword parser. Indeed, curpx being a pointer to the proxy in the
current section, it is not correct here since the global section does
not describe a proxy. It's just by pure luck that only bind_parse_id()
uses the proxy since any other keyword parser could use it as well.
The bug has no impact since the id specified here is not usable at all
and can be discarded from a faulty configuration.
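The fix is essentially a one-liner; in sketch form (argument lists
abbreviated):

    -    kw->parse(args, cur_arg, curpx, ...)
    +    kw->parse(args, cur_arg, global.stats_fe, ...)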
This fix must be backported to 1.5.
Lua needs to know the direction of the HTTP data being processed
(request or response). It checks the flag SMP_OPT_DIR_REQ, but this flag
is 0. This patch correctly checks the flags after applying the
SMP_OPT_DIR mask.
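Since SMP_OPT_DIR_REQ is defined as 0, testing it with a plain bitwise
AND can never succeed. In sketch form:

    /* buggy: SMP_OPT_DIR_REQ is 0, so this is always false */
    int is_req_bad  = (smp->opt & SMP_OPT_DIR_REQ);

    /* fixed: extract the direction bits first, then compare */
    int is_req_good = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ);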
I would like to contribute the following fix to enable the Linux s390x
platform. The fix was built against today's git master. I've attached the
patch for review. Depending on your buildbot/jenkins/? requirements I can
set up a virtual machine for automated building/testing of the package in
this environment.
A previous commit broke the interactive stats cli prompt. Specifically,
it was not clear that we could be in STAT_CLI_PROMPT when we get to
the output functions for the cli handler, and the switch statement did
not handle this case. We would then fall through to the default
statement, which was recently changed to set error flags on the socket.
This in turn causes the socket to be closed, which is not what we wanted
in this specific case.
To fix, we add a case for STAT_CLI_PROMPT, and simply break out of the
switch statement.
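In sketch form (surrounding states omitted):

    switch (appctx->st0) {
    /* ... other states ... */
    case STAT_CLI_PROMPT:
        /* interactive prompt with nothing to emit: not an error,
         * simply wait for the next command */
        break;
    default:
        /* unexpected state: this path now sets error flags on the
         * socket, which closes it */
        break;
    }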
Testing:
- Connected to unix stats socket, issued 'prompt', observed that I
could issue multiple consecutive commands.
- Connected to unix stats socket, issued 'prompt', observed that socket
timed out after inactivity expired.
- Connected to unix stats socket, issued 'prompt' then 'set timeout cli
5', observed that socket timed out after 5 seconds expired.
- Connected to unix stats socket, issued invalid commands, received
usage output.
- Connected to unix stats socket, issued 'show info', received info
output and socket disconnected.
- Connected to unix stats socket, issued 'show stat', received stats
output and socket disconnected.
- Repeated above tests with TCP stats socket.
[wt: no backport needed, this was introduced during the applet rework in 1.6]