Currently it is possible for the current_rule field to be evaluated before
being set, leading to valgrind complaining:
==16783== Conditional jump or move depends on uninitialised value(s)
==16783== at 0x44E662: http_res_get_intercept_rule (proto_http.c:3730)
==16783== by 0x44E662: http_process_res_common (proto_http.c:6528)
==16783== by 0x4797B7: process_stream (stream.c:1851)
==16783== by 0x414634: process_runnable_tasks (task.c:238)
==16783== by 0x40B02F: run_poll_loop (haproxy.c:1528)
==16783== by 0x407F25: main (haproxy.c:1887)
This was introduced by commit 152b81e7b2.
On platforms where the dl*() functions are not part of the libc, a
program linking Lua also needs to link to libdl.
Moreover, on platforms using the gold linker with the --as-needed flag,
libdl needs to be linked after Lua; otherwise it won't be marked as
needed, will be discarded, and its symbols won't be present at the end
of the linking phase.
Ubuntu enables the --as-needed flag by default. Other distributions may
advertise its use, like Gentoo.
Released version 1.6-dev3 with the following main changes :
- CLEANUP: sample: generalize sample_fetch_string() as sample_fetch_as_type()
- MEDIUM: http: Add new 'set-src' option to http-request
    - DOC: usesrc root privileges requirements
- BUG/MINOR: dns: wrong time unit for some DNS default parameters
- MINOR: proxy: bit field for proxy_find_best_match diff status
- MINOR: server: new server flag: SRV_F_FORCED_ID
- MINOR: server: server_find functions: id, name, best_match
- DOC: dns: fix chapters syntax
- BUILD/MINOR: tools: rename popcount to my_popcountl
- BUILD: add netbsd TARGET
- MEDIUM: 51Degrees code refactoring and cleanup
- MEDIUM: 51d: add LRU-based cache on User-Agent string detection
- DOC: add notes about the "51degrees-cache-size" parameter
- BUG/MEDIUM: 51d: possible incorrect operations on smp->data.str.str
- BUG/MAJOR: connection: fix TLV offset calculation for proxy protocol v2 parsing
- MINOR: Add sample fetch to detect Supported Elliptic Curves Extension
- BUG/MINOR: payload: Add volatile flag to smp_fetch_req_ssl_ec_ext
- BUG/MINOR: lua: type error in the arguments wrapper
- CLEANUP: vars: remove unused struct
- BUG/MINOR: http/sample: gmtime/localtime can fail
- MINOR: standard: add 64 bits conversion functions
- MAJOR: sample: converts uint and sint in 64 bits signed integer
- MAJOR: arg: converts uint and sint in sint
- MEDIUM: sample: switch to saturated arithmetic
- MINOR: vars: returns variable content
- MEDIUM: vars/sample: operators can use variables as parameter
- BUG/MINOR: ssl: fix smp_fetch_ssl_fc_session_id
- BUILD/MINOR: lua: fix a harmless build warning
- BUILD/MINOR: stats: fix build warning due to condition always true
- BUG/MAJOR: lru: fix unconditional call to free due to unexpected semi-colon
- BUG/MEDIUM: logs: fix improper systematic use of quotes with a few tags
- BUILD/MINOR: lua: ensure that hlua_ctx_destroy is properly defined
- BUG/MEDIUM: lru: fix possible memory leak when ->free() is used
- MINOR: vars: make the accounting not depend on the stream
- MEDIUM: vars: move the session variables to the session, not the stream
- BUG/MEDIUM: vars: do not freeze the connection when the expression cannot be fetched
- BUG/MAJOR: buffers: make the buffer_slow_realign() function respect output data
- BUG/MAJOR: tcp: tcp rulesets were still broken
- MINOR: stats: improve compression stats reporting
- MINOR: ssl: make self-generated certs also work with raw IPv6 addresses
- CLEANUP: ssl: make ssl_sock_generated_cert_serial() take a const
- CLEANUP: ssl: make ssl_sock_generate_certificate() use ssl_sock_generated_cert_serial()
- BUG/MINOR: log: missing some ARGC_* entries in fmt_directives()
- MINOR: args: add new context for servers
- MINOR: stream: maintain consistence between channel_forward and HTTP forward
    - MINOR: ssl: provide a function to set the SNI extension on a connection
- MEDIUM: ssl: add sni support on the server lines
- CLEANUP: stream: remove a useless call to si_detach()
- CLEANUP: stream-int: fix a few outdated comments about stream_int_register_handler()
- CLEANUP: stream-int: remove stream_int_unregister_handler() and si_detach()
- MINOR: stream-int: only use si_release_endpoint() to release a connection
- MINOR: standard: provide htonll() and ntohll()
- CLEANUP/MINOR: dns: dns_str_to_dn_label() only needs a const char
- BUG/MAJOR: dns: fix the length of the string to be copied
Jan A. Bruder reported that some very specific hostnames on server
lines were causing haproxy to crash on startup. Given that his
backtrace showed some heap corruption, it was obvious there was an
overflow somewhere. The bug in fact is a typo in dns_str_to_dn_label()
which mistakenly copies one extra byte from the host name into the
output value, thus effectively corrupting the structure.
The bug triggers while parsing the next server of similar length
after the corruption, which generally triggers at config time but
could theoretically crash at any moment during runtime depending on
what malloc sizes are needed next. This is why it's tagged major.
No backport is needed, this bug was introduced in 1.6-dev2.
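For illustration only, here is a hedged sketch of the kind of conversion
involved, turning "www.example.com" into length-prefixed DNS labels; it is
not the actual dns_str_to_dn_label() (only the concept is taken from the
commit), and the reported bug amounted to writing one byte more than
intended in such a copy:
  #include <stddef.h>
  #include <sys/types.h>

  /* Encode <str> as DNS labels ("\3www\7example\3com\0") into <dst> of
   * <dstlen> bytes.  Label-length and validity checks are omitted; returns
   * the encoded size or -1 if the output buffer is too small. */
  static ssize_t str_to_dn_label(const char *str, char *dst, size_t dstlen)
  {
          size_t i = 0, out = 0, start;

          while (1) {
                  if (out >= dstlen)
                          return -1;
                  start = out++;                /* reserve the length byte */
                  while (str[i] && str[i] != '.') {
                          if (out >= dstlen)
                                  return -1;
                          dst[out++] = str[i++];
                  }
                  dst[start] = out - start - 1; /* label length */
                  if (!str[i])
                          break;
                  i++;                          /* skip the dot */
          }
          if (out >= dstlen)
                  return -1;
          dst[out++] = 0;                       /* root label ends the name */
          return out;
  }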
This patch allows the existing operators to take a variable as a
parameter. This is useful, for example, to add the contents of two
variables. This patch modifies the behavior of the operators.
This patch checks calculations for overflow and returns capped values.
This protects against integer overflow in certain operations involving
ratios, percentages, limits or anything else. That can sometimes be
critically important with some operations (eg: content-length < X).
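As a minimal sketch of what saturated arithmetic means here (illustrative
only, not the converters' actual code), a 64-bit signed addition that caps
instead of wrapping looks like this:
  #include <stdint.h>

  /* Return a+b, capped at INT64_MAX / INT64_MIN instead of wrapping. */
  static int64_t sat_add64(int64_t a, int64_t b)
  {
          if (b > 0 && a > INT64_MAX - b)
                  return INT64_MAX;     /* would overflow upwards */
          if (b < 0 && a < INT64_MIN - b)
                  return INT64_MIN;     /* would overflow downwards */
          return a + b;
  }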
This patch removes the 32-bit unsigned integer and the 32-bit signed
integer sample types and replaces them with a single 64-bit signed type.
This simplifies integer handling and clarifies signed versus unsigned
usage. With the previous version, signed and unsigned values were
sometimes used in place of one another, and converters could lose the
sign. For example, divisions were processed as unsigned, so if one
operand was negative, the result was wrong.
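A tiny standalone example of the sign problem described above (not
haproxy code):
  #include <stdio.h>

  int main(void)
  {
          int a = -4, b = 2;

          /* 32-bit unsigned division of a negative value: wrong result */
          printf("%u\n", (unsigned int)a / (unsigned int)b);   /* 2147483646 */
          /* 64-bit signed division: the expected result */
          printf("%lld\n", (long long)a / (long long)b);       /* -2 */
          return 0;
  }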
Note that the integer pattern matching and dotted version pattern matching
are already working with signed 64 bits integer values.
There is one user-visible change : the "uint()" and "sint()" sample fetch
functions which used to return a constant integer have been replaced with
a new more natural, unified "int()" function. These functions were only
introduced in the latest 1.6-dev2 so there's no impact on regular
deployments.
These are the 64-bit equivalent of htonl() and ntohl(). They're a bit
tricky in order to avoid expensive operations.
The principle consists in letting the compiler detect we're playing
with a union and simplify most or all operations. The asm-optimized
htonl() version involving bswap (x86) / rev (arm) / other is a single
operation on little endian, or a NOP on big-endian. In both cases,
this lets the compiler "see" that we're rebuilding a 64-bit word from
two 32-bit quantities that fit into a 32-bit register. In big endian,
the whole code is optimized out. In little endian, with a decent compiler,
a few bswap and 2 shifts are left, which is the minimum acceptable.
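A hedged sketch of that union trick (details may differ from haproxy's
actual htonll()):
  #include <arpa/inet.h>        /* htonl() */

  static inline unsigned long long my_htonll(unsigned long long a)
  {
          union {
                  struct { unsigned int w1, w2; } by32;
                  unsigned long long by64;
          } w = { .by64 = a };

          /* two 32-bit swaps the compiler can turn into bswap/rev, or
           * optimize away entirely on big-endian targets */
          return ((unsigned long long)htonl(w.by32.w1) << 32) | htonl(w.by32.w2);
  }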
This patch adds 3 functions for 64-bit integer conversion:
* lltoa_r : converts a signed 64-bit integer to a string
* read_uint64 : converts a string to an unsigned 64-bit integer with capping
* read_int64 : converts a string to a signed 64-bit integer with capping
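As a minimal illustration of the capping behaviour only (the prototypes
of the real functions are not reproduced here), an unsigned conversion
that clamps instead of wrapping could look like this:
  #include <errno.h>
  #include <limits.h>
  #include <stdlib.h>

  /* Parse an unsigned 64-bit value from <s>; on overflow, return
   * ULLONG_MAX instead of a wrapped value (strtoull saturates there
   * and sets ERANGE). */
  static unsigned long long parse_u64_capped(const char *s)
  {
          unsigned long long v;

          errno = 0;
          v = strtoull(s, NULL, 10);
          if (errno == ERANGE)
                  return ULLONG_MAX;
          return v;
  }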
This patch introduces three new functions which can be used to find a
server in a farm using different server information:
- server unique id (srv->puid)
- server name
- find best match using either name or unique id
When performing best matching, the following applies:
- use the server name first (if provided)
- use the server id if provided
In any case, the function can report any mismatches encountered back to
the caller.
This flag aims at reporting whether the server unique id (srv->puid) has
been forced by the administrator in HAProxy's configuration.
If not set, it means HAProxy has automatically generated the server's
unique id.
The function proxy_find_best_match() can report details back to the
caller through an int provided as an argument.
Until now, proxy_find_best_match() hardcoded the bit values 0x01, 0x02
and 0x04, which is hard to understand when reading code that uses them.
This patch defines 3 macros with more explicit wording, so code relying
on these bit values becomes easier to read.
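For illustration only (the macro names below are assumptions, not
necessarily the ones added by the patch), the idea is to replace the
magic values with named bits:
  /* mismatch bits reported through proxy_find_best_match()'s output argument */
  #define PR_FBM_MISMATCH_ID        0x01   /* proxy id differs */
  #define PR_FBM_MISMATCH_NAME      0x02   /* proxy name differs */
  #define PR_FBM_MISMATCH_PROXYTYPE 0x04   /* proxy type differs */
A caller can then test (diff & PR_FBM_MISMATCH_NAME) instead of
(diff & 0x02), which is much easier to read.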
The man page says that gmtime() and localtime() can return a NULL
value, but this was not tested. It appears that all values of a 32-bit
integer are valid, but it is better to check the return value of these
functions anyway. Moreover, now that the integer has moved from 32 to
64 bits, some 64-bit values may be unsupported.
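A small sketch of the resulting defensive pattern (illustrative, not the
fetch functions themselves):
  #include <time.h>

  /* localtime()/gmtime() may return NULL, e.g. for out-of-range values,
   * so never dereference the result without checking it first. */
  static int get_year(time_t t, int *year)
  {
          const struct tm *tm = localtime(&t);

          if (!tm)
                  return 0;               /* conversion failed */
          *year = tm->tm_year + 1900;
          return 1;
  }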
Change si_alloc_conn() to call si_release_endpoint() instead of
open-coding the connection releasing code when reuse is disabled.
This fuses the code with the one already dealing with applets, makes
it shorter and helps centralizing the connection freeing logic at a
single place.
Madison May reported that the timeout applied by the default
configuration is improperly set up.
This patch fixes this:
- hold valid default to 10s
- timeout retry default to 1s
The commit "MEDIUM: vars: move the session variables to the session, not the stream" (ebcd4844e82a4198ea5d98fe491a46267da1d1ec")
moves the variables from the stream to the session. It forgot to remove
the stream definition of the "vars_sess".
The new "sni" server directive takes a sample fetch expression and
uses its return value as a hostname sent as the TLS SNI extension.
A typical use case consists in forwarding the front connection's SNI
value to the server in a bridged HTTPS forwarder :
sni ssl_fc_sni
ssl_sock_set_servername() is used to set the SNI hostname on an
outgoing connection. This function comes from code originally
provided by Christopher Faulet of Qualys.
When the HTTP forwarder is used, it resets msg->sov so that we know that
the parsing pointer has advanced by exactly (msg->eoh + msg->eol - msg->sov)
bytes which may have to be rewound in case we want to perform an HTTP fetch
after forwarding has started (eg: upon connect).
But when the backend is in TCP mode, there may be no HTTP forwarding
analyser installed, still we may want to perform these HTTP fetches in
case we have already ensured at the TCP layer that we have a properly
parsed HTTP transaction.
In order to solve this, we reset msg->sov before doing a channel_forward()
so that we can still compute http_rewind() on the pending data. That ensures
the buffer is always rewindable even in mixed TCP+HTTP mode.
ARGC_CAP was not added to fmt_directives() which is used to format
error messages when failing to parse log format expressions. The
whole switch/case has been reorganized to match the declaration
order making it easier to spot missing values. The default is not
the "log" directive anymore but "undefined" asking to report the
bug.
Backport to 1.5 is not strictly needed but is desirable at least
for code sanity.
Clients that support ECC cipher suites SHOULD send the specified extension
within the SSL ClientHello message according to RFC4492, section 5.1. We
can use this extension to chain-proxy requests so that, on the same IP
address, an ECC-compatible client gets an EC certificate and a non-ECC
compatible client gets a regular RSA certificate. The main advantage of this
approach compared to the one presented by Dave Zhu on the mailing list
is that we can make it work with OpenSSL versions before 1.0.2.
Example:
  frontend ssl-relay
    mode tcp
    bind 0.0.0.0:443
    use_backend ssl-ecc if { req.ssl_ec_ext 1 }
    default_backend ssl-rsa
  backend ssl-ecc
    mode tcp
    server ecc unix@/var/run/haproxy_ssl_ecc.sock send-proxy-v2 check
  backend ssl-rsa
    mode tcp
    server rsa unix@/var/run/haproxy_ssl_rsa.sock send-proxy-v2 check
  listen all-ssl
    bind unix@/var/run/haproxy_ssl_ecc.sock accept-proxy ssl crt /usr/local/haproxy/ecc.foo.com.pem user nobody
    bind unix@/var/run/haproxy_ssl_rsa.sock accept-proxy ssl crt /usr/local/haproxy/www.foo.com.pem user nobody
Signed-off-by: Nenad Merdanovic <nmerdan@anine.io>
The current method of retrieving the incoming connection's destination
address to hash it is not compatible with IPv6 nor the proxy protocol
because it directly tries to get an IPv4 address from the socket. Instead
we must ask the connection. This is only used when no SNI is provided.
In src/51d.c, in the function _51d_conv(), a final '\0' is added into
smp->data.str.str, which can cause a problem if the SMP_F_CONST flag is
set in smp->flags or if smp->data.str.size is not available.
This patch adds a check on smp->flags and smp->data.str.size, and copies
smp->data.str.str to another buffer using smp_dup(). If necessary, the
"const" flag is set after device detection. Also, this patch removes
the unnecessary call to chunk_reset() on the temp argument.
This option enables overriding the source IP address in an HTTP request.
It is useful when we want to set a custom source IP (e.g. a front proxy
rewrites the address but provides the correct one in headers) or when we
want to mask the source IP address for privacy or compliance.
It acts on any expression which produces a correct IP address.
This modification makes it possible to use sample_fetch_string() in more
places, where we might need to fetch sample values which are not plain
strings. This way we don't need to fetch a string and convert it into
another type afterwards.
When using aliased types, the caller should explicitly check which exact type
was returned (e.g. SMP_T_IPV4 or SMP_T_IPV6 for SMP_T_ADDR).
All usages of sample_fetch_string() are converted to use new function.
Compression stats were not easy to read and could be confusing because
the saving ratio could be taken for global savings while it was only
relative to compressible input. Let's make that a bit clearer using
the new tooltips with a bit more details and also report the effective
ratio over all output bytes.
Commit cc87a11 ("MEDIUM: tcp: add register keyword system.") broke the
TCP ruleset by merging custom rules and accept. It was fixed a first time
by commit e91ffd0 ("BUG/MAJOR: tcp: only call registered actions when
they're registered") but the accept action still didn't work anymore
and was causing the matching rule to simply be ignored.
Since the code introduced a very fragile behaviour by not even mentioning
that accept and custom were silently merged, let's fix this once for all by
adding an explicit check for the accept action. Nevertheless, as previously
mentioned, the action should be changed so that custom is the only action
and the continue vs break indication directly comes from the callee.
No backport is needed, this bug only affects 1.6-dev.
All chapters in the configuration documentation used to follow this syntax :
<chapter number>. <title>
-------------------------
The new chapters introduced to document the dns resolution didn't provide the
dot character after the chapter number, which breaks the parsing for the HTML
converter. Instead of adding new conditions in the converter, we can align the
chapters with this syntax.
Until now, the code assumed that it can get the offset to the first TLV
header just by subtracting the length of the TLV part from the length of
the complete buffer. However, if the buffer contains actual data after
the header, this computation is flawed and leads to haproxy trying to
parse TLV headers from the proxied data.
This change fixes this by making sure that the offset to the first TLV
header is calculated from the start of the buffer -- simply by adding
the size of the proxy protocol v2 header plus the address
family-dependent size of the address information block.
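For illustration, the offsets involved follow directly from the PROXY
protocol v2 specification (this sketch is not the haproxy parser itself):
  #include <stddef.h>
  #include <stdint.h>

  #define PP2_HEADER_LEN      16   /* 12-byte signature + ver/cmd + family + len */
  #define PP2_ADDR_LEN_INET   12   /* 2 x IPv4 address + 2 x port */
  #define PP2_ADDR_LEN_INET6  36   /* 2 x IPv6 address + 2 x port */
  #define PP2_ADDR_LEN_UNIX  216   /* 2 x 108-byte socket path */

  /* Offset of the first TLV, from the start of the PROXY v2 frame, based
   * on the address family nibble (0x1 = INET, 0x2 = INET6, 0x3 = UNIX). */
  static size_t pp2_first_tlv_offset(uint8_t fam)
  {
          switch (fam) {
          case 0x1: return PP2_HEADER_LEN + PP2_ADDR_LEN_INET;
          case 0x2: return PP2_HEADER_LEN + PP2_ADDR_LEN_INET6;
          case 0x3: return PP2_HEADER_LEN + PP2_ADDR_LEN_UNIX;
          default:  return PP2_HEADER_LEN;  /* AF_UNSPEC: no address block */
          }
  }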
The function buffer_slow_realign() was initially designed for requests
only and did not consider pending outgoing data. This causes a problem
when called on responses where data remain in the buffer, which may
happen with pipelined requests when the client is slow to read data.
The user-visible effect is that if less than <maxrewrite> bytes are
present in the buffer from a previous response and these bytes cross
the <maxrewrite> boundary close to the end of the buffer, then a new
response will cause a realign and will destroy these pending data and
move the pointer to what's believed to contain pending output data.
Thus the client receives the crap that lies in the buffer instead of
the original output bytes.
This new implementation now properly realigns everything including the
outgoing data which are moved to the end of the buffer while the input
data are moved to the beginning.
This implementation still uses a buffer-to-buffer copy which is not
optimal in terms of performance and which should be replaced by a
buffer switch later.
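A deliberately simplified sketch of that realignment (not the actual
buffer_slow_realign(); wrapping and haproxy's buffer structure are
reduced to the bare minimum, and a bounce buffer stands in for the
buffer-to-buffer copy mentioned above):
  #include <string.h>

  /* Realign a wrapping buffer of <size> bytes whose data starts at <head>:
   * the <out> pending output bytes end up contiguous at the very end of
   * the storage and the <in> input bytes start at offset 0.  <swap> must
   * hold at least <size> bytes. */
  static void realign_keep_output(char *buf, size_t size, size_t head,
                                  size_t out, size_t in, char *swap)
  {
          size_t i;

          for (i = 0; i < out; i++)
                  swap[i] = buf[(head + i) % size];
          for (i = 0; i < in; i++)
                  swap[out + i] = buf[(head + out + i) % size];

          memcpy(buf + size - out, swap, out);  /* output parked at the end */
          memcpy(buf, swap + out, in);          /* input moved to the start */
  }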
Prior to this patch, the following script would return different hashes
on each round when run from a 100 Mbps-connected machine :
  i=0
  while usleep 100000; do
    echo round $((i++))
    set -- $(nc6 0 8001 < 1kreq5k.txt | grep -v '^[0-9A-Z]' | md5sum)
    if [ "$1" != "3861afbb6566cd48740ce01edc426020" ]; then echo $1;break;fi
  done
The file contains 1000 times this request with "Connection: close" on the
last one :
GET /?s=5k&R=1 HTTP/1.1
The config is very simple :
  global
    tune.bufsize 16384
    tune.maxrewrite 8192
  defaults
    mode http
    timeout client 10s
    timeout server 5s
    timeout connect 3s
  listen px
    bind :8001
    option http-server-close
    server s1 127.0.0.1:8000
And httpterm-1.7.2 is used as the server on port 8000.
After the fix, 1 million requests were sent and all returned the same
contents.
Many thanks to Charlie Smurthwaite of atechmedia.com for his precious
help on this issue, which would not have been diagnosed without his
very detailed traces and numerous tests.
The patch must be backported to 1.5 which is where the bug was introduced.
This is in order to avoid conflicting with NetBSD's popcount* functions,
present since the 6.x release. The final 'l' indicates that the argument
is a long, as NetBSD does.
This patch could be backported to 1.5 to fix the build issue there as well.