str2listener() was temporarily hacked to support datagram sockets for
the log-forward listeners. This hack had an undesirable side effect:
"bind udp@1.2.3.4:5555" was silently accepted as TCP for a bind line.
We don't need this hack anymore since the only user (log-forward) now
relies on str2receiver(), so such an address will now properly be rejected.
Now str2sa_range() will enforce the caller's port specification passed
using the PA_O_PORT_* flags, and will return an error on failure. For
optional ports, values 0-65535 will be enforced. For mandatory ports,
values 1-65535 are enforced. In case of ranges, it is also verified that
the upper bound is not lower than the lower bound, as this used to result
in empty listeners.
I couldn't find an easy way to test this using VTC since the purpose is
to trigger parse errors, so instead a test file is provided as
tests/ports.cfg with comments about what errors are expected for each
line.
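For illustration, bind lines of the following kind must now be rejected
(hypothetical examples, not copied from ports.cfg) :

  listen l1
      bind 192.168.0.1          # mandatory port is missing: rejected
      bind 192.168.0.1:0        # port 0 where a port is mandatory: rejected
      bind 192.168.0.1:70000    # out of the 1-65535 range: rejected
      bind 192.168.0.1:81-80    # upper bound below the lower bound: rejected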
Initially when mt_lists were added, their purpose was to be used with
the scheduler, where anyone may concurrently add the same tasklet, so
it sounded natural to implement a check in MT_LIST_ADD{,Q}. Later their
usage was extended and MT_LIST_ADD{,Q} started to be used in situations
where the element to be added was exclusively owned by the one performing
the operation so a conflict was impossible. This became more obvious with
the idle connections and the new macro was called MT_LIST_ADDQ_NOCHECK.
But this remains confusing and at many places it's not expected that
an MT_LIST_ADD could possibly fail, and worse, at some places we start
by initializing it before adding (and the test is superfluous), so let's
rename them to something more conventional to denote the presence of the
check or not:
MT_LIST_ADD{,Q} : unconditional operation, the caller owns the
element, and doesn't care about the element's
current state (exactly like LIST_ADD)
MT_LIST_TRY_ADD{,Q}: only perform the operation if the element is not
already added or in the process of being added.
This means that the previously "safe" MT_LIST_ADD{,Q} are not "safe"
anymore. This also means that in case of backport mistakes in the
future causing this to be overlooked, the slower and safer functions
will still be used by default.
Note that the missing unchecked MT_LIST_ADD macro was added.
The rest of the code will have to be reviewed so that a number of
callers of MT_LIST_TRY_ADDQ are changed to MT_LIST_ADDQ to remove
the unneeded test.
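As a minimal sketch of when each variant applies (illustrative code, not
taken from the tree; the include path and element type are assumptions) :

  #include <common/mini-clist.h>

  struct item {
          struct mt_list list;
  };

  static struct mt_list queue = MT_LIST_HEAD_INIT(queue);

  /* the caller exclusively owns "item" and just initialized it, so no
   * conflict is possible: the unconditional form is enough
   */
  static void enqueue_owned(struct item *item)
  {
          MT_LIST_INIT(&item->list);
          MT_LIST_ADDQ(&queue, &item->list);
  }

  /* several threads may try to queue the same element (e.g. a tasklet
   * anyone may wake up): the checked form does nothing if the element
   * is already added or in the process of being added
   */
  static void enqueue_shared(struct item *item)
  {
          MT_LIST_TRY_ADDQ(&queue, &item->list);
  }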
Half of the users of this include only need the type definitions and
not the manipulation macros nor the inline functions. Moving the various
types into mini-clist-t.h makes the files cleaner. The other half had all
their includes grouped at the top. A few files continued to reference it
without using it and were cleaned.
In addition it was about time that we'd rename that file, it's not
"mini" anymore and contains a bit more than just circular lists.
This is where other imported components are located. All files which
used to directly include ebtree were touched to update their include
path so that "import/" is now prefixed before the ebtree-related files.
The ebtree.h file was slightly adjusted to read compiler.h from the
common/ subdirectory (this is the only change).
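The include changes are of this kind (illustrative) :

  - #include <eb32tree.h>
  + #include <import/eb32tree.h>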
A build issue was encountered when eb32sctree.h is loaded before
eb32tree.h because only the former checks for the latter before
defining type u32. This was addressed by adding the reverse ifdef
in eb32tree.h.
No further cleanup was done yet in order to keep changes minimal.
Add test-list.c, a stress-test for mt_list, to ensure there's no concurrency
issue.
The number of threads is provided on the command line; each thread randomly
adds, removes, or parses the list until MAX_ACTION actions (currently
5000000) have been performed.
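The per-thread loop is roughly of this shape (hedged sketch, not the
actual file contents; the real test also walks the list, which is
omitted here) :

  #include <stdlib.h>
  #include <common/mini-clist.h>

  #define MAX_ACTION 5000000

  static struct mt_list head = MT_LIST_HEAD_INIT(head);
  static unsigned long actions;

  struct item {
          struct mt_list list; /* MT_LIST_INIT()ed before threads start */
  };

  static void *worker(void *arg)
  {
          struct item *it = arg;

          while (__atomic_add_fetch(&actions, 1, __ATOMIC_RELAXED) < MAX_ACTION) {
                  if (rand() & 1)
                          MT_LIST_TRY_ADDQ(&head, &it->list); /* no-op if already in */
                  else
                          MT_LIST_DEL(&it->list);             /* safe on a detached element */
          }
          return NULL;
  }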
This is a python wrapper which creates a socketpair and passes it as two
environment variables to haproxy.
It's the easiest way to test the sockpair protocol in haproxy.
This test file covers the various functions provided by ist.h. It allows
both to test them for absence of regression, and to observe the code
emitted at different optimization levels.
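For reference, the kind of primitives involved (hedged example; the
include location varies across versions) :

  #include <common/ist.h>

  static void example(void)
  {
          struct ist host = ist("Host");        /* builds an ist from a string */
          size_t len = istlen(host);            /* 4 */
          int same = isteq(host, ist("Host"));  /* 1: same length and content */

          (void)len; (void)same;
  }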
When support for passing SNI to the server was added in 1.6-dev3, there
was no way to validate that the certificate presented by the server would
really match the name requested in the SNI, which is quite a problem as
it allows other (valid) certificates to be presented instead (when hitting
the wrong server or due to a man in the middle).
This patch adds the missing check against the value passed in the SNI.
The "verifyhost" value keeps precedence if set. If no SNI is used and
no verifyhost directive is specified, then the certificate name is not
checked (this is unchanged).
In order to extract the SNI value, it was necessary to make use of
SSL_SESSION_get0_hostname(), which appeared in openssl 1.1.0. This is
a trivial function which returns the value of s->tlsext_hostname, so
it was provided in the compat layer for older versions. After some
refinements from Emmanuel, it now builds with openssl 1.0.2, openssl
1.1.0 and boringssl. A test file was provided to ease testing all cases.
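The shim is essentially of this shape (hedged sketch; the actual compat
code may differ slightly) :

  #include <openssl/ssl.h>

  #if OPENSSL_VERSION_NUMBER < 0x1010000fL
  /* appeared in openssl 1.1.0; on older versions simply return the
   * hostname stored in the session
   */
  static inline const char *SSL_SESSION_get0_hostname(const SSL_SESSION *sess)
  {
          return sess->tlsext_hostname;
  }
  #endif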
After some careful observation period it may make sense to backport
this to 1.7 and 1.6 as some users rightfully consider this limitation
as a bug.
Cc: Emmanuel Hocdet <manu@gandi.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
This config tries to involve the various possible combinations of connection
handshakes, on the accept side and on the connect side. It also produces logs
indicating the handshake time.
May be tested with tcploop as the server, both for TCP and HTTP mode :
- accept new connection
- pause 100ms
- send what looks like an HTTP response
- wait 500ms and close
Starting log server (mainly to check timers) :
$ socat udp-recvfrom:5514,fork -
Starting server :
$ tcploop 8000 L N A W P100 S:"HTTP/1.0 200 OK\r\nConnection: close\r\n\r\n" P500
Testing all combinations with server-speaks-first (tcp) :
$ nc 0 8007
Testing all combinations with client-speaks-first (tcp) :
$ (printf "GET / HTTP/1.0\r\n\r\n";sleep 1) | nc 0 8007
Testing all combinations with client-speaks-first after pause (tcp) :
$ (usleep 0.05 ; printf "GET / HTTP/1.0\r\n\r\n";sleep 1) | nc 0 8007
Testing all combinations with client-speaks-first (http) :
$ (printf "GET / HTTP/1.0\r\n\r\n";sleep 1) | nc 0 8017
Testing all combinations with client-speaks-first after pause (http) :
$ (usleep 0.05 ; printf "GET / HTTP/1.0\r\n\r\n";sleep 1) | nc 0 8017
Same tests must be redone after surrounding connect() in tcp_connect_server()
with fcntl(fd, F_SETFL, 0) and fcntl(fd, F_SETFL, O_NONBLOCK) for synchronous
connect().
A problem was reported recently by some users of programs compiled
with Go 1.5 which by default blocks all signals before executing
processes, resulting in haproxy not receiving SIGUSR1 or even SIGTERM.
This program mimics this behaviour to make it easier to run tests.
It also displays the current signal mask. A simple test consists in
running it through itself.
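A minimal sketch of such a program (illustrative, not the actual test
program) :

  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          sigset_t set;
          int sig;

          /* show which signals are currently blocked */
          sigprocmask(SIG_SETMASK, NULL, &set);
          for (sig = 1; sig < NSIG; sig++)
                  if (sigismember(&set, sig))
                          printf("signal %d is blocked\n", sig);

          /* block everything, as Go 1.5 does, then run the target;
           * the signal mask is preserved across execve()
           */
          sigfillset(&set);
          sigprocmask(SIG_SETMASK, &set, NULL);

          if (argc > 1)
                  execvp(argv[1], argv + 1);
          return 1;
  }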
In C89, "void *" is automatically promoted to any pointer type. Casting
the result of malloc/calloc to the type of the LHS variable is therefore
unneeded.
Most of this patch was built using this Coccinelle patch:
@@
type T;
@@
- (T *)
(\(lua_touserdata\|malloc\|calloc\|SSL_get_app_data\|hlua_checkudata\|lua_newuserdata\)(...))
@@
type T;
T *x;
void *data;
@@
x =
- (T *)
data
@@
type T;
T *x;
T *data;
@@
x =
- (T *)
data
Unfortunately, either Coccinelle or I is too limited to detect situations
where a complex RHS expression is of type "void *" and therefore casting
is not needed. Those cases were manually examined and corrected.
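The resulting changes are of this kind (illustrative) :

  - struct foo *f = (struct foo *)calloc(1, sizeof(*f));
  + struct foo *f = calloc(1, sizeof(*f));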
A number of config files were present in the tests/ directory and which
would either test features that are easier to test using more recent files
or test obsolete features. All of them emit tons of useless warnings, and
instead of fixing them, better remove them since they have never been used
in the last 10 years or so.
The remaining files may still emit warnings and require some fixing but
they provide some value for some tests.
Summary:
Added a document about hashing under internal docs, explaining
hashing in haproxy, along with the results of tests under the tests
folder.
These documents together explain the motivation for adding
options for hashing algorithms with the option of enabling or
disabling of avalanche.
Before it was possible to resize the buffers using global.tune.bufsize,
the trash has always been the size of a buffer by design. Unfortunately,
the recent buffer sizing at runtime forgot to adjust the trash, resulting
in it being too short for content rewriting if buffers were enlarged from
the default value.
The bug was encountered in 1.4 so the fix must be backported there.
make_arg_list() builds an array of typed arguments with their values,
that the caller describes how to parse. This will be used to support
multiple arguments for ACLs and patterns, which is currently problematic
and prevents ACLs and patterns from being merged. Up to 7 argument types
may be enumerated in a single 32-bit word, including their number of
mandatory parts.
At the moment, these files are not used yet, they're only built. Note that
the 4-bit encoding for the type has left only one unused type!
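As an illustration of that packing (hypothetical macros, not the actual
encoding) :

  /* low 4 bits: number of mandatory arguments; the remaining 28 bits
   * hold up to 7 argument types of 4 bits each
   */
  #define ARG_NBMAND(desc)   ((desc) & 0xF)
  #define ARG_TYPE(desc, n)  (((desc) >> (4 * ((n) + 1))) & 0xF)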
New option "http-send-name-header" specifies the name of a header which
will hold the server name in outgoing requests. This is the name of the
server the connection is really sent to, which means that upon redispatches,
the header's value is updated so that it always matches the server's name.
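For example (the header name below is hypothetical) :

  backend app
      http-send-name-header X-Target-Server
      server srv1 192.168.0.1:80
      server srv2 192.168.0.2:80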
The MySQL check has been revamped to be able to send real MySQL data,
and to avoid Aborted connects on MySQL side.
It is however backward compatible with older versions, but it is highly
recommended to use the new mode by adding "user <username>" on the
"mysql-check" line.
The new check consists in sending two MySQL packets, one Client
Authentication packet with the "haproxy" username (by default), and one
QUIT packet to correctly close the MySQL session. We then parse the MySQL
Handshake Initialisation packet and/or Error packet. It is a basic but
useful test which produces neither errors nor aborted connects on the
server.
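For example (addresses are illustrative) :

  backend mysql
      option mysql-check user haproxy
      server db1 192.168.0.10:3306 check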
(cherry picked from commit a1e4dcfe5718311b7653d7dabfad65c005d0439b)
The request cookie parser did not allow spaces to appear in cookie
values nor around the equal sign. The various RFCs on the subject
say different things, some suggesting that a space is allowed after
the equal sign and being worded in a way that lets one believe it
is allowed before too. Some spaces may appear inside values and be
part of the values. The quotes allow delimiters to be embedded in
values. The spaces before and after attributes should be trimmed.
The new parser addresses all those points and has been carefully tested.
It fixes misplaced spaces around equal signs before processing the cookies
or forwarding them. It also tries its best to perform clean removals by
always keeping the delimiter after the value being removed and leaving one
space after it.
The variables inside the parser have been renamed to make the code a lot
more understandable, and one multi-function pointer has been eliminated.
Since this patch fixes real possible issues, it should be backported to 1.4
and possibly 1.3, since one (single) case of wrong spaces has been reported
in 1.3.
The code handling the Set-Cookie has not been touched yet.
If the prefix is set to "/", it means the user does not want to alter
the original URI, so we don't want to insert a new slash before the
original URI.
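For example (illustrative; the acl is hypothetical) :

  # set a cookie while preserving the original URI
  redirect prefix / set-cookie SEEN=1 if !cookie_seen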
(cherry-picked from commit 02a35c74942c1bce762e996698add1270e6a5030)
It is now possible to set or clear a cookie during a redirection. This
is useful for logout pages, or for protecting against some DoSes. Check
the documentation for the options supported by the "redirect" keyword.
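For example (illustrative rule; the acl name is hypothetical) :

  acl logout_page url_beg /logout
  redirect location /goodbye.html clear-cookie SESSID= if logout_page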
(cherry-picked from commit 4af993822e880d8c932f4ad6920db4c9242b0981)
If "drop-query" is present on a "redirect" line using the "prefix" mode,
then the returned Location header will be the request URI without the
query-string. This may be used on some login/logout pages, or when it
must be decided to redirect the user to a non-secure server.
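For example (illustrative; the acl is hypothetical) :

  # send the user to the non-secure site, without the query string
  redirect prefix http://www.example.com drop-query if go_insecure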
(cherry-picked from commit f2d361ccd73aa16538ce767c766362dd8f0a88fd)
22 regression tests for state machines are managed by the new
file tests/test-fsm.cfg. Check it, they are all documented
inside. Most of the bugs introduced during the FSM extraction
have been found with these tests.
The new "wait_end" acl delays evaluation of the rule (and the next ones)
to the end of the analysis period. This is intended to be used with TCP
content analysis. A rule referencing such an ACL will not match until
the delay is over. An equivalent default ACL "WAIT_END" has been created.
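For example (illustrative) :

  tcp-request inspect-delay 5s
  tcp-request content accept if WAIT_END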
This new keyword matches a dotted version mapped into an integer.
It permits to match an SSL message protocol version just as if it
was an integer, so that it is easy to map ranges, like this :
acl obsolete_ssl req_ssl_ver lt 3
acl correct_ssl req_ssl_ver 3.0-3.1
acl invalid_ssl req_ssl_ver gt 3.1
Both SSLv2 hello messages and SSLv3 messages are supported. The
test tries to be strict enough to avoid being easily fooled. In
particular, it waits for as many bytes as announced in the message
header if this header looks valid (bound to the buffer size).
The same decoder will be usable with minor changes to check the
response messages.
Some people need to inspect contents of TCP requests before
deciding to forward a connection or not. A future extension
of this demand might consist in selecting a server farm
depending on the protocol detected in the request.
For this reason, a new state CL_STINSPECT has been added on
the client side. It is immediately entered upon accept() if
the statement "tcp-request inspect-delay <xxx>" is found in
the frontend configuration. Haproxy will then wait up to
this amount of time trying to find a matching ACL, and will
either accept or reject the connection depending on the
"tcp-request content <action> {if|unless}" rules, where
<action> is either "accept" or "reject".
Note that it only waits that long if no definitive verdict
can be found earlier. That generally implies calling a fetch()
function which does not have enough information to decode
some contents, or a match() function which only finds the
beginning of what it's looking for.
It is only at the ACL level that partial data may be processed
as such, because we need to distinguish between MISS and FAIL
*before* applying the term negation.
Thus it is enough to add "| ACL_PARTIAL" to the last argument
when calling acl_exec_cond() to indicate that we expect
ACL_PAT_MISS to be returned if some data is missing (for
fetch() or match()). This is the only case we may return
this value. For this reason, the ACL check in process_cli()
has become a lot simpler.
A new ACL "req_len" of type "int" has been added. Right now
it is already possible to drop requests which talk too early
(eg: for SMTP) or which don't talk at all (eg: HTTP/SSL).
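For example (illustrative), rejecting SMTP clients which talk before the
server's greeting :

  tcp-request inspect-delay 30s
  acl talks_early req_len gt 0
  tcp-request content reject if talks_early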
Also, the acl fetch() functions have been extended in order
to permit reporting of missing data in case of fetch failure,
using the ACL_TEST_F_MAY_CHANGE flag.
The default behaviour is unchanged, and if no rule matches,
the request is accepted.
As a side effect, all layer 7 fetching functions have been
cleaned up so that they now check for the validity of the
layer 7 pointer before dereferencing it.
The wait queues now rely on 4 trees for past, present and future
timers. The computations are cleaner and more reliable. The
wake_expired_tasks function has become simpler. Also, a bug
previously introduced in task_queue() by the first introduction
of eb_trees has been fixed (the eb->key was never updated).
A new "redirect" keyword adds the ability to send an HTTP 301/302/303
redirection to either an absolute location or to a prefix followed by
the original URI. The redirection is conditioned by ACL rules, so it
becomes very easy to move parts of a site to another site using this.
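For example (illustrative), moving part of a site to another one :

  acl old_docs url_beg /doc
  redirect prefix http://docs.example.com code 301 if old_docs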
This work was almost entirely done at Exceliance by Emeric Brun.
A test-case has been added in the tests/ directory.
About: [BUG] Flush buffers also when there are exactly 0 bytes left
I'm also attaching a debug patch that helps to trigger this bug.
Without the fix:
# echo -ne "GET /haproxy?stats;csv;norefresh HTTP/1.0\r\n\r\n"|nc 127.0.0.1 801|wc -c
16384
With the fix:
# echo -ne "GET /haproxy?stats;csv;norefresh HTTP/1.0\r\n\r\n"|nc 127.0.0.1 801|wc -c
33089
Best regards,
Krzysztof Oledzki
If we want to support netmasks for IP address hashing,
we will need something better than a pure modulus, otherwise
people with even numbers of servers will get surprises.
Bob Jenkins is known for his work on hashing, and his site
has a lot of very interesting research and algorithms for
integer hashing. He also points to the work of Thomas Wang
who has similar findings.
The program here tests their algorithms in order to find one
well suited for IP address hashing.
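For reference, one widely circulated candidate from this research is Bob
Jenkins' 32-bit integer mix, reproduced here as published (whether it is
the one finally retained is not stated here) :

  static unsigned int jenkins_mix32(unsigned int a)
  {
          a = (a + 0x7ed55d16) + (a << 12);
          a = (a ^ 0xc761c23c) ^ (a >> 19);
          a = (a + 0x165667b1) + (a << 5);
          a = (a + 0xd3a2646c) ^ (a << 9);
          a = (a + 0xfd7046c5) + (a << 3);
          a = (a ^ 0xb55a4f09) ^ (a >> 16);
          return a;
  }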